
(╯°□°)╯︵ <ǝlqɐʇ/>

That's a wrap! I miss it already. RailsConf is a wonderful conference, and I'd encourage any Ruby and/or Rails engineer to go.

I don't think I went to any conferences before joining Hired. We've been a sponsor for all three RailsConfs since I joined in 2014, and I've gone every year. The company values career development, and going to conferences is part of that. We are a Rails shop, our founders are hardcore Ruby junkies, and we believe in giving back to the community for all the things it's done for us. Of course, as a tech hiring marketplace, it makes business sense as well.

I gave Hired's sponsor talk this year - my first time speaking at a conference, and a big ol' check off the bucket list. I'd love to do it again, maybe for meetup groups or something - I learned a lot from having a "real" audience who are neither coworkers nor the mirror at home. It'd probably be a much better talk after a few iterations.

I went through two of our open source libraries from a teamwork and technical perspective. This post will get a link to the video once Confreaks puts it up.

Developer Anxiety

This seemed to be a major theme of the conf overall. DHH's keynote talked about the FUD and the shiny-new-thing treadmill that prevents us from putting roots down in the community of a language & ecosystem. Searls' keynote talked about how many of his coding decisions are driven by fear of familiar problems. There was a panel on spoon theory - which applies more generally than Christine Miserandino's personal example.

Studies of anxiety and stress in development seem to indicate that anxiety is bad for productivity. Anxiety and stress impair focus, shrink working memory, and hurt creativity - all of which are necessary for doing good work. These studies are marred by small sample sizes, poor methodology, and the fact that we generally don't know what the hell "productivity" even means for developers. But intuitively, the outcomes seem obvious.

It would behoove us to figure out how to reduce the overall anxiety in our industry. May is Mental Health Awareness Month in 2017. I've seen a lot of folks talking about Open Source Mental Illness, which seems like a great organization. There's not going to be a silver bullet; it'll take a lot of effort to educate, de-stigmatize, and work toward solutions. At least talking about it is a good start.

Working Together

Lots of talks dealt with empathy, teamwork, withholding judgement, and team dynamics. Searls had a quotable line - I'll paraphrase it as: "When I see people code in what I think is a bad way, I try to have empathy - I would hate for someone else to tell me I couldn't code my favorite way, so I can put myself in their shoes."

The Beyond SOLID talk discussed the continued divide between "business" and development. Haseeb Qureshi countered DHH's pro-tribalism, saying it's a barrier to communication that prevents developers from converging on "optimal" development. Joe Mastey's talk on legacy code went into ways to build team momentum and reduce fear of codebase dragons. Several talks covered diversity, where implicit bias can shut down communication and empathy right quick.

Working together to build things is a huge and complex process, and there's no single takeaway to be had here. Still, training ourselves in empathy and improving our communication seemed to be the common thread.

I didn't see a lot of talk about the organizations or structures affecting how we work together. Definitely something I'd like to hear more about - particularly with examples of structural change in organizations, what worked, and what didn't. How do you balance PMs vs EMs? Are those even the right roles? How does it affect an org to have a C-level exec who can code?

Some Technical Stuff

There were way fewer "do X in Y minutes" talks this year, for which I am grateful. That sort of thing can be summed up better in a blog post, and frankly hypes up new tech without actually teaching much. There were more "deep dive" talks, a few "tips for beginners" talks, and some valuable-looking workshops. I didn't go to many of these, but it seemed like a good mix.

Wrap-Up

It was a great conference, and I'd love to go back next year. I'd like to qualify for a non-sponsor talk some time, but I should probably act more locally first - perhaps having a few live iterations beforehand would improve the big-audience presentation.

If you're a Rails developer, or a Ruby-ist of any sort, I'd say it's worth the trip. There may be scholarships available if you can't go on a company's dime - worth a shot.

I was listening to the Liftoff podcast episode about the Voyager missions. They pointed out that Voyager launched in 1977 - well, NASA put it best:

This layout of Jupiter, Saturn, Uranus and Neptune, which occurs about every 175 years, allows a spacecraft on a particular flight path to swing from one planet to the next without the need for large onboard propulsion systems.

That's a hard deadline. "If your code isn't bug-free by August 20, we'll have to wait 175 years for the next chance." No pressure.

In startups, I often hear of "hard" deadlines like "we have to ship by Friday so we can keep on schedule." Or worse, "we promised the customer we'd have this by next Tuesday." To meet these deadlines, managers and teams will push hard. Engineers will work longer hours in crunch mode. Code reviews will be lenient. Testing may suffer. New code might be crammed into existing architecture because it's faster that way, in the short term. Coders will burn out.

These are not deadlines, they're bullshit. Companies are generally not staring down a literal once-a-century event which, if missed, will wind down the entire venture.

If you're consistently rushed and not writing your best code, you're not learning and improving. The company is getting a crappier codebase that will slow down and demoralize the team, and engineers are stagnating. Demand a reason for rush deadlines, and don't accept "...well, because the sprint's almost over..."

Stumbled on something interesting - checking whether a class "is a" or is "kind of" a parent class didn't work. Checking with the inheritance operator did work, as did looking at the list of ancestors.

irb(main)> class MyMailer < ActionMailer::Base ; end
=> nil
irb(main)> MyMailer.is_a? Class
=> true
irb(main)> MyMailer.kind_of? Class
=> true
irb(main)> MyMailer.is_a? ActionMailer::Base
=> false
irb(main)> MyMailer.kind_of? ActionMailer::Base
=> false

irb(main)> a = MyMailer.new
=> #<MyMailer:0x007fa6d4ce9938>

irb(main)> a.is_a? ActionMailer::Base
=> true
irb(main)> a.kind_of? ActionMailer::Base
=> true

irb(main)> !!(MyMailer < ActionMailer::Base)
=> true
irb(main)> !!(MyMailer < ActiveRecord::Base)
=> false
irb(main)> MyMailer.ancestors.include? ActionMailer::Base
=> true

I suppose .is_a? and .kind_of? are designed as instance-level methods on Object. Classes inherit from Module, which inherits from Object, so a class is technically an instance - an instance of Class. These methods look at what the receiver is an instance of - pretty much always Class - and then check the ancestors of that.

tl;dr when trying to find out what kind of thing a class is, use the inheritance operator or look at the array of ancestors. Don't use the methods designed for checking the type of an instance of something.

I love Mike Gunderloy's "Double Shot" posts on his blog A Fresh Cup. Inspired by that, here's a link roundup of some stuff I've read lately, mainly from my Pinboard.

How to monitor Redis performance metrics - Guess what I've been up to for the last week or two?

redis-rdb-tools - Python tool to parse Redis dump.rdb files, analyze memory, and export data to JSON. For some advanced analysis, load the exported JSON into Postgres and run some queries.

redis-memory-analyzer - Redis memory profiler that finds RAM bottlenecks by scanning the key space in real time and aggregating RAM usage statistics by key patterns.

LastPass password manager suffers ‘major’ security problem - They've had quite a few of these, recently.

0.30000000000000004.com - Cheat sheet for floating point stuff. Also, a hilarious domain name.

Subgraph OS - An entire OS built on TOR, advanced firewalls, and containerizing everything. Meant to be secure.

When I get paged, my first step is to calmly(-ish) assess the situation. What is the problem? Our app metrics have, in many cases, disappeared. Identify and confirm it: yep, a bunch of dashboards are gone.

Usually I start debugging at this point. What are the possible reasons for that? Did someone deploy a change? Maybe an update to the metrics libraries? Nope, too early: today's deploy logs are empty. Did app servers get scaled up, which might cause rate-limiting? Nah, all looks normal. Did our credentials get changed? Doesn't look like it, none of our tokens have been revoked.

All of that would have been a waste of time. Our stats aggregation & dashboard service, Librato, was affected by a wide-scale DNS outage. Somebody DDoS'd Dyn, one of the largest DNS providers in the US. Librato had all kinds of problems, because their DNS servers were unavailable.

We figured that out almost immediately, without having to look for any potential problems with our system. It's easy for me to forget to check status pages before diving into an incident, but I've found a way to make it easier. I made a channel in our Slack called #statuspages. Slack has a nifty slash command for subscribing to RSS feeds within a channel. Just type

/feed subscribe http://status.whatever.com/feed-url.rss

and boom! Any incident updates will appear as public posts in the channel.

Lots of services we rely on use StatusPage.io, and they provide RSS and Atom feeds for incidents and updates. The status pages for Heroku and AWS also offer RSS feeds - one for each service and region in AWS' case. I subscribed to everything that might affect site and app functionality, as well as development & business operations - Github, npm, rubygems, Atlassian (Jira / Confluence / etc), Customer.io etc.

Every time one of these services reports an issue, it appears almost immediately in the channel. When something's up with our app, a quick check in #statuspages can abort the whole debugging process. It can also be an early warning system: when a hosted service says they're experiencing "delayed connections" or "intermittent issues," you can be on guard in case that service goes down entirely.

Unfortunately not all status pages have an RSS feed. Salesforce doesn't provide one. Any status page powered by Pingdom doesn't either: it's not a feature they provide. I can't add Optimize.ly because they use Pingdom. C'mon y'all - get on it!

I've "pinned" links to these dashboards in #statuspages so they're at least easy to find. Theoretically, I could use a service like IFTTT to get notified whenever the page changes - I haven't tried, but I'm betting that would be too noisy to be worth it. Some quick glue code in our chat bot to scrape the page would work, but then the code has to be maintained, and who has time?

We currently have 45 feeds in #statuspages. It's kind of a disaster today with all the DNS issues, but it certainly keeps us up-to-date. Thankfully Slack isn't down for us - that's a whole different dumpster fire. But I could certainly use an RSS service as an alternative, such as my favorite Feedbin. That's the great thing about RSS: it comes from the old-school style of blogging that really represented the open, decentralized web.

I'm not the first person to think of this, I'm sure, but hopefully it will help & inspire some of you fine folks out there.

[Updated May 24, 2016: now with more salt]

There's been some chatter about how Ruby on Rails is dying / doomed / awful.

Make Ruby Great Again

...can a programming language continue to thrive even after its tools and core libraries are mostly finished? What can the community do to foster continued growth in such an environment? Whose job will it be?

Rails is Yesterday's Software

We need new thinking, not just repackaging of the same old failures. I should be spending time writing code, not debugging a haystack of mutable, dependency-wired mess.

My Time with Rails is Up

As a result of 9 freaking years of working with Rails and contributing like hell to many ruby OSS projects, I’ve given up. I don’t believe anything good can happen with Rails. This is my personal point of view, but many people share the same feelings.

Ruby in decline...

The significance of the drop in May and the leveling off is that "Ruby does not become the 'next big programming language'".

Whoops! That last one is from 2007. To summarize the various comments I've seen on Reddit, Hacker News, blogs, articles, and talks lately, here are some examples stuffed with the finest straw:

Rails Doesn't Scale

This canard hasn't aged well. My uninformed college-student argument in 2006 was: "You don't scale your language or framework, you scale your application." I had never scaled anything back then, but over ten years later I still agree with young-dumb-and-loud college me.

Your language or framework is rarely the bottleneck: your database, network structure, or background services are where things will slow down. ActionCable may or may not get two million plus connections per box like a trivial Elixir app, but you can always deploy more app servers and workers. And I'd worry those Elixir connections will end up waiting on Redis or Postgres or something else - your language or framework isn't a magic bullet.

You can also extract services when Ruby and Rails aren't the right tools for the job. Many Ruby gems do this by binding to native C extensions. Extracting a service for machine learning models can take advantage of Python's different memory model and better ML libraries. Your CRUD app or mobile API doesn't have to do it all.

Ruby is a Messy Language

In Ruby you can re-open any class from anywhere. You can access any object's private methods. Duck typing is everywhere. People make weird DSLs like RSpec. Rails is chock-full of magic in routing, database table names, callbacks, and mysterious inheritance. Ruby and Rails are easy, but they're not clear or simple.

This blast from the past is a 2007 argument. Since then we've learned tons about how to write expressive-but-clear Ruby. Rails isn't going to keep you safe by default, but you can and should try to write clear Ruby on top of it. I've found the following super helpful when refactoring and designing code:

Solnic in particular felt that his attempts to reduce complexity, tight coupling, and shenanigans in Rails were actively mocked by DHH and others in the community. If true, that's awful, and I would hope most of our community isn't the ActiveMocking type. DHH has certainly said Rails won't trade convenience for cleanliness by default. I don't think that precludes cleanliness and safety when the code and team get big enough to need it, though.

A programming language isn't a magic bullet here, either. I've seen people hold up Java as an example of explicit, clear code with sanity checks and few surprises. Well, here's some Java code for an Elasticsearch plugin, with comments stripped, spacing cut down, and truncated for brevity:

public class LookupScript extends AbstractSearchScript {
    public static class Factory extends AbstractComponent implements NativeScriptFactory {
        private final Node node;
        private final Cache<Tuple<String, String>, Map<String, Object>> cache;

        @SuppressWarnings("unchecked")
        @Inject
        public Factory(Node node, Settings settings) {
            super(settings);
            this.node = node;

            ByteSizeValue size = settings.getAsBytesSize("examples.nativescript.lookup.size", null);
            TimeValue expire = settings.getAsTime("expire", null);
            CacheBuilder<Tuple<String, String>, Map<String, Object>> cacheBuilder = CacheBuilder.builder();

Now here's a "magical" plugin for ActiveRecord, Paperclip. Truncated for brevity, but nothing stripped out:

module Paperclip
  require 'rails'

  class Railtie < Rails::Railtie
    initializer 'paperclip.insert_into_active_record' do |app|
      ActiveSupport.on_load :active_record do
        Paperclip::Railtie.insert
      end

      if app.config.respond_to?(:paperclip_defaults)
        Paperclip::Attachment.default_options.merge!(app.config.paperclip_defaults)
      end
    end

    rake_tasks { load "tasks/paperclip.rake" }
  end

The latter seems vastly more readable to me. You may need to read the docs on what some of the methods do, but I had to read an awful lot more to grok the Elasticsearch plugin. Since we're in the era of musty old arguments, I'll bring up the amazing AbstractSingletonProxyFactoryBean again.

Ruby gives you enough rope to hang yourself. Javascript helpfully ties the noose and sets up a gallows for you. Java locks you in a padded room for your own good. Elixir uses a wireless shock colla... wait, sorry. That metaphor got weird when a functional language jumped in. Point is, you can write terrible code in any language, and some languages make that a little easier or harder depending on your definition of "terrible." Ruby strikes a balance that is my favorite so far.

Rails Jobs are Drying Up

Here's some data I found by asking colleagues, Googling and poking around:

Rails isn't achieving the thunderous growth percentage that Javascript and Elixir are, but percentage growth is much harder to achieve when your base is already huge.

On further discussion and analysis, this is actually insanely hard to measure. Javascript is on the rise, for sure - but how many Javascript StackOverflow questions are for front ends that exist inside a Rails, Django, or Phoenix app? How many Node.js apps are tiny background services giving data to a Java dumpster fire? Where do you file a full-stack job?

Languages and frameworks don't just disappear. You can still find a decent job in COBOL. Ruby and Rails will be with us for a long time to come.

The Cool Kids™ Have Moved On

These are people I look up to, and they contributed significantly to my understanding & development in Ruby.

Of course, many of the "movers-on" are still part of the Ruby + Rails communities. Sidekiq for Ruby isn't going anywhere - Sidekiq Enterprise is pretty recent, and Mike has a commendable goal of making his business sustainable. Yehuda has been at RailsConf the last two years, and works at Skylight - a super cool performance monitoring tool with great Rails defaults. Searls has also been at RailsConf the last two years, and as he said to me on Twitter:

...my favorite thing about programming is that we can use multiple languages and share in more than one community 😀

~ @searls

Sandi Metz and Aaron Patterson have been fixtures in the community, and they don't seem to be abandoning it. Nick Quaranto is still at RailsConf. And of course, Matz is still on board.

Plus, a mature community always has new Cool Kids™ and Thought Leaderers™. I've found lots of new people to follow over the last few years. I can't be sure if they're "new" or "up and coming" or "how the heck am I just finding out now." Point is: the number of smart Ruby & Rails people I follow is going up, not down.

The above-mentioned Katrina Owen, Sarah Allen, Godfrey Chan, Richard Schneeman and others are all people I wish I'd known about earlier in my career.

I'm Still Pro-Skub

Ruby and Rails make a fantastic environment for application development. Rails is great at rapid prototyping. It's great at web apps. It's great at APIs - even better now that Rails 5 has an API-only interface. The Ruby community encourages learning, thoughtfulness, and trying to get things right. Resources like books, tutorials, screencasts, and boot camps abound. Extensions to Rails to help keep code clean, DRY, and organized are out there. Integrations with SaaS products are plentiful.

If you're looking to learn a new language or just getting started in web dev, I'd still recommend Ruby and Rails before anything else. They're easy enough to get started in, and the more you read and work with them, the better you'll get at OOP and interface design.

I've been working a lot with Hubot, which our company is using to manage our chat bot. We subscribe to the ChatOps mantra, which has a lot of value: operational changes are public, searchable, backed up, and repeatable. We also use Hubot for workflows and glue code - shortcuts for code review in Github, delivering stories in Pivotal Tracker when they are deployed to a demo environment, various alerts in PagerDuty, etc.

Hubot is written in CoffeeScript, a transpiles-to-javascript language that is still the default in Rails 5. CoffeeScript initially made it easy and obvious how to write classes, inheritance, and bound functions in your Javascript. Now that ES6 has stolen most of the good stuff from CoffeeScript, I think it's lost most of its value. But migrating legacy code to a new language is low ROI, and a giant pain even with tools like Decaffeinate. Besides, most of the hubot plugins and ecosystem are in CoffeeScript, so there's probably some advantage to maintaining compatibility there.

Hubot has a relatively simple abstraction over responding to messages in Slack, and has an Express server built-in. It's basically writing a Node application.

Writing Clean Code

A chatbot isn't customer-facing, and often isn't super-critical functionality. It's easy to just throw in some hacks, write very minimal tests (if any), and call it a day. At Hired we have tests for our Hubot commands, but we've never emphasized high-quality code the way we have in our main application. I'm changing that. Any app worth making is worth making well.

I've been trying to figure out how to break hubot scripts into clean modules. OO design is a hard enough problem in Ruby, where people actually care about clean code. Patterns and conventions like MVC provide helpful guidelines. None of that in JS land: it's anyone's guess whether a library will be functional, object-oriented, or a bag of function-objects. Everything's just a private variable - no need for uppercase letters, or even full words.

While Github's docs only talk about throwing things in /scripts, sometimes you want commands in different scripts to be able to use the same functionality. Can you totally separate these back-end libraries from the server / chat response scripts? How do you tease apart the control flow?
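
I don't have a definitive answer yet, but the shape I've been experimenting with is a plain module that knows nothing about chat, plus a thin script in /scripts that only parses commands and sends replies. A rough sketch - the paths, names, and reviewer logic here are all made up:

# src/reviewers.coffee -- pure logic, no hubot or Slack knowledge
module.exports =
  assignReviewer: (pullRequestUrl) ->
    # stand-in for a real GitHub API call; returns a promise
    Promise.resolve "alice"

# scripts/review.coffee -- the thin chat-facing layer
reviewers = require '../src/reviewers'

module.exports = (robot) ->
  robot.respond /review (\S+)/i, (res) ->
    reviewers.assignReviewer(res.match[1])
      .then (name) -> res.reply "Assigned #{name} to review #{res.match[1]}"
      .catch (err) -> res.reply "Couldn't assign a reviewer: #{err.message}"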

Promises (and they still feel all so wasted)

Promises are a critical piece of the JS puzzle. To quote Domenic Denicola:

The point of promises is to give us back functional composition and error bubbling in the async world.

~ You're Missing the Point of Promises

I started by upgrading our app from our old promise library to Bluebird. The coolest thing Bluebird does is .catch(ErrorType), which lets you handle only specific classes of errors. Combine that with the common-errors library from Shutterstock, and you get a great way to classify error states exactly.
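
Roughly, it looks like this - the specific error class is from memory, and the fake lookup is just for illustration:

Promise = require 'bluebird'
errors  = require 'common-errors'

# A fake async lookup that always fails with a typed error
findUser = (name) ->
  Promise.reject new errors.NotFoundError("user #{name}")

findUser('carmen')
  .then (user) -> console.log "found #{user}"
  .catch errors.NotFoundError, (err) ->
    # Only NotFoundErrors land here; anything else keeps bubbling up
    console.log "no such user: #{err.message}"
  .catch (err) ->
    # Generic fallback for everything else
    console.error err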

I'm still figuring out how to use promises as a clean abstraction. Treating them like delayed try/catch blocks seems to produce clean separations. The Bluebird docs have a section on anti-patterns that was a good start. In our code I found many places people had nested promises inside other promises, resulting in errors not reaching the original caller (or our test framework). I also saw throwing exceptions used as a form of flow control, and using the error message of the exception as a Slack reply value. Needless to say, that's not what exceptions are for.
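
The fix for the nesting problem is usually just returning the inner promise so it joins the chain. A contrived sketch (the helpers are made up):

# Hypothetical helpers, just to make the shapes concrete
fetchRecord = (id) -> Promise.resolve(id: id)
saveRecord  = (record) -> Promise.reject new Error("disk full")

# Anti-pattern: the inner promise is never returned, so its
# "disk full" rejection never reaches any .catch on this chain
fetchRecord(1).then (record) ->
  saveRecord(record)
  return

# Flat chain: returning the inner promise lets the rejection
# bubble up to the caller's .catch
fetchRecord(1)
  .then (record) -> saveRecord(record)
  .catch (err) -> console.error "caught: #{err.message}"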

Events

NodeJS comes with eventing built in. The process object is an EventEmitter, meaning you can use it like a global message bus. Hubot also acts as a global event handler, so you can track things there as well. And in CoffeeScript you can class MyClass extends EventEmitter. If you've got a bunch of async tasks that other scripts might need to refer to, you can have them fire off an event that other objects can respond to.

For example, our deploy process has a few short steps early on that might interfere with each other if multiple deploys happen simultaneously. We can set our queueing object to listen for a "finished all blocking calls" event on deploys, and kick off the next one while the current deploy does the rest of its steps. We don't have to hook into the promise chain - a Deploy doesn't even have to know about the DeployQueue, which is great decoupling. It can just do its waterfall of async operations, and fire off events at each step.
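
Here's a stripped-down sketch of that arrangement - the class, method, and event names are illustrative, not our actual deploy code:

{EventEmitter} = require 'events'

class Deploy extends EventEmitter
  run: ->
    @runBlockingSteps()
      .then => @emit 'blocking-steps-finished'
      .then => @runRemainingSteps()

  # Stand-ins for the real async work
  runBlockingSteps:  -> Promise.resolve()
  runRemainingSteps: -> Promise.resolve()

class DeployQueue
  watch: (deploy) ->
    # Start the next deploy once the steps that can't overlap are done;
    # Deploy never has to know the queue exists
    deploy.once 'blocking-steps-finished', => @startNext()

  startNext: ->
    # pop the next Deploy off the queue and call its run()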

Storage

Hubot comes with a Brain built-in for persistent storage. For most users, this will be based on Redis. You can treat it like a big object full of whatever data you want, and it will be there when Hubot gets restarted.
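
A minimal sketch of using it from a script (the commands are made up):

module.exports = (robot) ->
  # robot.brain acts like a plain key-value store; hubot persists it for you
  robot.respond /remember (\S+) = (.+)/i, (res) ->
    robot.brain.set res.match[1], res.match[2]
    res.reply "Got it."

  robot.respond /recall (\S+)/i, (res) ->
    res.reply(robot.brain.get(res.match[1]) ? "No idea.")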

The catch is: Hubot's brain is a giant JS object, and the "persistence" is just dumping the whole thing to a JSON string and throwing it in one key in Redis. Good luck digging in from redis-cli or any interface beyond in-app code.

Someone (not me) added SQLite3 for some things that kind of had a relational-ish structure. If you are going to use SQL in your node app, for cryin' out loud use a bloody ORM. Sequelize seems to be a big player, but like any JS framework it could be dead tomorrow.

Frankly, MongoDB is a much bigger force in the NodeJS space, and seems perfect for a low-volume, low-criticality app like a chatbot. It handles our light relational needs and stays flexible with schema-less documents. You probably won't have to scale it and deal with the storage, clustering, and concurrency issues. With well-supported tools like Mongoose, it might be easier to organize and manage than the one-key-in-Redis brain.

We also have InfluxDB for tracking stats. I haven't dived deep into this, so I'm not sure how it compares to statsd or Elasticsearch aggregations. I'm not even sure if they cover the same use cases or not.

Testing

Whooboy. Testing. The testing world in JS leaves much to be desired. I'm spoiled on rspec and ruby test frameworks, which have things like mocks and stubs built in.

In JS, everything is "microframeworks," i.e. things that don't work well together. Here's a quick rundown of libraries we're using:

  • Mocha, the actual test runner.
  • Chai, an assertion library.
  • Chai-as-promised, for testing against promises.
  • Supertest-as-promised, to test webhooks in your app by sending actual http requests to 127.0.0.1. Who needs integration testing? Black-box, people!
  • Nock, for expectations around calling external APIs. Of course, it doesn't work with Mocha's promise interface.
  • Rewire, for messing with private variables and functions inside your scripts.
  • Sinon for stubbing out methods.
  • Hubot-test-helper, for setting up and tearing down a fake Hubot.

I mean, I don't know why you'd want assertions, mocks, stubs, dependency injection and a test runner all bundled together. It's much better to have femto-frameworks that you have to duct tape together yourself.

Suffice to say, there's a lot of code to glue it all together. I had to dive into the source code for every single one of these libraries to make them play nice -- neither the README nor the documentation sufficed in any instance. But in the end we get to test syntax that looks like this:

describe 'PING module', ->
  beforeEach ->
    mockBot('scripts/ping').then (robot) =>
      @bot  = robot

  describe 'bot ping', ->
    it 'sends "PONG" to the channel', ->
      @bot.receive('bot ping').then =>
        expect(@bot).to.send('PONG')

The bot will shut itself down after each test, stubs and dependency injections will be reverted automatically, Nock expectations cleaned up, etc. Had to write my own Chai plugin for expect(bot).to.send(). It's more magical than I'd like, but it's usable without knowledge of the underlying system.
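
For the curious, Chai's plugin API keeps that kind of assertion pretty small. A rough sketch of the idea - it assumes the mocked bot records outgoing messages in a sentMessages array, which isn't necessarily how the real helper stores them:

chai = require 'chai'

chai.use (_chai, utils) ->
  _chai.Assertion.addMethod 'send', (expected) ->
    bot = @_obj
    @assert(
      expected in bot.sentMessages,
      "expected bot to send #{expected}",
      "expected bot not to send #{expected}"
    )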

When tests are easier to write, hopefully people will write more of them.

Wrapup

Your company's chatbot is probably more important than you think. When things break, even the unimportant stuff like karma tracking, it can lead to dozens of distractions and minor frustrations across the team. Don't make it a second-class citizen. It's an app - write it like one.

While I may have preferred something like Lita, the Ruby chatbot, or just writing a raw Node / Elixir / COBOL app without the wrapping layer of Hubot, I'm making the best of it. Refactor, don't rewrite. You can write terrible code in any language, and JS can certainly be clean and manageable if you're willing to try.

I was running through tutorials for Elixir + Phoenix, and got to the part where forms start showing validation failures. Specifically this code, from Pragmatic Programmers' "Programming Phoenix" book:

<%= form_for @changeset, user_path(@conn, :create), fn f -> %> 
  <%= if f.errors != [] do %>
    <div class="alert alert-danger">
      <p>Oops, something went wrong! Please check the errors below:</p>
      <ul>
        <%= for {attr, message} <- f.errors do %>
          <li><%= humanize(attr) %> <%= message %></li>
        <% end %> 
      </ul>
    </div>
  <!-- snip -->
<% end %>

I got this error: no function clause matching in Phoenix.HTML.Safe.Tuple.to_iodata/1

Couldn't find a bloody solution anywhere. Took a long time to find IO.inspect. The message turned out to be a tuple that looked like it was made for sprintf - something like {"name can't be longer than %{count}", count: 1} - so I spent forever trying to figure out whether Elixir has sprintf, found what looked promising in :io.format(), then had to learn about Erlang bindings, but that wasn't...

Ended up on the #elixir-lang IRC channel, and the author of the book (Chris McCord) pointed me to the Ecto "error helpers" in this upgrade guide. It's a breaking change in Phoenix 1.1 + Ecto 2.0. The book is (and I imagine many tutorials are) for Phoenix 1.0.x, and I had installed the latest, 1.1.0.

Major thanks to Chris - I have literally never had a question actually get answered on IRC. It was a last-resort measure, and it really says something about the Elixir community that someone helped me figure this out.

Stretchy is an ActiveRecord-esque query builder for Elasticsearch. It's not stable yet (hence the <1.0 version number), and Elasticsearch has been moving so fast it's hard to keep up. The major change in 2.0 was eliminating the separation between queries and filters, a major source of complexity for the poor gem.

For now my machine needs Elasticsearch 1.7 for regular app development. To update the gem for 2.1, I'd need to have both versions of Elasticsearch installed. While I could potentially do that by making a bunch of changes to the config files set up by Homebrew, I thought it would be better to just run the specs on a virtual machine and solve the "upgrade problem" indefinitely.

Docker looked great because I wanted to avoid machine setup as much as possible. I've used Vagrant before, but it has its own configuration steps beyond just "here's the Dockerfile." I already have docker-machine (the successor to boot2docker) installed and running for using the CodeClimate Platform™ beta, and I didn't want to have multiple virtual machines running simultaneously, eating RAM and other resources. Here's the setup:

  • Docker lets you run processes inside isolated "containers," using a few different technologies similar to LXC
  • docker-machine (which replaced boot2docker) manages booting virtual machine instances which will run your Docker containers. I'm using it to keep one VirtualBox machine around to run whatever containers I need at the moment
  • docker-compose (formerly fig) lets you declare and link multiple containers in a docker-compose.yml file, so you don't need to manually run all the Docker commands
  • The official quickstart guide for rails gives a good run-down of the tools and setup involved.

It's a bit of tooling, but it really didn't take long to get started; maybe an hour or two. Once I had it up and running for the project, I just modified the docker-compose.yml on my new branch. I had to do a bit of fiddling to get Compose to update the elasticsearch image from 1.7 to 2.1:

# modify the docker-compose.yml to update the image version, then:
docker-compose stop elastic
docker-compose pull elastic
docker-compose rm -f -v elastic
docker-compose run web rspec # boom! builds the machines and runs specs

Once there, the specs started exploding and I was in business. Let the updates begin! After that, just a matter of pestering our CI provider to update their available versions of Elasticsearch so the badge on the repo will look all nice and stuff.

I wanted to try out basscss. Ended up changing the fonts, color scheme, and fixing syntax highlighting all over the place. Now using highlightjs, which looks like the only well-supported syntax highlighter that can guess which language your snippet is in.

This blog's been around for 5 years now. It's mostly thanks to Jekyllrb -- whatever I want to change about the site, I can do it without having to migrate from one database or format to another. If I need to code something myself, I can do that with Ruby or Javascript or plain 'ole shell scripts.

atevans.com has run off at least 3 different servers. It's been on Github Pages, Heroku, Linode, and more. It will never be deleted because some blogging company with a great platform ran out of VC, or added some social dingus I didn't want.

The git repo has been hosted four different places. When I got pwned earlier this year, the git repo was on the infected server. I just deleted it since my local copy had everything. Benefits of decentralized version control.

I had blogs all over the place before this, but this one has stuck around. I think even if Jekyll dies off somehow, a parser & renderer for a bunch of flat files should be easy in any language. I wonder what this will look like in another 5 years?


atevans.com in basscss