I've been working a lot with Hubot, which our company is using to manage our chat bot. We subscribe to the ChatOps mantra, which has a lot of value: operational changes are public, searchable, backed up, and repeatable. We also use Hubot for workflows and glue code - shortcuts for code review in Github, delivering stories in Pivotal Tracker when they are deployed to a demo environment, various alerts in PagerDuty, etc.
Hubot has a relatively simple abstraction over responding to messages in Slack, and has an Express server built-in. It's basically writing a Node application.
Writing Clean Code
A chatbot is not external, and is often not super-critical functionality. It's easy to just throw in some hacks, write very minimal tests (if any), and call it a day. At Hired we have tests for our Hubot commands, but we've never emphasized high-quality code the way we have in our main application. I'm changing that. Any app worth making is worth making well.
I've been trying to figure out how to break hubot scripts into clean modules. OO design is a hard enough problem in Ruby, where people actually care about clean code. Patterns and conventions like MVC provide helpful guidelines. None of that in JS land: it's an even split whether a library will be functional, object-oriented, or function-objects. Everything's just a private variable - no need for uppercase letters, or even full words.
While Github's docs only talk about throwing things in `/scripts`, sometimes you want commands in different scripts to be able to use the same functionality. Can you totally separate these back-end libraries from the server / chat response scripts? How do you tease apart the control flow?
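One shape that seems to work is splitting each feature into a plain library with no Hubot dependency, plus a thin script that only parses messages and replies. Here's a sketch in plain JavaScript - all the names (karma tracking, the fake robot) are hypothetical, and a real Hubot `respond` only fires on messages addressed to the bot:

```javascript
// lib/karma.js - pure logic, no Hubot imports, trivially unit-testable:
const karma = {
  scores: new Map(),
  bump(user) {
    const next = (karma.scores.get(user) || 0) + 1;
    karma.scores.set(user, next);
    return next;
  },
};

// scripts/karma.js - a thin adapter that only parses and replies:
const script = (robot) => {
  robot.respond(/karma (\w+)/i, (res) => {
    res.send(`${res.match[1]} now has ${karma.bump(res.match[1])} karma`);
  });
};

// The adapter can be exercised with a fake robot, no chat server needed:
const replies = [];
script({
  respond: (pattern, handler) => {
    const match = 'karma dan'.match(pattern);
    if (match) handler({ match, send: (msg) => replies.push(msg) });
  },
});
console.log(replies); // → [ 'dan now has 1 karma' ]
```

Because the library never touches `robot`, you can share it between scripts and test it without standing up a bot at all.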
Promises (and they still feel all so wasted)
The point of promises is to give us back functional composition and error bubbling in the async world.
I started by upgrading our app from our old library to Bluebird. The coolest thing Bluebird does is `.catch(ErrorType)`, which lets you catch only specific types of errors. Combine that with the common-errors library from Shutterstock, and you get a great way to exactly classify error states.
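To see what that predicate-style catch buys you, here's the manual version it replaces, sketched with built-in promises and a made-up `NotFoundError` (with common-errors you'd get a whole family of these for free):

```javascript
// Hypothetical error type standing in for a common-errors class.
class NotFoundError extends Error {}

const fetchUser = (name) =>
  Promise.reject(new NotFoundError(`no user named ${name}`));

// With plain promises, dispatching on error type is manual:
fetchUser('alice')
  .catch((err) => {
    if (err instanceof NotFoundError) return null; // expected: treat as "no result"
    throw err; // anything else keeps bubbling
  })
  .then((user) => console.log('lookup result:', user)); // → lookup result: null

// Bluebird collapses the instanceof check into the signature:
//   fetchUser('alice').catch(NotFoundError, () => null)
```

Unexpected errors still bubble out, which is exactly the property you lose when a bare `.catch` swallows everything.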
I'm still figuring out how to use promises as a clean abstraction. Treating them like delayed `try/catch` blocks seems to produce clean separations. The Bluebird docs have a section on anti-patterns that was a good start. In our code I found many places where people had nested promises inside other promises, resulting in errors never reaching the original caller (or our test framework). I also saw exceptions thrown as a form of flow control, with the exception's error message used as a Slack reply value. Needless to say, that's not what exceptions are for.
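The nesting bug is easiest to see side by side. A sketch with made-up deploy helpers (the stubs stand in for real async calls):

```javascript
// Stand-ins for real async operations.
const lookupApp = (app) => Promise.resolve({ name: app });
const pushCode = (record) => Promise.reject(new Error('push failed'));

// Anti-pattern: the inner chain isn't returned, so its rejection
// never reaches the caller's .catch (or the test framework).
function deployBroken(app) {
  return lookupApp(app).then((record) => {
    pushCode(record); // orphaned promise; the error disappears
  });
}

// Fix: return every promise so errors bubble up one flat chain.
function deploy(app) {
  return lookupApp(app).then((record) => pushCode(record));
}

deploy('hired-web').catch((err) => console.log('caught:', err.message));
// → caught: push failed
```

Call `deployBroken` instead and the `.catch` never fires: the chain resolves happily while the push failure vanishes into an unhandled rejection.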
NodeJS comes with eventing built in. The process object is an `EventEmitter`, meaning you can use it like a global message bus. Hubot also acts as a global event handler, so you can track things there as well. And in CoffeeScript you can write `class MyClass extends EventEmitter`. If you've got a bunch of async tasks that other scripts might need to refer to, you can have them fire off an event that other objects can respond to.
For example, our deploy process has a few short steps early on that might interfere with each other if multiple deploys happen simultaneously. We can set our queueing object to listen for a "finished all blocking calls" event on deploys, and kick off the next deploy while the current one does the rest of its steps. We don't have to hook into the promise chain - a `Deploy` doesn't even have to know about the `DeployQueue`, which is great decoupling. It can just do its waterfall of async operations, and fire off events at each step.
Hubot comes with a Brain built-in for persistent storage. For most users, this will be based on Redis. You can treat it like a big object full of whatever data you want, and it will be there when Hubot gets restarted.
The catch is: Hubot's brain is a giant JS object, and the "persistence" is just dumping the whole thing to a JSON string and throwing it in one key in Redis. Good luck digging in from `redis-cli` or any interface beyond in-app code.
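To make the storage model concrete, here's roughly what the Redis-backed brain does on every save (a sketch with made-up data; the default key in hubot-redis-brain is, as far as I can tell, `hubot:storage`):

```javascript
// Made-up brain contents.
const brainData = {
  users: { '1': { name: 'dan' } },
  _private: { karma: { dan: 41 } },
};

// On save, the whole brain becomes a single JSON string...
const blob = JSON.stringify(brainData);
// ...and lands in Redis as one opaque value, roughly:
//   redis.set('hubot:storage', blob)

// So from redis-cli there's nothing to query per-field; all you can
// do is GET the blob and parse it yourself:
const restored = JSON.parse(blob);
console.log(restored._private.karma.dan); // → 41
```

No indexes, no partial reads, no concurrent writers - every save rewrites the world.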
Someone (not me) added SQLite3 for some things that kind of had a relational-ish structure. If you are going to use SQL in your node app, for cryin' out loud use a bloody ORM. Sequelize seems to be a big player, but like any JS framework it could be dead tomorrow.
Frankly, MongoDB is a much bigger force in the NodeJS space, and seems perfect for a low-volume, low-criticality app like a chatbot. It's relational enough to get the job done and flexible enough with schema-less documents. You probably won't have to scale it and deal with the storage, clustering, and concurrency issues. With well-supported tools like Mongoose, it might be easier to organize and manage than the one-key-in-Redis brain.
We also have InfluxDB for tracking stats. I haven't dived deep into this, so I'm not sure how it compares to statsd or Elasticsearch aggregations. I'm not even sure if they cover the same use cases or not.
Whooboy. Testing. The testing world in JS leaves much to be desired. I'm spoiled by RSpec and the Ruby test frameworks, which have things like mocks and stubs built in.
In JS, everything is "microframeworks," i.e. things that don't work well together. Here's a quick rundown of libraries we're using:
- Mocha, the actual test runner.
- Chai, an assertion library.
- Chai-as-promised, for testing against promises.
- Supertest-as-promised, to test webhooks in your app by sending actual HTTP requests to `127.0.0.1`. Who needs integration testing? Black-box, people!
- Nock, for expectations around calling external APIs. Of course, it doesn't work with Mocha's promise interface.
- Rewire, for messing with private variables and functions inside your scripts.
- Sinon for stubbing out methods.
- Hubot-test-helper, for setting up and tearing down a fake Hubot.
I mean, I don't know why you'd want assertions, mocks, stubs, dependency injection and a test runner all bundled together. It's much better to have femto-frameworks that you have to duct tape together yourself.
Suffice it to say, there's a lot of code to glue it all together. I had to dive into the source code of every single one of these libraries to make them play nice -- neither the README nor the documentation sufficed in any instance. But in the end we get test syntax that looks like this:
```coffeescript
describe 'PING module', ->
  beforeEach ->
    mockBot('scripts/ping').then (robot) =>
      @bot = robot

  describe 'bot ping', ->
    it 'sends "PONG" to the channel', ->
      @bot.receive('bot ping').then =>
        expect(@bot).to.send('PONG')
```
The bot will shut itself down after each test; stubs and dependency injections will be reverted automatically, Nock expectations cleaned up, etc. I had to write my own Chai plugin for `expect(bot).to.send()`. It's more magical than I'd like, but it's usable without knowledge of the underlying system.
When tests are easier to write, hopefully people will write more of them.
Your company's chatbot is probably more important than you think. When things break, even the unimportant stuff like karma tracking, it can lead to dozens of distractions and minor frustrations across the team. Don't make it a second-class citizen. It's an app - write it like one.
While I may have preferred something like Lita, the Ruby chatbot, or just writing a raw Node / Elixir / COBOL app without the wrapping layer of Hubot, I'm making the best of it. Refactor, don't rewrite. You can write terrible code in any language, and JS can certainly be clean and manageable if you're willing to try.