
You plug your shiny new eGPU into your MacBook Pro and expect a game like Borderlands 3 from the Epic Games Store to run smoothly (30+ FPS) at medium settings on your 4K monitor. It's a good graphics card, but when you run the benchmark animation, you're still seeing drops into the 10-20 FPS range, and it stutters and becomes a slideshow. Is it a crappy port? Is your graphics card broken? No! Take heart, and follow these steps:

  1. Use a disk space calculator like DaisyDisk and notice that there's a huge folder at /Users/Shared/Epic Games/Borderlands3/, where the Epic Games Store puts all of its games
  2. Open it in Finder by running open /Users/Shared/Epic\ Games/Borderlands3/ in the terminal
  3. ⌃+click (or right-click) the Borderlands3 app in that folder and select "Get Info" from the menu
  4. If your eGPU is plugged in, you should see a checkbox that says "Prefer External GPU", as explained in this Apple Support article. Check that box!
  5. Re-launch the game via the Epic Store. Your benchmarks should be markedly improved.

For some reason the game was trying to run using the MacBook Pro's built-in mobile GPU instead of the honking, loud, industrial-strength graphics pipes of the eGPU. Checking that setting fixed it.

There are a few things I don't know about this setting:

  • will it get clobbered if Epic Games Store updates the game files?
  • is it retained after unplugging and re-plugging the eGPU?
  • is there any way to set it via command-line instead of in Finder?
  • can it be passed in as a command-line argument via Epic Game Store's per-game "Additional Command Line Arguments" setting?

But at least this has solved my slideshow problem for now.

Sometimes, you're testing a chat bot dingus, and you need a couple of images so it can respond to @bot boop. I wanted more than 10, quickly, preferably without having to click through Google Image search and manually download them.

Giphy and Imgur both require you to sign up and make an OAuth app and blah blah blah before you can start using their API. Not worth it for a trivial one-off.

Turns out Reddit exposes just about any endpoint as JSON as well as HTML - just tack .json onto the URL - and it's publicly accessible. And Reddit has a whole community called /r/BoopableSnoots.

So I threw this command together:

curl "https://www.reddit.com/r/BoopableSnoots.json" | \
jq -c '.data.children | .[].data.url' | \
xargs -n 1 curl -O

What is this doing?

curl - sends a GET request to the specified URL and writes the response body to stdout.

| - takes the output of the previous command on stdout and feeds it into the next command as stdin.

jq -c '...' - jq is a command-line JSON parser and editor. This command drills down the object structure returned by Reddit and returns the data.url field for each post; the .[] iterates across an array, outputting one element per line. (The -c flag keeps each result compact on a single line. The output is still quoted, but xargs strips the quotes; jq -r would drop them at this stage.)

xargs - is a meta-command; it says "run the following command for each item of input on stdin" (-n 1 means one item per invocation). It can run jobs in parallel, in a pool of workers, etc.

curl -O - sends a GET request to the specified URL and saves the response to the filesystem using the filename contained in the URL

Putting it all together:

  1. Get the ~25 most recent posts off Reddit's BoopableSnoots community
  2. Filter down to just the images that people posted
  3. Download those images to the current directory

This quickly got me some snoots for my bot to boop, and I could move on with my work.
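
For what it's worth, here's a rough Ruby equivalent of the same three steps - not what I ran, just a sketch if you'd rather have a script you can tweak. It adds a filter so you only download direct image links, and be aware Reddit may throttle clients that don't send a custom User-Agent:

require "json"
require "net/http"
require "uri"

# 1. Get the most recent posts from /r/BoopableSnoots as JSON
listing = JSON.parse(Net::HTTP.get(URI("https://www.reddit.com/r/BoopableSnoots.json")))

# 2. Pull each post's URL, keeping only direct image links (an extra filter the one-liner skips)
urls = listing.dig("data", "children").map { |child| child.dig("data", "url") }
urls.select! { |url| url.match?(/\.(jpe?g|png|gif)\z/i) }

# 3. Download each image to the current directory, named after the last path segment
urls.each do |url|
  uri = URI(url)
  File.binwrite(File.basename(uri.path), Net::HTTP.get(uri))
end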

<rant>

I tried to compile our mapbox-java SDK on my MacBook and ran into a versioning error:

$ make build-config
./gradlew compileBuildConfig
Starting a Gradle Daemon (subsequent builds will be faster)

> Task :samples:compileBuildConfig FAILED
/Users/username/Workspace/mapbox-java/samples/build/gen/buildconfig/src/main/com/mapbox/sample/BuildConfig.java:4: error: cannot access Object
public final class BuildConfig
             ^
  bad class file: /modules/java.base/java/lang/Object.class
    class file has wrong version 56.0, should be 53.0
    Please remove or make sure it appears in the correct subdirectory of the classpath.
1 error

I had installed Java via Homebrew Cask, the normal way to install developer things on macOS. Running brew cask install java gets the java command all set up for you, but what version is that?

$ java -v
Unrecognized option: -v
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
# c'mon java, really :/ smh
$ java --version
openjdk version "12" 2019-03-19
OpenJDK Runtime Environment (build 12+33)
OpenJDK 64-Bit Server VM (build 12+33, mixed mode, sharing)

$ brew cask info java
java: 12.0.1,69cfe15208a647278a19ef0990eea691
https://jdk.java.net/
/usr/local/Caskroom/java/10.0.1,10:fb4372174a714e6b8c52526dc134031e (396.4MB)
/usr/local/Caskroom/java/12,33 (64B)
From: https://github.com/Homebrew/homebrew-cask/blob/master/Casks/java.rb
==> Name
OpenJDK Java Development Kit
==> Artifacts
jdk-12.0.1.jdk -> /Library/Java/JavaVirtualMachines/openjdk-12.0.1.jdk (Generic Artifact)

Which is cool. 12.0.1, 12+33, and 56.0 are basically the same number.

So I guess I need a lower version of Java. No idea what version of Java will get me this 53.0 "class file," but let's try the previous release. Multiple versions means you need a version manager, and it looks like jenv is Java's version manager.

$ brew install jenv
$ eval "$(jenv init - zsh)"
$ jenv enable-plugin export
$ jenv add $(/usr/libexec/java_home)
$ jenv versions
* system (set by /Users/andrewevans/.jenv/version)
12
openjdk64-12

Jenv can't build or install Java / OpenJDK versions for you, so you have to do that separately via Homebrew, then "add" those versions via jenv add /Some/System/Directory, because Java. Also, the oh-my-zsh plugin doesn't seem to quite work, as it doesn't set the JAVA_HOME env var. I had to manually add the "jenv init" and "enable-plugin" lines to my shell init scripts.

Anyway, let's try Java 11, as 11 is slightly less than 12 and 53 is slightly less than 56.

$ brew tap homebrew/cask-versions
$ brew cask install java11
$ jenv add /Library/Java/JavaVirtualMachines/openjdk-11.0.2.jdk/Contents/Home
$ jenv local 11.0
$ jenv shell 11.0

Had to add both of the latter jenv commands, as I guess jenv local only creates the .java-version file and doesn't actually set JAVA_HOME. Sadly 11.0 is not 53.0, so I still got basically the same error when I ran make build-config.

After asking my coworkers, I learned that Android and our mapbox-java repo use JDK 8. You could install this via a cask called, funnily enough, java8. Except Oracle torpedoed it. Sounds like they successfully ran the "embrace, extend, extinguish" playbook on the "open" OpenJDK, though I am not a Java person and thus do not fully understand the insanity of these versions and licensing issues. tl;dr: Homebrewers had to remove the java8 cask.

Homebrewers seemed to prefer AdoptOpenJDK, which is a perfectly cromulent name and doesn't at all add to the confusion of the dozens of things named "Java." So let's get that installed:

$ brew cask install homebrew/cask-versions/adoptopenjdk8
$ jenv add /Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home
$ cd ~/Workspace/mapbox-java
$ jenv local 1.8
$ jenv shell 1.8 # apparently 'jenv local' wasn't enough??
$ jenv version
1.8 (set by /Directory/.java-version)
$ java -v
Unrecognized option: -v
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
# right, forgot about that, jeebus java please suck less
$ java --version
Unrecognized option: --version
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
# wtf java srsly?!
$ java -version
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_212-b03)
OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.212-b03, mixed mode)
# farking thank you finally
$ make build-config

This seemed to be the right version, and the make build-config command succeeded this time. JDK 8 and 1.8.0 and 53.0 are pretty similar numbers, so in retrospect this should've been obvious. And AdoptOpenJDK has more prefixes before "Java," so I probably should've realized that was the "real" Java.
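
Not something I knew to check at the time, but those class-file version numbers do decode: for modern releases, the major version in a .class file header is the JDK feature version plus 44, so 52 is Java 8, 53 is Java 9, 55 is Java 11, and 56 is Java 12. Here's a tiny after-the-fact Ruby sketch that reads the header of a compiled class, if you ever want to check what actually built it:

# Class-file header layout: 4-byte magic (0xCAFEBABE), 2-byte minor, 2-byte major, all big-endian
magic, _minor, major = File.binread(ARGV[0], 8).unpack("Nnn")
abort "#{ARGV[0]} doesn't look like a class file" unless magic == 0xCAFEBABE
puts "class file major version #{major} (roughly JDK #{major - 44})"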

Anyway, now I can compile the SDK without having to install IntelliJ IDEA or Android Studio, which both seemed kinda monstrous, and who knows what the hell they'd leave around my system. Goooooood times.

</rant>

I joined Mapbox roughly three months ago as a security engineer. I'd been a full-stack engineer for a little over ten years, and it was time for a change. Just made one CRUD form too many, I guess. I've always been mildly paranoid and highly interested in security, so I was delighted when I was offered the position.

It's been an interesting switch. I was already doing a bunch of ops and internal tool engineering at Hired, and that work is similar to what our security team does. We are primarily a support team - we work to help the rest of the engineers keep everything secure by default. We're collaborators and consultants, evangelists and educators. From the industry thought leaders and thinkfluencers I read, that seems to be how good security teams operate.

That said, tons of things are just a little bit different. My coworkers come from a variety of backgrounds - full-stack dev, sysadmin, security researcher - and they tend to come at things from a slightly different angle than I do. Right when I joined, we made an app for our intranet and I thought "It's an intranet app, no scaling needed!" My project lead corrected me, though: "Oh, some pentester is going to scan this, get a thousand 404 errors per hour and DDoS it." That kinda thing has been really neat.

I thought it'd be good to list out some of the differences I've noticed:

code quality

I've worked on a lot of multi-year-old Rails monoliths. You read stuff like POODR and watch talks by Searls because it's important your code scales with the team. Features change, plans change, users do something unexpected and now your architecture needs to turn upside-down. It's worth refactoring regularly to keep velocity high.

In security, sure, it's still important the rest of the team can read and maintain your code. But a lot of your work is one-off scripts or simple automations. Whole projects might be less than 200 LoC, and deployed as a single AWS lambda. Even if you write spaghetti code, at that point it's simple enough to understand, and rewrite from scratch if necessary.

Fixes to larger & legacy projects are usually tweaks rather than overhauls, so there's not much need to consider the overall architecture there, either.

scaling

Definitely less important than in full-stack development. Internal tooling rarely has to scale beyond the size of the company, so you're not going to need HBase. You might need to monitor logs and metrics, but those are likely going to be handled already.

teamwork

Teamwork is, if anything, more important in security than it was in full-stack development. Previously I might have to chat with data science, front-end, or a specialist for some features. In security we need to be able to jump in anywhere and quickly get enough understanding to push out fixes. Even if we end up pushing all the coding to the developers, having a long iteration cycle and throwing code "over the wall" between teams is crappy. It's much better if we can work more closely, pair program, and code review to make sure a fix is complete rather than catch-this-special-case.

You also need a lot of empathy and patience. Sometimes you end up being the jerk who's blocking code or a deploy. Often you are dumping work on busy people. It can be very difficult to communicate with someone who doesn't have a lot of security experience, about a vulnerability in a legacy project written in a language you don't know.

technical challenge

I'm used to technical challenges like "how do we make the UI do this shiny animation?" Or "how do we scale this page to 100k reads / min?" The technical challenges I've faced so far have been nothing like that. They've been more like: "how do we encode a request to this server, such that when it makes a request to that server it hits this weird parsing bug in Node?" and subsequently "how do we automate that test?" Or "how do we dig through all our git repos for high-entropy strings?"

It's not all fun and games, though. There's plenty of work in filling out compliance forms, resetting passwords and permissions, and showing people how & why to use password managers. While not exactly rocket surgery, these are important things that improve the overall security posture of the company, so it's satisfying that way.

deadlines

Most deadlines in consumer-facing engineering are fake. "We want to ship this by the end of the sprint" is not a real deadline. I often referred folks to the Voyager mission's use of a once-per-175-years planetary alignment for comparison. In operations, you get some occasional "the site is slow / down," but even then the goal was to slowly & incrementally improve systems such that the urgent things can be failed over and dealt with in good time.

In security the urgency feels a bit more real. Working with outside companies for things like penetration tests, compliance audits, and live hacking events means real legal and financial risk for running behind. "New RCE vulnerability in Nginx found" means a scramble to identify affected systems and see how quickly we can get a patch out. We have no idea how long we have before somebody starts causing measurable damage either for malicious purposes or just for the lulz.

learning

In full-stack engineering & ops, I would occasionally need to jump into a different language or framework to get something working. Usually I could get by with pretty limited knowledge: patching Redis into a data science system for caching, or fixing an unhandled edge-case in a frontend UI. I felt like I had a pretty deep knowledge of Ruby and some other core tools, and I could pick up whatever else I needed.

There's a ton of learning any time you start at a new company: usually a new language, a new stack, legacy code and conventions. But throwing that out, I've been learning a ton about the lower-level functioning of computers and networks. Node and Python's peculiar handling of Unicode, how to tack an EHLO request onto an HTTP GET request, and how to pick particular requests out of a flood of recorded network traffic.

Also seeing some of the methods of madness that hackers use in the real world: that thing you didn't think would be a big deal because it's rate-limited? They'll script a cron job and let it run for months until it finds something and notifies them.

wrap-up

It's been a blast, and I look forward to seeing what the next three months brings. I'm hopeful for more neat events, more learning, and maybe pulling some cool tricks of my own one of these days.

Security@ Notes

I went to HackerOne's Security@ conference last week, and can vouch that it was pretty cool! Thanks to HackerOne for the invite and to Hired for leave to go around the corner from our building for the day.

My notes and main take-aways:

The name of the conference comes from an email inbox that every company should theoretically have. Ideally you'd have a real vulnerability disclosure program with a policy that lets hackers safely report vulnerabilities in your software. But not every company has the resources to manage that, so at least having a security@ email inbox can give you some warning.

As a company, you probably should not have a bug bounty program unless you are willing to dedicate the resources to managing it. To operate a successful bug bounty, you need to respond quickly to all reports and at least get them triaged. You should have a process in place to quickly fix vulnerabilities and get bounties paid. If hackers have reports sitting out there forever, it frustrates both parties and discourages working with the greater bounty community.

I was surprised during the panel with three of HackerOne's top hackers (by bounty and reputation on their site). Two of them had full-time jobs in addition to pursuing bug bounties. They seemed to treat their hacking like a freelance gig on the side - pursue the quickest & most profitable bounties, and skip over low-rep or slow companies. Personally I find it difficult to imagine having the energy to research other companies for vulnerabilities after a full day's work. But hey, if that's your thing, awesome!

Natalie Silvanovich from Google's Project Zero had a really interesting talk on how to reduce attack surface. It had a lot of similar themes to good product management in general: consider security when plotting a product's roadmap, have a process for allocating time to security fixes, and spend time cleaning up code and keeping dependencies up to date. It's easy to think that old code and old features aren't hurting anyone: the support burden is low, the code isn't getting in the way, and 3% of users love this feature, so why take time & get rid of it? Lowering your attack surface is a pretty good reason.

Coinbase's CSO had an interesting note: the max payout from your bug bounty program is a proxy marker for how mature your program is. If your max bounty is $500, you're probably paying enough bounties that $500 is all you can afford. They had recently raised their max bounty to $50,000 because they did not expect to be paying out a lot of high-risk bounties.

Fukuoka Ruby Night

Last Friday I also went to the Fukuoka Ruby Night. I guess the Fukuoka Prefecture is specifically taking an interest in fostering a tech and startup scene, which is pretty cool. They had talks from some interesting developers from Japan and SF, and they also brought in Matz for a talk. Overall a pretty cool evening.

Matz and another developer talked a bunch about mruby - the lightweight, fast, embeddable version of Ruby. It runs on an ultra-lightweight VM compiled for any architecture, and libraries are linked in rather than interpreted at runtime. I hadn't heard much about it, and figured it was a thing for arduinos or whatever. Turns out it's seen some more impressive use:

  • Yes, arduinos, Raspberry Pi's, and other IoT platforms
  • MItamae - a lightweight Chef replacement distributed as a single binary
  • Nier Automata, a spiffy game for PS4 and PC

Matz didn't have as much to say about Ruby 3. He specifically called out that if languages don't get shiny new things, developers get bored and move on. I guess Ruby 3 will be a good counter-point to the "Ruby is Dead" meme. Ruby 3 will be largely backwards-compatible to avoid getting into quagmires like Ruby 1.9, Python 3, and PHP 6. They are shooting for 3x the performance of Ruby 2.x - the Ruby 3x3 project.

One way the core Ruby devs see for the language to evolve without breaking changes or fundamental shifts (such as a type system) is to focus on developer happiness. Building a language server into the core of Ruby 3 is one example that could drastically improve the tooling for developers.

He also talked about Duck Inference - an "80% compile time type checking" system. This could potentially catch a lot more type errors at compile time without requiring type hints, strict typing or other code-boilerplate rigamarole. Bonus: it would be fully backwards-compatible.

I'm a little skeptical - I personally find CTAGs and other auto-complete tools get in the way about as often as they help. For duck inferencing Matz mentioned saving type definitions and message trees into a separate file in the project directory, for manual tweaking as needed. Sounds like it could end up being pretty frustrating.

Guess we'll see! Matz said the team's goal is "before the end of this decade," but to take that with a grain of salt. Good to see progress in the language and that Ruby continues to have a solid future.

Curses is a C library for terminal-based apps. If you are writing a screen-based app that runs in the terminal, curses (or the "newer" version, ncurses) can be a huge help. There used to be an adapter for Ruby in the standard library, but since Ruby 2.1.0 it's been moved into its own gem.

I took a crack at writing a small app with curses, and found the documentation and tutorials somewhat lacking. But after a bit of learning, and combining with the Verse and TTY gems, I think it came out kinda nice.

Here's a screenshot of the app, which basically stays open and monitors a logfile:

[logwatch demo gif]

There are three sections - the left side is a messages pane, where the app will post "traffic alert" and "alert cleared" messages. The user can scroll that pane up and down with the arrow keys (or k/j if they're a vim addict). On the right are two tables - the top one shows which sections of a web site are being hit most frequently. The bottom shows overall stats from the logs.

Here's the code for it, and I'll step through below and explain what does what:

require "curses"
require "tty-table"
require "logger"

module Logwatch
  class Window

    attr_reader :main, :messages, :top_sections, :stats

    def initialize
      Curses.init_screen
      Curses.curs_set 0 # invisible cursor
      Curses.noecho # don't echo keys entered

      @lines = []
      @pos = 0

      half_height = Curses.lines / 2 - 2
      half_width = Curses.cols / 2 - 3

      @messages = Curses::Window.new(Curses.lines, half_width, 0, 0)
      @messages.keypad true # translate function keys to Curses::Key constants
      @messages.nodelay = true # don't block waiting for keyboard input with getch
      @messages.refresh

      @top_sections = Curses::Window.new(half_height, half_width, 0, half_width)
      @top_sections.refresh

      @stats = Curses::Window.new(half_height, half_width, half_height, half_width)
      @stats << "Stats:"
      @stats.refresh
    end

    def handle_keyboard_input
      case @messages.getch
      when Curses::Key::UP, 'k'
        @pos -= 1 unless @pos <= 0
        paint_messages!
      when Curses::Key::DOWN, 'j'
        @pos += 1 unless @pos >= @lines.count - 1
        paint_messages!
      when 'q'
        exit(0)
      end
    end

    def print_msg(msg)
      @lines += Verse::Wrapping.new(msg).wrap(@messages.maxx - 10).split("\n")
      paint_messages!
    end

    def paint_messages!
      @pos ||= 0
      @messages.clear
      @messages.setpos(0, 0)
      @lines.slice(@pos, Curses.lines - 1).each { |line| @messages << "#{line}\n" }
      @messages.refresh
    end

    def update_top_sections(sections)
      table = TTY::Table.new header: ['Top Section', 'Hits'], rows: sections.to_a
      @top_sections.clear
      @top_sections.setpos(0, 0)
      @top_sections.addstr(table.render(:ascii, width: @top_sections.maxx - 2, resize: true))
      @top_sections.addstr("\nLast refresh: #{Time.now.strftime('%b %d %H:%M:%S')}")
      @top_sections.refresh
    end

    def update_stats(stats)
      table = TTY::Table.new header: ['Stats', ''], rows: stats.to_a
      @stats.clear
      @stats.setpos(0, 0)
      @stats.addstr(table.render(:ascii, width: @stats.maxx - 2, resize: true))
      @stats.addstr("\nLast refresh: #{Time.now.strftime('%b %d %H:%M:%S')}")
      @stats.refresh      
    end

    def teardown
      Curses.close_screen
    end

  end
end

Initialize

In initialize, we do some basic setup of the curses gem - this sets up curses to handle all rendering to the terminal window.

Curses sets up a default Curses::Window object to handle rendering and listening for keyboard input, accessible from the stdscr method. That's where Curses.lines and Curses.cols come from; they represent the whole terminal.

I initially tried using the default window's subwin method to set up the panes used by the app, but that proved to add a whole bunch of complication for no actual benefit. Long ago it may have provided a performance boost, but we're well past that, I think.

Also tried using the Curses::Pad class so I wouldn't have to handle scrolling myself, but that also had tons of wonky behavior. Rendering yourself isn't that hard; save the trouble.

To handle keyboard input, we set keypad(true) on the messages window. We also set nodelay = true (yes, one is a method call, the other is assignment, no idea why) so we can call .getch but still update the screen while waiting for input.

The two stats windows, we initialize mostly empty. Then call refresh on all three to get them set up on the active terminal.

Main Render Loop

The class that actually loops and takes action is not the window manager (it isn't shown here), but the interface is pretty simple. There's a loop that checks for updates from the log file, updates the stats data store, then calls the two render methods for the stat windows. It also tells the window manager to handle any keyboard input, and will call print_msg() if it needs to add an alert or anything to the main panel.
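
Just to make that shape concrete, here's a minimal sketch of what the driver loop could look like - this isn't the real driver class, and it fakes the log-parsing side with a simple counter, but it should run against the Window class above (assuming that file and its requires are loaded):

window = Logwatch::Window.new
hits = Hash.new(0) # stand-in for the real stats store
begin
  loop do
    window.handle_keyboard_input # returns immediately thanks to nodelay = true
    hits["/fake-section"] += 1   # the real app parses new log lines here instead
    window.update_top_sections(hits.sort_by { |_section, count| -count })
    window.update_stats("total hits" => hits.values.sum)
    window.print_msg("traffic alert!") if (hits.values.sum % 50).zero?
    sleep 1
  end
ensure
  window.teardown # always reset the terminal, even on an interrupt
end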

The main way to get text onto the screen is to call addstr() or << on a Curses::Window , then call refresh() to paint the buffer to the screen.

The Window has a cursor, and it will add each character from the string and advance that, just like in a text editor. It tries to do a lot of other stuff; if you add characters beyond what the screen can show, it will scroll right and hide the first n columns. If you draw too many lines it will scroll down and provide no way to scroll back up. I tried dealing with scrl() and scroll() methods and such, but could never get the behavior working well. In the end, I did it manually.

I used the verse gem to wrap lines of text so that we never wrote past the window boundaries. The window manager keeps an array of all lines that have been printed during the program, and a position variable representing how far we've scrolled down in the buffer. On each update it:

  1. clears the Curses::Window buffer
  2. moves the cursor back to (0,0)
  3. prints the lines within range to the Curses::Window
  4. calls refresh() to paint the Curses::Window buffer to the screen

The stats windows are basically the same. I used the TTY::Table gem from the tty-gems collection to handle rendering the calculated stats into pretty ASCII tables.
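
Outside of curses, the table rendering itself is just a couple of lines. Here's a standalone example with made-up sample data - roughly what update_top_sections builds before handing the string to addstr:

require "tty-table"

# Fake per-section hit counts, just to show the render call
table = TTY::Table.new header: ["Top Section", "Hits"], rows: [["/api", 1203], ["/blog", 407]]
puts table.render(:ascii, width: 40, resize: true)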

Teardown

The teardown method clears the screen, which resets the terminal to non-visual mode. The handle_keyboard_input method calls exit(0) when a user wants to quit, but the larger program handles the interrupt signal and uses an ensure block so the teardown method always gets called.

Wrap

Hope that's helpful! I had the wrong model of how all this stuff worked in my head for most of the development of this simple app. Maybe having the model I eventually arrived at laid out here will be useful.

I wanted to make a simple key-value server in Elixir - json in, json out, GET, POST, with an in-memory map. The point is to reinvent the wheel, and learn me some Elixir. My questions were: a) how do I build this without Phoenix and b) how do I persist state between requests in a functional language?

Learning new stuff is always painful, so this was frustrating at points and harder than I expected. But I want to emphasize that I did get it working, and do understand a lot more about how Elixir does things - the community posts and extensive documentation were great, and I didn't have to bug anyone on StackOverflow or IRC or anything to figure all this out.

Here's what the learning & development process sounded like from inside my head.

  1. First, how do I make a simple JSON API without Phoenix? I tried several tutorials using Plug alone. Several of them were out of date / didn't work. Finally found this one, which was up-to-date and got me going. Ferret was born!

  2. How do I reload code when I change things without manually restarting? Poked around and found the remix app.

  3. Now I can take JSON in, but how do I persist across requests? I think we need a subprocess or something? That's what CodeShip says, anyhow.

  4. Okay, I've got an Agent. So, where do I keep the agent PID so it's reusable across requests?

  5. Well, where the heck does Plug keep session data? That should be in-memory by default, right? Quickly, to the source code!

  6. Hrm, well, that doesn't tell me a lot. Guess it's abstracted out, and in a language I'm still learning.

  7. Maybe I'll make a separate plug to initialize the agent, then dump it into the request bag-of-data?

    Pretty sure plug MyPlug, agent_thing: MyAgent.start_link will work. Can store that in my Plug's options, then add it to Conn so it's accessible inside requests

  8. Does a plug's init/1 get called on every request, or just once? What about my Router's init/1 ? Are things there memoized?

  9. Guess I'll assume the results are stored and passed in as the 2nd arg to call/2 in my plug.

  10. Wait, what does start_link return?

    14:15:15.422 [error] Ranch listener Ferret.Router.HTTP had connection process started with :cowboy_protocol:start_link/4 at #PID<0.335.0> exit with reason: {{%MatchError{term: [iix: {:ok, #PID<0.328.0>}]}
    
  11. WHY DO I KEEP GETTING THIS?!

    ** (MatchError) no match of right hand side value: {:ok, #PID<0.521.0>}
    (ferret) lib/plug/load_index.ex:10: Ferret.Plug.LoadIndex.init/1
    
  12. figures out how to assign arguments

    turns out [:ok, pid] and {:ok, pid} and %{"ok" => pid} are different things

  13. futzes about trying various things to make that work

  14. How do I log stuff, anyway? Time to learn Logger.

  15. THE ROUTE IS RIGHT THERE WHAT THE HELL?!

    14:29:45.127 [info]  GET /put
    
    14:29:45.129 [error] Ranch listener Ferret.Router.HTTP had connection process started with :cowboy_protocol:start_link/4 at #PID<0.716.0> exit with reason: {{%FunctionClauseError{arity: 4,
    
  16. half an hour later - Oh, I'm doing a GET request when I routed it as POST. I'm good at programmering! I swear! I'm smrt!

  17. Turns out Conn.assign/3 and conn.assigns are how you put things in a request - not Conn.put_private/3 like plug/session uses.

  18. Okay, I've got my module in the request, and the pid going into my KV calls

  19. WTF does this mean?!?!

    Ranch listener Ferret.Router.HTTP had connection process started with :cowboy_protocol:start_link/4 at #PID<0.298.0> exit with reason: {{:noproc, {GenServer, :call, [#PID<0.292.0>,
    
  20. bloody hours pass

  21. The pid is right bloody there! Logger.debug shows it's passing in the same pid for every request!

  22. Maybe it's keeping the pid around, but the process is actually dead? How do I figure that out? tries various things

  23. Know what'd be cool? Agent.is_alive? . Things that definitely don't work:

    1. Process.get(pid_of_dead_process)
    2. Process.get(__MODULE__)
    3. Process.alive?(__MODULE__)

    Which is weird, since an Agent is a GenServer is a Process (as far as I can tell). This article on "Process ping-pong" was helpful.

  24. Finally figured out to use GenServer.whereis/1, passing in __MODULE__, and that will return nil if the proc is dead, and the pid if it's alive.

  25. Turns out I don't need my own plug at all: just init the Agent with the __MODULE__ name, and I can reference it by that, just like a GenServer.

  26. IT'S STILL SAYING :noproc ! JEEBUS!

  27. Okay, I guess remix doesn't re-run Ferret.Router.init/1 when it reloads the code for the server. So when my Agent dies due to an ArgumentError or whatever, it never restarts and I get this :noproc crap.

  28. I'll just manually restart the server - I don't want to figure out supervisors right now.

  29. This seems like it should work, why doesn't it work? Agent.get_and_update __MODULE__, &Map.merge(&1, dict)

  30. Is it doing a javascript async thing? Do I need to tell Plug to wait on the response to get_and_update ?

  31. Would using Agent.update and then Agent.get work? Frick, I dunno, how async are we getting here? All. the. examples. use a pid instead of a module name to reference the agent.

  32. How would I even tell plug to wait on an async call?

  33. Oh, frickin'! get_and_update/3 has to return a tuple , and there's no function that does single-return-value-equals-new-state.

    I need a function that takes the new map, merges it with the existing state, then duplicates the new map to return, but get_and_update/3 's function argument only receives the current state and doesn't get the arguments.

    get_and_update/4 supposedly passes args, but you have to pass a Module & atom instead of a function. I couldn't make that work, either.

  34. Does Elixir have closures? I mean, that wouldn't make a lot of sense from a "pure functions only" perspective, but in Ruby it'd be like

    new_params = conn.body_params
    Agent.get_and_update do |state|
      new_state = Map.merge(state, new_params)
      [new_state, new_state]
    end
    

    ...errr, whelp, no, that doesn't work.

  35. The Elixir crash-course guide doesn't mention closures, and I'm not getting how to do this from the examples.

  36. hours of fiddling

  37. uuuuugggghhhhhhhh functional currying monad closure pipe recursions are breaking my effing brain. You have to make your own curry, or use a library. This seems unnecessary for such a simple dang thing.

  38. Is there a difference between Tuple.duplicate(Map.merge(&1, dict), 2) and Map.merge(&1, dict) |> Tuple.duplicate(2) ? I dunno, neither one of those are working.

  39. What's the difference between?????

    1. def myfunc do ... end ; &myfunc
    2. f = fn args -> stuff end ; &f
    3. &(do_stuff)
  40. Okay, this is what I want: &(Map.merge(&1, dict) |> Tuple.duplicate 2)

    Why is dict available inside this captured function definition? I dunno.

  41. BOOM OMG IT'S WORKING! Programming is so cool and I'm awesome at it and this is the best!

  42. Let's git commit!

  43. Jeebus, I better write this crap down so I don't forget it. Maybe someone else will find it useful. Wish I coulda Google'd this while I was futzing around.

  44. I'm gonna go murder lots of monsters with my necromancer while my brain cools off. Then hopefully come back and figure out:

    1. functions and captures
    2. pipe operator's inner workings
    3. closures???
    4. supervisors

Links I used:

Elixir Getting-Started Guide

Maps: elixir-lang

Logging with Logger

Processes & State

Statefulness in a Stateful Language (CodeShip)

Processes to Hold State

When to use Processes in Elixir

Elixir Process Ping-Pong

Using Agents in Elixir

Agent - elixir-lang

Concurrency Abstractions in Elixir (CodeShip)

GenServer name registration (hexdocs)

GenServer.whereis - for named processes

Agent.get_and_update (hexdocs) - hope you are good with currying: no way to pass args into the update function unless you can pass a module & atom (and that didn't work for me)

Plug

How to build a lightweight webhook endpoint with Elixir

Plug (Elixir School) - intro / overview

Plug body_params - StackOverflow

plug/session.ex - how do they get / store session state?

Plug.Conn.assign/3 (hexdocs)

Plug repo on Github

Function Composition

Currying and Partial Application in Elixir

Composing Elixir Functions

Breaking Up is Hard To Do

Function Currying in Elixir

Elixir Crash Course - partial function applications

Partial Function Application in Elixir

Elixir vs Ruby vs JS: closures

Was looking at our AWS configuration audit in Threat Stack today. One issue it highlighted was that some of our security groups had too many ports open. My first guess was that there were vestigial "default" groups created from hackier days of adding things from the console.

But before I could go deleting them all, I wanted to see if any were in use. I'm a lazy, lazy man, so I'm not going to click around and read stuff to figure it out. Scripting to the rescue!

#!/usr/bin/env ruby

require 'bundler/setup'
require 'aws-sdk'
require 'json'

client = Aws::EC2::Client.new

groups = client.describe_security_groups

SecGroup = Struct.new(:open_count, :group_name, :group_id) do
  def to_json(*a)
    self.to_h.to_json(*a)
  end
end

open_counts = groups.security_groups.map do |group|
  counts = group.ip_permissions.map {|ip| ip.to_port.to_i - ip.from_port.to_i + 1 }
  # seed inject with 0 so a group with no inbound rules sums to 0 instead of nil
  SecGroup.new counts.inject(0, :+), group.group_name, group.group_id
end

wide_opens = open_counts.select {|oc| oc.open_count > 1000 }

if wide_opens.empty?
  puts "No wide-open security groups! Yay!"
  exit(0)
end

puts "Found some wide open security groups:"
puts JSON.pretty_generate(wide_opens)

Boxen = Struct.new(:instance_id, :group, :tags) do
  def to_json(*a)
    self.to_h.to_json(*a)
  end
end

instances_coll = wide_opens.map do |group|

  resp = client.describe_instances(
    dry_run: false,
    filters: [
      {
        name: "instance.group-id",
        values: [group.group_id],
      }
    ]
  )

  resp.reservations.map do |r|
    r.instances.map do |i|
      Boxen.new(i.instance_id, group, i.tags)
    end
  end
end

instances = instances_coll.flatten

puts "Being used by the following instances:"
puts JSON.pretty_generate(instances)

Something to throw in the 'ole snippets folder. Maybe it'll help you, too!
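
One note if you do drop it into your own snippets folder: the script assumes Bundler and the aws-sdk gem (credentials come from the usual environment variables or ~/.aws/credentials), so a minimal Gemfile for it would be something like:

source "https://rubygems.org"

gem "aws-sdk" # provides Aws::EC2::Client, which the script uses

Then bundle install and run it with bundle exec.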

Every now and then I have to explain why I like working remote, why I don't like offices, and why I hate open offices in particular.

I'm an introvert at heart. I can be social, I do like hanging out with people, and I get restless and depressed if I'm alone at home for a week or two. But I have to manage my extroversion -- make sure I allocate sufficient time to quiet, introverted activities. Reading books, single-player games, hacking on side projects, etc.

To do great work, I need laser-like focus. I need multi-hour uninterrupted blocks of time. Many engineers feel the same way - see Paul Graham's oft-cited "Maker's Schedule" essay.

Open offices are the worst possible fucking environment for me.

Loud noises at random intervals make me jump out of my skin - and I don't even have PTSD or anything. I need loud music delivered via headphones to get around the noise inherent to open offices.

Constant movement in my peripheral vision is a major distraction. I often have to double-check to see if someone is trying to talk to me because of the aforementioned headphones. I message people on Slack to see if they have time to chat, but plenty of people think random shoulder-taps are great.

Privacy is important to me. People looking over my shoulder at what I'm doing makes me itch. I feel like people are judging me in real-time based on my ugly, unfinished work. Even if they're talking to someone else, I get paranoid and want to know if they're looking.

If you follow Reddit, Hacker News, or any tech or programming related forums, you'll see hate-ons for open offices pop up every month or two. Here's a summary.

Link Roundup:

PeopleWare: Productive Projects and Teams (3rd Edition) (originally published: 1987)

Washington Post: Google got it wrong. The open-office trend is destroying the workplace.

Fast Company: Offices For All! Why Open-Office Layouts Are Bad For Employees, Bosses, And Productivity

BBC: Why open offices are bad for us | [Hacker News thread]

CNBC: 58% of high-performance employees say they need more quiet work spaces

Mental Floss: Working Remotely Makes You Happier and More Productive

Nathan Marz: The inexplicable rise of open floor plans in tech companies (creator of Apache Storm) [Hacker News thread]

Various. Reddit. Threads. Complaining.

Slashdot. Hates. Them. Too.

The hacking group that leaked NSA secrets claims it has data on foreign nuclear programs - The Washington Post - We are officially in the cyberpunk era of information warfare

URL Validation - A guide and example regex for one of those surprisingly difficult problems

Cookies are Not Accepted - New York Times - “Protecting Your Digital Life in 8 Easy Steps”, none of which is “keep your software updated” ~ @pinboard

adriancooney/console.image: The one thing Chrome Dev Tools didn't need. - console.image("http://i.imgur.com/hv6pwkb.png"); (yes, I added a stupid easter egg)

PG MatViews for better performance in Rails 🚀 - Postgresql Materialized Views for fast analytics queries

An Abridged Cartoon Introduction To WebAssembly – Smashing Magazine

Fixing Unicode for Ruby Developers – DaftCode Blog - Another surprisingly difficult problem: storing, reading, and interpreting text in files

Strength.js - Password strength indicator w/ jQuery

Obnoxious.css - Animations for the strong of heart, and weak of mind