I joined Mapbox roughly three months ago as a security engineer. I'd been a full-stack engineer for a little over ten years, and it was time for a change. I'd made one CRUD form too many, I guess. I've always been mildly paranoid and highly interested in security, so I was delighted when I was offered the position.
It's been an interesting switch. I was already doing a bunch of ops and internal tool engineering at Hired, and that work is similar to what our security team does. We are primarily a support team - we work to help the rest of the engineers keep everything secure by default. We're collaborators and consultants, evangelists and educators. From the industry thought leaders and thinkfluencers I read, that seems to be how good security teams operate.
That said, tons of things are just a little bit different. My coworkers come from a variety of backgrounds - full-stack dev, sysadmin, security researcher - and they tend to come at things from a slightly different angle than I do. Right when I joined, we built an app for our intranet and I thought, "It's an intranet app, no scaling needed!" My project lead corrected me, though: "Oh, some pentester is going to scan this, get a thousand 404 errors per hour, and DDoS it." That kinda thing has been really neat.
I thought it'd be good to list out some of the differences I've noticed:
I've worked on a lot of multi-year-old Rails monoliths. You read stuff like POODR and watch talks by Searls because it's important your code scales with the team. Features change, plans change, users do something unexpected and now your architecture needs to turn upside-down. It's worth refactoring regularly to keep velocity high.
In security, sure, it's still important that the rest of the team can read and maintain your code. But a lot of your work is one-off scripts or simple automations. Whole projects might be less than 200 LoC, deployed as a single AWS Lambda. Even if you write spaghetti code, at that size it's simple enough to understand - and to rewrite from scratch if necessary.
Fixes to larger & legacy projects are usually tweaks rather than overhauls, so there's not much need to consider the overall architecture there, either.
Scaling is definitely less important than in full-stack development. Internal tooling rarely has to scale beyond the size of the company, so you're not going to need HBase. You might need to monitor logs and metrics, but those are likely handled already.
Teamwork is, if anything, more important in security than it was in full-stack development. Previously I might have to chat with data science, front-end, or a specialist for some features. In security we need to be able to jump in anywhere and quickly get enough understanding to push out fixes. Even if we end up pushing all the coding to the developers, having a long iteration cycle and throwing code "over the wall" between teams is crappy. It's much better if we can work more closely, pair program, and code review to make sure a fix is complete rather than catch-this-special-case.
You also need a lot of empathy and patience. Sometimes you end up being the jerk who's blocking code or a deploy. Often you're dumping work on busy people. And it can be very difficult to communicate about a vulnerability in a legacy project, written in a language you don't know, with someone who doesn't have much security experience.
I'm used to technical challenges like "how do we make the UI do this shiny animation?" Or "how do we scale this page to 100k reads / min?" The technical challenges I've faced so far have been nothing like that. They've been more like: "how do we encode a request to this server, such that when it makes a request to that server it hits this weird parsing bug in Node?" and subsequently "how do we automate that test?" Or "how do we dig through all our git repos for high-entropy strings?"
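That last challenge - hunting for secrets in git repos - is the kind of job a short script handles. A minimal sketch of the idea (the regex, threshold, and function names here are illustrative, not our actual tooling):

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of s; random secrets score high."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

# Candidate tokens: long runs of base64-ish characters.
TOKEN_RE = re.compile(r"[A-Za-z0-9+/=_\-]{20,}")

def find_high_entropy_strings(text: str, threshold: float = 4.5):
    """Flag tokens whose entropy suggests a randomly generated key."""
    return [t for t in TOKEN_RE.findall(text) if shannon_entropy(t) > threshold]
```

Point that at every blob in every repo's history and you have a crude secret scanner; the real work is tuning the threshold and triaging the false positives.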
It's not all fun and games, though. There's plenty of work in filling out compliance forms, resetting passwords and permissions, and showing people how & why to use password managers. While not exactly rocket surgery, these are important things that improve the overall security posture of the company, so it's satisfying that way.
Most deadlines in consumer-facing engineering are fake. "We want to ship this by the end of the sprint" is not a real deadline. I often referred folks to the Voyager mission's use of a once-per-175-years planetary alignment for comparison. In operations, you get some occasional "the site is slow / down," but even there the goal was to slowly & incrementally improve systems so that urgent things could be failed over and dealt with in good time.
In security the urgency feels a bit more real. Working with outside companies for things like penetration tests, compliance audits, and live hacking events means real legal and financial risk for running behind. "New RCE vulnerability in Nginx found" means a scramble to identify affected systems and see how quickly we can get a patch out. We have no idea how long we have before somebody starts causing measurable damage either for malicious purposes or just for the lulz.
In full-stack engineering & ops, I would occasionally need to jump into a different language or framework to get something working. Usually I could get by with pretty limited knowledge: patching Redis into a data science system for caching, or fixing an unhandled edge-case in a frontend UI. I felt like I had a pretty deep knowledge of Ruby and some other core tools, and I could pick up whatever else I needed.
There's a ton of learning any time you start at a new company: usually a new language, a new stack, legacy code and conventions. Setting that aside, I've still been learning a ton about the lower-level workings of computers and networks: Node and Python's peculiar handling of Unicode, how to tack an EHLO request onto an HTTP GET request, and how to pick particular requests out of a flood of recorded network traffic.
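To give a flavor of the Unicode side of that (these are standard Python behaviors, not the specific bug I was chasing), normalization and case mapping can quietly fold "exotic" characters into plain ASCII - which matters a lot when one layer validates the raw string and another layer normalizes it first:

```python
import unicodedata

# NFKC normalization turns the single ligature character U+FB01 into "fi",
# so "ﬁle" becomes "file" after normalization.
assert unicodedata.normalize("NFKC", "\ufb01le") == "file"

# Case mapping does the same kind of folding:
assert "\u0131".upper() == "I"   # dotless i upper-cases to ASCII "I"
assert "\u212a".lower() == "k"   # KELVIN SIGN lower-cases to ASCII "k"
```

If a filter checks for a forbidden word before normalization and the application compares after it, characters like these are exactly how requests sneak through.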
I'm also seeing some of the methods of madness that hackers use in the real world: that thing you didn't think would be a big deal because it's rate-limited? They'll script a cron job and let it run for months until it finds something and notifies them.
It's been a blast, and I look forward to seeing what the next three months bring. I'm hopeful for more neat events, more learning, and maybe pulling some cool tricks of my own one of these days.