a t e v a n s . c o m

(╯°□°)╯︵ <ǝlqɐʇ/>

On BitTorrent's upcoming chat app:

To start, users will be able to choose how they use our chat app. If you are porting in contact lists, you have the convenience of signing up with email or with a phone number. You will also have the option to sign up in Incognito mode, using no such information at all.

What we are building for the Alpha will also address users communicating with a trusted source who prefer their communication to be device-to-device (decentralized). This means no hops through any 3rd party servers, and no chance of anything being intercepted.

For users who may prefer to have their metadata obscured, messages will be indirect and routed through a third node. It is all a matter of preference.

Showing users directly how their message is being routed is genius. Using relay servers to obscure metadata is cool, but how would users know which relay servers to trust? Or are they supposed to set up their own?

I'm also curious if mesh networking will play a role here. It really sets up some cool cyberpunk scenarios where a journalist and a source can "meet" by being on the same block or in the same highrise, but without knowing what the other person looks like, their real identity, or exactly where they are - and being able to chat without the NSA et al being able to pick up the traffic.

The design integrity of your system is far more important than being able to test it at any particular layer. Stop obsessing about unit tests, embrace backfilling of tests when you're happy with the design, and strive for overall system clarity as your principal pursuit.

I think it's hilarious that TDD has gone too far for DHH. And I like his thoughts around the matter - test what's important, at the important levels. Don't add dozens of gems and layers of indirection and library magic just for testing.

This series means to teach you everything you need to know to implement any different caching level inside your Rails application. It assumes you know nothing at all about caching in any of its forms. It takes you from zero knowledge to an intermediate level in all areas. If you can't implement caching in your app after reading this then I've failed.

Lovely reference. Thanks, Mr Hawkins!

So, how hard could it be to build my own music player backend? Seems like it would be a matter of solving these things:

  • Use a robust library for audio decoding. How about the same one that VLC uses?
  • Support adding and removing entries on a playlist for gapless playback.
  • Support pause, play, and seek.
  • Per-playlist-item gain adjustment so that perfect loudness compensation can be implemented.
  • Support loudness scanning to make it easy to implement for example ReplayGain.
  • Support playback to a sound device chosen at runtime.
  • Support transcoding audio into another format so a player can implement, for example, HTTP streaming.
  • Give raw access to decoded audio buffers just in case a player wants to do something other than one of the built-in things.
  • Try to get other projects to use it to benefit from code reuse.
  • Make the API generic enough to support other music players and other use cases.
  • Get it packaged into Debian and Ubuntu.
  • Make a blog post about it to increase awareness.
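For a sense of what that backend's API might look like, here's a minimal Ruby sketch of just the playlist and per-item-gain pieces. The names (Playlist, Item, gain_linear) are invented for illustration - they're not from any real library:

```ruby
# Hypothetical sketch of a playlist backend with per-item gain adjustment.
class Item
  attr_reader :path, :gain_db

  def initialize(path, gain_db: 0.0)
    @path = path
    @gain_db = gain_db  # e.g. the result of a ReplayGain loudness scan
  end

  # Convert the per-item dB adjustment into the linear factor you'd
  # multiply decoded samples by.
  def gain_linear
    10.0 ** (gain_db / 20.0)
  end
end

class Playlist
  def initialize
    @items = []
  end

  # Inserting anywhere in the queue (not just appending) is what makes
  # gapless playback practical: the next track can be decoded ahead of time.
  def insert(item, at: @items.length)
    @items.insert(at, item)
  end

  def remove(item)
    @items.delete(item)
  end

  def to_a
    @items.dup
  end
end
```

The dB-to-linear conversion is the standard one; everything else is just shaped to match the feature list above.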

Playing music on a computer is almost as hard as reading text files. At this point, I'd be pretty happy with a player that:

  1. plays all tracks in a folder (since no one can get artist / album / compilation right)
  2. displays things in a list view (album art sucks if you download lots of independent or unpublished music)
  3. syncs locally on my devices - Mac, Windows, and iOS. Dropbox would be a great option here.
  4. plays your music through a browser

The loudness compensation and gain equalizing is not a big deal to me - I prefer to listen to DJ mixes and entire albums, so the next track is rarely going to be mixed differently than the previous, and it's too easy to introduce playback bugs there.

iTunes fails at all of these - syncing with iTunes Match blows, you have to make playlists for every folder, and it loooooves album art (even when most of it is missing). Also, it crashes constantly on Windows.

Google Play Music (wtf branding) fails at 1 and 2, and I didn't get far enough to experiment with 3. Seems good enough at 4.

Rdio doesn't let you play your own music, and they sure as hell aren't gonna have the new (free!) album from Illectrix or Savoy's new album, Self Predator.

Spotify has terrible local vs cloud syncing, and pretty bad playlist management. Basically, if you're not getting all your music from their landfill of pop hits, get lost.

Amazon Music looks like it hits 2, 3 and 4. I have enough playlists at this point that maybe 1 won't be a problem.

Some random, disorganized thoughts after finishing "Good to Great" by Jim Collins. Here's a great slideshare with the Cliff Notes, though they're not going to mean much unless you've read through the examples.

This book seems to avoid just being a collection of survivor bias stories. Every company they researched had a comparison company from the same industry at the same time that didn't become great. They also had a collection of companies that turned great for a little while, but couldn't keep it up. The book is mostly about what differentiates the long-term great companies from the merely-good-to-terrible ones.

Leadership: it's not about strategy or style. Great leaders tend to be quiet, humble, self-effacing, and yet doggedly persistent and nigh-unstoppable once they've made a decision. They take their time, deliberate, have heated discussions with their trusted advisors & friends, then pursue a path like the Terminator.

The right people. You have to start with the right people. The right people agree with your philosophy, are excited about what they're doing, and have it in their character to do the kind of work they do. Skills can be learned; this is about character traits. Once you have the right people, you have to keep acquiring more of the right people. As soon as you hire the wrong people, you are playing a losing game. No compromises here.

You should never have to motivate your people. They should be intrinsically motivated. Your job is to prevent them from getting de-motivated.

Don't fire someone immediately if they don't seem right - they might just not be in the right position.

The Stockdale paradox: confront the brutal facts. Get unfiltered data. See if what you are doing is working. Maintain unwavering faith that your story ends with triumph; that you retire with your company on top of the world. But at the same time, do not translate this into short-term visions of big growth. If you say "we will be this much better by Christmas" you are setting yourself up for failure.

Find your hedgehog concept. This is something your company does that

  1. Really taps into the passions of your people
  2. Drives your economics, however you measure that (revenue per customer, per visit, per sale, etc)
  3. Is something you could plausibly be the best in the world at

This is the core of your company - make sure you hammer & refine this constantly, and say no to anything that is outside this concept. It could be a product or business: for Walgreens it was "the most convenient drugstores." Or it could be a process: for GE it was "building & training the best executive talent."

The flywheel - momentum is a real thing for groups of people. It's very slow to get going, but once it does, it builds and builds. It helps you attract the right people, build resources, and chase the right initiatives. If you change directions constantly, you are not turning the flywheel - you are not building momentum. If you think some new initiative or acquisition will motivate people, quickly produce big results, and get that momentum started -- you're wrong. It won't. Momentum builds over time, exclusively.

Your company needs to have some reason for existing beyond making money. It needs to have core values that it holds to; only it doesn't matter what those values are. Philip Morris has core values and holds to them strongly, even though those values include poisoning millions of people.

Overall summary: This book is great. It contains a lot of lessons we've heard before, but it puts them all together into a coherent structure, and brings in real qualitative data to back them up. It's reinforced a lot of the beliefs I had about running a good company, and I'm hoping that's not just selection bias.

webkit_server hangs periodically when run from Capybara in Ruby

I am having a problem where an instance of webkit_server with Capybara and capybara-webkit running headless connected to a local Xvfb screen hangs when visiting a URL. It seems to happen after several minutes of repeatedly visiting different URLs and executing finders. (I'm using capybara for a screen scraping application in vanilla Ruby, not for testing.)

I've confirmed that when it hangs the site is still accessible (for example, through curl or wget on the command line). I've also tried wrapping the Ruby code that invokes the visit and subsequent finders in a Timeout block so that after 60 seconds of waiting a new URL is visited, but any visit() attempt fails after the first time this occurs. The only way to fix the problem is to kill both the Ruby process invoking Capybara/capybara-webkit and the webkit_server process and restart.
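For reference, the Timeout wrapper described above looks roughly like this. It's a sketch - `session` stands in for any object with a visit method (like a Capybara::Session), and as noted, once the timeout fires every later visit fails anyway, so this only detects the hang rather than recovering from it:

```ruby
require "timeout"

# Give each visit a fixed budget instead of hanging forever.
# Returns true if the visit completed, false if it timed out.
# After a timeout, the only real fix observed was killing and
# restarting both the Ruby process and webkit_server.
def visit_with_timeout(session, url, seconds: 60)
  Timeout.timeout(seconds) { session.visit(url) }
  true
rescue Timeout::Error
  false
end
```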

When I strace the webkit_server process, I see output like this repeatedly:

clock_gettime(CLOCK_MONOTONIC, {5821, 680279627}) = 0
gettimeofday({1330890176, 712033}, {0, 33052112}) = 0
gettimeofday({1330890176, 712087}, {0, 140736435864256}) = 0
gettimeofday({1330890176, 712137}, {0, 33108640}) = 0
clock_gettime(CLOCK_MONOTONIC, {5821, 680486036}) = 0
clock_gettime(CLOCK_MONOTONIC, {5821, 680530091}) = 0
read(7, 0x1fac1b4, 4096)                = -1 EAGAIN (Resource temporarily unavailable)

And if I strace the Ruby process that invokes it, it is hung on a read():

Process 3331 attached - interrupt to quit
^C <unfinished ...>
Process 3331 detached

I know that the Ruby code hangs on the Capybara visit() method.

Any ideas on how to troubleshoot or correct this are appreciated. I'm assuming the problem has something to do with some resource webkit_server needs to visit the URL, but I'm not sure what to try next.


Selected Answer (from Grimmo)

Capybara.server do |app, port|
  require 'rack/handler/thin'
  Rack::Handler::Thin.run(app, :Port => port)
end

Seems to work.

Oculus VR gets bought by Facebook for $2bil:

Most important, Facebook understands the potential for VR. Mark and his team share our vision for virtual reality’s potential to transform the way we learn, share, play, and communicate. Facebook is a company that believes that anything is possible with the right group of people, and we couldn’t agree more.

I never would have predicted my hero John Carmack working for Mark Zuckerberg. In "Masters of Doom," he expressed interest in creating a metaverse as portrayed in Neal Stephenson's "Snow Crash" after completing Quake. But we kind of already had that - it was called Second Life. And it wasn't very good. Or useful. I don't see VR making the "virtual social space" any more engaging. The problem with 3D gaming-based social spaces is that they don't provide any real benefits beyond something like Google Hangouts besides better avatar customization, and narcissistic self-expression doesn't have a huge business case.

That's about the only thing I can think of that Facebook could do with this technology, besides maybe viewing 3D videos or photos from Instagram. Maybe they're just looking for a competitor to Google Glass, but that's definitely not the direction Oculus was going, and I think they would do better to stay focused as a gaming company for the next 10 years or so.

This could go well. Facebook bought Parse, and they still operate as a mostly independent entity doing great work. But I was more excited about Oculus when they were an independent company, and I'd have been happy for them had Valve bought them out.

Update: Palmer Luckey answers some of the hate via Reddit. Seems like they definitely intend to go the Parse route, and the main driver was using Facebook's money to lower hardware costs, build their own components, do publishing deals with indies, and hire more people. Seems pretty reasonable. We'll see how it plays out; I just hope they don't stray from gaming. No one outside game devs has the 3D chops to do VR right now.

From Cult of Mac:

It’s a new kind of app because it uses an iOS feature unavailable until version 7: the Multipeer Connectivity Framework. The app was developed by the crowdsourced connectivity provider Open Garden and this is their first iOS app.

The Multipeer Connectivity Framework enables users to flexibly use WiFi and Bluetooth peer-to-peer connections to chat and share photos even without an Internet connection. Big deal, right?

But here’s the really big deal — it can enable two users to chat not only without an Internet connection, but also when they are far beyond WiFi and Bluetooth range from each other — connected with a chain of peer-to-peer users between one user and a far-away Internet connection.

It’s called wireless mesh networking. And Apple has mainstreamed it in iOS 7. It’s going to change everything.

Holy. Crap.

I've been getting excited about mesh networks lately, mostly due to reading too much sci-fi. I try to follow Apple developer news, as I'm always just on the edge of writing a mobile app. And yet, I hadn't heard about this.

Okay, okay. Forget the stuff about chatting during a marathon through the woods in Washington (as cool as that is). This could be a major, major fix for the internet as a whole. Let's take some examples.

Netflix has agreed to pay Comcast to stream videos across the pipes. That's the beginning of the end for network neutrality, and anyone who consistently sends a lot of data over the tubes is in danger of being put in the slow lane (or taken off completely) unless they pay carriers. That's horrible, but it's the state of things due to state-granted monopolies on fiber lines built in the 80's.

As of WWDC circa 2012, Apple had sent 1.5 trillion push notifications reaching rates of about 7 billion per day. I can only guess those numbers have gotten bigger. So what happens when Comcast or Time Warner (and God help us if that merger goes through) says to Apple, "Nice messaging service ya got there... shame if someone were to... mess with it" ? Well now, Apple can flip a switch and route straight around them.

Or what happens when the NSA plants a parasite on the backbone of the internet to siphon copies of all data running through it? Strong crypto is good enough for now, but when the NSA starts stocking quantum computers or builds enough qubits in house to start breaking that? You'd need to route around the internet. Especially if you're saying mean things about the NSA. Or, say, your ex-boyfriend who works there. Firechat, or iMessage + mesh networking could be answers.

Mesh networking may never be a substitute for real-time, bandwidth-hungry applications like Netflix or Twitch.tv. There are simply too many hops compared to the backbone-centric structure of today's internet. But for small-bandwidth systems like email, text messaging, deliveries, Bitcoin, algorithms and software, this could be a game changer.

The next steps are (hopefully) for Google and Microsoft to build cooperative, or even competing mesh-network solutions into their devices. Or for someone to release a cheap device that does mesh networking by default. The advantages of having this in phones are: a) lots of people already have phones and b) people have a reason to buy phones. I don't foresee a time when lots of people buy mesh networking repeaters that do little to nothing else - I doubt the mass market will ever understand the concept at that level. But if you package mesh networking with existing products, say wireless routers for example, we could start seeing a ton of coverage.

And that's a future I'm pretty excited about.

Make a rails app in a single file. Neat!

Interesting - James basically argues for replacing many forms of unit testing with assertions and the associated integration / system / whitebox tests. Any test that checks something redundant is probably worthless.

I tend to agree - tests should encapsulate business logic, and not really any level below that. You shouldn't test that a library or language does what it says it does; the library / language should have its own tests for that. Essentially, this looks like testing at the highest level that's reasonable.
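As a toy illustration of that idea, here's a hypothetical Ruby example (the Order class and its discount rule are invented): assert on the business rule itself, not on whether the standard library can add numbers.

```ruby
# Hypothetical business logic: orders over $100 get 10% off.
class Order
  def initialize(line_items)
    @line_items = line_items  # prices in cents
  end

  def total_cents
    subtotal = @line_items.sum
    subtotal > 10_000 ? (subtotal * 0.9).round : subtotal
  end
end

# Test the rule - that the discount kicks in at the right threshold.
# Don't write a test proving Array#sum adds numbers; Ruby's own test
# suite covers that level.
```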