atevans.com

(╯°□°)╯︵ <ǝlqɐʇ/>

Time comparison with ActiveSupport failed

now = Time.zone.now
# => Wed, 19 Feb 2014 21:30:56 UTC +00:00
Time.zone.at(now.to_i)
# => Wed, 19 Feb 2014 21:30:56 UTC +00:00
now == Time.zone.at(now.to_i)
# => false

How is this possible?


Time.zone.at(now.to_i).to_i == now.to_i
# => true

Selected Answer (from jvperrin)

Ruby tracks time down to the nanosecond:

now = Time.zone.now
# => Wed, 19 Feb 2014 21:30:56 UTC +00:00
Time.zone.at(now.to_f)
# => Wed, 19 Feb 2014 21:30:56 UTC +00:00
now == Time.zone.at(now.to_f)
# => false

But if you compare the nanoseconds, you will see they are not the same, even when creating the time object using the float value, because the float value used to create the new time object is not as accurate as the nanosecond value of the time:

now.nsec
# => 956134961
Time.zone.at(now.to_f).nsec
# => 956134796
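The same float round-trip loss is easy to reproduce in plain Ruby, without ActiveSupport. A minimal sketch (the timestamp is arbitrary; `Time.at` with a `:nanosecond` unit needs Ruby 2.5+):

```ruby
# The precision gap, reproduced without ActiveSupport.
t = Time.at(1392845456, 956134961, :nanosecond)

# Round-tripping through a Float drops the low nanosecond digits:
restored = Time.at(t.to_f)

t == restored           # false: the nanoseconds no longer match
t.to_i == restored.to_i # true: whole seconds survive the trip
```

In test assertions, comparing `to_i` values (or using a tolerance matcher) sidesteps the nanosecond mismatch.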

Got hit with this while testing.

From the docs:

params = ActionController::Parameters.new({
  person: {
    name: 'Francesco',
    age:  22,
    role: 'admin'
  }
})

permitted = params.require(:person).permit(:name, :age)
permitted            # => {"name"=>"Francesco", "age"=>22}
permitted.class      # => ActionController::Parameters
permitted.permitted? # => true

Chainable method calls are the wrong model here. It's okay for the most basic possible forms, using all the Rails defaults, but for anything else it's crap. See bullshit like this - and that's for sanctioned nested_attributes-type calls.

Let's take a simple example:

params = ActionController::Parameters.new(first: true, second: {first_hash: 1, second_hash: 2, third_hash: 3})

How do we get the values of both :first and :second? If you know in advance what all the possible keys for :second are, you can do this:

params.permit(:first, :second => [:first_hash, :second_hash, :third_hash])

If you don't know exactly which keys are there, or want to use logic based on what keys are present... well, no. Definitely not.

If you're doing something like tags, lists, or other things that don't nicely map to one parameter => single-level hash, you're in for a rough ride. If you only want to allow certain attributes to be edited by certain users, you need a hacky workaround like building up a giant array or hash before calling .permit. If you want to call params.permit in a before_filter, get lost. If you want to save Javascript logs or something else with a totally arbitrary structure, go die in a fire.

A better model would be to pass a schema to StrongParameters. Imagine if we could call something like this:

def user_params
  params.schema(
    user: {
      email: String,
      tags: [Array, String],
      happiness_level: Numeric,
      preferences: {
        remember_me: Boolean,
        email_me: Boolean
      },
      js_analytics: JSON,
      js_events: Array,
      js_logs: Hash
    }
  )
end
This would be way nicer - you could specify conversions so you don't get strings where you expect numbers or booleans. You could specify what sub-attributes to allow on a hash or array, or just use the raw class if you want to sort it out yourself. And, this theoretical schema method would just return a new Parameters instance with the schema applied - if you wanted to get different attributes earlier or later in the request, just call params.schema again.
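To make the idea concrete, here's a toy sketch of that kind of schema filter in plain Ruby. Everything here is hypothetical - it is not a real StrongParameters API, just an illustration of keep-and-coerce semantics:

```ruby
# Hypothetical sketch: keep only the keys named in the schema, recurse into
# nested hash schemas, and coerce string values declared Numeric.
def filter_with_schema(raw, schema)
  schema.each_with_object({}) do |(key, type), out|
    next unless raw.key?(key)
    value = raw[key]
    out[key] =
      case type
      when Hash
        filter_with_schema(value, type) # nested schema
      when Class
        if type == Numeric && value.is_a?(String)
          value.include?('.') ? value.to_f : value.to_i
        else
          value
        end
      else
        value
      end
  end
end

raw = { 'email' => 'me@example.com', 'happiness_level' => '7', 'admin' => true }
filter_with_schema(raw, 'email' => String, 'happiness_level' => Numeric)
# => {"email"=>"me@example.com", "happiness_level"=>7}
```

Unlisted keys (like 'admin' above) are simply dropped, and the string "7" comes back as the integer 7 - the conversion behavior the post is asking for.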

I've been developing with Rails for ten years now. If I find this crap difficult, I can't imagine what people in coding boot camps must think of it.

Update: the rails_param gem looks like a good alternative.

Getting real sick of your shit, startups. These marketing emails are not cute. They are (in order) condescending, creepy, and insulting.


No, I didn't forget, but thanks for making me think for a second that I had.


Is my repo gonna commit suicide unless I stay with it? Does it need me to love it forever?


Forwarded from a friend, but I got this one too. No, Homejoy. Actually, I can take care of myself without you. You're convenient. When you're not insulting me.

On BitTorrent's upcoming chat app:

To start, users will be able to choose how they use our chat app. If you are porting in contact lists, you have the convenience of signing up with email or with a phone number. You will also have the option to sign up in Incognito mode, using no such information at all.

What we are building for the Alpha will also address users communicating with a trusted source who prefer their communication to be device-to-device (decentralized). This means no hops through any 3rd party servers, and no chance of anything being intercepted.

For users who may prefer to have their metadata obscured, messages will be indirect and routed through a third node. It is all a matter of preference.

Showing users directly how their message is being routed is genius. Using relay servers to obscure metadata is cool, but how would users know which relay servers to trust? Or are they supposed to set up their own?

I'm also curious if mesh networking will play a role here. It really sets up some cool cyberpunk scenarios where a journalist and a source can "meet" by being on the same block or in the same high-rise, but without knowing what the other person looks like, their real identity, or exactly where they are - and chat without the NSA et al. being able to pick up the traffic.

The design integrity of your system is far more important than being able to test it at any particular layer. Stop obsessing about unit tests, embrace backfilling of tests when you're happy with the design, and strive for overall system clarity as your principal pursuit.

I think it's hilarious that TDD has gone too far for DHH. And I like his thoughts around the matter - test what's important, at the important levels. Don't add dozens of gems and layers of indirection and library magic just for testing.

This series means to teach you everything you need to know to implement any different caching level inside your Rails application. It assumes you know nothing at all about caching in any of its forms. It takes you from zero knowledge to an intermediate level in all areas. If you can't implement caching in your app after reading this then I've failed.

Lovely reference. Thanks, Mr Hawkins!

So, how hard could it be to build my own music player backend? Seems like it would be a matter of solving these things:

  • Use a robust library for audio decoding. How about the same one that VLC uses?
  • Support adding and removing entries on a playlist for gapless playback.
  • Support pause, play, and seek.
  • Per-playlist-item gain adjustment so that perfect loudness compensation can be implemented.
  • Support loudness scanning to make it easy to implement for example ReplayGain.
  • Support playback to a sound device chosen at runtime.
  • Support transcoding audio into another format so a player can implement, for example, HTTP streaming.
  • Give raw access to decoded audio buffers just in case a player wants to do something other than one of the built-in things.
  • Try to get other projects to use it to benefit from code reuse.
  • Make the API generic enough to support other music players and other use cases.
  • Get it packaged into Debian and Ubuntu.
  • Make a blog post about it to increase awareness.

Playing music on a computer is almost as hard as reading text files. At this point, I'd be pretty happy with a player that:

  1. plays all tracks in a folder (since no one can get artist / album / compilation right)
  2. displays things in a list view (album art sucks if you download lots of independent or unpublished music)
  3. syncs locally on my devices - Mac, Windows, and iOS. Dropbox would be a great option here.
  4. plays your music through a browser

The loudness compensation and gain equalizing is not a big deal to me - I prefer to listen to DJ mixes and entire albums, so the next track is rarely going to be mixed differently than the previous, and it's too easy to introduce playback bugs there.

iTunes fails at all of these - syncing with iTunes Match blows, you have to make playlists for every folder, and it loooooves album art (even when most of it is missing). Also, it crashes constantly on Windows.

Google Play Music (wtf branding) fails at 1 and 2, and I didn't get far enough to experiment with 3. Seems good enough at 4.

Rdio doesn't let you play your own music, and they sure as hell aren't gonna have the new (free!) album from Illectrix or Savoy's new album Self Predator.

Spotify has terrible local vs cloud syncing, and pretty bad playlist management. Basically, if you're not getting all your music from their landfill of pop hits, get lost.

Amazon Music looks like it hits 2, 3 and 4. I have enough playlists at this point that maybe 1 won't be a problem.

Some random, disorganized thoughts after finishing "Good to Great" by Jim Collins. Here's a great slideshare with the Cliff Notes, though they're not going to mean much unless you've read through the examples.

This book seems to avoid just being a collection of survivor bias stories. Every company they researched had a comparison company from the same industry at the same time that didn't become great. They also had a collection of companies that turned great for a little while, but couldn't keep it up. The book is mostly about what differentiates the long-term great companies from the merely-good-to-terrible ones.

Leadership: it's not about strategy or style. Great leaders tend to be quiet, humble, self-effacing, and yet doggedly persistent and nigh-unstoppable once they've made a decision. They take their time, deliberate, have heated discussions with their trusted advisors & friends, then pursue a path like the Terminator.

The right people. You have to start with the right people. The right people agree with your philosophy, are excited about what they're doing, and have it in their character to do the kind of work they do. Skills can be learned; this is about character traits. Once you have the right people, you have to keep acquiring more of the right people. As soon as you hire the wrong people, you are playing a losing game. No compromises here.

You should never have to motivate your people. They should be intrinsically motivated. Your job is to prevent them from getting de-motivated.

Don't fire someone immediately if they don't seem right - they might just not be in the right position.

The Stockdale paradox: confront the brutal facts. Get unfiltered data. See if what you are doing is working. Maintain unwavering faith that your story ends with triumph; that you retire with your company on top of the world. But at the same time, do not translate this into short-term visions of big growth. If you say "we will be this much better by Christmas" you are setting yourself up for failure.

Find your hedgehog concept. This is something your company does that

  1. Really taps into the passions of your people
  2. Drives your economics, however you measure that (revenue per customer, per visit, per sale, etc)
  3. Is something you could plausibly be the best in the world at

This is the core of your company - make sure you hammer & refine this constantly, and say no to anything that is outside this concept. It could be a product or business: for Walgreens it was "the most convenient drugstores." Or it could be a process: for GE it was "building & training the best executive talent."

The flywheel - momentum is a real thing for groups of people. It's very slow to get going, but once it does, it builds and builds. It helps you attract the right people, build resources, and chase the right initiatives. If you change directions constantly, you are not turning the flywheel - you are not building momentum. If you think some new initiative or acquisition will motivate people, quickly produce big results, and get that momentum started -- you're wrong. It won't. Momentum builds over time, exclusively.

Your company needs to have some reason for existing beyond making money. It needs core values that it holds to - and it doesn't matter what those values are, only that it holds to them. Philip Morris has core values and holds to them strongly, even though those values include poisoning millions of people.

Overall summary: This book is great. It contains a lot of lessons we've heard before, but it puts them all together into a coherent structure, and brings in real qualitative data to back them up. It's reinforced a lot of the beliefs I had about running a good company, and I'm hoping that's not just selection bias.

webkit_server hangs periodically when run from Capybara in Ruby

I am having a problem where an instance of webkit_server with Capybara and capybara-webkit running headless connected to a local Xvfb screen hangs when visiting a URL. It seems to happen after several minutes of repeatedly visiting different URLs and executing finders. (I'm using capybara for a screen scraping application in vanilla Ruby, not for testing.)

I've confirmed that when it hangs the site is still accessible (for example, through curl or wget on the command line). I've also tried wrapping the Ruby code that invokes the visit and subsequent finders in a Timeout block so that after 60 seconds of waiting a new URL is visited, but any visit() attempt fails after the first time this occurs. The only way to fix the problem is to kill both the Ruby process invoking Capybara/capybara-webkit and the webkit_server process and restart.

When I strace the webkit_server process, I see output like this repeatedly:

clock_gettime(CLOCK_MONOTONIC, {5821, 680279627}) = 0
gettimeofday({1330890176, 712033}, {0, 33052112}) = 0
gettimeofday({1330890176, 712087}, {0, 140736435864256}) = 0
gettimeofday({1330890176, 712137}, {0, 33108640}) = 0
clock_gettime(CLOCK_MONOTONIC, {5821, 680486036}) = 0
clock_gettime(CLOCK_MONOTONIC, {5821, 680530091}) = 0
read(7, 0x1fac1b4, 4096)                = -1 EAGAIN (Resource temporarily unavailable)

And if I strace the Ruby process that invokes it, it is hung on a read():

Process 3331 attached - interrupt to quit
^C <unfinished ...>
Process 3331 detached

I know that the Ruby code hangs on the Capybara visit() method.

Any ideas on what I can do to troubleshoot or correct this are appreciated. I'm assuming the problem has something to do with some resource webkit_server needs to visit the URL but am not sure what to try next.


Selected Answer (from Grimmo)

Capybara.server do |app, port|
  require 'rack/handler/thin'
  Rack::Handler::Thin.run(app, :Port => port)
end

Seems to work.

Oculus VR gets bought by Facebook for $2bil:

Most important, Facebook understands the potential for VR. Mark and his team share our vision for virtual reality’s potential to transform the way we learn, share, play, and communicate. Facebook is a company that believes that anything is possible with the right group of people, and we couldn’t agree more.

I never would have pictured my hero John Carmack working for Mark Zuckerberg. In "Masters of Doom," he expressed interest in creating a metaverse like the one portrayed in Neal Stephenson's "Snow Crash" after completing Quake. But we kind of already had that - it was called Second Life. And it wasn't very good. Or useful. I don't see VR making the "virtual social space" any more engaging. The problem with 3D gaming-based social spaces is that they don't provide any real benefit over something like Google Hangouts besides better avatar customization, and narcissistic self-expression doesn't have a huge business case.

That's about the only thing I can think of that Facebook could do with this technology, besides maybe viewing 3D videos or photos from Instagram. Maybe they're just looking for a competitor to Google Glass, but that's definitely not the direction Oculus was going, and I think they would do better to stay focused as a gaming company for the next 10 years or so.

This could go well. Facebook bought Parse, and they still operate as a mostly independent entity doing great work. But I was more excited about Oculus when they were an independent company, and I'd have been happy for them had Valve bought them out.

Update: Palmer Luckey answers some of the hate via Reddit. Seems like they definitely intend to go the Parse route, and the main driver was using Facebook's money to lower hardware costs, build their own components, do publishing deals with indies, and hire more people. Seems pretty reasonable. We'll see how it plays out; I just hope they don't stray from gaming. No one outside game devs has the 3D chops to do VR right now.