We’ve been developing a PVC architecture for a little while now, and we’ve found some things that are less intuitive or less convenient than using an MVC architecture. For those not familiar, here’s a basic rundown of Model-View-Controller (MVC) and Proxy-View-Controller (PVC).

The tl;dr version is that PVC is the same as MVC, except instead of having real model classes, you have a proxy that contacts an API with all the model logic. This ensures a complete separation of model code and business logic from presentation and controller code. As engineers, we like separation and modularity – so what are some problems with this architecture?
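To make that concrete, here’s a minimal sketch of what a proxy “model” might look like in Ruby. Everything here is hypothetical (the UserProxy name, the /users endpoint, the transport interface); the point is just that every model operation becomes an API call instead of a database query.

```ruby
require "json"

# Hypothetical PVC proxy: it quacks like a model class, but every
# operation is an API request. The transport is injected so it can be
# a real HTTP client in production and a canned one in tests.
class UserProxy
  def initialize(transport)
    @transport = transport  # must respond to get(path) and post(path, body)
  end

  def find(id)
    JSON.parse(@transport.get("/users/#{id}"))
  end

  def create(attrs)
    JSON.parse(@transport.post("/users", JSON.generate(attrs)))
  end
end

# A fake transport standing in for the API server:
class FakeTransport
  def get(path)
    %({"id": 1, "name": "alice"})
  end

  def post(path, body)
    body  # echo the payload back, standing in for the API's response
  end
end

users = UserProxy.new(FakeTransport.new)
users.find(1)  # => {"id"=>1, "name"=>"alice"}
```

All the model logic lives on the other side of that transport, which is exactly the separation PVC is after.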

The most obvious is the extra work involved. Instead of maintaining one application, you are now maintaining two. Any time you want to add a feature that touches both presentation and model (for web applications, that covers most important features), you have to work across two repos: one for the presentation layer and one for the API layer. In practice the two codebases stay tightly coupled despite the separation, and keeping them in sync is extra work.

A less obvious issue is the lack of tools to implement this kind of architecture. Abstracting the interactions between application controllers and databases is a largely solved problem. Libraries such as ActiveRecord, Kohana ORM, and Mongoid exist to provide robust interfaces between your data and your controllers. With APIs, however, assumptions about the nature of the API cannot be baked into a library: Twitter’s API is different from del.icio.us’ API, which is different from Compete’s API, and all of them are different from your API. So it’s almost certain there are no robust interaction libraries for dealing with your internal API. That means things like auto-generating forms, error checking, batching and combining queries, and assigning relations between elements will have to be coded by you. We’re working on making this easier with libraries like ObjectStruct and HypertextClient, but there’s quite a ways to go.

Another less obvious issue is multiplicity in testing. Since you have to code your own API interactions to some degree, you have to test that code. This raises some additional questions – in testing your API client, do you mock out your API, or risk creating actual objects (which may have naming or other conflicts) on a staging API? If you mock out your API, how do you maintain and update your mocks? Also, when testing your front-end applications, do you use a mocked API or contact a staging API? Or do you set up more infrastructure to provide clean test databases for each application’s API layer?
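As an illustration of the mocking option, here’s the shape it can take with plain Minitest. PartsClient, CannedTransport, and the /parts endpoint are all invented for the example; the client takes an injectable transport so the test can swap in a canned one.

```ruby
require "json"
require "minitest/autorun"

# A sketch of the "mock out your API" option: the canned transport
# plays the role of the API server, returning a frozen JSON response.
class PartsClient
  def initialize(transport)
    @transport = transport
  end

  def part_names
    JSON.parse(@transport.get("/parts")).map { |part| part["name"] }
  end
end

class CannedTransport
  def get(path)
    raise "unexpected request: #{path}" unless path == "/parts"
    %([{"name": "wheel"}, {"name": "axle"}])
  end
end

class PartsClientTest < Minitest::Test
  def test_part_names_parses_the_api_response
    client = PartsClient.new(CannedTransport.new)
    assert_equal %w[wheel axle], client.part_names
  end
end
```

The maintenance question still bites, though: every time the real /parts response changes, that canned JSON has to change with it.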

In summary, if you’re going to build a PVC architecture, you’re going to be doing a lot of work to make sure all the components fit together in a way that’s tested and secure. This could be worth it – if you have many, many frontend applications, or if you want to allow direct access to an API for third-party clients, having robust individual layers may be to your organization’s advantage. Otherwise, think carefully about whether this is a good approach.

Saw this (http://cdixon.org/2009/10/22/the-ideal-startup-career-path/)
from this post and comment thread
(http://news.ycombinator.com/item?id=1860795).

I didn't realize "one and done" was an option. What does that entail?
Try founding a company, and if it fails go running back to the skirts
of a multi-billion dollar corporation? If that's the case, I'm
decidedly on the start-up career path.

The things I like about startups are not replicable at large
companies. The lack of bureaucracy, the ability to talk to anyone
without introduction, the flexibility to experiment, the fast pace and
the ability to use beta and cutting-edge technology are very difficult
to implement at large companies. Because of these things, I've found I
really enjoy working at small companies, and how many people can
honestly say they enjoy what they do? It's something I wouldn't give
up for all the money in the world. No amount of money can buy enough
happiness to make up for eight hours a day of misery.

The terms “casual games” and “hardcore games” emerged to describe the difference between small, cheap, simple games for platforms like the iPhone, Facebook, or the DS and huge, multi-year projects like Fallout 3 or Halo. Casual games have a longer history than most people give them credit for, going back to Flash game portals like Newgrounds and Orisinal and text-based games like NationStates. “Hardcore” games also started from simpler, repetitive games like Galaga and Pac-Man, and have grown in complexity with the people who play them.

There are a wide range of attitudes on casual gaming. I got to listen to the CEO of a casual game startup describe the existing console market as ‘absurd’ and ‘ridiculous’ compared to gaming on Facebook. I’ve heard game critics and developers say some pretty nasty things about casual games. I’ve been on both sides of the argument. I’ve pointed out, for example, that even simple games are still games. Getting people to play games is good, because it will lead to more people playing hardcore games. I’ve also mused that casual games are not games because they do not generally rely on learning a set of skills to overcome challenges, but merely putting in time.

Games on Facebook and the iPhone have been heavily constrained by processing power and memory. But that is improving rapidly – Flash is getting GPU rendering, the iPhone and its competitors have more than doubled in power, and these constraints are fading. We’ve seen some “hardcore” games on casual platforms – from the browser-based Quake Live to ports of Monkey Island or Final Fantasy to Flash and the iPhone. We’ve also seen more casual games on traditional platforms – Kinect is making its debut on 360 with Kinectimals, and games like Flower on the PS3 take simple controls and make them wonderful. Steam has a bevy of casual games like Plants vs. Zombies for PC Gamers.

I think going forward, rather than a sharp divide between ‘casual’ and ‘hardcore’ games, we’ll see a continuum of complexity. We’ll see things like ‘casual open-world RPGs’ or ‘casual survival horror,’ and we’ll start to lose sight of where the boundaries are. Minecraft, for example, has simple mechanics that have spiraled into wonderfully creative art projects from its players. I think within a few years, talking about casual games vs. hardcore games will be irrelevant, and gamers will have separated into even smaller, more specialized cliques.

We recently switched from MySQL to MongoDB, and it’s been a real learning experience. We’re using Mongoid as our object-document mapper, and we’re finding quite a few gotchas from assumptions we carried over from MySQL. Here’s an interesting one.

Say you have a controller action like this:


    def add_part
      # Look up the parent document, then create the new part through
      # the association so it's linked to the car.
      @car = Car.find(params[:car_id])
      @part = @car.parts.create!(params[:part])
      respond_with @part
    end

@part will save, provided it passes validation. @car may not, though. Say you updated the validation rules on the model, but some of the Car documents still have invalid values on them — @car will not save if those fields are invalid. Worse, it will fail silently, so @part will exist and be valid in your controller code, and @car will be accessible through it. Only if you call @car.reload will all that controller work disappear.
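If you want to see the shape of the problem without standing up MongoDB, here’s a toy simulation in plain Ruby. None of this is Mongoid’s API: Store stands in for the database, and the “at least 3 wheels” rule stands in for a validation added after old documents were already saved.

```ruby
# Toy simulation, not Mongoid: Store stands in for MongoDB, and
# wheels >= 3 stands in for a validation rule added after some
# documents were already saved with invalid values.
class Store
  def initialize
    @docs = {}
  end

  def write(id, doc)
    @docs[id] = Marshal.load(Marshal.dump(doc))  # deep copy, like a DB write
  end

  def read(id)
    Marshal.load(Marshal.dump(@docs[id]))
  end
end

STORE = Store.new

class Car
  attr_accessor :id, :wheels, :parts

  def initialize(id, wheels)
    @id, @wheels, @parts = id, wheels, []
  end

  def valid?
    wheels >= 3
  end

  def create_part!(name)
    parts << name
    STORE.write(id, self) if valid?  # the silent part: invalid parent, no write
    name
  end

  def reload
    STORE.read(id)
  end
end

# A legacy document that no longer passes validation:
STORE.write("car-1", Car.new("car-1", 2))

car = STORE.read("car-1")
car.create_part!("wheel")
car.parts         # => ["wheel"] in memory...
car.reload.parts  # => []       ...but nothing was actually saved
```

In-memory state and stored state have quietly diverged, which is exactly why the controller above looks like it worked.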

So if your associations (relational or embedded) are disappearing mysteriously, a good place to start is popping open the Rails console, calling parent.reload, and checking whether your parent documents still pass validation.

http://skife.org/interviews/design/2010/10/27/favorite-interview-question.html

Saw this one on Hacker News, and figured I'd write up my favorite interview questions.

My favorite basic coding question is asking the applicant to write a log parser. Here at Quid we collect a ton of data, and have a huge number of logs to sort through and check for problems, so it's something we're pretty familiar with. It's also a short program that can be done in an interview, with test cases from live data, in whatever language the candidate prefers, and it quickly demonstrates familiarity with basic development techniques. You can see whether the candidate breaks out regular expressions, yacc, or tries to do it some other way. It also shows you what kind of code you can expect delivered in a live environment.
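For flavor, here's roughly the scale of program I'm talking about, in Ruby. The log format (timestamp, bracketed level, message) is invented for the example; real logs and test cases would obviously differ.

```ruby
# Minimal log parser: count lines by level, collect error messages,
# and skip anything that doesn't match the expected format.
LINE = /\A(?<time>\S+ \S+) \[(?<level>\w+)\] (?<message>.*)\z/

def parse_log(lines)
  counts = Hash.new(0)
  errors = []
  lines.each do |line|
    m = LINE.match(line) or next  # malformed line: skip it
    counts[m[:level]] += 1
    errors << m[:message] if m[:level] == "ERROR"
  end
  { counts: counts, errors: errors }
end

sample = [
  "2010-11-01 12:00:01 [INFO] request served in 40ms",
  "2010-11-01 12:00:02 [ERROR] upstream timed out",
  "not a log line at all",
]
report = parse_log(sample)
report[:counts]  # => {"INFO"=>1, "ERROR"=>1}
report[:errors]  # => ["upstream timed out"]
```

Even something this small surfaces useful signals: does the candidate handle malformed lines, and do they reach for named captures or something harder to read?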

For design problems (that is, whiteboard or over-the-phone stuff, not something that produces actual code), I like asking candidates to design a web crawler to index text from web pages. It's a great exercise because it allows you to start simple (we only want these 100 sites, collected every day), and then expand in virtually any direction (we only want this text, we now have 100,000 sites to collect, we need to do text processing and indexing). You can expand the problem to encompass many machines, advanced logic, multiple databases, and other application-level design decisions that are realistic. Gives you a great sense of how a candidate responds to increasing challenges.
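As a starting point for that conversation, the "simple" end might look like this sketch. The fetcher is injected (here a fake in-memory web) precisely so the design can grow from it: swap in a real HTTP client, shard the frontier across machines, add politeness delays, and so on.

```ruby
# Simplest crawler skeleton: a frontier queue, a visited set, and a
# pluggable fetcher that returns [text, links] for a URL.
def crawl(seeds, fetcher, limit: 100)
  frontier = seeds.dup
  visited  = {}
  index    = {}  # url => extracted text
  until frontier.empty? || visited.size >= limit
    url = frontier.shift
    next if visited[url]
    visited[url] = true
    text, links = fetcher.call(url)
    index[url] = text
    links.each { |l| frontier << l unless visited[l] }
  end
  index
end

# A fake three-page "web" for illustration:
web = {
  "a" => ["page a", ["b", "c"]],
  "b" => ["page b", ["a"]],
  "c" => ["page c", []],
}
fetcher = ->(url) { web.fetch(url, ["", []]) }
crawl(["a"], fetcher).keys  # => ["a", "b", "c"]
```

Every expansion of the problem maps onto one of these pieces: the frontier becomes a distributed queue, the visited set becomes a shared store, the fetcher grows retries and rate limits.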

For brain teasers: well, I hate brain teasers. I don't think they're especially relevant to application development, and I think arbitrary puzzles most accurately test how many assumptions a candidate makes. I also don't like situations where I, as an interviewer, have a "right answer" in mind that the candidate needs to hit. In business, there's rarely a single right answer that a hacker needs to output; why should interviews behave that way? If I had to give a brain teaser, though, I'd probably go with the penny problem. You're blindfolded and handed ten pennies. Some number of them are tails-up, and you're told how many. You have to divide the pennies into two piles with an equal number of tails-up coins. No tricks to determine which coins are which. Solution here: http://skepticsplay.blogspot.com/2007/12/blind-coin-sorting-solution.html
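For what it's worth, the trick behind the linked solution is short enough to check mechanically: told that k coins are tails-up, take any k coins as a second pile and flip every one of them. A quick Ruby check (true means tails-up), exhaustively over all arrangements of ten coins:

```ruby
# If k of the coins are tails-up, take any k coins as pile B and flip
# them all. Pile B then has as many tails as pile A, no peeking needed.
def split_equal_tails(coins)
  k = coins.count(true)
  pile_b = coins.take(k).map { |c| !c }  # any k coins, all flipped
  pile_a = coins.drop(k)
  [pile_a, pile_b]
end

# Verify over every possible arrangement of ten coins:
ok = (0...2**10).all? do |bits|
  coins = Array.new(10) { |i| bits[i] == 1 }
  a, b = split_equal_tails(coins)
  a.count(true) == b.count(true)
end
ok  # => true
```

The reason it works: if the k coins you grabbed contain t tails, flipping leaves them with k - t tails, which is exactly how many tails remain in the other pile.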

Anyway, that's all I've got for now. Hopefully we'll see more interesting interview questions around HN. Or maybe better interview methods that don't just check whether you've read the answers to the right questions on Hacker News.