The value of contractors and in-house software developers

In her essay “Why Work?”, Dorothy Sayers asks: what would labor look like if it were valued “not in terms of its cash returns, but in the value of what the work produced”?

Here, I briefly attempt to apply that question to my year as a “hired gun” computer programmer. Instead of accounting for each hour, nay, each 15-minute chunk, I would instead have been paid based on the actual usefulness of what I produced. By this measurement, some days, 3 hours of careful work would yield $5000. At other times, slaving away for two steady weeks would be worth only 50 bucks.

The trouble is, WHO is qualified to make this judgement of a thing’s worth? The answer to this question is often, truly: nobody. The client is often an idiot. They don’t know what they want. Only in hindsight are they able to reasonably determine how valuable something turned out to be. Two years down the road they can look back and say, “This thing was really useful and that thing over there turned out to be a big waste of time.” In the moment though, even the smartest ones are unable to make that call. It seems that with software, something’s worth is not well established up front. Its value is imagined in dreams of the future. Clients picture in their mind’s eye how much time it will save them or how many customers it will attract. The reality, though, is always something different. So how can the software creator be compensated properly at the time the work is actually done?


It seems that it must be done through trust in his or her reputation. Has he consistently written lots of other useful things for other people? Then we will assume that he’ll write something good for us, and we’ll pay him a lot of money for it. But there are so very many reasons why the thing he is building for you might be a piece of junk – not least of all the very nature of what YOU have asked him to build. And so everyone is working in the dark and exchanging money in the dark, with hopes and dreams attached with strings to each feature request. What could possibly go wrong? Everything.

What is the cure for this situation? A long, long view – deep relationships that form over the years. There will be both bright years and dark years – predictable productive months and experimental risky months. At the end of the day, a good in-house developer (or very long-term contractor) is going to yield much better results for everyone involved than serially farming everything out to the lowest bidder, or even to the most reputable bidder at a given moment. The very best hired gun can only give you his best for a snapshot of time. Someone mediocre who has been brought into the family over the years may ultimately be able to provide greater value in the long run. This requires greater patience, but that person has more potential to consistently produce valuable things over the years. The hireling, on the other hand, is slave to the clock, slave to the invoice, and slave to the very temporary email account that was set up to interact with you. He promises big and may even deliver big, but only for a brief season.

In the end, my advice to those who need software written for them? Keep the forest in view. Don’t obsess over a couple of trees. Think 5 or even 10 years out, not just 6 months. It won’t get you on the cover of Fast Company, but it might mean that 5 years from now, your application doesn’t suck. An elite contractor may indeed be the solution to the problem you need to solve, but it’s likely that cultivating a long-term relationship with an in-house programmer or established shop will turn out better in the end. Don’t live like you’re going to die tomorrow. Lean toward investing in your local community.

The pedagogy of data corruption

When I was 8 years old, I received my first computer – a Commodore 64 purchased at a yard sale. Not long after, a friend of mine supplied me with a box of pirated software on 5 1/4 inch diskettes. Included in the batch was, to my delight, a port of the early Nintendo game Bubble Bobble. Not owning any sort of console machine and having been enamored with the quarter-eating arcade edition of this very game at the local bowling alley, my younger brother and I teamed up on many afternoons to try to race through the one hundred mini levels to the end.


Except we never got to the end, but not for lack of skill. The disk was corrupt. Somewhere around level 70, glitchy levels began to show up. Sometimes the glitch was just a funny artifact on the screen, but before long, it would be misplaced blocks in the level itself. When the faulty positions made it impossible to clear the screen of an enemy, then you were done for. You were permanently stuck in the level until you ran out of time.

We kept a log and figured out which levels were especially glitchy and found that by carefully timing a series of warp “umbrella” power ups (ignoring some, and getting others at the right moment), we were able to skip ahead to somewhere in the upper eighties. But sooner or later the inevitable would happen. An enemy would escape the boundary of the game and find himself hovering in the score dashboard where weapons could not reach him.

It seems funny to admit, but the truth is, to this day, I have yet to ever make it to the end of Bubble Bobble. I never owned a copy that was fully intact. But maybe that turned out to be a good thing.

On reflection, this experience played a significant early part in the wiring of the brain of a little boy. The mistakes on the screen weren’t random. They were not akin to someone sabotaging a chess game by bumping the table as they passed by. The chaos followed careful rules. Positions were off by one, or the color was off by one, or an entire array was shifted in one direction. The underlying data structure of maps was revealed on the screen to anyone paying attention. The draconian piece of code that continuously checked for the condition of all “enemies equals zero” to trigger an advancement was now no longer just an entertaining goal, but a cold, hard rule. Winning became not the crossing of a sparkly finish line, but the satisfying of an IF statement. When one realized how incredibly EASY it was to get stuck in a level when a single block was out of place, it became much more apparent how much care had been put into the design of what was there to begin with. The many hours of work behind the scenes were laid bare – not by a teacher, not by the source code, but by an error. In particular, an error that didn’t prevent execution. The fragile medium of the floppy disk and the early days when memory checksums and buffer overflows were allowed to slide – these enabled the impossible – a serendipitous pulling back of the curtain.

We are told that when we fall down, we need to get back up on our feet and keep going. Dusting yourself off is part of a healthy childhood. Helicopter parents try to keep everything so padded that this isn’t allowed to happen, or when it does, kids are rescued too quickly. We’ve learned to treat our machines the same way. (Or perhaps we treat our kids like machines?) Every exception is caught, every error handled, and anything unusual brings the whole train to a halt. We believe software today to be rock solid, but mostly it is all or nothing. Back in ancient times, when corruption was not accounted for, things could go rogue and gallop off on the road less traveled, making all the difference – especially for those watching.

Getting your attention off the tools

I’d like to share this observation on an artist and his tools, posted a while back by an online acquaintance of mine.

Great artists don’t talk much about their tools. The rest of us do, because we’re still discovering what our hands can do. Most of us never get to the end of this exploration of means – and that’s ok, because to create is its own reward, no matter what stage of development we find ourselves at.

But I think I see a shift in the biographies of great artists, a shift where they have found what their hands can do with the equipment that is available to them. Their attention is absorbed, then, not by means and tools but by creating an image truly conversant with the given images in the universe.

They finally, after a long brutal apprenticeship, lift their eyes.

Tim is a painter and has that foremost in his mind I think, but I see this working out with artists of all mediums and even computer programmers.

Mediocre amateur photographers talk endlessly about cameras and lenses. Many of the real masters talk mostly about light, and often shoot with surprisingly simple (and even cheap) gear.

I know guitar players who have $2000 reverb boxes and the latest amps and a whole closet full of high-end guitars, yet they still play terribly. They’ve become obsessed with their tools and haven’t yet discovered what their hands can do. Some of the best players I met in music school at the university also had unremarkable instruments. Not junk mind you, but certainly nothing fancy.

I think the novice programmer is typically too focused on his tools. He gets into arguments about the best Linux distro. He may be zealously committed to one programming language, constricting his imagination to its syntax and always drooling over the bleeding-edge release of some framework. Another variation always tries new languages. These folks can write in Scala, Erlang, C, F#, and CoffeeScript, but haven’t made or maintained any truly non-trivial projects. They are curious, which is good, but are still too focused on the tools. The master will use the right tool for the job, and he’ll err on the side of just making SOMETHING substantial with whatever he has lying around, even if it isn’t the best. The master has learned not to procrastinate in this way.

Being distracted by tools and technology options is probably still one of the biggest productivity temptations I encounter. In a way, it’s easier to put your head down and straighten the bristles on your paintbrush than to look up, open your eyes wide, and just create. We must trust our hands and be brave.

Teaching programming with the Solarus Zelda engine

My oldest son recently turned seven and has been enthusiastic about building mazes and worlds for some time. Since we homeschool, I told him that if he could finish all his math for the year, I would begin teaching him how to make his own video games. This is what got me into computers originally, beginning with a Commodore 64. Well, my wife informed me that he has spent the last couple of weeks burning through his math homework and is now finished. It was time to come up with something fun to introduce him to this weekend.


When I was first learning to code, the web didn’t exist yet, and the few books on the subject were mostly about C and much too daunting for a young person. Today though, there are so many options. After poking around a bit, I discovered Solarus, an open-source Legend of Zelda clone framework that includes a pretty full-featured world/quest/level editor written in Java. The folder structure is simple to understand and everything is either just a text data file, PNG image, or OGG sound file. Enemies, items, and triggers are scripted with Lua. The engine itself is written in C++ and binaries for Windows, Mac, Linux, and Android are ready to go.


He has gone to town making his own maps and creating teleporters between the rooms. He’s not coding yet, but getting used to X,Y coordinates, image dimensions, unique identifier names, and some CLI. By assigning a few game variables, we were able to create locked doors that required keys found sprinkled elsewhere. We made some of our own graphics and were able to set them up as usable sprites and tiles. Later this week I hope to start scripting a bit of the enemy AI. Right now the baddies are stupid and just move back and forth in one place. I’m optimistic that this will be a great way to introduce him to a bunch of programming topics in a fun and motivating way where he’ll be able to see pretty immediate results without having to wrestle with tools all day.

I remember trying to program a Zelda-esque game too, and how I spent most of my time trying to get the DirectX 3 libraries to compile properly with Borland and the double-buffering to look right. Ugh. I hope he can have a bit more of his hero lost in a castle in the clouds rather than debugging in the dark.


Kudos to Christopho for writing the bulk of the engine and editor. I hope to contribute to the project as well. One of the first things my son and I ran into is that it’s pretty easy to make illegal maps and events that crash the engine at runtime. Most of these could be checked for at design time though. A validation panel that gave warnings about missing references and such would be pretty handy, especially for beginners.

Dependency Injection as premature optimization


Every programming framework out there these days is touting their mad support for dependency injection. I think James Shore is right when he says:

“Dependency Injection is a 25-dollar term for a 5-cent concept… it means giving an object its instance variables. Really. That’s it.”

It often just comes down to a question of how far out you abstract configuration information.

Say you have a method that makes a query to a database. If you had no abstraction at all, it might look something like this:

// 1 Layer
public void QueryDatabase()
{
    // Connection details are illustrative
    SqlConnection sqlConnection = new SqlConnection("Server=prod-db;Database=app;User Id=app_user;Password=secret");
    // Do something with sqlConnection...
}

Everything is hardcoded right there inline. Simple, but the ultimate in inflexibility.

// 2 Layers
public void QueryDatabase()
{
    SqlConnection sqlConnection = new SqlConnection(Configuration.DatabaseConnectionString);
    // Do something with sqlConnection...
}

public static class Configuration
{
    public static string DatabaseConnectionString = "Server=prod-db;Database=app;User Id=app_user;Password=secret";
}

This is much better. Now you can have a thousand different database calls in your code, but they all point back to one configuration. So if you move the location of your database, you only have to change one line of code. Have you ever edited the wp-config.php file for WordPress? That is where they store this sort of thing.

// 3 Layers
public void QueryDatabase(SqlConnection sqlConnection)
{
    // Do something with sqlConnection...
}

public static SqlConnection DatabaseConnection
{
    get
    {
        SqlConnection sqlConnection = new SqlConnection();
        if (Configuration.Mode == ApplicationModes.Production)
        {
            if (System.Net.Dns.GetHostName() == ApplicationServers.EastCoastBox)
                sqlConnection.ConnectionString = "Server=east-db;Database=app;User Id=app_user;Password=secret";
            else
                sqlConnection.ConnectionString = "Server=west-db;Database=app;User Id=app_user;Password=secret";
        }
        else if (Configuration.Mode == ApplicationModes.PreProduction)
            sqlConnection.ConnectionString = "Server=staging-db;Database=app;User Id=app_user;Password=othersecret";
        else if (Configuration.Mode == ApplicationModes.Testing)
            sqlConnection.ConnectionString = "Server=localhost;Database=app;User Id=app_user;Password=othersecret";
        return sqlConnection;
    }
}


Now when we are calling the query, we can pass in a database connection of our own design. We even have some logic in our configuration class that picks the right one for us. OR, we can make up our own on the spot and use that instead if we are working with some new case. That’s pretty handy in a lot of situations, though it does make helper functions more verbose to call.

// 4 Layers
public void QueryDatabase(DataProvider provider)
{
    // Do something with provider...
}

DataProvider db1 = new DataProvider(Type.Oracle, Mode.PreProduction);
DataProvider db2 = new DataProvider(Type.MySql, Mode.Testing);
DataProvider db3 = new DataProvider(Type.MongoDB, "my custom connection string");

Alright, so this next example is a bit hokey, but it should work for the sake of illustration. Now we’ve gone so far as to make our application database agnostic. We aren’t tied down to anything anymore! We are super flexible and can substitute all kinds of things now. We have thrown off the chains that bound us and now we have virtually nothing left. We have to “inject” all the meaning, the instance variables, into our method so it can have something to work with.

So what is wrong with all of this? Nothing. The problem comes when you use the wrong one for the job. If you are writing a one-off script that you know you are going to throw in the trash before lunch, then using 1 layer is just fine! Writing a provider and an object factory for it would be a complete waste of time. In the same way, you might really need the flexibility of having many possible connection strings and even database types. Perhaps your project started small but now has grown and needs to accommodate multiple instances of itself. If that is the case, don’t lock yourself down with just 2 layers. Go for lots of dependency injection. It will make your life easier at many corners as you walk down the road.

So why do I say that a lot of dependency injection smells like premature optimization? Because I see many framework tutorials and boilerplates advocating that you do a LOT of 4-layer stuff – even for a 1-page CRUD application with just 2 tables behind it. “Look how we’re using best practices! Our code is so modular and beautiful!” No it’s not. You are violating the YAGNI (You Ain’t Going to Need It) principle. It’s ten times longer than it needs to be and is spread out over 20 files. It’s hard to read, and that makes it harder to maintain, not easier. It also makes for a baffling tutorial.

The situation is similar for fresh DBAs who try to normalize their tables too much. They make the schema hell to understand at a glance. The same goes for developers having too many layers of abstraction too early on. What should you do instead? Build something that works, then expand on it. Go from brittle to flexible. If you are a master who has done this all twenty times before, then you can probably start with something more in the middle. But a newbie? Forget it. Get all your queries to actually run first and then refactor. Don’t start by chasing framework bunnies all day. Get your hands dirty with code that does something substantial out of the gate. The best thing about code is you can change it. You don’t have to make it perfect up front and in fact, you shouldn’t. Premature optimization is, as Donald Knuth says, the root of all kinds of evil.

Profiling over intuition

You need to profile your applications because you probably can’t intuit exactly where they are spending their CPU cycles or stashing their memory. Just like it takes a very socially savvy and clever counselor to figure out what people are really thinking, it takes a computer to understand a computer in the wild.

You may think, “Oh, but I understand the stack from top to bottom. I know what’s going on in there.” No you don’t. The spinning disk layer? The disk controller layer? The file system layer? The hypervisor? The guest OS? The driver? The API to the driver? Your code? Its dependencies on interrupts from devices you know nothing about? The unpredictability of the internet? The stupid GUI error your user is perpetrating behind your back? Your head cannot contain all these things.

Modern software is not analogous to gears turning together in a simple machine, but rather to a pinball bouncing around in a flashy arcade on the verge of tilt.

If you are smart and experienced, you’ll have a good guess where to look, but even the best are frequently surprised. Profile your application. Use good tools to do it. If they don’t exist, then write a lot of stuff to the logs and add millisecond resolution to the entries. Look for gaps. Find the largest gap and start there; don’t just start anywhere.
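The log-gap fallback can be sketched in a few lines of Python. The log format here is an assumption for illustration; the point is just to parse millisecond-resolution timestamps and report the biggest gap between consecutive entries.

```python
# Minimal sketch of "log timestamps and look for gaps": parse entries
# with millisecond resolution and report the largest gap between two
# consecutive lines -- that gap is where you start digging.
from datetime import datetime

log_lines = [
    "2014-01-15 10:00:00.012 starting request",
    "2014-01-15 10:00:00.340 parsed input",
    "2014-01-15 10:00:02.105 loaded third-party config",  # the surprise
    "2014-01-15 10:00:02.131 rendered response",
]

def largest_gap(lines):
    """Return (seconds, line_before, line_after) for the biggest gap."""
    stamps = [(datetime.strptime(l[:23], "%Y-%m-%d %H:%M:%S.%f"), l)
              for l in lines]
    gaps = [((b[0] - a[0]).total_seconds(), a[1], b[1])
            for a, b in zip(stamps, stamps[1:])]
    return max(gaps)  # tuples compare by first element: the gap size

seconds, before, after = largest_gap(log_lines)
print(f"{seconds:.3f}s between {before!r} and {after!r}")
```

In this made-up trace, nearly two seconds disappear into the config load, not into the parsing or rendering you might have guessed at.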

The three rules of optimization

Some people think these rules are only for novice programmers, but I think they are valid even for those in the genius class. Some examples and metaphors are in order.

When do I optimize my code?

1. Don’t
2. Don’t yet
3. Profile before optimizing

OK, so it’s supposed to be funny and memorable, but that doesn’t make it any less true. This is closely related to the rule of “smallest testable case”. When writing something, start by giving it the fewest moving parts. That may mean you make something that performs terribly, takes up a gig of RAM, eats up a ton of disk space, wastes 7 out of your 8 precious CPU cores, saves way more state information than you’ll ever need and doesn’t clean up after itself nicely.

But what DOES it do? It should work. Make sure the thing can get from point A to point B without driving off the road. Do that before you make the car go faster or turn sharper or get good gas mileage.

When you try to optimize as you go, you get distracted by rabbit trails. It’s like an author who can’t get her story written because she keeps stopping to fix every word or phrase that Microsoft Word has underlined in red on her screen.

Interruptions are coming to sabotage your programming productivity in just 45 minutes’ time! Don’t end the day with nothing to show because you were sprucing up the error messages that Main() spits back at the user when their optional command-line parameters are not in the right format. Do that part later! Go build something worth caring about first and then go back and put some care into it.

Building a game server that needs to support thousands of connections? Get the TCP stack working with one thread and one client before trying to parallelize everything. Each day has enough trouble of its own.
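That one-thread, one-client starting point might look like this Python sketch: a blocking echo loop with no concurrency at all. Get this working end-to-end before reaching for threads, epoll, or an async framework.

```python
# The "one thread, one client" starting point: a blocking TCP echo
# server. Deliberately unoptimized -- it serves exactly one client,
# then exits. Prove the plumbing works before parallelizing.
import socket

def serve_one_client(host="127.0.0.1", port=0):
    """Accept a single client and echo its bytes back until disconnect."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((host, port))
        server.listen(1)
        conn, addr = server.accept()      # blocks until one client arrives
        with conn:
            while True:
                data = conn.recv(4096)
                if not data:              # client hung up
                    break
                conn.sendall(data)        # echo the payload straight back
```

Once a client can round-trip bytes through this loop reliably, you have a known-good baseline to measure any fancier concurrent design against.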

Another big reason to wait to optimize is that the design might evolve and a part you built might need to change drastically. Don’t spend all of Tuesday optimizing your Postgres queries only to have the lead switch the back-end to MongoDB on Wednesday. That’s a whole day flushed down the toilet. Instead, knock out real features. Keep pushing optimization back. Heck, maybe you won’t need it after all, or not much of it anyway.

The final point is that when you finally go to optimize, don’t just start where you think there might be something worth cleaning up. Profile the CPU, profile the memory, and find out what is ACTUALLY making the software run slow. You might think it’s that nasty database query inside of a loop, but the profiler will tell you it’s actually taking forever to time out reading some third-party config file you hadn’t considered. If you have an infinite amount of time to optimize, then sure, do everything. But if you are human and exist in three dimensions like the rest of us, then you probably only have an extra week to spruce things up. Make sure you pick the best stuff to tidy. Don’t be scrubbing the closet with a toothbrush when the dining room table is piled with dishes. Profile first.
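Here is a small sketch of that surprise in action using Python’s built-in cProfile. The function names and timings are contrived for illustration: the “nasty query” looks like the culprit, but the profiler ranks the slow config read above it.

```python
# Let the profiler, not intuition, pick the target. cProfile ranks
# functions by time, so the real sink (a deliberately slow fake
# "config read") outranks the scary-looking computation.
import cProfile
import io
import pstats
import time

def suspicious_query():
    # Looks like the obvious culprit, but is actually cheap
    return sum(i * i for i in range(10_000))

def read_config():
    # The actual time sink you hadn't considered
    time.sleep(0.05)

def handle_request():
    read_config()
    for _ in range(10):
        suspicious_query()

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(8)
print(report.getvalue())
```

Reading the cumulative-time column top-down hands you the ordered to-do list; in this toy run, the config read dwarfs ten rounds of the query.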

Donald Knuth said that premature optimization is the root of all kinds of evil in programming. Take his advice and knock out your first draft end-to-end.


A simple one-page static Angular.js application with AJAX

After a lot of research and experimentation, I’ve decided to bite the bullet and learn to use Angular.js and then connect it to the .NET MVC Web API. I found many of the tutorials out there to be unusually frustrating. There is no end to people complaining about the Angular documentation and their howls are justified. The official tutorial is especially vague and convoluted. It tries to do way too much stuff at once with little explanation of what is going on. It also spreads all the code out between many files. Of course this is how you should organize a real application, but I want to see what I’m doing on one screen so I can easily detect typos and relationships. I decided to try and eliminate as many variables as possible and build a simple static site (no dynamic back-end server language) that:

1. Displays a simple message on the screen.

2. On an event generated by the user, changes the message using Angular’s two-way binding.

3. Makes an AJAX call to a remote web service (a static dummy one in this case) and sets the message to the response received.

If it can do all that, it’s a long way toward doing a lot more. For newbies, how can this be demonstrated with the fewest lines of code? I aimed to find out. Here is what I came up with.

<!DOCTYPE html>
<html lang="en" ng-app="myApp">
<head>
    <meta charset="utf-8" />
    <title>Angular.js Static Demo</title>
    <script src="angular.js"></script>
</head>
<body ng-controller="messageController">
    Hello, here is a message:<br />
    {{message}}
    <br /><br />
    <button ng-click="simpleChange();">Update message with two-way binding javascript.</button><br />
    <button ng-click="ajaxChange();">Update message with two-way binding and an ajax call.</button>

    <script type="text/javascript">
        var myApp = angular.module('myApp', []);
        myApp.controller('messageController', function ($scope, $http) {

            $scope.message = 'Initial message!';

            $scope.simpleChange = function () {
                $scope.message = "Message changed!";
            };

            $scope.ajaxChange = function () {
                $http.get('message.html')
                .then(function (response) {
                    $scope.message = response.data;
                });
            };
        });
    </script>
</body>
</html>

A few comments:

Note the ng-whatever properties in the html and body tags. They are called directives and these are what trigger the initializations. {{message}} is the dynamic piece that we will be automatically updating.

Surprisingly little is needed in the end, though it took me a while to get here. Many examples of controller initialization out there neglected to include $http as a parameter. If you leave it out, then calls to $http fail silently – the worst sort of error of all. There are also syntax changes between the latest version of Angular and its state from a year ago. About half the answers I came across on Stack Overflow had recent comments complaining that a particular snippet doesn’t work anymore. Ugh.

Notice that onClick events are replaced with ng-click so that Angular can include its secret sauce in the call.

If you were calling that ajax url with jQuery, the “data” variable would be all you need. But, because Angular wraps everything in a response object automatically (which is actually really great), you need “response.data” to get to the meat of the response. The file message.html is a plain text file (no HTML tags) containing one line of text.

That’s it – that’s all you need! You don’t need to define a bunch of models or resources or “promises” or anything else – not right away at least.

Pretty impressed with GitHub

I’ve been hearing about GitHub for quite some time and have stolen numerous bits of open-source code from there. I didn’t get what all the hubbub was about though until I decided to sign up and throw a small project up there last night.

Wow. Everything is pretty dang slick – the standalone desktop utility for committing new code (about a hundred times easier than installing Git from scratch), the nice looking code diffs, history maps, and tidy interface. It truly ENCOURAGES you to jump into some open source project and start contributing. Kudos for lowering the barrier to entry. It’s akin to how LEGO Mindstorms made robotics fun and easy to dabble in. I’m optimistic about uploading some of my old projects and sprucing them up a bit.

Git itself I could take or leave. SVN is just fine for many things. But the Hub is really a fine force for good on the web.