On doing it yourself
A short essay on the value of doing things the hard way.
by Dustin Ingram
In programming there is a wide-spread 1st order theory that one shouldn’t build one’s own tools, languages, and especially operating systems. This is true—an incredible amount of time and energy has gone down these ratholes. On the 2nd hand, if you can build your own tools, languages and operating systems, then you absolutely should because the leverage that can be obtained (and often the time not wasted in trying to fix other people’s not quite right tools) can be incredible.
- Alan Kay, The Power of the Context
My initial attempt to solve this problem would be to:
- find a couple of existing libraries that were created to solve this problem;
- check out a few examples, read some documentation for each, and generally get a feel for how each is implemented; and,
- maybe try a proof-of-concept for one or two, and then settle on what best fits my needs.
To me, the amount of time I would need to invest to create something even remotely comparable would outweigh the time it would take to find an existing solution, modify it to suit my needs, or even scrap one solution in favor of another. It seemed ludicrous to me to do anything else.
The more I thought about this, the more I realized the value in doing it this way. Aside from the intrinsic knowledge gained by re-implementing a system (the value of which has been shown over and over again), the real benefit is this:
If no one ever builds anything new, we never get anything new.
We see this all the time. Take Bootstrap, for example. Did we need another front-end framework? Most would say we didn’t. Is it the best we’ve got now? Definitely—can you even name any of its predecessors? What about Netflix? It seems as if nearly every service they create is homebrewed specifically for their own needs. Could they have become what they are today just by using commercial, off-the-shelf options? Maybe, but I would argue probably not. Off-the-shelf solutions are, by their very nature, generic. They’re built to be general-purpose and usable by a wide audience. A custom solution will often solve a particular problem better, but it will require more effort. Using off-the-shelf solutions usually means managing dependencies rather than managing code. As far as time spent, neither situation is inherently better.
Here’s the problem with all of this. Take a look at this article, which was once featured on the front page of HN, titled “The best programmers are the quickest to Google.” Its thesis is essentially that all the relevant, most granular pieces of whatever project you’re building have already been written, and all you have to do is search Google for them. Here’s an excerpt:
If you need to implement something in code and it’s not cutting edge technology, Google it first. If someone else hasn’t already done it yet, you’re either Googling it wrong or way off in what you’re trying to accomplish.
The issue here is the idea that what we already have will always be enough. If this mindset permeates us as developers (and some might say it already has), we’re just rearranging deck chairs on the Titanic. Everything is a remix. Nobody will create anything new, ever.
Fortunately, there are some of us who reject this, who realize that the long-term educational benefit, and the net creativity this fosters within our community, outweigh the costs of doing it the hard way. These are the people who will change the game. These are the people who will create the tools of the future.
Update: Alan Kay’s Response
During a recent HN AMA, I was able to ask Alan Kay himself about this paradox: when to DIY and when to reuse. Specifically, I asked him, “How does one decide when to DIY, and when to use what’s already been built?”
This is a tough question. (And always has been in a sense, because every era has had projects where the tool building has sunk the project into a black hole.)
It really helped at Parc to work with real geniuses like Chuck Thacker and Dan Ingalls (and quite a few more). There is a very thin boundary between making the 2nd order work vs getting wiped out by the effort.
Another perspective on this is to think about “not getting caught by dependencies” – what if there were really good independent module systems – perhaps aided by hardware – that allowed both worlds to work together (so one doesn’t get buried under “useful patches”, etc.)
One of my favorite things to watch at Parc was how well Dan Ingalls was able to bootstrap a new system out of an old one by really using what objects are good for, and especially where the new system was even much better at facilitating the next bootstrap.
I’m not a big Unix fan – it was too late on the scene for the level of ideas that it had – but if you take the cultural history it came from, there were several things they tried to do that were admirable – including really having a tiny kernel and using Unix processes for all systems building (this was a very useful version of “OOP” – you just couldn’t have small objects because of the way processes were implemented). It was quite sad to see how this pretty nice mix and match approach gradually decayed into huge loads and dependencies. Part of this was that the rather good idea of parsing non-command messages in each process – we used this in the first Smalltalk at Parc – became much too ad hoc because there was not a strong attempt to intertwine a real language around the message structures (this very same thing happened with http – just think of what this could have been if anyone had been noticing …)
Thanks to Brian Duggan, Dan McClory, and Patrick Smith for reading drafts of this post.