Avoiding Development Mercantilism

Posted: February 26, 2008
Prior to the rise of the capitalist view of economics, the prevailing economic world view was known as mercantilism. Mercantilism was essentially based on the theory that trade was a zero-sum game; there was a limited amount of wealth (in the form of gold or silver or other bullion) in the world, and one nation having more of it inevitably meant that other nations had less of it. One of the revolutions of the capitalist view was in recognizing that wealth is not, in fact, a zero-sum game, and that the amount of wealth in the world can increase as a result of increases in productivity due to things like economies of scale and technological improvements. The way to become richer as a nation is not, in other words, merely to try to make sure that more wealth enters the nation than leaves it via trade imbalances; instead, you can and should look for ways to increase the total amount of wealth being produced.
That’s perhaps an overly-simplistic summary of things, but the point here is not to debate economics. Rather, it’s to point out a common mistake that many development organizations make: they view development output as a fixed commodity, just as the mercantilists viewed the amount of wealth in the world as fixed. But as every developer knows, development output is in no way fixed, and is highly dependent on factors such as the toolset being used, the fit of the developer’s skillset, experience, and temperament to a particular task, the developer’s enthusiasm for the task, the current state of the code base (size, cleanliness, documentation), and the amount of organizational drag (in the form of meetings, reports, e-mails, etc.).
So what does it mean to be a development capitalist? The easier part, in my experience, is in matching people to the right tasks and in trying to reduce the amount of organizational overhead. Those, at least, are easy decisions to make, as they tend not to come with too much potential risk or up-front cost. The hard decisions, then, are around technical considerations: how much time do you spend building infrastructure and tools, and how much time do you spend trying to keep the code base clean and small? Both those efforts, and especially the effort to build infrastructure or tools, come with a potentially huge up-front cost and, at best, a speculative payout down the road. It’s tempting, then, to simply see them as costs to be minimized. Doing so, however, can miss the huge potential productivity gains you can get down the line that will more than make up for any up-front investment.
As an example, consider what I spent half of my day doing on Friday: performance tuning a couple of critical tests that I (and presumably everyone else on the team) run every time before checking in. The tests verify that all of our UI configuration files and gscript classes are error-free, and as such they catch a ton of errors, so it’s critical to keep them clean. I probably run the tests an average of about 10 times a day, and before I started I had to run two tests that, combined, took about 200 seconds to execute. The tests had some overlap, however, so I could tell that there might be some low-hanging fruit for further optimization, and about four hours of work later I had managed to combine the tests into a single test that took 140 seconds to execute. Four hours of work to save 60 seconds of test execution time might seem like a bad return on investment, but it won’t take long to pay off. At my rate of test execution, it’ll take about 24 days of development time for me to make back that four hours: but there are also seven other people on my team who will probably save about 5 minutes a day each, and another 40 or so developers across the organization who will eventually benefit from the optimizations.
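The break-even arithmetic above is easy to sanity-check. Here’s a minimal sketch (the function name and parameters are mine, not from any real tool) that plugs in the numbers from the example: four hours of up-front tuning, 60 seconds saved per run, and about 10 runs a day.

```python
# Illustrative break-even math for the test-optimization example above.
# The numbers come from the post; the helper itself is hypothetical.

def breakeven_days(upfront_minutes, saved_seconds_per_run, runs_per_day):
    """Days of development before an optimization pays for itself."""
    saved_minutes_per_day = saved_seconds_per_run * runs_per_day / 60
    return upfront_minutes / saved_minutes_per_day

# Four hours of tuning, 60 seconds saved per run, ~10 runs a day:
days = breakeven_days(upfront_minutes=4 * 60,
                      saved_seconds_per_run=60,
                      runs_per_day=10)
print(days)  # 24.0 -- the ~24 days quoted above, for one developer
```

Of course, the real payoff compounds across everyone who runs the tests: add seven teammates saving five minutes a day and the break-even point arrives in well under a week of team time.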
So was the time worth it? It’s always a hard call, and it can be hard to know when to stop; we certainly can’t spend all our time building infrastructure or we’ll never get our products out the door. We’ve got release deadlines just like everyone else, so it’s always easy to say “we’ll do that later, we can’t afford the cost right now.” But taking a few days (or weeks or months) up-front to do things with seemingly small payoffs can, over the long haul, with months (or years) of development time ahead and dozens of programmers working with that code base, lead to huge time savings and huge productivity increases for the team. Eventually it can make the difference between development grinding to a halt under the weight of an ever-growing code base and maintaining the ability to make steady forward progress.
That will really only happen if you have an organizational commitment to taking the long view and recognizing the long-term benefits of making a constant investment in your infrastructure, tooling, and code base quality. It also requires giving developers the freedom to make those improvements when they see an opportunity as well as the freedom to take some risks; not all such bets are going to pay off, and I could very well have spent four hours on Friday without being able to improve the performance of those tests in the least. Being willing to take those risks and let people scratch their particular technological itches every now and then will almost always pay off in the long run, and in my experience such investments usually pay off much faster (within weeks or months) than you initially think they will. And the next time you find yourself dividing a project into developer-days or man-months (which we should all know are Mythical anyway), make sure to ask yourself if you’re falling into a mercantilist mindset where all costs are fixed instead of looking for ways to make the whole organization more effective.