Swoopers and Bashers and Writers and Programmers

It’s funny that Bruce Eckel posted on Artima yesterday about how programming is analogous to writing.  I’ve thought that for a long time, and Eric Roberts, who developed the introductory CS curriculum at Stanford (and is just an amazing teacher in general), liked to point out to incoming freshmen that success in those classes was more closely correlated with SAT verbal scores than with SAT math scores.  But what I’d been meaning to write about is a bit of a tangent to that:  not how programming is like writing, but how people have wildly different writing styles, which I think everyone accepts (no one expects Dan Brown to compose a novel the same way as Salman Rushdie or Haruki Murakami), yet people often expect that there’s One True Way to write quality software.  Method X works for them, so if it doesn’t work as well for you, it must be because you’re doing something wrong.

But the truth of the matter is that programmers program differently, just like writers write differently.  One of my favorite discussions of writing comes from Kurt Vonnegut.  In his novel Timequake, he says there are two types of writers:

Tellers of stories with ink on paper, not that they matter any more, have been either swoopers or bashers. Swoopers write a story quickly, higgledy-piggledy, crinkum-crankum, any which way. Then they go over it again painstakingly, fixing everything that is just plain awful or doesn’t work. Bashers go one sentence at a time, getting it exactly right before they go on to the next one. When they’re done they’re done.

It’s an over-generalization, but as over-generalizations go it’s a pretty brilliant one.  Personally I fall into the swooper category; in fact, I’m even more extreme in that I’ll often rewrite large sections, or even entire essays (or e-mails, or term papers), from whole cloth.  I’ve found that I need to get something down in some roughly-complete form simply in order to collect my thoughts, so I couldn’t bash out something perfect one sentence at a time no matter how much I wanted to.  When English teachers in grade school made us write outlines for an essay, I’d write my outline after I was done with the paper, because that was the only way the two would line up.

Interestingly (to me, at least), I’ve found that I’m most productive when I program the same way.  For a long time, I felt kind of bad that I didn’t enjoy doing rigorous test-driven development; it seemed like TDD was some kind of ideal of well-constructed code and if I was just a little more disciplined, I’d fully TDD everything and my code would be flawless as a result.  But in reality, that style is just sub-optimal for me.  Just like when I write, I need to sketch something out in order to really see where I’m going.  So my development methodology these days is often to start with a brutal, hacked-up end-to-end spike of a feature, write some end-to-end tests, and then start building it sideways, back-filling more targeted tests as I go and always keeping it close to some known-good state.  If I try to throw that spike away and start over with TDD, or if I don’t do that spike at all, I go slower and produce code that’s harder to read or modify later.

My point is that there is no one right way to develop; what works well for one person won’t work so well for another person.  Most developers, I think, would admit that different methodologies and techniques and tools are appropriate for different problem spaces; one-person throwaway projects are obviously a different deal from 100-person projects where the code needs to live for decades.  But even within the same problem space, people are just different, and one programming methodology or style doesn’t fit all.

As a developer, it’s your responsibility to figure out what works for you, and that requires some experimentation and often the willingness to try out something that feels horribly awkward and unfamiliar at first.  Unfortunately, it’s often hard to know if you’re giving up too early or if something really just isn’t going to work for you.  So by all means, read all you can about TDD, BDD, pair programming, rapid prototyping, getting real, modeling, or just go out there and start hacking.  Try a method out, ask people who like it, take what works and leave what doesn’t, and find the style that leads to you doing your best work.

And if you ever happen to find yourself in a position of authority, one of the worst things you can do is require everyone to try to program the same way just because it’s the way that works for you or that works for some guy in a book you read.  Give people the freedom to do their best work and, surprise surprise, they will.


Finishing Refactorings

Technical debt, we all know, is hard to manage.  To fight against it, you have to (among other things) refactor and improve your code mercilessly.  But along the way, your attempts to make the code base a safer place can actually make things worse:  if you add a new way to do something without actually removing the old way, you can end up with a code base that’s more cluttered and more inconsistent, and therefore even harder to understand than if you’d never tried in the first place.

To take a contrived example, imagine that you’re writing a new test that needs to create a sample hierarchy of widgets and gizmos for your application, and you realize your existing architecture could use some improvements.  Over time you’ve accumulated a bunch of random helper methods that do various things, like WidgetUtil.createSampleWidgetWithOneGizmo(), but those methods are brittle, take too many arbitrary parameters, and are difficult to combine to create richer test data.  So, you decide to refactor the test utilities using the builder pattern to make things more fully parameterizable and chainable.  Great idea, right?  So you create your nifty new WidgetBuilder and GizmoBuilder classes, use them to write your tests, and as predicted they make the data setup a lot clearer and more flexible.
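In code, the end state might look something like the sketch below.  The WidgetBuilder and GizmoBuilder names come from the example, but the fields and the withName()-style methods are made up purely for illustration:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical domain types, just enough to make the sketch self-contained.
    record Gizmo(int size) {}
    record Widget(String name, List<Gizmo> gizmos) {}

    // Each builder has sensible defaults, so tests only specify what they
    // actually care about, and builders compose instead of multiplying
    // one-off helpers like createSampleWidgetWithOneGizmo().
    class GizmoBuilder {
        private int size = 1;
        GizmoBuilder withSize(int size) { this.size = size; return this; }
        Gizmo build() { return new Gizmo(size); }
    }

    class WidgetBuilder {
        private String name = "sample-widget";
        private final List<Gizmo> gizmos = new ArrayList<>();
        WidgetBuilder withName(String name) { this.name = name; return this; }
        WidgetBuilder withGizmo(Gizmo gizmo) { gizmos.add(gizmo); return this; }
        Widget build() { return new Widget(name, gizmos); }
    }

    class WidgetBuilderExample {
        public static void main(String[] args) {
            // Test data setup now reads as a description of the data.
            Widget widget = new WidgetBuilder()
                    .withName("widget-with-two-gizmos")
                    .withGizmo(new GizmoBuilder().withSize(3).build())
                    .withGizmo(new GizmoBuilder().build())
                    .build();
            System.out.println(widget);
        }
    }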

The question, then, is what do you do next?  Do you just use them for new tests, or do you go back and refactor the 158 existing tests that use WidgetUtil so that they use your new builder classes?  And do you stop there, or do you attempt to kill off all of your old data creation methods in favor of builders for everything?

When trying to improve the code, the real work is often not in the initial improvement itself, but rather in fully replacing the old way of doing something with the new, improved way.  Either option has serious potential downsides.  On the one hand, forcing the change through every part of the system is usually hugely labor-intensive and carries a high risk of breaking something that was already working, all for no immediate return whatsoever.  On the other hand, leaving things as they are just adds to the complexity of the system:  now there are two ways to do X instead of just one, and every subsequent developer needs to understand both of them and know what the differences are.

So what’s the right thing to do?  Well, as with most programming tasks, it comes down to a judgment call about whether to attempt to push it through the system, whether to just add the new change but not attempt to refactor further, or whether to abandon the change and just do things the old way to avoid adding complexity to the system.  Here are a few questions to ask yourself:

  • How localized is the change?  Is it likely that most other developers will need to be aware of both ways to do things, or will only a smaller subset have to deal with it?
  • How bad is the status quo?  Is the improvement drastic, or merely incremental?
  • How much work is the refactoring?  Are there five other places where a similar pattern is present or 500?
  • How likely is the refactoring to break things?  Is it a fairly straightforward drop-in replacement or change, or is it something more involved that could turn out to have unanticipated interactions?
  • Can automated tools do the refactoring, or is it something that has to be done by hand?
  • How long is this system going to be around?  Is this something relatively throwaway (though it always seems like every program lives longer than its creators expect), or is it something you know you’ll be dealing with five or ten years from now?
  • How likely is the refactoring to uncover and fix latent bugs or to otherwise clean up buggy areas in the system?

In my experience, most people tend to err too far on the side of not finishing things off, of never truly eliminating all vestiges of the old way of doing something, be it a specific method or merely a general approach to a problem.  Especially once a system gets to a certain size, the cost starts to seem prohibitive.  But unfortunately, those are often exactly the times when it’s most important to keep the code clean and avoid technical debt; technical debt is much more of a killer on large projects than on small ones.

So next time you come up with a new way to do something, or decide that way X is better than way Y, ask yourself if you’re stopping too early or if you should really be following the refactoring through to the end.


Thoughts On Tech Debt

Martin Fowler recently updated his article on technical debt, and we’ve been discussing it in-house as well (though isn’t that always a conversation at any company with long-lived products?), so I’ve been thinking about it a lot lately.

Personally, I think it’s perhaps the most difficult engineering concept for non-engineers to internalize, because most things in the real world just don’t work that way; hence the necessity of some analogy to a more common real-world concept.  The core feature of development that lets technical debt happen is that the input of one “operation” is always the output of the previous one, meaning that mistakes and shortcuts build up over time, progressively dragging you down.  Everyone has infrastructure that subtly impacts everything they do (if you’re a chef and your pots are poor quality or your kitchen is poorly laid out, it makes everything harder), and a damaged reputation can always make your business suffer (if you’re a sales guy and you offend a potential client, that can hurt you long term), but there are very few disciplines where you have to continuously build on the same thing over the course of years.  Even if you’re in construction, once you’re done with a given building you move on to the next one.  But code bases are expected to essentially live forever, meaning that mistakes made at the beginning add up over time.  That just doesn’t happen if you’re a chef, or a salesman, or a doctor, or an artist, or almost any other job you can think of.  For engineers, the concept comes naturally:  everyone understands that the decisions they make now will affect how they do their job in the months and years ahead.  But for someone who’s never experienced that, I think it’s just a very difficult concept to internalize.

That said, it’s an important concept to grasp.  To me, the important part about technical debt isn’t the principal, as it were:  it’s the interest.  Realistically, though, it doesn’t work like real interest.  Rather, it’s more like a tax:  the amount you pay isn’t fixed by the size of the “debt,” but rather is generally proportional to how much work you want to do, with the size of the “debt” determining the tax rate rather than some fixed amount of overhead.  (One might argue that there is, in fact, some fixed amount in the form of ongoing maintenance, so the tax analogy isn’t entirely accurate either.)  But either way you think about it, the important part of the concept isn’t just that there’s a backlog of stuff to fix, but rather that prior decisions affect your ability to work productively in the future.
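To make that distinction concrete, here’s roughly the shape of the two models as I think of them; the figures and the rate curve are entirely made up, and feature work and debt are in arbitrary units:

    // Two mental models of the cost of carrying technical debt.
    // The numbers and the rate curve here are invented for illustration.
    class DebtDrag {
        // Interest model: the overhead you pay is fixed by the size of the debt alone.
        static double interestModel(double debt, double interestRate) {
            return debt * interestRate;
        }

        // Tax model: the overhead scales with how much work you attempt,
        // and the size of the debt sets the rate.
        static double taxModel(double plannedWork, double debt) {
            double taxRate = Math.min(0.9, debt / 1_000_000); // hypothetical rate curve
            return plannedWork * taxRate;
        }
    }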

There are two things that I think are less obvious about how insidious technical debt is.  The first one is that incurring the debt sets expectations artificially high about how much work the team can do; if you incur a ton of debt in the first version of a product in order to get it out the door, you’ve set the expectation that the team can do X amount of work in a 12-month release, when in reality you could only do X/2 without incurring debt . . . and because of that debt, it’s now more like X/2.5.  The second insidious thing is that paying it down requires a huge resource commitment and delivers very little short term benefit.  It often seems like a total black hole; if you’re paying 10% yearly interest on a $100,000 loan, paying back $50,000 on top of the interest only saves you $5,000 a year.  So paying off the debt often seems like a poor investment, which means it just builds up and slowly exerts more and more of a tax on development, making it even harder to do something about it.

Of course, technical debt isn’t exactly measurable, and neither is productivity, but just for fun let’s pretend that we can measure both and do a little math anyway, since I think it’s an interesting exercise.  Imagine we’re measuring both productivity and debt in feature-dollars, and that we have a team that can do $100k worth of features in a given year.  The first version of the product, however, needs to be ready in one year and have $200k worth of features in it.  So to get over the hump, the team borrows $100k at 10% APR.  Of course, the technical debt lenders are cutthroat, and in reality it’s always harder to fix things than it would have been to do them right in the first place; we can imagine that as the tech debt lenders charging back-breaking fees, say 30%.  So after year one, we’ve got $200k worth of features and $130k worth of debt costing us $13k a year.

The team realizes that it overextended in the first release, but no one can quite swallow a 50% cut in productivity; the team did $200k the first time around, right?  So instead, they shoot for $120k, thinking that’s a much more reasonable target.  But their original $100k capacity minus the $13k in interest means that to get $120k of features out, they need to incur another $33k of debt, which with fees we’ll round up to $40k.

By the time the third year rolls around, the project is $170k in debt, and the team decides to do something about it.  They decide to scale their feature work back by half and deliver only $60k worth of features, so they pay $17k in interest on the debt, do $60k worth of feature work, and have $23k left over for paying down the debt.  By sacrificing about 1/4 of their total dev capacity for the release (and more like 40% of their actual feature-building capacity), the team manages to reduce the debt to $147k, saving them all of $2.3k per year in interest.  So next time around, they’re basically in exactly the same boat.

As the debt gets ever higher, you hit an inflection point where the debt is high enough to nearly bring development to a halt, and yet so large that nothing can be done about it.  Imagine if the team instead tried to deliver $200k of features in each release.  In the second release, they’re paying $13k in interest, so they have to take out $113k in loans to hit their target, adding maybe $150k of debt after fees.  In the third release, they’re paying $28k in interest, so they have to take out $128k in loans, adding $160k in debt.  So by the fourth release, they’ve got $440k in debt; if they take on no further loans (and after some point you really can’t), their dev capacity will be around half of what it should be.  But the debt is also so large that there’s realistically no way to pay it down; it would take five years of no further feature work.  So at that point, it’s basically checkmate for the product . . . either you limp along with a product that doesn’t really evolve anymore and hope that a competitor doesn’t blow by you while you’re standing still, or you try to rewrite the whole thing and hope that doesn’t completely kill the project (which is by far the most likely outcome of a total rewrite effort).
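If you’d rather poke at the numbers yourself, here’s that scenario as a rough little simulation.  The $100k capacity, 10% interest, and 30% fees come from above; the release targets are the knobs to turn, and because I rounded the figures in the text, the output lands close to, but not exactly on, the numbers above:

    // A rough simulation of the feature-dollar scenario above.
    class TechDebtSimulation {
        static final double CAPACITY = 100_000;   // features the team can build per year
        static final double INTEREST_RATE = 0.10; // yearly drag from existing debt
        static final double FEE_RATE = 0.30;      // fixing it later costs more than doing it right now

        public static void main(String[] args) {
            double[] targets = {200_000, 120_000, 60_000}; // features promised each release
            double debt = 0;
            for (int year = 1; year <= targets.length; year++) {
                double interest = debt * INTEREST_RATE;
                double available = CAPACITY - interest;  // real capacity after servicing the debt
                double target = targets[year - 1];
                if (target > available) {
                    // Borrow to hit the target; fees make the new debt bigger than the shortfall.
                    debt += (target - available) * (1 + FEE_RATE);
                } else {
                    // Spend the slack paying down principal.
                    debt = Math.max(0, debt - (available - target));
                }
                System.out.printf("Year %d: shipped $%.0fk, debt now $%.0fk, next year's interest $%.0fk%n",
                        year, target / 1000, debt / 1000, debt * INTEREST_RATE / 1000);
            }
        }
    }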

You can play through that scenario with different perceived interest rates, or by treating the debt as a tax instead of a fixed interest payment, or over longer periods of time, but hopefully it illustrates the problems I mentioned above:  incurring debt artificially inflates expectations for the team, which leads to yet more debt and yet more pressure to cut corners, and paying the debt down often requires a Herculean effort for very little payoff.  Make of that what you will.  As with real debt, there’s a time and a place to incur it:  sometimes it’s important to hit a deadline, or to get a feature in for a key client, and the interest and fees are worth the cost.  But in the long run, technical debt can’t be allowed to build up to the point where it’s both too large to pay down and too large to allow for future development work, which requires walking a fine line between incurring debt when it’s necessary to get things done fast enough and holding it off or paying it down so that it doesn’t get out of control.