Developing Like There’s No Tomorrow

The apocryphal story of a large portion of the development team going on a ski trip, the bus careening into a ravine and killing them all, is one that strikes fear into the heart of any business.  Another is the cranky cowboy who writes half the code-base in a single drunken night and then quits when he’s offended by an innocent feature request from Marketing.

In both of those cases there’s the dreaded fear of a significant chunk of code that’s been left behind.  Sometimes as developers our instinct is to be the sole domain expert in a particular area of code.  After all, it’s a certain amount of job security.  I’d like to argue, though, that in the end, everyone on the team benefits if the code is written with others in mind.

Others can help you fix your bugs.

Everyone encounters bugs in her/his code.  Even if you believe you’ve written something completely perfectly, sometimes the specifications change, and what was a feature is now a bug.  If you’ve written code that is understandable and documented, someone else can come along and help you fix those problems.  No longer is the team chained to your availability; it’s actually possible to go on a vacation.  While this may seem like a recipe for being replaceable, it makes you, as a developer, more valuable than some yahoo who might alienate everyone shortly before jetting.

More viewpoints == more opportunities.

If others understand what you’ve written, then it’s entirely possible for them to contribute ideas.  Sometimes we become wedded to a particular implementation or path, and while it’s easy to stay attached to it, it’s significantly more useful to have a marketplace of ideas to choose from.

You don’t want to be That Asshole.

You know the type: the guy who insists everything be written in a very particular way which no one else understands, and he’s not the sharing type.  He might be invaluable now, but if anyone ever asks you about him later, you’ll be sure not to recommend him.  After all, you’ve had to deal with his mess after he left for greener pastures.  Don’t be That Asshole.

So, what can you do?

For one, document your code and your changes as comments about why you do certain things.  Sometimes it is possible to be too verbose, but more often than not it’s important to know “yes, we’re doing something that looks totally dumb here, but it’s for a good reason.”  Second, write up a design document for anything more complex than a simple refactor or a bugfix.  If it’s a module or a system, someone will need to know how to use it or make changes to it, and if there’s a document to refer to as canon, then at least the next developer who comes along will have a starting point.  Third, code reviews (or pair programming, if you’re into the whole “Agile” thing) are your friend.  Having a second, or in some cases third, pair of eyes there when you justify your decisions not only helps someone else understand what you’ve been working on for a few days; the mere anticipation of that review naturally makes you write better code.

The way I like to approach it is to develop code in such a way that I’m always willing to show it to someone else, or use it as an example for a programming student.  In the cases where that isn’t true, there needs to be a good reason, whether it be the need to minimize the change footprint or the need to maintain the same functionality (even if it’s broken as is).  Otherwise, I’m always developing as though someone’s looking over my shoulder.

Finally, much like the title of this post, I try to develop like there’s no tomorrow.  I have just today, and whatever I get done needs to be in such a state where someone else could sit down at my computer and finish what I’ve done.  Much like those authors of large epic series (*ahem* I’m looking at you, Robert Jordan), it’s important for developers not only to document what they’ve done, but also what they intend to do so that others can pick up where they’ve left off.

Note: this post was previously written by me while I was at a different company.  It’s still applicable in the general sense, though this is definitely something we’re a lot better at here than other places I’ve been.

Taking Responsibility

I’ve been a bit hesitant to write something about this, since I think it’s something of a delicate subject . . . it’s just a bit odd to write about, knowing that people I work with will read it.  But it’s an important enough subject, and one that I feel strongly enough about, that I think it’s finally time to say something about it.

For those of you who are reading this and don’t know, my technical title is something like “PolicyCenter Architect.”  Since I think the word “architect” in a software context has taken on all sorts of horrible connotations (it conjures up images of people who draw boxes and arrows but don’t actually write any code themselves), I tend not to use it personally.  Instead, I tend to think of my role on the team as being the guy who should be blamed when things don’t work.  Not the guy who should be blamed when something he did doesn’t work, but just the guy to blame, period.

I know it’s something of an extreme stance to take, which is why I think it’s worth an explanation.  Coming at it from a different angle:  my job is to make sure that the project doesn’t fail for technical reasons.  If the code is buggy, or it doesn’t perform, or it’s not flexible enough or in the right ways, then I’ve failed to do my job.  It doesn’t really matter why, or who wrote the code:  I’m supposed to make sure that doesn’t happen, and to do whatever it takes to get there.  In some ways that’s fundamentally unfair, since I can’t possibly control everything that goes on, yet I’m personally responsible for it anyway.  I honestly think that’s still how it has to be.

Taking responsibility for things is a powerful thing, especially when they’re things that you don’t directly control.  Part of being a professional is certainly taking responsibility for your own work, and not passing the blame to someone else when things go wrong.  But I think it’s even more powerful to simply say I will not let this fail.  Full stop.  But I think the only way to really get to that mindset is to take responsibility for that failure; otherwise, there will always be the temptation to throw up your hands and pass the blame on to someone else, and at the moment you absolve yourself of responsibility for that failure you’re lessening your personal drive to do something about it.  The other thing that needs to happen is that you have to accept the fact that things could fail, that you have limited control over that, and if they do it’ll be your fault anyway.  Failure sucks, and it’s a hard thing to swallow:  on my current project, it took me about five months to really come to peace with the fact that I was going to have to make a lot of very, very hard yet speculative decisions and that my best efforts could still end in failure (and a very, very costly failure at that).  Once I finally did, though, I slept a lot easier, and it freed me up to do better work than I otherwise would have been able to do.  Whatever role I managed to play in making the project succeed, I think it was significantly aided by the fact that I got over being afraid of screwing up.

I think there are three primary reasons why taking responsibility and blame for things you don’t actually have control over works out.  First of all, it forces you to do whatever you can to minimize that chance of failure.  Since you don’t have complete control, you can’t get it down to zero, but you sure have an incentive to get it there.  And not only that, but you have an incentive to do what’s right for the project as a whole, not just for your part of it or for you personally.  That extra drive, extra motivation, and whole-project orientation will lead to you doing more to make the project succeed than you would otherwise.  Secondly, it lets you make hard decisions under uncertain conditions.  A lot of projects fail simply because no one is willing to make those decisions; it’s always less personally risky to leave things as they are, or to take the well-trod route, or to pass the decision off to someone else even if they’re not in the best position to make the decision.  Sometimes, you’re the best person to make the decision, and you just have to be willing to do it.  Sometimes you’re not, so you have to pass the analysis off to someone else, but in order for them to really make the best decision they can they have to know that you’re going to back them up and not blame them if it goes wrong for some reason.  Lastly, while I really have no idea what anyone who works with me really thinks, I like to think that having someone willing to take that level of responsibility lets everyone else on the team do better work too, since they know that someone has their back, they can focus on doing their best instead of worrying about trying to look good, and they have the freedom to do what they think is the right thing.

It’s worth mentioning that the flip side of taking the blame is not taking the credit.  I’m not the first one to point out that good leaders should accept blame and deflect credit, but I think it’s one of those things that it’s hard to repeat enough.  Never, ever take credit for other people’s work (even if you helped out), and when in doubt err on the side of taking too little (preferably way too little) credit.

I Am Hate Method Overloading (And So Can You!)

My hatred of method overloading has become a running joke at Guidewire. My hatred is genuine, icy hot, and unquenchable. Let me explain why.

First Principles
First of all, just think about naming in the abstract. Things should have good names. A good name is unique and easy to understand. If you have method overloading, the name of a method is no longer unique. Instead, the real name of the method is the sane, human-chosen name plus the fully qualified name of each argument’s type. Doesn’t that just seem sort of insane? If you are writing a tool that needs to refer to methods, or if you are just trying to look up a method reflectively, you have to know the name plus all the argument types. And you have to know this even if the method isn’t overloaded: you pay the price for this feature even when it isn’t used.
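To make that concrete, here’s a Java sketch (LookupDemo and log() are invented for illustration): even though log() has no overloads at all, a reflective lookup still demands the full parameter type list.

```java
import java.lang.reflect.Method;

public class LookupDemo {
    // A method with no overloads -- its human-chosen name is just "log".
    public static String log(String msg) {
        return "logged: " + msg;
    }

    public static void main(String[] args) throws Exception {
        // Reflection requires the argument types anyway: the "real"
        // name of the method is "log" plus (String).
        Method m = LookupDemo.class.getMethod("log", String.class);
        System.out.println(m.invoke(null, "hello"));
    }
}
```

You pay that lookup tax on every reflective call, whether or not anyone ever actually overloaded the method.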

Maybe that strikes you as a bit philosophical. People use method overloading in Java, so there must be some uses for it. I’ll grant that, but there are better tools to address those problems.

In the code I work with day to day, I see method overloading primarily used in two situations:

Telescoping Methods
You may have a function that takes some number of arguments. The last few arguments may not be all that important, and most users would be annoyed at having to figure out what to pass into them. So you create a few more methods with the same name and fewer arguments, which call through to the “master” method. I’ve seen cases where we have five different versions of a method with varying numbers of arguments.
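In Java that typically looks something like this (Mailer and email() are made up for illustration, and the master method returns a summary string just so the call-through is easy to observe): each shorter overload fills in defaults and delegates.

```java
public class Mailer {
    // The "master" method with every option spelled out.
    static String email(String address, String subject, String body,
                        String cc, boolean logToServer, boolean html) {
        return address + "|" + subject + "|cc=" + cc + "|html=" + html;
    }

    // Telescoping overloads that fill in defaults and call through.
    static String email(String address, String subject, String body) {
        return email(address, subject, body, null, false, false);
    }

    static String email(String address, String subject, String body, String cc) {
        return email(address, subject, body, cc, false, false);
    }
}
```

Three methods to maintain, and nothing at the call site tells you what the omitted arguments default to.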

So how do I propose people deal with this situation without overloading? It turns out to be a solved problem: default arguments. We are (probably) going to support these in the Diamond release of GScript:

  function emailSomeone( address:String, subject:String, body:String,
                         cc:String=null, logToServer:boolean=false, 
                         html:boolean = false ) {
    // a very well done implementation
  }

A much cleaner solution. One method, with obvious syntax and, if your IDE is any good, it will let you know what the default values of the optional arguments are (unlike with method overloading).

True Overloading
Sometimes you truly want a method to take two different types. A good example of this is the XMLNode.parse() method, which can take a String or a File or an InputStream.

I actually would probably argue with you on this one. I don’t think three separate parse methods named parseString(), parseFile() and parseInputStream() would be a bad thing. Code completion is going to make it obvious which one to pick and, really, picking a unique name isn’t going to kill you.

But fine, you insist that I’m a terrible API designer and you *must* have one method. OK, then use a union type (also probably available in the Diamond release of GScript):

  function parse( src:(String|File|InputStream) ) : XMLNode {
    if( src typeis String ) {
      // parse the string
    }
    // ... and similarly for File and InputStream
  }

A union type lets you say “this argument is this type or that type.” It’s then up to you to distinguish between them at runtime.
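Java has no union types, but the runtime half of this pattern can be sketched with instanceof checks on a loosely typed parameter (Parser and parse() here are hypothetical stand-ins, returning strings just to make the dispatch visible):

```java
public class Parser {
    // Runtime dispatch on a loosely-typed argument, standing in for
    // a union type over String, File, and InputStream.
    static String parse(Object src) {
        if (src instanceof String) {
            return "parsed string";
        } else if (src instanceof java.io.File) {
            return "parsed file";
        } else if (src instanceof java.io.InputStream) {
            return "parsed stream";
        }
        throw new IllegalArgumentException("unsupported source: " + src);
    }
}
```

The difference is that with Object the compiler no longer checks callers; a union type keeps that check while still giving you one method.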

You will probably object that this syntax is moderately annoying, but I’d counter that it will end up being fewer lines of code than if you used method overloading and that, if you really want a single function to handle three different types, you should deal with the consequences. If it bothers you too much, just pick unique names for the methods!

Let’s say you accept my alternatives to the above uses of method overloading. You might still wonder why I hate it. After all, it’s just a feature and a pretty common one at that. Why throw it out?

To understand why I’d like to throw it out, you have to understand a bit about how the GScript parser works. As you probably know, GScript makes heavy use of type inference to help developers avoid the boilerplate you find in most statically typed languages.

For example, you might have the following code:

  var lstOfNums = {1, 2, 3}
  var total = 0
  lstOfNums.each( \ i -> { total = total + i  } )

In the code above, we are passing a block into the each() method on List, and using it to sum up all the numbers in the list. ‘i’ is the parameter to the block, and we infer its type as ‘int’ based on the type of the list.

This sort of inference is very useful, and it takes advantage of context sensitive parsing: we can parse the block expression because we know the type of argument that each() expects.
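Java’s lambdas go through the same kind of context sensitive inference, which may make the idea concrete (InferenceDemo is an invented name):

```java
import java.util.List;

public class InferenceDemo {
    static int sum(List<Integer> nums) {
        int[] total = {0};
        // 'i' has no declared type; the compiler infers it from the
        // element type that forEach() expects -- context sensitive
        // parsing in exactly the sense described above.
        nums.forEach(i -> total[0] += i);
        return total[0];
    }
}
```

The lambda is only parseable as typed code because there is a single, known target type for it to be checked against.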

Now, it turns out that method overloading makes this context sensitive parsing difficult because it means that when you are parsing an expression there is no guarantee that there is a single context type. You have to accept that there may be multiple types in context when parsing any expression.

Let me explain that a bit more. Say you have two methods:

  function foo( i : int ) {}
  function foo( i : String ) {}

and you are attempting to parse this expression:

  foo( someVar )

What context type can we infer when we parse the expression someVar? Well, there isn’t a single one: it might be an int or it might be a String. That isn’t a big deal here, but it becomes a big deal if the methods took blocks, or enums, or any other place where GScript does context sensitive parsing. You end up having lists of context types rather than a single context type in all of your expression parsing code. Ugly.

Furthermore, when you have method overloading, you have to score method invocations. If there is more than one version of a method, and you are parsing a method invocation, you don’t know which version of the method you are calling until after you’ve parsed all the arguments. So you’ve got to run through all the argument types and see which one is the “best” match. This ends up being some really complicated code.
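A small, hypothetical Java example shows why scoring is needed: until the argument is fully parsed and typed, any of these could be the target.

```java
public class Scoring {
    static String foo(int i)    { return "int"; }
    static String foo(long l)   { return "long"; }
    static String foo(Object o) { return "Object"; }
}
```

The compiler must score every candidate against the argument type: foo(42) and foo((byte) 1) both resolve to the int version (the most specific applicable overload after widening), foo(5_000_000_000L) to the long version, and foo("x") to the Object version because it’s the only one that applies. That "best match" logic is exactly the complicated code described above.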

Complexity Kills

Bitch, bitch, moan, moan. Just make it work, you say. If the Java guys can do it, why can’t you? Well, we have made it work (for the most part). But there’s a real price we pay for it.

I’m a Berkeley, worse-is-better sort of guy. I think that simplicity of design is the most important thing. I can’t tell you how much more complicated method overloading makes the implementation of the GScript parser. Parsing expressions, parsing arguments, assignability testing, etc. It bleeds throughout the entire parser, its little tentacles of complexity touching places you would never expect. If you come across a particularly nasty part of the parser, it’s a good bet that it’s there either because of method overloading or, at least, is made more complicated by it.

Oh, man up! you say. That’s the parser’s and tool developer’s problem, not yours.

Nope. It’s your problem too. Like Josh Bloch says, projects have a complexity budget. When we blow a big chunk of that budget on an idiotic feature like method overloading, that means we can’t spend it on other, better stuff.

Unfortunately, because GScript is Java compatible, we simply can’t remove support for method overloading. If we could though, GScript would have other, better features and, more importantly, fewer bugs.

That is why I am hate method overloading. And so can you.

Engineering As Failure Avoidance

The common view of engineering is that you’re building something up:  successively adding parts, features, layers, etc. to eventually achieve some sort of functional goal.  That general view holds whether you’re building a bridge or a piece of software.

There’s another way to look at engineering, though, and that’s as the art of failure avoidance.  When building a bridge, you have to anticipate all the ways that the bridge could fail and plan around them:  worst-case weight loads, earthquakes and high winds, extreme heat or cold, etc.  A good bridge, in the structural sense, is one that manages to avoid all the possible failure modes.  You might think of this as the Anna Karenina rule for engineering:  all successful projects are alike (they don’t fail), while all unsuccessful projects fail in their own unique ways.

Good software engineering is the same in that respect, with the difference that software can generally fail in far more ways than a physical structure can thanks to the magic of things like concurrency, state, and combinatorial explosions of possible inputs.  So what does it mean to engineer to avoid failure rather than just to build features?  Here are a few rules I try to keep in mind.

Test Like You’re Trying To Break It

Engineers just tend not to do the best job at this; you wrote the feature to do X, you test that it does X, and you move on.  But the bugs that approach catches are just the easy ones, and the nastier bugs are the ones that are a result of simply not thinking about certain scenarios or combinations of cases.  Sure, it does X if you use the feature as intended, but what if you try to abuse it?  What if you deliberately put in invalid input, or try to use it at inappropriate times, or otherwise violate any implicit assumptions about when and how the feature will be used?  One of the benefits of test-driven development is that it forces you to at least consider those use cases more.  In the end, though, one set of eyes generally isn’t good enough, which is why pair programming/testing can help and why you still need some amount of dedicated QA time even if your engineers write all the tests you can think of.  But as an engineer, the more you can really try to break the code you’ve just written, the better your tests will be.
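For instance, suppose you’d just written a small input-parsing helper (parsePositive() here is hypothetical). Testing it "as intended" checks the happy path; break-it testing deliberately feeds it garbage and checks that it fails loudly rather than silently misbehaving.

```java
public class Guard {
    // Hypothetical helper: parse a string that should hold a positive int.
    static int parsePositive(String s) {
        if (s == null) throw new IllegalArgumentException("null input");
        int n = Integer.parseInt(s.trim());  // throws on non-numeric input
        if (n <= 0) throw new IllegalArgumentException("not positive: " + n);
        return n;
    }
}
```

The break-it tests are the ones that pass in null, whitespace, "-1", and "2x" -- the cases you never hit while testing the feature "as intended."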

No Wishful Thinking

One of the biggest sources of software failure, in my opinion, is simply the fact that software engineers always want to believe that their software works and, given a lack of immediate hard evidence to the contrary, will tend to want to believe that things will work.  Optimism in general is good for your morale, but when it crosses over into wishful thinking you get into trouble (i.e. “I’m confident in our ability to deliver on the features we’ve agreed upon” becomes “We haven’t really tested X yet, but I’m pretty sure it’ll work” or “We haven’t really tested that kind of load but I’m pretty sure we can handle it” or “The schedule looks tight but I think we can make it”).  It helps to have at least one surly, grumpy pessimist on your team to provide a check against most people’s natural tendency to want to assume things will work out okay.  The rule is generally that if you haven’t demonstrated that it’ll work it probably won’t.

Think About The Worst Case

Another classic engineering failure mode is to only consider the expected or average case and not to plan for or test out the worst case.  For example, if you’re displaying a list of items, how many items will that list have on average versus the 95th percentile or absolute worst cases?  If the list might have 30 items on average but could have 10000 in a worst case, you’re going to need to design your software such that it performs at least acceptably under the worst case, even if it doesn’t appear that often.  It’s easy to conflate “how often X happens” with “how much work I should put in to handle X” but in reality you need to make sure you handle those 1% cases gracefully (which doesn’t necessarily mean “optimally”) even if that doubles the effort.
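One hedged sketch of designing for that worst case: cap what you actually render, so a pathological list degrades gracefully instead of hanging the page (capForDisplay() is an invented helper, not an API from any real framework).

```java
import java.util.List;

public class Display {
    // Cap the number of items actually rendered so the 10,000-entry
    // worst case degrades gracefully instead of hanging the page.
    static <T> List<T> capForDisplay(List<T> items, int cap) {
        return items.size() <= cap ? items : items.subList(0, cap);
    }
}
```

The average 30-item case is untouched, and the 1% case is merely truncated rather than catastrophic -- graceful, if not optimal.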

Optimize For Debugging And Bug-Fixing

No matter what you do your software is going to be buggy (well, maybe not if you’re Don Knuth, but for mere mortals); hopefully your testing procedures allow you to find those problems before they go into production, but an important part of failure avoidance is fixing them when they come up.  There are really two sides to that coin:  tracking the problem down and fixing the problem.  Tracking the problem down generally requires the right set of tools and the right sort of code organization; well-structured code is easier to debug, and explicit code, even if it’s verbose, tends to be much easier to debug than implicit or declarative code where “magic” happens in some incredibly general way that’s hard to put a breakpoint on.

Being able to fix bugs requires something of the same approach.  As a company that ships highly configurable products, though, we have an extra set of issues that come up when you ship frameworks and tools.  If too much is built into the framework in a way that isn’t controllable by the person writing code on top of the framework, you can end up without a bail-out when bugs do occur.  So as much as declarative, implicit programming can be useful, it’s usually good to have an imperative, explicit bail-out when necessary to allow working around shortcomings in the underlying platform.

Consider The Failure Modes

Related to the previous point’s observation that failure is, in some sense, inevitable, it’s important to consider what the failure modes actually are and to design the system in such a way that they’re as benign as possible.  For example, if an automated process can make the right decision 90% of the time, ideally you’d like to identify the 10% of the cases where the system can’t figure out the right thing with 100% certainty and kick that out to a user to make the final call.  If you’re writing a security framework, you need to consider if you want the inevitable mis-configuration to fail open (such that anyone can access things) or fail closed (such that no one can).  In the aforementioned case of a huge list, perhaps you can get away with simply capping the number of entries that can be displayed, or perhaps one page being glacially slow 1% of the time is fine provided you can keep it from slowing the entire server or database to a crawl.  In other words, you don’t have to write the perfect system, but you should write one that avoids doing anything you’re not sure is correct, at least in cases where correctness matters.
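As a sketch of choosing the benign failure mode, here’s a hypothetical permission check that fails closed: a missing or mis-configured rule denies access rather than silently granting it (AccessControl and its rule table are invented for illustration).

```java
import java.util.Map;

public class AccessControl {
    // Hypothetical permission table; any action not listed here is a
    // mis-configuration waiting to happen.
    static final Map<String, Boolean> RULES =
        Map.of("read", true, "write", false);

    static boolean isAllowed(String action) {
        // Fail closed: an unknown action is denied rather than allowed,
        // so a missing rule is a benign failure, not a security hole.
        return RULES.getOrDefault(action, false);
    }
}
```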

If You Can’t Get It Right, Don’t Do It

Lastly, some features just weren’t meant to be built.  They’re too complex, or too ambiguous, or otherwise too hard to get right.  Discretion is, as they say, the better part of valor, and it’s important to know the limits of your tools, schedule, and team abilities.  It’s almost always preferable to have 50 features that are 100% correct than 100 features that are 50% correct.

Frameworks and Foreign Languages

I spent part of my recent vacation in Spain, and since I don’t really speak Spanish I was forced to get around by simply parroting certain key phrases. I always feel strange speaking phrases in a language I don’t really understand, since I can’t fully grasp what it is that I’m actually saying; I might be able to swap out a few choice nouns here or there (e.g. ask “where is the museum?” instead of “where is the train station?”) but my ability to do transformations that would be trivial in a language I really understood (e.g. “where was the train station?” or “how far is the train station?”) simply isn’t there. But despite all that, plenty of people are able to navigate around and vaguely communicate in foreign languages through simple rote memorization of common phrases, despite not really understanding what it is that they’re saying or even what the components are that make up those phrases.

The phrase book approach is entirely different from what you’d do if you were to take a class in a foreign language; in that case you’d start from the ground up with some simple nouns and a few common verbs, learning the irregular verbs as well as the common conjugations of regular verbs, and then gradually expand out to other tenses and an expanding vocabulary.  That sort of learning is much more thorough and is generally what you need to really become fluent in a language that you intend to speak, but if you just need to get around for a week or two knowing a few key phrases is generally enough to muddle through.

Upon returning, I was taking some time to try to explain the PolicyCenter delegation model, a pretty complicated concept, to another developer when I came to the realization that perhaps I was going about things the wrong way.  Just like there are two ways to go about learning or using a language, there are two different strategies for learning a framework:  you can try to learn all the nuances or you can just learn the basic phrases, and only really dig in once it’s clear the commitment will be worth it.

The reality is that most people will learn a framework more via the phrasebook approach, where they’ll simply cut-and-paste-and-tweak their way to accomplishing their goal.  Eventually, if they use the framework enough, they’ll put in the work to become fluent in it and understand why things are put together the way they are, or why you generally only use certain combinations of options.  But just as it takes a long time to develop any usable vocabulary if you attempt to start learning a language from the ground up, with an investment that’s simply far too high for something you’ll only need for a week of your life, learning a framework inside out is way too time-consuming and presents an initial barrier to entry that’s far too high.

Unfortunately, many of us designing those frameworks don’t think that way:  we put a lot of time into making sure the API is right and the design solid, and into testing things and making sure it all fits together and is flexible in the right ways while not being overly complicated, and we often unconsciously expect that the people using our frameworks will take the time to understand all those nuances and really learn the language rather than just copying out of a phrasebook.  But that’s just not how it works, and even people that will eventually want to learn the full language will start out trying it out by copying and pasting examples.  Without those examples, people just flail, the barrier to entry is too high, and much of the hard work of creating a great framework is wasted because it ends up being under-utilized.

Whenever I come across a general guiding principle for my development work, I try to come up with some simple catchphrase to encapsulate the idea, and for this one it’s “optimize for copy-and-paste.”  If you want to create a great, configurable product, you can’t just drop people a toolset and some documentation; you have to give them a large, realistic set of working examples that do close enough to what they need to do for them to visualize how to get there.  And you can’t just treat your example code as “throwaway” code that doesn’t need to be good, since people will be using it as their starting point, so they’d better be good examples that you’d be okay with having someone else use in production.

Software Is Not A Term Paper

I’ve been thinking a lot lately, and having lots of discussions, about what our development process should be for the next release of PolicyCenter. I’ve taken the potentially-controversial position that date targets and deadlines are detrimental to developing a software project, and it’s a position that I feel many non-developers don’t really understand.

The intuitive position that I’m arguing against is what I think of as the term paper philosophy; having the deadline there forces you to get it together to crank through the thing, whereas without any firm deadline you’d keep working on it forever (or keep putting off working on it), and moreover you wouldn’t really work very hard on it either. Some people need that deadline pressure and the rush of adrenaline that accompanies the fear of not getting things done, but even people that don’t thrive on it often benefit from having a deadline to focus their efforts.

What I’m saying is that that theory doesn’t apply very well to long-lived software projects. It might apply to toy projects in college that you never work on again, but the two fundamental differences between software and a term paper (or nearly anything else you try to build on a deadline) are that you keep having to work on the software project and there are far more corners to cut in software development.

It’s hard to over-emphasize the significance of that statement, and I think that non-developers really don’t have a very good analog of what that means. There just aren’t that many other fields or endeavors in which something is built up successively over years or even decades of work. There are even fewer where you can release an intermediate product off of that tree periodically that people will actually use (and expect support and upgrades for), and fewer still where you can “successfully” cut as many corners as you can in software development without having things completely fall apart in the short term.

The only analogy I can think of might be construction work, where if you rush the foundations of a building then the future stories won’t fare so well, but it might not be obvious that’s the case until you start on those future stories or until an earthquake or some other disaster hits. But most sorts of projects or tasks that people do are kind of one-time things; if you keep using them, you use them as they were when they were finished rather than attempting to build more and bigger things on top of your initial work. If you rush building a chair, maybe the chair’s a little off-balance, but it’s not going to affect future chairs that you build. And of course, that’s all combined with the fact that software estimation for anything other than the immediate task at hand is basically impossible, meaning it’s hard to avoid deadline pressure by just estimating accurately, since you’re always trying to estimate something that’s fundamentally pretty unknowable.

What sorts of corners can you cut that will make the product look “done” but will come back to haunt you later? Here are a few examples:

  • No or incomplete tests
  • Ignoring or not considering edge cases or error conditions
  • Ignoring or not considering feature interactions
  • Poor UI design
  • Poor data modelling
  • Poor API design
  • Improper encapsulation, decomposition, or otherwise messy code
  • Inconsistency between different areas of the code
  • No, poor, or inaccurate documentation and specs

All of those things will inevitably bite you later down the line; some of them will bite your customers too, some of them will make future changes nearly impossible or far more difficult, some will cause your development effort to grind to a halt in the future, and others will be fixable but just require more work to do later than they would have to do right initially. Nearly all of them will eventually slow down development, putting even more pressure on future deadlines and leading to a vicious cycle where even more short-term hacks enter the system.

So let’s imagine a real-world scenario. You’ve got what you think is three months’ worth of work to do, but one month in, less than a third of it (as far as you can tell) is done. What do you think will happen? People will feel behind, and they’ll pull whatever strings they can to try to make up the lost time, either consciously or unconsciously. The same thing happens with shorter-term deadlines: if you ask someone to get something done by the end of the week, they’ll find a way to try to make it happen, even if they should really take more time. In the long run, you’re going to pay dearly for those sorts of decisions.
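The arithmetic behind that “feeling behind” is just a linear projection of the pace so far. A back-of-the-envelope sketch, using the numbers from the scenario above:

```python
def projected_total_time(elapsed_months, fraction_done):
    # If progress continues at the pace observed so far,
    # estimate how long the whole job will actually take.
    return elapsed_months / fraction_done

original_estimate = 3.0  # months, per the scenario above

# One month elapsed, a bit under a third done: the projection
# already exceeds the original estimate, so the team feels behind.
projection = projected_total_time(elapsed_months=1.0, fraction_done=0.3)
slippage = projection - original_estimate  # roughly a third of a month, and growing
```

Of course, the deeper problem is that neither the original estimate nor the “fraction done” number is trustworthy in the first place, which is exactly why this pressure is so hard to escape.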

So what’s the solution? I’m honestly not sure, though I have some theories I want to try out. There are two obvious problems with not having deadlines at all. First, some people really do need deadline pressure to get things done, and others will simply go on working on less important things forever. My personal opinion is that those issues can be dealt with without formal deadlines, by constantly re-estimating and re-prioritizing work instead. The second problem is harder to escape, at least if you’re selling your products to people: everyone you’re selling to expects you to tell them what you’ll have done and when it will be done, so it’s inescapable that you’ll end up making some level of date-based feature commitments that then have the potential to put deadline pressure on your team and cause them to cut corners. Some types of agile methodologies can hopefully mitigate those problems for at least part of your development cycle, but most of those work better for consulting-style projects that actually have an end date at some point, and I honestly don’t feel like it’s a solved problem yet for our type of development work. As a company, we do a very good job of (and take immense pride in) standing behind our commitments to our customers, but we can definitely do a better job internally of mitigating the pressure those commitments put on the development team. I’m hopeful we’ll be trying some new things out in the next release cycle; whether or not those experiments work out, I’ll hopefully get a chance to write about them here and say why.

Being Wrong

Since my blog posts largely consist of short follow-ups to keef’s magisterial posts, where I try to say essentially what he said and hope that, by association, I appear smart, I’ll follow up his “Getting it Right vs. Being Right” post with a practical piece of advice for senior developers who want to foster the type of environment he outlines:

Admit you’ve screwed up. Often. And Loudly.

Even the best coders inflict horrors on the code base from time to time. It is cathartic and, perhaps, even crucial that the best developers admit this openly and enthusiastically about themselves in front of other developers and, especially, in front of management. This does two things.

Most importantly, it freaks management out.

At first.

Then, if they are reasonable, they come to realize that, despite the fact that their ace programmer has admitted all of these systems he has designed are screwed up, things are actually limping along reasonably well. So maybe, despite the fact that imperfect humans are implementing this software, they’ll get something usable and useful in the end. And, if they listen closely, maybe they can even make a good guess where the technical debt that is going to eat up 50% of the next release is. (NB: when good developers screw up, they often screw up AWESOMELY on some of the core parts of the system. Loads of fun unwinding that sort of stuff.)

Secondly, it allows the other developers to relax with respect to their own limitations in the face of complexity. It allows them (or teaches them) to be humble, without being humiliated. People admit when they are going off the rails and when they need help, and bad ideas don’t get as far into the system. The flow of information about the state of the system becomes less clogged with egos. And it fosters a sense of community, where we are all in it together against our own fallibility.

Screwing up software sucks. But, if you are a developer, you have. And so have all your coworkers. Maintaining a sense of humor and brutal honesty about it is the best way to deal with this universal (and hilarious) fact.