Phoenix First Two XP Sprints

Phoenix has finished Sprints 5 and 6, which are its first two XP Sprints.

An XP team always goes through four stages: “forming, storming, norming, performing”. The first Sprint felt like a storming stage, where we were trying to figure out the best way to get the code in without spending too much time on upfront design. At the same time, we were also getting used to pair programming.

Even though pair programming is an old trick for me, I feel that my pairing skills have gotten worse during the past three years of working solo. The second Sprint felt a lot better, and I am hoping to keep this trend going.

Items worth noting:

  • We modified the lava lamp setup so that a green light is on when everything is good. Even though it is redundant, it has a very positive effect on us. The only thing we need to watch out for is that someone mentioned they could be a fire hazard, because the lamps get very hot by the end of the day, so we are going to turn them off at the end of the day. That is when I found out that the X10 remote controller does not work, so it is back for replacement now.
  • The lava lamps are helping us get into the habit of treating broken tests as the highest priority. Due to the nature of Phoenix, we have already had some interesting test breakage: tests that only break on the server, tests that only break on Linux, and a test that hung. One interesting discovery is that each time we are forced to figure out what is wrong and fix it, our tests end up making better sense and reading more like behavior-driven tests, even though I was planning on settling for hacks just to keep the tests passing!
  • At the beginning of the project, we chose to create just enough stories to get us through the first Sprint, then created a few more for the second Sprint. Looking back, I think that was a good choice. The stories that we create now are very different from, and much better than, the earlier ones. I think that is because at the beginning your system has literally nothing, and it would take a very good story writer to come up with a list of stories that really fit the “INVEST” criteria. I am not saying it is impossible; I just think that two Sprints of bad stories is not a bad price to pay to get the ball rolling as early as possible and avoid a lot of hassle learning, teaching, and debating good stories vs. bad stories.

Lava Lamp with CruiseControl

As we are getting the Phoenix project under way, I am trying to get it started right by introducing more XP practices. The first three practices we are adopting are Pair Programming, Test-Driven Development, and Continuous Integration. This post is about Continuous Integration.

Actually, Guidewire has already built an internal tool, ToolsHarness, to handle continuous integration, as I wrote about in “Managing Tests with ToolsHarness, Individually”. The only difference that I want to introduce for the Phoenix project is to fix broken tests AS SOON AS POSSIBLE.

What this means is that I want the testing status of our branch to show right in our faces, without us having to launch a browser, so that we know to take action the moment a test is broken.

I talked to the developer who manages ToolsHarness, and he wrote a servlet that serves the broken-test and test-status information in a single HTTP GET. Then I set up CruiseControl (version 2.8.2) with the X10 publisher, following the setup described in the blog post “Bubble, Bubble, Build’s In Trouble”.

One thing about the normal lava lamp setup has always bugged me: the “testing” state of the continuous integration server. When a test is broken, the red lava lamp is on, and you just have to remind yourself that the fix is in and the tests are running. On some projects I have used a “project soundscape”, so that when the tests finish but are still broken, you hear about it. But if you happen to step outside, you will miss it, and if you have just come in, you have to check the browser or ask others.

So this time, I have done it a little differently, taking advantage of the fact that CruiseControl is not the process running the tests. I bought two lava lamps, one reddish and the other blue, and set them up as two independent indicators:

  • Red Lava Lamp for broken tests: it is on when there are broken tests, and off otherwise
  • Blue Lava Lamp for testing status: it is on when tests are running, and off otherwise

In this way, you have four states to display:

  • Neither is on: All tests pass and the tests are up-to-date
  • Blue is on and red is off: All tests pass so far, but there are tests running against newer changes
  • Blue is off and red is on: You have broken tests, and no code has been checked in to fix them
  • Both blue and red are on: You have broken tests, and someone has checked in new code (hopefully to fix them)


The setup is pretty straightforward, except that the CruiseControl 2.8.2 release is missing two crucial files, “lib/win32com.dll” and “lib/javax.comm.properties”, that the X10 publisher needs in order to work. That, plus my missing a tiny but crucial detail in the documentation, caused a three-hour hair-pulling experience, and that was with Jeffrey coming to the rescue through GTalk. I am going to submit a patch for the release script to include those two files, along with documentation containing the following checklist:

  • Provide all FOUR X10-related attributes on the publisher element, so that you are aware of them and can make sure they are correct (see the sample configuration after this checklist). The four attributes are as follows:
    • “houseCode” and “deviceCode” configure the X10 module.
    • “port”, with a value of COM1, COM2, etc., must match the port where you plug in the COM module.
    • The last one is “interfaceModel”, which you should double-check against the COM module that you have.
  • Make sure “javax.comm.properties” is in your CruiseControl lib directory (should be there after 2.8.3)
  • Make sure you copy “win32com.dll” from the CruiseControl lib directory (should be there after 2.8.3) to your Java bin directory
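
For reference, here is a minimal sketch of what the publisher block in config.xml might look like, assuming the publisher element is named “x10” as in the 2.8.x documentation; the house code, device code, port, and interface model values below are placeholders that you would replace with your own setup:

    <publishers>
      <!-- Placeholder values: set houseCode/deviceCode to match your X10 module,
           port to the serial port the COM module is plugged into, and
           interfaceModel to the interface you actually own. -->
      <x10 houseCode="A"
           deviceCode="1"
           port="COM1"
           interfaceModel="CM11A"/>
    </publishers>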

In the end, I would like to say that I am a satisfied ci-guys customer!


Enterprise Agile Testing – Part I : Introduction

The idea of “Enterprise Agile Testing” has been in my head for several months now, a result of what I have learned at Guidewire combined with my previous XP experience at ThoughtWorks. I am planning a proposal to Agile 2008 on this topic. Before I can choose my proposal topic, I need to write everything down first, kind of like a project engagement.

Actually, a project engagement is not a bad analogy. I must define what my proposal is about and what it is not about, that is, what is out of scope. My approach is to write a series of blog posts, each covering a specific topic, and then look back to see what I end up with when I am done. If I approach it as if I were writing a book, or even like that Agile 2007 paper, I might never finish it.

The Enterprise in Agile Testing

Enterprise here means large-scale software development. The scale can come from a large code base or a large team. I am ignoring the controversial topic of whether a large code base or a large team is a problem that should have been avoided in the first place. They exist; I just want to point out two consequences they have for testing in such an environment.

First, with a large code base, a tester cannot hold the code’s design clearly in his or her head, let alone the intention of every test. Instead, agile testing in enterprise environments requires a comprehensive testing framework. This framework must do more than what JUnit does out of the box, so that anyone (including you) can come back to a test at any time and understand it.
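
As an illustration only, and not Guidewire’s actual framework, here is a minimal sketch of the kind of test I have in mind; the Policy and PolicyBuilder types are made up for this example. A small builder keeps the setup readable, and the assertion message records the intent, so the test still explains itself when someone comes back to it later:

    import junit.framework.TestCase;

    // Hypothetical example: a test that reads as intent rather than setup noise.
    public class RenewalTest extends TestCase {

        public void testExpiredPolicyIsNotRenewable() {
            Policy policy = new PolicyBuilder().expiredDaysAgo(30).build();
            assertFalse("An expired policy should not be renewable", policy.isRenewable());
        }

        // Minimal supporting types, defined inline to keep the sketch self-contained.
        static class Policy {
            private final int daysSinceExpiration;
            Policy(int daysSinceExpiration) { this.daysSinceExpiration = daysSinceExpiration; }
            boolean isRenewable() { return daysSinceExpiration <= 0; }
        }

        static class PolicyBuilder {
            private int daysSinceExpiration;
            PolicyBuilder expiredDaysAgo(int days) { this.daysSinceExpiration = days; return this; }
            Policy build() { return new Policy(daysSinceExpiration); }
        }
    }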

Second, with a large team, it is pretty much impossible to ensure everyone is aware of how important testing is. This is not to say that you should give up on a large team’s continuous improvement in treating testing seriously and writing better tests, but I have found that the line of “zero test breakage” is extremely hard to hold. As a result, some middle ground must be reached between complete awareness and total ignorance. Only in this way is it possible to see results from testing improvement efforts.

Agile

Since Rob and Jim pulled me away from the EJB madness and introduced me to the wonderful world of XP in 2001, “Enterprise” has slowly been restored to its place in my vocabulary. At the same time, the term “Agile” is getting closer and closer to my list of red-flag words.

Agile here refers to the situation where the code is constantly changing. This can happen because the requirements keep changing, or because development is done iteratively through story-driven development. Constant change makes the ability to write concise tests more important. Additionally, the test status of the project must be treated as more than a binary state if testing is to keep pace with development; otherwise development would be paralyzed, because there will almost always be a test broken here and there, and the turn-around time for testing checked-in code is not as short as ten minutes.

Testing

So you have a large project code base that you keep changing, with a team whose members have mixed skills. You need to ensure that the code (including the tests) you write is of high enough quality that two months from now you can still read it, understand what it does, understand why it does that, and change it. At the same time, you want to give others the time and tools to adjust to test-infected development, and hopefully test-driven development eventually.

Content and Structure

So, the above is the introduction. The items on my mind are as follows; I’ll update the links as I post them.

  • Test utilities like assertion, builder
  • TestBase with annotations for test environment configuration
  • ToolsHarness, a continuous integration server that treats tests individually
  • Active and stable branch, localizing the damage