Automation of Burn-up and Burn-down charts using GScript and Entrance

I have always found that burn-up and burn-down charts are very informative and fit iterative, story-based development very well. On every project that I work on, I try to figure out different ways to generate burn-up and burn-down charts.

Two months ago, I took on the job of putting Platform on Sprints. After some consideration, I decided to follow the setup that I have for the AF team, creating the stories in the form of JIRA issues. However, the chart generation that I had for the AF team was still semi-manual, which meant it took a couple of minutes to download the data and a couple more to update the stats every morning. The worst part was that when I got busy or sick, I would forget.

So my first action item was to figure out how to generate the same kind of charts with the push of a button. The idea seemed easy enough:

  1. Figure out the search criteria to retrieve all the JIRA issues of the backlog.
  2. Count the issues that are in different states.
  3. Update the data with the counts, and check it into Perforce.
  4. Refresh the chart with the updated data.

Numbers one and two were actually not that hard, because Guidewire GScript has nice web services support. With a few tries, I was able to count the beans.
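The bean counting can be sketched like this (in Python for illustration; the original used GScript against JIRA's web services, and the issue list here is a stand-in for whatever the query returns):

```python
from collections import Counter

# Column names match the CSV report below; the order matters.
COLUMNS = ["Closed", "Deferral Requested", "In QA",
           "Open Stories", "Open New Features", "Open Bugs"]

def count_by_state(issues):
    """Count issues per reporting state; each issue is a dict with a 'state' key."""
    counts = Counter(issue["state"] for issue in issues)
    return [counts.get(col, 0) for col in COLUMNS]

def csv_row(date, day, issues):
    """Format one line of the stats file, e.g. '10/22/2009,50,58,...'."""
    return ",".join([date, str(day)] + [str(n) for n in count_by_state(issues)])
```

Each morning's run appends one such row to the data file.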

Here is an example of the data generated. I think you get the idea just by looking at it.

Date,Day,Closed,Deferral Requested,In QA,Open Stories,Open New Features,Open Bugs
10/09/2009,41,55,0,1,13,7,40
10/12/2009,42,55,0,1,14,7,40
10/13/2009,43,56,0,0,14,7,40
10/14/2009,44,56,0,0,16,7,41
10/15/2009,45,56,0,0,21,8,42
10/16/2009,46,58,0,1,19,8,42
10/19/2009,47,58,0,2,28,8,42
10/20/2009,48,58,0,6,26,8,42
10/21/2009,49,58,0,6,26,8,42
10/22/2009,50,58,0,7,25,8,44

Number three took less time but required a bit of research, because the Perforce Java library's API is not exactly straightforward.
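The same check-in step could also be done through the p4 command-line client instead of the Java library; a minimal Python sketch (the depot path and description here are made up for illustration, and this assumes p4 is on the PATH and logged in):

```python
import subprocess

def p4_commands(depot_path, description):
    """Build the two p4 command lines needed to open and submit one file."""
    return [
        ["p4", "edit", depot_path],            # open the stats file for edit
        ["p4", "submit", "-d", description, depot_path],  # submit just that file
    ]

def check_in(depot_path, description):
    """Run the commands; raises if p4 reports an error."""
    for cmd in p4_commands(depot_path, description):
        subprocess.run(cmd, check=True)
```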

It took me a while to figure out how to do the last one. After looking into JFreeChart and the Google Chart API, I eventually turned to my dear friend Tod Landis, who is also my partner at Entrance, and he quickly drafted an Entrance script for me. Based on it, I was able to write, within a few hours, a template that can be used for all teams.


PLOT
  very light yellow area,
  light yellow filled circles and line,
  DATALABELS
  very light orange area,
  light orange filled circles and line,
  very light red area,
  light red filled circles and line,
  very light blue area,
  light blue filled circles and line,
  very light gray area,
  dark gray filled circles and line,
  very light green area,
  dark green filled circles and line,
  all AXISLABELS
WITH
  ZEROBASED
  TITLE "Sprint"
  TITLE X "Days"
  TITLE Y "Points"
  SCALE Y 0 100 0
  CLIP
  legend
  gridlines
  collar
  no sides
SELECT
  `Open Bugs`,
  `Open Bugs`,
  date,
  `Open New Features`,
  `Open New Features`,
  `Open Stories`,
  `Open Stories`,
  `In QA`,
  `In QA`,
  `Deferral Requested`,
  `Deferral Requested`,
  `Closed`,
  `Closed`,
  day
from report;

Please note that this is only the final PLOT script; other SQL scripts run before it to import the data into the MySQL database, sum the data to produce a stacked chart, and even out the labels.
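As an illustration of that summing step (an assumption about what the pre-PLOT SQL does, written here in Python): to paint plain area series as a stacked chart, each series becomes the running total of itself plus every series drawn below it.

```python
def stack(series_by_name, order):
    """Return cumulative series: each series is the sum of itself and all
    series earlier in `order`, so areas can be painted back to front."""
    stacked = {}
    running = None
    for name in order:
        values = series_by_name[name]
        # First series is kept as-is; later ones accumulate.
        running = list(values) if running is None else [a + b for a, b in zip(running, values)]
        stacked[name] = list(running)
    return stacked
```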

And I now have this chart generated automatically every morning with the help of the Windows scheduler.


Phoenix First Two XP Sprints

Phoenix has finished Sprints 5 and 6, which are its first two XP Sprints.

An XP team always goes through four stages: "forming, storming, norming, performing". The first Sprint felt like the storming stage, where we were trying to figure out the best way to get the code in without spending too much time on upfront design. At the same time, we were also getting used to pair programming.

Even though pair programming has become an old trick for me, I still feel that my pairing skill has gotten worse during the past three years of working solo. The second Sprint felt a lot better, and I am hoping to keep this trend.

Items worth noting:

  • We modified the lava lamp to have a green light on when everything is good. Even though it is redundant, it has a very positive effect on us. The only thing we might need to watch out for is that someone mentioned the lamps could be a fire hazard, because they get very hot by the end of the day. So we are going to turn them off at the end of the day. That is when I found out that the X10 remote controller does not work, so it is back for replacement now.
  • The lava lamps are helping us get into the habit of treating broken tests as the highest priority. Due to the nature of Phoenix, we have had some interesting test breakage already: tests that only break on the server, tests that only break on Linux, and a test that hung. One interesting discovery is that each time we are forced to figure out what is wrong and fix it, our tests end up making better sense and becoming more behavior-driven, even when I had been planning to settle for hacks to keep the tests passing!
  • At the beginning of the project, we chose to create just enough stories to get us through the first Sprint, then created a few more for the second Sprint. Looking back, I think that was a good choice. The stories that we create now are very different from, and much better than, the earlier ones. I think that is because at the beginning, your system has literally nothing; it would take a very good story writer to come up with a list of stories that really fit the "INVEST" criteria. I am not saying it is impossible. I just think that two Sprints of bad stories is not a bad price to pay to get the ball rolling as early as possible and avoid a lot of hassle learning, teaching, and debating good stories vs. bad stories.

Lava Lamp with CruiseControl

As we are getting the Phoenix project under way, I am trying to get it started right by introducing more XP practices. The first three things that we are trying are Pair Programming, Test-Driven Development, and Continuous Integration. This blog post is about Continuous Integration.

Actually, Guidewire has already built an internal tool, ToolsHarness, to handle continuous integration, as I have written in "Managing Tests with ToolsHarness, Individually". The only difference that I want to introduce for the Phoenix project is to fix broken tests AS SOON AS POSSIBLE.

What this means is that I want the testing status of our branch to show right in our faces, without us having to launch a browser, so that we know to take action the moment a test breaks.

I talked to the developer who manages ToolsHarness, and he wrote a servlet that serves information about broken tests and test status like this picture, except in a single HTTP GET. Then I set up CruiseControl (version 2.8.2) with the X10 publisher, following the setup described in the blog post "Bubble, Bubble, Build's In Trouble".

One thing about the normal lava lamp setup has always bugged me in the past: the "testing" state of the continuous integration server. When you have a broken test, the red lava lamp will be on, and you just have to remind yourself that the fix is in and the tests are running. In some projects, I have used a "project soundscape", so that when the tests finish but are still broken, you will know about it. But if you happen to step outside, you will miss it. Or if you just came in, you have to check the browser or ask others.

So this time, I have done it a little differently, taking advantage of the fact that CruiseControl is not the process running the tests. I bought two lava lamps, one reddish and the other blue, and set them up as two independent lamps:

  • Red Lava Lamp for broken tests: When there are broken tests, it will be on, otherwise, it will be off
  • Blue Lava Lamp for testing status: When there are tests running, it will be on, otherwise it will be off

In this way, you have four states to display:

  • Neither is on: All tests pass and the tests are up-to-date
  • Blue is on and red is off: All tests pass so far, but there are tests running against newer changes
  • Blue is off and red is on (see below): You have broken tests, and no code checked in to fix it
  • Both blue and red are on (see below): You have broken tests and someone has checked in new code (hopefully to fix it)


The setup is pretty straightforward, except that the CruiseControl 2.8.2 release is missing two crucial files, "lib/win32com.dll" and "lib/javax.comm.properties", which the X10 publisher needs to work. That, plus my missing a tiny but crucial detail in the documentation, caused a three-hour hair-pulling experience, and that was with Jeffrey coming to the rescue through GTalk. I am going to submit a patch for the release script to include those two files, along with documentation containing the following checklist:

  • You should provide all FOUR X10-related attributes for the element, so that you are aware of them and can make sure they are correct. The four attributes are as follows:
    • “houseCode” and “deviceCode” are for X10 module configuration.
    • "port", with a value of COM1, COM2, etc., matching where you plug in the COM module.
    • The last one is “interfaceModel”, which you should really double check with the COM module that you have.
  • Make sure “javax.comm.properties” is in your CruiseControl lib directory (should be there after 2.8.3)
  • Make sure you copy “win32com.dll” from CruiseControl lib directory (should be there after 2.8.3) to your Java bin directory
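Putting the checklist together, the publisher section of config.xml ends up looking something like this sketch (the houseCode, deviceCode, port, and interfaceModel values here are placeholders; match them to your own X10 module and serial port):

```xml
<publishers>
  <!-- All four X10 attributes spelled out explicitly, per the checklist -->
  <x10 houseCode="A"
       deviceCode="1"
       port="COM1"
       interfaceModel="CM11A"/>
</publishers>
```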

In the end, I would like to say that I am a satisfied ci-guys customer!


Pair-programming

Pair-programming is not a common practice at Guidewire right now.  I hope one day more people at Guidewire will agree with this post.

http://www.nomachetejuggling.com/2009/02/21/i-love-pair-programming/

Right now, we are trying it out at Phoenix project.


JIRA Story Wall

With the "shared dashboard" feature of JIRA, we have been experimenting with a shared dashboard that can serve as a virtual story wall. Here is one version.

AF stories are in the form of JIRA items; this way, JIRAs created by other teams for bug fixes or support can be rolled into one backlog. Creating stories in the form of JIRAs is not nearly as trivial and easy as creating stories on index cards. But once you pass that phase and get used to it, it does bring a lot of the benefits of a digital medium.

(Note, I had to take the image down because of its detailed information about the development status on items.)

On the left, the first section shows the stories for the current Sprint with status and the person who is working on them. Each person is to finish the JIRAs assigned to him or her, before picking the ones assigned to the general bucket (AF General).

The second section shows the stories allocated for the next Sprint, grouped by assignee and component. The third section shows the full current backlog by component and priority; we used it a lot when trying to figure out what to work on next or what to push to the next release. The last one is the backlog for the next milestone.

During the Sprint, some issues will come up. The most urgent ones will be pulled into the current Sprint to be dealt with right away; the others will either be added to the next Sprint or added to the appropriate backlog. At the beginning of the Sprint, after counting the JIRAs already added to the Sprint and carrying over the ones from the past Sprint, we select more JIRAs from the backlog by looking through the components.

On the right, the first section is the list of the JIRAs that the current user is working on (In Progress). It has been pretty useful to me: I can come in and get started right away by looking at this short list. However, I just learned today that everybody else just looks at the JIRAs assigned to him or her in the current Sprint.

The JIRAs in the next list are the ones that have been marked as resolved by developers but not yet verified by QA. They are sorted in the order that QA would like to process them, and the QA team uses this list to pick the JIRAs to verify during the Sprint.

The last section on the right contains the JIRAs that have not been added to any backlog. This way, every JIRA gets looked at before it is added to a backlog. One thing about using JIRAs as stories is that anyone can create a JIRA and assign it to your team, which means your backlog can grow without you knowing it. With this extra step of adding newly created JIRAs to the appropriate backlog, we are always aware of any new work coming our way.


Burn-up and Burn-down Charts

I have always thought Sprint reporting is a major communication tool, used within the team as well as with the outside. It is the time for the team to take a step back, look at the project as a whole, compare notes, and make continuous improvements. It is also the time for the team to report progress and any difficulties encountered, so that the stakeholders can adjust plans and provide help if needed.

Burn-up and burn-down charts are my favorite reports, because they fit very well into the story-based iteration model of project development. Anyone who understands stories and iterations (not that they are always easy to learn) can understand these charts very easily. I also find that these charts can generate more questions and lead the team in the right direction.

Burn-down Chart

The burn-down chart is straightforward and easy to understand. It measures the burn rate of the stories in units of story points, and it makes it really easy to understand different ways of predicting the outcome of the project by projecting the future velocity of the Sprints.
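The projection itself is simple arithmetic; a sketch, assuming you know the points remaining and pick a guessed future velocity per Sprint:

```python
import math

def sprints_to_finish(points_remaining, velocity_per_sprint):
    """How many more Sprints, at the given velocity, until the backlog burns down."""
    if velocity_per_sprint <= 0:
        raise ValueError("velocity must be positive to ever finish")
    return math.ceil(points_remaining / velocity_per_sprint)
```

Drawing one prediction line per candidate velocity is what makes the different outcomes easy to compare on the chart.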

All the iteration tracking tools that I have tried have this support. This one is made by Pivotal Tracker.

For those who use JIRA or good old story cards to track iterations, it is not hard to produce this chart either; the worst part is figuring out where to use which formula. The following are from two other projects, made with Microsoft Excel and Google Spreadsheets. With a customized tool, I get to explore different styles.

In the first one, the stories are divided into "must-have" and "everything else" categories and tracked at the same time. The prediction lines are shown in different colors. In the second one, the progress is shown along with the burn-down, so that when the chart is actually "burning up", you can see that it is not caused by losing velocity.

Burn-up Chart

For a project with just a single coach, the burn-up chart can be a great help. It can explain a lot of concepts in story-based, iterative development, and it can help the coach recognize patterns in the development and take action to adjust the direction of the team.

I have found that the burn-up chart is always a bit harder to understand, and it might look intimidating. So if you are introducing it for the first time, you should not just paste it in a report and email it to others. It is best to show it in person and let the conversation start.

The first chart shows a project where development is fairly smooth and QA can just keep up with the stories being finished. On the other hand, the project requirements are very volatile. The interesting thing to point out is that because the team focuses on one Sprint at a time during the release, and one story at a time during the Sprint, the dramatic scope changes did not affect the development at all.

The second one is a quite typical burn-up chart, where the team discovers new cases as they go and adds that understanding to the backlog in the form of stories.

Sprint Burn-up Chart

I have also found that a burn-up chart for the Sprint is useful for figuring out what happened during the Sprint. I think this is what the Scrum book calls a "Sprint Signature".

A Sprint burn-up chart should be used strictly internally, because only the team that has just been through the Sprint can look at it, talk about it, and then draw conclusions. It should never be used for any other purpose, in my humble opinion.


Staying Agile by Going off “Agile”

This is a blog post about a statement that I finally made after reading "The Decline and Fall of Agile".

I am going off “Agile”

No, I am not going to give up test-driven development. In fact, I am doing more of it by adopting more behavior-driven development, which is actually harder in certain cases. It helps me understand the code and verify the design (I believe it verifies rather than "drives" the design nowadays, but that is another post).

No, I am not going to give up aggressive refactoring. Every time, and I do mean EVERY TIME, I slack on it, I end up paying the price one way or another and kicking myself. I have been proud of every single line of code that I have produced (cannot say that for all the code that I have inherited and worked on), and it always serves me and my team well.

No, I am not going to give up on iterative development in the form of Iterations or Sprints. They help my teams focus, avoid distractions, and still respond to requests from outside the team with crystal-clear transparency.

So what is it?

I am going to take “agile” off my vocabulary in all communications.

Rather than saying

“Not able to have QA accepting the stories as soon as they are finished is not agile”

I’ll say

“We need to get those finished stories accepted as soon as possible, so that we can close the feedback loop. When they are accepted, we know we are doing a good job. And when they are not, we can trace back to our thoughts as we were developing them and understand where it went wrong”.

Rather than saying

"Not setting a goal at the beginning of the Sprint and verifying it through the Sprint signature is not agile"

I’ll say

“We need to establish a way to provide feedback regarding our work and make continuous improvement to the way we work, so that we can provide better value to the people who pay us. One way we can do that is to look back at our progress in the past Sprint, talk about our experiences and thoughts, and come up with action items to make things better”

Apparently this will make conversations longer, because I have to present proof rather than just point at a good book (dozens of good books, as a matter of fact). Sometimes I will have to wait, patiently, for an opportunity to present itself so that I can use it as an example to persuade others to slow down, do it right, and do it well.

Why

I have been thinking along this line for a while. I read "Good Agile and Bad Agile" and felt annoyed, because there is truth in what he is saying. From time to time, I get annoyed by negative comments that do not even make sense to me. I wrote one post, "Things You Cannot Get Certified For", and argued hard on several newsgroups that I subscribe to.

Very soon I got tired of it. Using the word "agile" has caused more distraction than it is worth. Practices are sometimes picked on literally and attacked. Rather than looking at the value something is trying to bring, many seem to look at the cost (time, tools, processes) first. It gets attacked, it gets debated, and in the end nothing is done and the bad things just keep going. And you get people from all over the world writing about how "agile" did not work for them and laughing at anyone who is interested in trying.

To add insult to injury, you can also hear "agile" used in phrases like "Let's be agile about it, instead of insisting on ...". It is really hard to argue in this situation, because you cannot simply say "no, let's not be agile about it, because we should insist on going through this three-hour meeting to make sure that our stories are up to standard".

I have been avoiding throwing "agile" around for a while, and I think I am happy with the result. I have also been ignoring the bad usage of "agile" out there so that I can stay healthy and focus on bringing agility to my teams. (I swear this is the last time.) A month ago, I went through all my blog posts and took "agile" out of the labels and categories.

I had been thinking about writing a post like this, and I finally decided to do it after reading James' post "The Decline and Fall of Agile".