Automation of Burn-up and Burn-down charts using GScript and Entrance

I have always found burn-up and burn-down charts very informative and a great fit for iterative, story-based development. On every project I work on, I try to figure out different ways to generate them.

Two months ago, I took on the job of putting Platform on Sprints. After some consideration, I decided to follow the setup I have for the AF team, creating the stories in the form of JIRA issues. However, the chart generation I had for the AF team was still semi-manual: it took a couple of minutes to download the data and a couple more to update the stats every morning. The worst part is that when I got busy or sick, I would forget.

So my first action item was to figure out how to generate the same kind of charts with the push of a button. The idea seemed easy enough:

  1. Figure out the search criteria to retrieve all the JIRA issues in the backlog.
  2. Count the issues in each state.
  3. Update the data file with the counts, and check it into Perforce.
  4. Refresh the chart with the updated data.

Steps one and two were actually not that hard, because Guidewire GScript has nice web services support. After a few tries, I was able to count the beans.
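The counting step itself is just a tally of issue statuses from the search result. The original code is GScript calling JIRA's web services; here is a rough sketch of the same idea in Python, with a hypothetical backlog result in place of the real query:

```python
from collections import Counter

def count_by_status(issues):
    """Tally issues by workflow status; `issues` is a list of
    (key, status) pairs as they might come back from a JIRA query."""
    return Counter(status for _key, status in issues)

# Hypothetical search result for the backlog filter:
backlog = [
    ("PLAT-101", "Closed"),
    ("PLAT-102", "Closed"),
    ("PLAT-103", "In QA"),
    ("PLAT-104", "Open"),
]

counts = count_by_status(backlog)
print(counts["Closed"], counts["In QA"], counts["Open"])  # → 2 1 1
```

The issue keys and statuses above are made up for illustration; the real version queries JIRA with the backlog's search criteria.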

Here is an example of the generated data. I think you will get the idea just by looking at it.

Date,Day,Closed,Deferral Requested,In QA,Open Stories,Open New Features,Open Bugs
10/09/2009,41,55,0,1,13,7,40
10/12/2009,42,55,0,1,14,7,40
10/13/2009,43,56,0,0,14,7,40
10/14/2009,44,56,0,0,16,7,41
10/15/2009,45,56,0,0,21,8,42
10/16/2009,46,58,0,1,19,8,42
10/19/2009,47,58,0,2,28,8,42
10/20/2009,48,58,0,6,26,8,42
10/21/2009,49,58,0,6,26,8,42
10/22/2009,50,58,0,7,25,8,44
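Each morning's counts become one new row like the last line above. A minimal sketch of formatting that row (in Python here, since the original is GScript; the column list is taken from the header above):

```python
from datetime import date

FIELDS = ["Date", "Day", "Closed", "Deferral Requested", "In QA",
          "Open Stories", "Open New Features", "Open Bugs"]

def format_row(day_number, counts, on=None):
    """Build one CSV row in the same column order as the report,
    defaulting any missing status count to zero."""
    on = on or date.today()
    row = [on.strftime("%m/%d/%Y"), str(day_number)]
    row += [str(counts.get(field, 0)) for field in FIELDS[2:]]
    return row

# Reproducing the last line of the sample data:
row = format_row(50, {"Closed": 58, "In QA": 7, "Open Stories": 25,
                      "Open New Features": 8, "Open Bugs": 44},
                 on=date(2009, 10, 22))
print(",".join(row))  # → 10/22/2009,50,58,0,7,25,8,44
```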

Step three took less time but required a bit of research, because the Perforce Java library's API is not exactly straightforward.
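For reference, the check-in step amounts to an open-for-edit followed by a submit. I used the Perforce Java library, but the command-line equivalent is easier to see; here is a hedged sketch (the depot path is hypothetical):

```python
import subprocess

CSV_PATH = "//depot/reports/platform_burndown.csv"  # hypothetical depot path

def p4_submit_commands(depot_file, description):
    """The p4 command-line equivalent of the edit-and-submit sequence:
    open the file for edit, then submit it with a changelist description."""
    return [
        ["p4", "edit", depot_file],
        ["p4", "submit", "-d", description, depot_file],
    ]

def check_in(depot_file, description):
    # Each command raises CalledProcessError if Perforce rejects it.
    for cmd in p4_submit_commands(depot_file, description):
        subprocess.run(cmd, check=True)
```

Something like `check_in(CSV_PATH, "Daily burn chart data")` would then run after the file is rewritten with the new row.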

It took me a while to figure out the last step. After looking into JFreeChart and the Google Chart API, I eventually turned to my dear friend Tod Landis, who is also my partner at Entrance, and he quickly drafted an Entrance script for me. Based on it, I was able to write a template that can be used by all teams within a few hours.


PLOT
  very light yellow  area,
  light yellow filled circles and line,
  DATALABELS
  very light orange  area,
  light orange filled circles and line,
  very light red  area,
  light red filled circles and line,
  very light blue area,
  light blue filled circles and line,
  very light gray area,
  dark gray filled circles and line,
  very light green area,
  dark green filled circles and line,
  all AXISLABELS
WITH
  ZEROBASED
  TITLE "Sprint"
  TITLE X "Days"
  TITLE Y "Points"
  SCALE Y 0 100 0
  CLIP
  legend
  gridlines
  collar
  no sides
SELECT
  `Open Bugs`,
  `Open Bugs`,
  date,
  `Open New Features`,
  `Open New Features`,
  `Open Stories`,
  `Open Stories`,
  `In QA`,
  `In QA`,
  `Deferral Requested`,
  `Deferral Requested`,
  `Closed`,
  `Closed`,
  day
from report;

Please note this is only the final PLOT script; other SQL statements run before it to import the data into the MySQL database, sum the data to produce a stacked chart, and even out the labels.
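The "sum up the data" step is what turns the raw per-category counts into stacked areas: each series is plotted as a running total of itself plus every series below it. The real version is SQL; a small Python sketch of the same transformation (using the sample data's last row):

```python
def stack(rows, fields):
    """Convert per-category counts into running totals across the listed
    fields, so each plotted area sits on top of the previous one."""
    stacked = []
    for row in rows:
        total, out = 0, dict(row)  # copy so the input rows stay untouched
        for field in fields:
            total += row[field]
            out[field] = total
        stacked.append(out)
    return stacked

order = ["Closed", "Deferral Requested", "In QA",
         "Open Stories", "Open New Features", "Open Bugs"]
rows = [{"Closed": 58, "Deferral Requested": 0, "In QA": 7,
         "Open Stories": 25, "Open New Features": 8, "Open Bugs": 44}]
print(stack(rows, order)[0]["Open Bugs"])  # → 142 (the full stack height)
```

The stacking order above matches the bottom-to-top order of the areas in the PLOT script.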

And now I have this chart generated automatically every morning with the help of the Windows scheduler.


Phoenix First Two XP Sprints

Phoenix has finished its Sprint 5 and 6, which are the first two XP Sprints.

An XP team always goes through four stages: forming, storming, norming, performing. The first Sprint felt like the storming stage, where we were trying to figure out the best way to get the code in without spending too much time on upfront design. At the same time, we were also getting used to pair programming.

Even though pair programming is an old trick for me, I still feel that my pairing skills have gotten worse during the past three years of working solo. The second Sprint felt a lot better, and I am hoping to keep up this trend.

Items worth noting:

  • We modified the lava lamp setup to have a green light on when everything is good. Even though it is redundant, it has a very positive effect on us. The one thing we need to watch out for is that someone mentioned the lamps could be a fire hazard, because they get very hot by the end of the day. So we are going to turn them off at the end of each day. This is when I found out that the X10 remote controller does not work, so it is back for replacement now.
  • The lava lamps are helping us get into the habit of treating broken tests as the highest priority. Due to the nature of Phoenix, we have already had some interesting test breakage: tests that only break on the server, tests that only break on Linux, and a test that hung. One interesting discovery is that each time we were forced to figure out what was wrong and fix it, our tests ended up making better sense and being more behavior-driven, even though I had been planning to settle for hacks just to keep the tests passing!
  • At the beginning of the project, we chose to create just enough stories to get us through the first Sprint, then created a few more for the second Sprint. Looking back, I think that was a good choice. The stories we create now are very different from, and much better than, the earlier ones. I think that is because at the beginning, your system has literally nothing. It would take a very good story writer to come up with a list of stories that really meet the "INVEST" criteria. I am not saying it is impossible; I just think that two Sprints of bad stories is not a bad price to pay to get the ball rolling as early as possible and avoid a lot of hassle learning, teaching, and debating good stories versus bad stories.

Burn-up and Burn-down Charts

I have always thought Sprint reporting is a major communication tool, both within the team and to the outside. It is the time for the team to take a step back, look at the project as a whole, compare notes, and make continuous improvements. It is also the time for the team to report progress and any difficulties encountered, so that the stakeholders can adjust plans and provide help if needed.

Burn-up and burn-down charts are my favorite reports, because they fit very well into the story-based, iterative model of project development. Anyone who understands stories and iterations (not that they are always easy to learn) can understand these charts very easily. I also find that these charts generate more questions and lead the team in the right direction.

Burn-down Chart

The burn-down chart is straightforward and easy to understand. It measures the burn rate of the stories in units of story points. It also makes it easy to understand different ways of predicting the outcome of the project by predicting the future velocity of the Sprints.
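The prediction itself is simple arithmetic: divide the remaining points by an assumed future velocity. A minimal sketch, using the average of recent Sprints as that assumption (the numbers are made up for illustration):

```python
def sprints_to_finish(remaining_points, velocities):
    """Predict how many more Sprints are needed if the team keeps
    burning points at the average of its recent velocities."""
    avg = sum(velocities) / len(velocities)
    sprints = 0
    while remaining_points > 0:
        remaining_points -= avg
        sprints += 1
    return sprints

# 60 points left, and the last three Sprints burned 18, 22, and 20 points:
print(sprints_to_finish(60, [18, 22, 20]))  # → 3
```

Drawing the prediction line with the best and worst recent velocities instead of the average gives the optimistic and pessimistic projections.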

All the iteration tracking tools that I have tried support this. This one is made by Pivotal Tracker.

For those who use JIRA or good old story cards to track iterations, it is not hard to produce this chart either; the worst part is figuring out where to use which formula. The following are from two other projects, made with Microsoft Excel and Google Spreadsheets. With a customized tool, I get to explore different styles.

In the first one, the stories are divided into "must-have" and "everything else" categories and tracked at the same time. The prediction lines are shown in different colors. In the second one, the progress is shown along with the burn-down, so that when the chart is actually "burning up", it is clear that this is not caused by losing velocity.


Burn-up Chart

For a project with just a single coach, a burn-up chart can be a great help. It can explain a lot of the concepts in story-based, iterative development, and it can help the coach recognize patterns in the development and take action to adjust the team's direction.

I have found that the burn-up chart is always a bit harder to understand and can look intimidating. So if you are introducing it for the first time, do not just paste it into a report and email it to others. It is best to show it in person and let the conversation start.

The first chart shows a project where development was fairly smooth and QA could just keep up with the stories being finished. On the other hand, the project requirements were very volatile. The interesting thing to point out is that because the team focused on one Sprint at a time during the release, and one story at a time during the Sprint, the dramatic scope changes did not affect development at all.

The second one is a quite typical burn-up chart, where the team discovers new cases as they go and adds that understanding to the backlog in the form of stories.

Sprint Burn-up Chart

I have also found that a burn-up chart for a single Sprint is useful for figuring out what happened during the Sprint. I think this is what is called the "Sprint Signature" in the Scrum book.

A Sprint burn-up chart should be used strictly internally, because only the team that has just been through the Sprint can look at it, talk about it, and draw conclusions. It should never be used for any other purpose, in my humble opinion.