Tuesday, December 8, 2009

what about my old metrics

A few weeks ago I spoke at the PMI chapter meeting http://www.pmi-swohio-chapter.org/present.shtml on some experiences from "my agile journey". The timing worked out really well, being unemployed and all. One would think I'd have more time to blog, but finding a new opportunity is a very challenging, time-consuming, and interesting project. In any case, I appreciated the professional stimulation and engagement that was not related to finding employment. I especially appreciated the questions.
There was one question that caused me to pause. Unfortunately, the woman who asked it came up to me later and apologized for putting me on the spot. In reality, it is exactly those kinds of questions that motivate me to give such talks, and I thank her. If I can find her contact info I will forward this on. I probably didn't give the smoothest answer, so now, some time later, let me try again.

As I recall, the question went something like this: "We've been trying to adopt an agile approach, but we have a problem with one of our established metrics that people track. That metric is the number of days a bug report is open. With sprints, sometimes we don't get to fixing a bug for many weeks, and this is being raised as a problem with our agile adoption."

My quick answer was that you should be considering resolution of the bug as part of sprint planning every 2-4 weeks. If the product owner and team decide that other work is more important, then that's where the priority is. I recommend that you make sure all the necessary stakeholders are participating in sprint planning.

The longer story I didn't have on the tip of my tongue, or the time to deliver, goes like this. First, when a team begins the agile journey they tend to carry over some habits or patterns from the waterfall world and just look at agile as a series of small waterfalls. They look at the work in a sprint as just the development tasks and don't establish any quality goals or metrics that help define what it means to be done with work in the sprint. If they are sprinting along and building up a bunch of bugs that aren't getting fixed, then the person raising the yellow flag about the increase in duration for open bugs is raising a valid alarm. In that case they have a sort of cafeteria-style agile, where some practices are adopted and others are not. This is a common problem, and one that the team should recognize and deal with. A healthy team adopting agile should not be building up an increasing stack of open bugs. Somewhere there should be the idea of "potentially shippable code" every sprint.
Second, in the agile world one needs to look more carefully at what it means for something to be a bug. Let me describe two examples. In case A, when someone clicks the OK button after a certain set of steps, the system responds with "Access violation" or "Invalid object reference". This is clearly an unacceptable response and the result of some type of implementation error. This is a classic bug. However, consider case B: when someone clicks the OK button, the system responds with a message indicating "no x selected". The user looks at the screen and there is only one choice for things of type x. They may think something like "why do I have to select black as a color when black is the only choice available?" The program should be smart enough to realize that and make the selection automatically. This may also be considered a bug.
In case B the problem is most likely due to a flaw in the requirements or a user story. It is sometimes referred to as an emergent requirement. It was a difficult requirement to specify until the customer had some experience with the system and could actually see the problem. The bug represented in case B may be something that everyone recognizes as a bug, but the severity or consequences are not as great as in case A. The team and product owner may prioritize other work to be done before working on a bug like the one in case B. If that's the business decision, then that's how it should go. Maybe the frequency of the user encountering this case is very low. The bug in case B may stay open for several sprints or even across multiple releases. The key point here is that an explicit decision (or multiple decisions) was made not to address case B in favor of something else that had higher business value.
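To make case B concrete, here is a minimal sketch of the default-selection behavior the user expects. The function name and the color field are hypothetical stand-ins, not something from any system mentioned above:

```python
# Hypothetical sketch of the case B behavior: when only one option is
# available, select it automatically instead of rejecting the submission.

def resolve_color(selected, available):
    """Return the color to use, or raise if the user really must choose."""
    if selected is not None:
        return selected
    if len(available) == 1:
        # The emergent requirement: don't answer "no color selected" when
        # black is the only color offered.
        return available[0]
    raise ValueError("no color selected")

# The original case B behavior would have rejected this submission.
print(resolve_color(None, ["black"]))  # prints "black"
```

The code change is usually small; what case B illustrates is that the requirement only became visible once a user actually hit the message.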
Assuming these tradeoffs and decisions are being made, the standing of the traditional bug-open-duration metric is called into question. One would have to consider whether the metric truly supports governance of the development process or whether it is a carryover from a different process. Hence it is a hard question to answer without context.
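As one illustration of that context problem, here is a rough sketch (the records and numbers are made up, not from any team mentioned above) of how the open-duration metric could be reported separately for bugs the product owner explicitly deferred at sprint planning versus bugs that are simply languishing:

```python
from datetime import date

# Made-up bug records; 'deferred' marks an explicit product owner decision
# at sprint planning to prioritize other work ahead of the fix.
bugs = [
    {"id": 101, "opened": date(2009, 10, 5), "deferred": False},
    {"id": 102, "opened": date(2009, 9, 14), "deferred": True},
]

today = date(2009, 12, 8)

def days_open(bug):
    return (today - bug["opened"]).days

# Report the two populations separately: long durations caused by a
# deliberate business decision are a different signal than bugs that
# are simply languishing with no decision behind them.
undeferred = [days_open(b) for b in bugs if not b["deferred"]]
deferred = [days_open(b) for b in bugs if b["deferred"]]

print("avg days open, no decision:", sum(undeferred) / max(len(undeferred), 1))
print("avg days open, explicitly deferred:", sum(deferred) / max(len(deferred), 1))
```

Split this way, a long duration on a deferred bug reflects a business decision, while a long duration on a bug nobody decided about still works as the yellow flag described earlier.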
