October 2007

Monthly Archive

Research, Spikes, Tracer Bullets, Oh My!

Posted on 22 Oct 2007 | Tagged as: Agile, Architecture, Product Owner, Scrum, TDD, XP

A couple of years ago, a team I was working with as ScrumMaster discussed what distinguishes a spike from research from a tracer bullet.  The discussion came about because we had been using these terms loosely with our current customer, which was confusing for both the customer and us.  The loose usage also left too much room for working on things the team had not committed to within a Scrum sprint.

The team sat down and agreed that spike, research, and tracer bullet each mean something different.  Here is what we settled on, along with the indicator for using each:

  • Spike – a quick and dirty implementation, designed to be thrown away, used to gain knowledge – indicator: the team is unable to estimate a user story effectively
  • Research – broad, foundational knowledge-gathering to decide what to spike, or to enable estimation – indicator: the team doesn’t know a potential solution
  • Tracer Bullet – a very narrow, production-quality implementation of an epic/large user story – indicator: the user story’s estimate is too large

This clarification allowed the customer and team to identify when and how each activity should be used.  We agreed with the customer that spikes and research were timeboxed activities, treated as an investment by the customer.  Each was done within one iteration and used to define an upcoming user story’s estimate and a starting point for implementation in the next iteration.  Although estimates were an indicator for spikes and research, they were not the end goal.  The idea of a spike or research is also to:

  • Understand “how” to implement a piece of business value
  • Propose a solution to help the customer make business value decisions
  • Minimize risk hidden in the cost of implementing a piece of business value
  • Control the cost of R&D through the use of an “investment” model

A tracer bullet was used to break down an epic or large user story into smaller chunks, and it could affect the customer’s backlog of features.  If the team discussed a user story on the backlog that introduced a new architectural element, a tracer bullet would be implemented to introduce that element into the software without the overhead of a detailed user interface.  For instance, if we were hooking into our PeopleSoft and Siebel instances and wanted to show a customer’s information combined from both systems, we might have a tracer bullet user story such as:

As a customer service rep, I want to view the customer’s name and multiple system identifiers.

After implementing this user story, we might have many other backlog user stories that reference additional customer information that must be retrieved from either or both systems.
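The tracer bullet story above could be sketched in code.  Everything here is hypothetical: the client classes stand in for real PeopleSoft and Siebel gateways (stubbed so the sketch runs), and only the narrow slice the story asks for is implemented:

```python
from dataclasses import dataclass


@dataclass
class CustomerView:
    """The minimal data the tracer bullet story asks for."""
    name: str
    peoplesoft_id: str
    siebel_id: str


class PeopleSoftClient:
    """Hypothetical gateway to the PeopleSoft instance (stubbed here)."""
    def lookup_id(self, name: str) -> str:
        return "PS-1001"


class SiebelClient:
    """Hypothetical gateway to the Siebel instance (stubbed here)."""
    def lookup_id(self, name: str) -> str:
        return "SBL-0077"


def view_customer(name: str, ps: PeopleSoftClient, sbl: SiebelClient) -> CustomerView:
    # The tracer bullet exercises the full path through both systems end to
    # end, but returns only the name and identifiers -- no detailed UI.
    return CustomerView(name=name,
                        peoplesoft_id=ps.lookup_id(name),
                        siebel_id=sbl.lookup_id(name))
```

In a real tracer bullet the clients would call the live systems; the production-quality plumbing through both, not the stub data, is the point.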

The team put these definitions and indicators up on the wall as a big, beautiful information radiator that we could refer to.  Other teams also started to use these descriptions in their own projects to help clarify essential work with their customers.

Building a Definition of Done

Posted on 05 Oct 2007 | Tagged as: Agile, Architecture, Leadership, Product Owner, Scrum, XP

Joe, the Developer, waltzed into work one sunny Tuesday morning and was approached by Kenny, the Project Manager, asking if the feature Joe was working on was done. Joe had checked in his code to their shared source code repository yesterday afternoon and had unit tested it before doing so. With an emphatic “yes” Joe confirmed the feature’s completion. Kenny sighed in relief and said “great, then we will go ahead and deploy it into the UAT environment for our customer advocates to view”. Joe quickly backtracked on his original answer and blurted out “but it has not been fully tested by QA, the documentation is not updated, and I still need to pass a code review before it is finished”.

Has this ever happened to you? Were you Joe or Kenny? How did you react in this situation? Did it feel like development was not being honest? Did it seem that the Project Manager was assuming too much? We’ve got just the tool for you: the “Definition of Done”.   Following is the list of steps that I use when coaching a team on their own Definition of Done:

  1. Brainstorm – write down, one artifact per post-it note, all artifacts essential for delivering on a feature, iteration/sprint, and release
  2. Identify Non-Iteration/Sprint Artifacts – identify artifacts which cannot currently be done every iteration/sprint
  3. Capture Impediments – reflect on each artifact not currently done every iteration/sprint and identify the obstacle to its inclusion in an iteration/sprint deliverable
  4. Commitment – get consensus on the Definition of Done: those items which can be done for a feature and iteration/sprint

During the brainstorming portion of the exercise it is important to discuss whether or not each artifact is needed to deliver features for release.  Some examples are:

  • Installation Build (golden bits)
  • Pass All Automated Tests in Staging Environment
  • Sign Off
  • Pass Audit
  • Installation Documentation Accepted by Operations
  • Release Notes Updated
  • Training Manuals Updated

It is important to note that these are not features of the application but rather the artifacts which are generated for a release.  Some questions you may ask about each artifact are:

  • Who is the target audience for this artifact?
  • Is this a transitory artifact for the team or stakeholders?
  • Who would pay for this?
  • Is it practical to maintain this artifact?

When identifying non-iteration/sprint artifacts, I usually ask the team to draw a waterline mark below the brainstormed post-it notes.  Look at each artifact written on a post-it note above the line and discuss whether or not it can be done every iteration/sprint for each feature, potentially incrementally.  If it can, leave it above the waterline.  If it cannot, move the artifact below the waterline.
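The waterline step can be pictured as a simple partition.  This toy model is mine, not the team's; the artifact names are drawn from the examples above, and the impediment text is illustrative:

```python
# Map each brainstormed artifact to the impediment that keeps it out of the
# iteration/sprint, or None if it can be done every iteration/sprint.
artifacts = {
    "Pass All Automated Tests in Staging Environment": None,
    "Release Notes Updated": None,
    "Pass Audit": "audit requires an external review cycle per release",
    "Installation Documentation Accepted by Operations": "ops acceptance is batched per release",
}

# Artifacts above the waterline go into the Definition of Done; the rest
# carry their impediments forward for management to work on.
above_waterline = sorted(a for a, impediment in artifacts.items() if impediment is None)
below_waterline = {a: imp for a, imp in artifacts.items() if imp is not None}
```

Keeping the impediment text attached to each below-the-waterline artifact feeds directly into the next step, capturing impediments.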

In the next step, capturing impediments, the team looks at each of the artifacts below the waterline and discusses all of the obstacles which stop them from delivering it each iteration/sprint.  This is a difficult task for some teams because we must not hold ourselves to the current status quo.  I like to inform the team that answers such as “that is just the way it is” or “we can’t do anything about that” are not acceptable, since they cannot be acted on.  The obstacles, no matter how large the effort to remove them may seem, can inform management about how they can support the team in releasing with more predictability.

I have found that many of the obstacles identified by teams in this step cause issues such as an unpredictable release period after the last feature is added.   One such obstacle may be a required independent verification by QA.  There could be many reasons behind this, usually derived from audit guidelines and governance policies, but there may be creative ways to conduct the verification incrementally, which increases the predictability of the release stabilization period.  Over time these obstacles can be removed, and an artifact which was excluded from the Definition of Done for each iteration/sprint because of such an obstacle can be promoted above the waterline.

Once you have your Definition of Done, identified the artifacts which cannot be delivered each iteration/sprint, and captured the obstacles for those artifacts, it is time to gain consensus from the team.  You can use any consensus-building technique you like, but I tend to use the Fist of Five technique.  If the team agrees with the Definition of Done, then we are finished.  If there are people on the team who are not on board yet, it is time to discuss their issues and work towards a consensus.  It is important that all members of the team agree to the Definition of Done since they will all be accountable to each other for delivering on it for each feature.  Once you have consensus, I like to have the Definition of Done posted in the team’s co-located area as an information radiator reminding them of their accountability to each other.
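A quick sketch of the Fist of Five check mentioned above.  The three-finger threshold for “I can support this” is a common convention for the technique, not something stated in this post:

```python
def fist_of_five(votes: dict) -> bool:
    """Return True when every team member shows at least three fingers,
    i.e. everyone can at least support the Definition of Done."""
    assert all(0 <= fingers <= 5 for fingers in votes.values())
    return all(fingers >= 3 for fingers in votes.values())


# Hypothetical votes: anyone showing zero to two fingers has issues
# to discuss before the team re-votes.
fist_of_five({"Joe": 5, "Kenny": 4, "Priya": 3})  # consensus reached
fist_of_five({"Joe": 5, "Kenny": 2, "Priya": 4})  # discuss Kenny's concerns, re-vote
```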

The Definition of Done exercise can have many ramifications:

  • Creation of an impediments list that management can work on to support the team’s delivery
  • Organizational awareness of problems stemming from organizational structure
  • A better team understanding of the expectations for their delivery objectives
  • Team awareness of other team members’ roles and their input to the delivery

If you do not already have a Definition of Done or it has not been formally posted, try this exercise out.  I hope that building a Definition of Done in this manner helps your team get even better at their delivery.  Below is an example of a real team’s Definition of Done:

Definition of Done example