April 2007

Monthly Archive

Automated Acceptance Testing

Posted by on 12 Apr 2007 | Tagged as: Agile, Architecture

The following paragraphs came from an email I sent in response to questions about FitNesse and StoryTestIQ, which can be found on the Scrum development mailing list here. I was reminded of this post to the mailing list by a recent entry on my friend’s blog here.

Good day Anne,

I will preface my email with the fact that I am one of the developers for StoryTestIQ (aka STIQ) and that we use it on many projects currently at SolutionsIQ. We created STIQ out of a need to help our Product Owner describe what they were asking for in their feature requests and ultimately user stories. STIQ is a combination of Fit, FitNesse, and Selenium with some special sauce that allows you to do both web UI and beneath the UI acceptance testing.

Here is a scenario of how we use it:

Upon completion of the Sprint Planning Meeting we come out with the following artifacts related to the development of our acceptance tests:

* User Stories the team has committed to

* Acceptance Criteria (aka “Confirmation” from Ron Jeffries description of a user story)

* Some high level communications about the user interactions which have been formally or informally captured

Our teams immediately go to work on creating automated Acceptance Tests using this information and further collaboration with the Product Owner and other Subject Matter Experts (SMEs). After we have built up a good number of Acceptance Tests for our applications, we collect a set of utility scripts that get us to specific parts of the application or handle repetitive tasks such as logging in as different types of users. Usually we can put together scaffolding for our Acceptance Tests rather quickly using these utility scripts, and much of the collaboration happens after this.

After setting up our Acceptance Tests we will “tag” them with the name of the Sprint so that we can create a test suite which represents all Acceptance Tests for the Sprint. All of the individual tests collected into this suite should fail to begin with, because we have encoded the expectations of the User Story’s Acceptance Criteria in the tests. Once a user story has tests ready (meaning failing tests created with acceptance of the tests from our Product Owner), we begin coding using TDD and potentially creating more QA regression tests which go beyond the Acceptance Tests. These extra tests may go into a different tool (such as JMeter, QTP, DBUnit, xUnit, etc.) or could be extra tests added to STIQ.
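The tagging-and-suite idea above can be sketched in plain Python. This is a hypothetical convention, not STIQ’s actual mechanism: the `sprint` decorator, the `MemberDiscountTest` story, and `apply_member_discount` are all invented for illustration.

```python
import unittest

def sprint(name):
    """Class decorator that tags a test case with a sprint name (hypothetical convention)."""
    def mark(cls):
        cls.sprint = name
        return cls
    return mark

@sprint("Sprint 12")
class MemberDiscountTest(unittest.TestCase):
    """Acceptance criterion from a hypothetical user story: members get 10% off."""
    def test_member_discount_applied(self):
        self.assertAlmostEqual(apply_member_discount(100.0), 90.0)

def apply_member_discount(total):
    # Before development begins, a stub here would make the test fail, as intended.
    return total * 0.9

def suite_for_sprint(name, candidates):
    """Collect every test class tagged with the given sprint into one suite."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    for cls in candidates:
        if getattr(cls, "sprint", None) == name:
            suite.addTests(loader.loadTestsFromTestCase(cls))
    return suite
```

Running `suite_for_sprint("Sprint 12", [...])` yields exactly the tests committed to in that Sprint, which is the suite the Product Owner reviews.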

Once the team has developed all of the artifacts needed to meet their Definition of Done and the Acceptance Tests are all passing for a User Story, the User Story is “accepted” with validation from the Product Owner, hopefully during the Sprint. We get continual feedback through a Continuous Integration tool that we use with STIQ, which shows whether all Acceptance Tests included in the build are passing (meaning the development has been worked through to make each test “pass”).

We are not perfect in creating all Acceptance Tests at the beginning of the Sprint with the Product Owner, SMEs, business analysts, etc. But we do get a very good start on capturing the intent of the Sprint deliverables. There are many times that development of code to satisfy an Acceptance Test identifies a potential issue which the Product Owner negotiates with the team to resolve. There are also times that the Product Owner, upon seeing the actual working code, decides to negotiate on specific details of an Acceptance Test. I believe that these tight feedback loops with the Product Owner increase our ability to deliver closer to what the customer wants without as much rework.

I hope this helps, and I would be interested in how you decide to move forward with Acceptance Testing, mostly in the automated capacity. In full disclosure, it is not an easy practice to work into your daily routines. There are many bumps and bruises along the way, but once it starts to settle out things get a whole lot better. I know for myself that I do not like creating software any other way.

Clean Up, Clean Up, Everybody, Do Your Share

Posted by on 08 Apr 2007 | Tagged as: Agile, Architecture, Scrum

The Playroom

While playing dolls with my 4 year old daughter in her designated play area, I took a look around at the disorganization which had emerged prior to and during this week that I was taking care of the kids. It became apparent that I was unable to serve my 3 month old’s eating and changing needs and also stay on top of each toy my daughter played with during the day. This was the case even though my daughter is above average at the “play with one toy, put it away, and grab the next toy” concept. I gazed at tiny doll clothing strewn throughout wicker play baskets and thrust into the depths of the playhouse. Tea party sets were placed atop tables and toy boxes. As my daughter rang the doorbell of the Barbie playhouse and waited for the doll I controlled to answer the door, I began to associate what I was seeing with the accrual of technical debt in software.

Dolls everywhere

It was only about 1 month prior that my wife and I had worked with our daughter to clean and organize the play area. At that time we were quite satisfied that each toy was put in a proper place, easily accessible, and easy to put back in its place. Over the next couple of weeks the plan seemed to be going off without a hitch. Now that I was sitting here playing Barbie and noticing the state of the play area, it was all becoming clearer. We hadn’t noticed the small, incremental stages of messiness that were growing beneath the clutter-covering facade. How did the play room get like this?

Upon reflection I began to recall moments during this time span when I had been in the play room playing with my daughter and small things had been neglected, adding to the slowly gathering clutter. The time we were playing the memory game and I got up to answer the phone; caught up in conversation or wandering around the house, we never went back to the play room. I later found some of the memory cards under the play table, behind a collection of dolls who were in trouble based on my daughter’s assessment and thus put in the corner. The multiple times my daughter and I decided to play for 5 to 10 minutes while my wife finished getting ready, and then rushed putting toys away to leave the house quickly. And the list goes on and on.

Technical Debt

The term “technical debt” was originally coined by Ward Cunningham in the early nineties, according to research I found by Martin Fowler. Here is a description from the technical debt page of what is and what is not considered technical debt:

  • Technical Debt includes those internal things that you choose not to do now, but which will impede future development if left undone. This includes deferred refactoring.
  • Technical Debt doesn’t include deferred functionality, except possibly in edge cases where delivered functionality is “good enough” for the customer, but doesn’t satisfy some standard (e.g., a UI element that isn’t fully compliant with some UI standard).

Although the above items address technical debt from a developer-centric perspective there are other types of technical debt which also have negative effects on a project’s delivery capabilities. These come from many other disciplines, including:

  • Quality
  • Configuration Management
  • Design
  • Platform experience

Quality Debt

Test automation has become more widespread over the past few years. As agile software development becomes more prevalent, an emphasis on testability and automation will drive even better technological solutions to current test automation issues. Agile projects use an iterative and incremental approach to delivery of features. It is important to be as close to “shippable” as reasonably possible. Our traditional software development practices, in my experience, do not enable teams to meet the needs of an agile project while also maintaining a consistent cost per feature as the codebase grows.

Cumulative story points per sprint

The use of agile software development practices which promote building integrity into systems continually throughout the development process is essential to combat quality debt. Practices which have been found to support building integrity in are automated acceptance tests, test-driven development, and continuous integration. Automated acceptance tests ensure that the software being created meets the needs of the customer. When used in a test-first approach, they let us define “done” for each feature earlier in the development process and therefore drive development of the right software. We use tools such as StoryTestIQ, Fit, GreenPepper, and Selenium to create acceptance tests. Test-Driven Development (TDD) increases a team’s ability to maintain high test coverage, which minimizes code quality concerns. Feedback from automated acceptance tests and unit tests developed using TDD helps a team know the state of the system under development. To fully utilize this feedback, we can use continuous integration, which builds, runs all tests, deploys, and reports the results each time the project’s source repository is modified. If the continuous integration environment shows that the current state of the system is broken, the team can investigate and fix the issue immediately instead of waiting for QA to point it out after the fact.
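The continuous integration gate described above can be sketched in a few lines. This is a minimal illustration, not any particular CI tool: `StandInSuiteTest` stands in for a project’s real unit and acceptance tests, and `build_status` is a hypothetical summary function a CI server would evaluate on each commit.

```python
import unittest

class StandInSuiteTest(unittest.TestCase):
    """Stand-in for the project's full unit and acceptance test suite."""
    def test_arithmetic_still_works(self):
        self.assertEqual(2 + 2, 4)

def build_status():
    """Run every test and summarize the result, as a CI server would after each commit."""
    suite = unittest.TestLoader().loadTestsFromTestCase(StandInSuiteTest)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    # A broken status is the signal to stop and fix immediately, not to keep coding.
    return "passing" if result.wasSuccessful() else "BROKEN"
```

The value is in the feedback loop: the status is recomputed on every change, so a “BROKEN” result points at the most recent commit rather than at weeks-old work.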

Quality debt is usually accrued because teams are not allotted sufficient time to test the features developed for the release. This leads to the release of software which is buggy and difficult to maintain. By using an iterative and incremental approach to delivering features which are tested, coded, and integrated we help to mitigate this risk. Ensuring that the system always runs and has integrity built in throughout the development life cycle will lead to decreased quality debt.

Configuration Management Debt

Software configuration management is a discipline whose importance in contributing to iterative and incremental delivery is often overlooked by teams. If builds, source control processes, and change management techniques are allowed to become too complicated, they can have a tremendous negative effect on the team’s overall productivity. In my experience, teams that share responsibility for build scripts and their designs tend to manage builds more effectively in terms of simplification and duration. Not all projects need complicated source control branching strategies, which impede development and integration of project components. By using practices such as TDD and continuous integration we are able to conduct all development in a single source repository location. Reasonable simplification of these processes, along with continuous maintenance of the artifacts and tools involved, will facilitate effective software configuration management.

Design Debt

The design of your software can increase the cost per feature if it is neglected. Have you ever been part of a legacy project where it seemed almost impossible to add the feature asked for because you weren’t sure what would break once it was added? The design debt inherent in these software projects has made the product so brittle that the value per feature added may not be enough to continue paying the cost of delivery.

Value per feature added

In order to combat design debt we can use multiple techniques such as refactoring, system of names, and design through tests using TDD.

From the refactoring home page maintained by Martin Fowler:

“Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior.”

Refactoring is the third step in the TDD mantra: write test, write code, refactor. Bob Martin describes TDD as a design technique in this quote:

“The act of writing a unit test is more an act of design than of verification.”

When I describe refactoring to people I call it the “process of restructuring your code to an acceptable design”. Depending on the system you are developing, what counts as an acceptable design will vary. By refactoring mercilessly to an acceptable design we are able to maintain the design of our running system. This does not mean that we no longer hold design sessions to determine a direction for implementing a feature, but these sessions should produce just enough design to get started. Also, as a team we should decide whether an artifact needs to be captured from a design session. Ask the following questions:

  • Is there value in formally capturing our design in a document?
  • Who would get value from reading this design document?
  • Is the customer willing to invest money to create this design document?
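To make Fowler’s “without changing its external behavior” concrete, here is a minimal before-and-after sketch. The shipping-cost function and its pricing rules are invented for illustration; the point is that both versions return identical results for every input, so only the internal structure changes.

```python
def shipping_cost_before(order_total, is_member):
    # Original: nested conditionals bury the pricing policy.
    if order_total > 100:
        if is_member:
            return 0.0
        else:
            return 5.0
    else:
        if is_member:
            return 3.0
        else:
            return 8.0

def shipping_cost_after(order_total, is_member):
    # Refactored: the free-shipping threshold and per-group rates are explicit,
    # but every (order_total, is_member) pair produces the same answer as before.
    FREE_THRESHOLD = 100
    if is_member:
        return 0.0 if order_total > FREE_THRESHOLD else 3.0
    return 5.0 if order_total > FREE_THRESHOLD else 8.0
```

A unit test suite that exercises the old behavior is what makes this restructuring safe: it passes unchanged against the new version.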

Another technique that we can use is a pattern called System of Names, as described by Ward Cunningham. The basic premise of this idea is to have a glossary or domain model which can be communicated by all parties involved in describing and implementing a system. Have you ever been in a requirements discussion and found that people were using a particular term with different definitions, depending on each person’s perspective? Have you ever come into a project where a variable or function name was used in multiple contexts with separate definitions, creating extra complexity for anyone learning the code? By defining our model as we describe and implement features for a system we minimize the confusion and misinterpretation inherent in communication. We should use face-to-face communication whenever possible because documentation, email, and other forms of communication tend to exacerbate both.
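The naming problem above can be shown in a small sketch. These functions and field names are hypothetical; the contrast is between a vague name that carries two unrelated meanings and names drawn from a shared glossary.

```python
# Before: "order" meant two different things, so readers had to guess from context.
def get_order(record):
    # In the billing code this actually returned the purchase order number.
    return record["po_number"]

# After: each name comes from the team's glossary, so each meaning is explicit.
def purchase_order_number(record):
    """Billing-domain term: the identifier on a customer's purchase order."""
    return record["po_number"]

def display_position(widget):
    """UI-domain term: where a widget appears on screen."""
    return widget["position"]
```

Renames like this are cheap refactorings, and each one moves the code closer to the vocabulary the whole team already uses in conversation.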

Platform Experience Debt

It is important on any software project to have the right people on the delivery team in order to plan and execute effectively. With an agile software development approach it becomes apparent much earlier whether or not we have the right people on the delivery team. The iterative and incremental delivery of software makes missing disciplines visible, whether in the vertical market domain, the software platforms, or the team’s chosen practices and patterns. As a team we must understand what experience we need to enable successful delivery of the customer’s software. I recommend revisiting this question frequently, such as at sprint/iteration boundaries, to minimize the impact of any gaps.

Another type of platform experience debt is limited knowledge of a platform or discipline used on a project. Have you ever been on a team where progress was impeded because a particular section of the software or platform could only be modified by one individual on the team, or by a contributor who is no longer working on the project? Software built on platforms and disciplines for which experienced people are scarce will have an increased cost per feature unless the expertise debt can be overcome. Reliance on a single individual from the team or within the organization to maintain particular software artifacts increases risk in terms of cost, schedule, and scope as a project progresses. Find ways to minimize the use of platforms and disciplines for which limited experience is available. Also, look for ways to share knowledge about all aspects of the software across the team over time, as suggested in the description of collective code ownership. It is understandable that some team members will have deeper knowledge in a specific domain, but those individuals should be encouraged, and able, to mentor others on the team in that domain.


Discipline is an essential aspect of successful agile software development. When the pressure is on to deliver, we must decide how much technical debt we are willing to leave in our software. By using iterative and incremental software delivery, along with building integrity into our software continuously throughout the development process, we can meet the challenge of delivering the right software with high quality in a timely manner. This will minimize our accrual of technical debt and lead to a more consistent cost per feature developed. It is not easy to apply the disciplines of agile software development, but as my wife reminded me recently when we were discussing discipline, “You always have to go back to the basics”.

The Song

The title of this blog entry comes from a song that millions of children sing to make the toy cleanup process fun. Here are the words to this song:

Clean up, clean up
Everybody, everywhere
Clean up, clean up
Everybody do your share

Although the song itself may be too embarrassing for software delivery teams to sing while they do their work, I believe the intent behind this song can be a valuable tenet for success.

Daily Scrum from Hell

Posted by on 07 Apr 2007 | Tagged as: Agile, Scrum

This is from the last Scrum Gathering in November 2006.
