Java

Archived Posts from this Category

Executable Specifications – Presentation from AgilePalooza

Posted by on 06 Aug 2009 | Tagged as: Acceptance Testing, Agile, Architecture, DotNet, General, Java, Open Source, Scrum, Software Architecture, TDD, User Stories, XP

Earlier this year I did a presentation on Executable Specifications at the AgilePalooza conference. It covers working with legacy code, commercial off-the-shelf (COTS) systems, and Acceptance Test-Driven Development (ATDD) using automated acceptance testing tools. The presentation also lists the types of automated acceptance testing tools out there, along with names of actual tools and what they are best used for on projects. I hope it is interesting to you.

Designing Through Programmer Tests (TDD)

Posted by on 27 May 2009 | Tagged as: Agile, General, Java, Scrum, TDD, Uncategorized, XP

To reduce duplication and the rigid coupling of programmer tests to implementation code, move away from classes and methods as the definition of a “unit” in your unit tests. Instead, use the following question to drive your next constraint on the software:

What should the software do next for the user?

The following coding session provides an example of applying this question. The fictitious application is a micro-blogging tool named “Jitter”, from a fictitious Seattle-based company that focuses on enabling coffee-injected folks to write short messages and have common online messaging shorthand expanded for easy reading. The user story we are working on is:

So that it is easier to keep up with their kids’ messages, mothers want to automatically expand their kids’ shorthand

The acceptance criteria for this user story are:

  • LOL, AFAIK, and TTYL are expandable
  • Able to expand lower and upper case versions of shorthand

The existing code already includes a JitterSession class that users obtain when they authenticate into Jitter to see messages from other people they are following. Mothers can follow their children in this application and so will see their messages in the list of new messages. The client application will automatically expand all of the messages written in shorthand.

The following programmer test expects to expand LOL to “laughing out loud” inside the next message in the JitterSession.

import static org.hamcrest.CoreMatchers.equalTo;
import static org.junit.Assert.assertThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class WhenUsersWantToExpandMessagesThatContainShorthandTest {

    @Test
    public void shouldExpandLOLToLaughingOutLoud() {
        JitterSession session = mock(JitterSession.class);
        when(session.getNextMessage()).thenReturn("Expand LOL please");
        MessageExpander expander = new MessageExpander(session);
        assertThat(expander.getNextMessage(), equalTo("Expand laughing out loud please"));
    }

}

The MessageExpander class did not exist, so along the way I created a skeleton of the class to make the code compile. Once the assertion was failing, I made the test pass with the following implementation code inside the MessageExpander class:

public String getNextMessage() {
    String msg = session.getNextMessage();
    return msg.replaceAll("LOL", "laughing out loud");
}

This is the most basic message expansion I could do for a single instance of shorthand text. I notice that there are different variations of the message that I want to handle. What if LOL is written in lower case? What if it is written as “Lol”? Should it be expanded? Also, what if some variation of LOL is inside a word? It probably should not expand the shorthand in that case, except when the characters surrounding it are symbols rather than letters. I write all of this down in the programmer test as comments so I don’t forget about them.

// shouldExpandLOLIfLowerCase
// shouldNotExpandLOLIfMixedCase
// shouldNotExpandLOLIfInsideWord
// shouldExpandIfSurroundingCharactersAreNotLetters

I then start working through this list of test cases to enhance the message expansion capabilities in Jitter.

@Test
public void shouldExpandLOLIfLowerCase() {
    when(session.getNextMessage()).thenReturn("Expand lol please");
    MessageExpander expander = new MessageExpander(session);
    assertThat(expander.getNextMessage(), equalTo("Expand laughing out loud please"));
}

This forced me to use the java.util.regex.Pattern class to handle case insensitivity.

public String getNextMessage() {
    String msg = session.getNextMessage();
    return Pattern.compile("LOL", Pattern.CASE_INSENSITIVE).matcher(msg).replaceAll("laughing out loud");
}

Now make it so mixed case versions of LOL are not expanded.

@Test
public void shouldNotExpandLOLIfMixedCase() {
    String msg = "Do not expand Lol please";
    when(session.getNextMessage()).thenReturn(msg);
    MessageExpander expander = new MessageExpander(session);
    assertThat(expander.getNextMessage(), equalTo(msg));
}

This forced me to stop using the Pattern.CASE_INSENSITIVE flag in the pattern compilation. Instead I tell it to match only “LOL” or “lol” for replacement.

public String getNextMessage() {
    String msg = session.getNextMessage();
    return Pattern.compile("LOL|lol").matcher(msg).replaceAll("laughing out loud");
}

Next we’ll make sure that if LOL is inside a word it is not expanded.

@Test
public void shouldNotExpandLOLIfInsideWord() {
    String msg = "Do not expand PLOL or LOLP or PLOLP please";
    when(session.getNextMessage()).thenReturn(msg);
    MessageExpander expander = new MessageExpander(session);
    assertThat(expander.getNextMessage(), equalTo(msg));
}

The pattern matching is now modified to require whitespace around each variation of valid LOL shorthand. A pattern like \\sLOL\\s would consume the spaces it matches, so the replacement would swallow the surrounding whitespace; lookaround assertions check for the whitespace without including it in the match:

return Pattern.compile("(?<=\\s)LOL(?=\\s)|(?<=\\s)lol(?=\\s)").matcher(msg).replaceAll("laughing out loud");

Finally, it is important that LOL still expands when the characters around it are not letters.

@Test
public void shouldExpandIfSurroundingCharactersAreNotLetters() {
    when(session.getNextMessage()).thenReturn("Expand .lol! please");
    MessageExpander expander = new MessageExpander(session);
    assertThat(expander.getNextMessage(), equalTo("Expand .laughing out loud! please"));
}

The final implementation of the pattern matching code looks as follows.

return Pattern.compile("\\bLOL\\b|\\blol\\b").matcher(msg).replaceAll("laughing out loud");

I will defer refactoring this implementation until I have to expand additional instances of shorthand text. It just so happens that our acceptance criteria for the user story ask that AFAIK and TTYL be expanded as well. I won’t show the code for the other shorthand variations in the acceptance criteria, though a sketch of where the refactoring might land follows below. However, I do want to discuss how the focus on “what should the software do next” drove the design of this small component.
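As a minimal sketch (assuming expansion wordings for AFAIK and TTYL, which the tests above do not specify), the deferred refactoring might eventually take a table-driven shape like this:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class MessageExpander {

    // Assumed expansions; only LOL's wording comes from the tests above.
    private static final Map<String, String> EXPANSIONS = new LinkedHashMap<String, String>();
    static {
        EXPANSIONS.put("LOL", "laughing out loud");
        EXPANSIONS.put("AFAIK", "as far as I know");
        EXPANSIONS.put("TTYL", "talk to you later");
    }

    private final JitterSession session;

    public MessageExpander(JitterSession session) {
        this.session = session;
    }

    public String getNextMessage() {
        String msg = session.getNextMessage();
        for (Map.Entry<String, String> entry : EXPANSIONS.entrySet()) {
            String shorthand = entry.getKey();
            // Word boundaries keep shorthand embedded in words untouched; only
            // all-upper or all-lower case matches, so "Lol" is left alone.
            Pattern pattern = Pattern.compile(
                    "\\b" + shorthand + "\\b|\\b" + shorthand.toLowerCase() + "\\b");
            msg = pattern.matcher(msg).replaceAll(entry.getValue());
        }
        return msg;
    }
}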

Driving software development with TDD, focused on what the software should do next, guides us to implement only what is needed, with programmer test coverage for every line of code. Those with some object-oriented programming experience will tend to implement the code with high cohesion (modules focused on specific responsibilities) and low coupling (modules that make few assumptions about the modules they interact with). This is supported by the disciplined application of TDD: the failing programmer test represents something that the software does not do yet; we modify the software with the simplest implementation that will make the programmer test pass; then we enhance the software’s design in the refactoring step. It has been my experience that refactoring represents most of the effort expended when doing TDD effectively.

My Talk @ SD West 2009 on “Managing Software Debt”

Posted by on 27 Mar 2009 | Tagged as: Acceptance Testing, Agile, Architecture, Distributed Computing, DotNet, General, IASA, Java, Jini/JavaSpaces, Leadership, Maven, Open Source, Podcasts, Product Owner, Ruby, Scrum, Software Architecture, TDD, User Stories, XP

I have uploaded the talk I did at SD West 2009 on Yahoo! Video and here it is:

Top 25 Open Source Projects — Recommended for Enterprise Use

Posted by on 17 Dec 2008 | Tagged as: Architecture, Distributed Computing, General, Java, Maven, Open Source, Ruby, Software Architecture

This is a bit off my usual topics for this blog, but I am a heavy open source user, and this article is something I hope reaches more enterprise operations staff, managers, and executives. I have been building and deploying production applications using open source tools, libraries, and platforms for over 12 years now. Open source tools can do almost anything commercial products are able to do and have transformed the software industry in that time span. The list given in the article contains open source projects that I would recommend and have used in the past, either directly or indirectly, including the *nix tools and libraries shown.

I would like to add to this listing with some of the tools I have come to use often:

  • Maven 2.x+ (http://maven.apache.org/)
  • JBoss (http://www.jboss.org/)
  • Rio/Jini/Apache River (http://incubator.apache.org/river/RIVER/index.html)
  • Apache Commons (http://commons.apache.org/)
  • Subversion (http://subversion.tigris.org/)
  • Apache Web Server (http://httpd.apache.org/)
  • Bouncy Castle (http://www.bouncycastle.org/)
  • Time and Money (http://timeandmoney.sourceforge.net/)
  • Spring Framework (http://www.springframework.org/)
  • Hadoop (http://hadoop.apache.org/)
  • Ruby on Rails (http://www.rubyonrails.org/)

These are some of the open source tools that I have used and still use on my projects. What are your favorites that were not on the list?

Executable Design — A New Name for TDD?

Posted by on 13 Dec 2008 | Tagged as: Acceptance Testing, Agile, Architecture, General, Java, Scrum, Software Architecture, TDD, XP

For multiple years now I have thrown around the name “Executable Design” to describe Test-Driven Development (TDD) and how it is used for design rather than as a test-centric tool. The name TDD itself causes problems for those who are initially introduced to the technique. As a coach I was looking for a way to introduce it without stereotyping it as extra tests inhibiting more code getting delivered.

From my readings of multiple books, articles, and blog postings, along with my own experiences with TDD, the content I am about to distill is not new. This post is entirely about explaining the technique in a way that garners interest quickly. There are multiple pieces to “Executable Design” beyond the basic process of:

  • Red, Green, Refactor or
  • Write Test, Write Code, Refactor

These statements and the technique are the basis for practicing Executable Design, but they are not sufficient for describing the value and nuance of the practice. I will not be able to present it fully in a single blog post, but I do want to present the basic principles.

While in a meeting with a team recently we were presented with a question I have heard often:

“Why should we use TDD?”

There are many reasons, but generic reasoning alone is not sufficient. We discussed the safety net that good code coverage creates. We discussed the reasons system tests do not take the place of unit tests. Then we started to touch on design, and this is where it got interesting (as it usually does about this time for me). Before I describe the rest of this discussion I want to present what led up to this meeting.

A coach that I highly respect seemed a bit preoccupied one day when he wandered into my team’s area. I asked him what was going on, and he told me about some of his issues with the current team he was coaching. He wondered why they were not consistently using TDD in their day-to-day development. The team had allowed a card saying “We do TDD” onto their Working Agreement and were not adhering to it.

I happened to know a bit about the background of this project, which our company has been working on for over 2 1/2 years. There is a significant legacy codebase developed over many more years with poor design, multiple open source libraries included, and heavy logic built into relational database stored procedures. Also, just recently, management on the client’s side had changed significantly, causing quite a shake-up in terms of their participation and guidance of release deliverables. Yet the team was supposed to deliver on a date with certain features that were not well defined. This led me to discuss the following situations that a coach can find their way into:

  1. You could come into a team that has limited pressure on features and schedule and has considered the impact of learning a new technique such as Executable Design. Also, they have asked for a coach to help them implement Executable Design effectively. This is a highly successful situation for a coach to enter.
  2. You could come into a team that has deadline pressures but has some leeway on features, or vice versa, and has considered the impact of learning a new technique such as Executable Design within their current release. Also, they have asked for a coach to help them implement Executable Design effectively. This is somewhat successful, but the pressures of the release rise and fall in this situation and may impact the effectiveness of the coaching.
  3. You could come into a team that has deadline pressures and has not considered implementing Executable Design seriously as a team. Also, they have NOT asked for a coach and yet they have gotten one. The coach and the techniques they are attempting to help the team implement may seem like a distraction to the team’s real work of delivering a release. This is usually not successful and please let me know if you are a person who is somewhat successful in this situation because we could hire you.

The current team situation seemed to be more like #3 above and therefore the lack of success in helping the team adopt TDD did not surprise me. Also, I started to play devil’s advocate and provide a list of reasons for this team NOT to do TDD:

  • At current velocity the team is just barely going to make their release date with the minimum feature set
  • Not enough people on the team know how to do TDD well enough to continue its use without the coach
  • The architecture of the system is poor since most logic is captured in Java Server Pages (JSP) and stored procedures
  • The code base is large and contains only about 5-10% test coverage at this time
  • It sometimes takes 10 times longer to do TDD than to just add the functionality desired by the customer

This is not the full list, but you get the picture. Don’t get me wrong: to me, the list above cries out for Executable Design. But if the team does not have enough experience to implement it effectively, it can seem like overhead with little benefit to show for it.

After discussing this and more that I won’t go into, he told me about a couple of things he could do to help the team. One of them was to work on minimizing the reasons for not doing Executable Design by discussing them with the ScrumMaster and putting them on the impediments list as actions. Some of those actions would go to upper management, who get together each day and resolve impediments at an organizational level. One of the actions was to get our CTO and me into a room with the team so they could ask the question “why should we do TDD?”.

Once we were in the room, it turned out most of the team members had been exposed to TDD through pairing sessions. Some of them had ideas about where TDD was useful and why they thought it was not useful on this project. During the discussion one of the team members brought up a great discussion point:

“One of the problems with our use of TDD is that we are not using it for improving the design. If we just create unit tests to test the way the code is structured right now it will not do any good. In fact, it seems like we are wasting time putting in unit tests and system tests since they are not helping us implement new functionality faster.”

This team member had just said in his first sentence what I instinctively think when approaching a code base. The reason to do TDD is not just to create code coverage but to force design improvement as the code is being written. This is why I call TDD, along with the best known principles and practices for applying it, Executable Design. If you are not improving the design of the application then you are not doing Executable Design. You might just be adding tests.

Some of the principles I have found to help me in applying Executable Design effectively are (and most, if not all, of these are not something I came up with):

  • Don’t write implementation code for your application without a failing unit test
  • Separate unit tests from system and persistence tests (as described in this previous blog entry; a sketch follows this list)
  • Create interfaces with integration points in a need-driven way (as described in this previous blog entry)
  • Always start implementing from the outside in (such as in Behavior-Driven Development and as described in this previous blog entry)
  • Mercilessly refactor the code you are working on to an acceptable design (the limits of which are described in this previous blog entry)
  • Execute your full “unit test” suite as often as possible (as described in this previous blog entry)
  • Use the “campground rules” of working in the code: “Leave the site in better shape than when you arrived”
  • Create a working agreement that the whole team is willing to adhere to, not just what the coach or a few think are the “right” agreements to have.
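As a minimal sketch of the unit/persistence split mentioned above (the suite and test class names are hypothetical), plain JUnit 4 suites are one way to make the separation explicit:

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Fast unit tests that mock out external components; run on every change.
// (Each suite lives in its own source file.)
@RunWith(Suite.class)
@Suite.SuiteClasses({ MessageExpanderTest.class })
public class UnitTestSuite {
}

// Slower tests that touch a database or container; run these less often,
// such as in the continuous integration build.
@RunWith(Suite.class)
@Suite.SuiteClasses({ MessagePersistenceTest.class })
public class PersistenceTestSuite {
}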

Try these out on your own or with your team and see how they work for you. Modify as necessary and always look for improvements. There are many thought leaders in the Agile community that have written down important principles that may work for you and your team.

And finally, now that I have filled an entire blog post with “Executable Design”, what do people think about the name? It has worked for me in the past to explain the basic nature of TDD, so I will keep using it unless others have better names that I can steal.

Managing Software Debt presentation @ Agile Vancouver

Posted by on 10 Nov 2008 | Tagged as: Agile, Architecture, DotNet, General, Java, Leadership, Product Owner, Scrum, Software Architecture, TDD, XP

On November 6th I presented an updated version of the Managing Software Debt talk at Agile Vancouver “Much Ado About Agile” conference. This is a link to the presentation deck:

Managing Software Debt – Agile Vancouver (PDF)

I was honored to present at this local conference and had a great time meeting up with old friends and getting to know some new ones. I hope that I can do this again soon. If you are interested in more information and other presentations at Agile Vancouver you can go to their home page.

Defining the “Unit” in Unit Testing

Posted by on 27 Sep 2008 | Tagged as: Agile, Architecture, DotNet, Java, TDD, XP

“Hey, Ben. We just figured out a great way to manage test-driven development and good database design,” said an enthusiastic developer using Extreme Programming (XP) practices on his project. “Our application is highly data-centric. Therefore, in the first iteration we design the database schema modifications and create tests to validate all of its implementation characteristics. In the next iteration we build the data access layer and business services on top of the database modifications. This has allowed us to design the database correctly before developing code around the wrong data model. What do you think about this approach?”

Ben is an agile coach who checks in with this team from time to time. He likes the fact that this team has continually looked for improvements in the way they develop software. In this case Ben sees a potential issue and so he asks a question, “What do you deliver to the customer at the end of your first iteration?”.

“We show the customer our database design and tests executing.”

“What does the customer think about this?” Ben probes further.

“He doesn’t seem to care about this part of our product review. He told us that we should just show him the finished feature after the next iteration.”

“Didn’t we set up a Definition of Done for the team to assess the quality of our delivery each iteration? If we don’t have customer-accepted functionality at the end of the iteration, didn’t we decide that the work is not done?”

“Yeah, but we found out that the data design and automated test case development for it takes too long to fit into an iteration along with feature development on top of it.”

“Hmmm, that sounds like we may be working around our Definition of Done, which seems like a ‘smell’ in our process. Let’s sit down and find the root cause of this new process that extends development of functionality over two iterations rather than one.”

Relational databases have proven themselves to be great persistence platforms. As their usage in software development increased, they went beyond their intended purpose into the application server realm with stored procedures, functions, and triggers, not to mention parsing and other functionality recently added in the marketplace. Applications become highly dependent on a relational database and more difficult to change over time. The more difficult the relational database becomes to change, the more we baby it and look for ways to avoid modifying it for long periods of time. This leads to designing up front and a “getting it right the first time” mentality with our data models.

When we start with design, tests, and implementation in the bowels of our system, we tend to “gold plate” the implementation, developing more code and tests than are actually needed. This approach often violates the YAGNI (“you ain’t gonna need it”) guideline for agile software development.

During coaching engagements I speak with team members about starting from the user’s point of view even in development of your unit tests. What is the next piece of functionality that will support their feature request? Many developers immediately comment that these are no longer “unit tests” as they have previously defined them. I ask what they characterize as a unit test and it usually is not easy for them to verbalize. If we think of a unit as part of an existing design we will tend to write tests for all potential ways the design could be used. If we always drive our next test based on what the next unit of functionality should do for the user then we will implement only what is necessary.

Behavior-Driven Development (BDD) describes a process for developing from the outside in. The first piece of code the developer implements in this approach is the interface. From the interface a developer drives out the rest of the functionality. This ensures that all code is directly related to functionality valuable to the customer or to other code already written. Although there are great frameworks out there to support BDD in your development environment, you can begin by thinking about your xUnit tests in this manner. Start from the interface and make sure each additional capability implemented to satisfy a test adds or supports value from the user’s point of view. Please read the Wikipedia entry on BDD for actual test case and code examples.
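As a minimal sketch of this outside-in flow (the names here are hypothetical and not tied to any particular BDD framework), the interface and a behavior-describing test come first, and the implementation is driven out from them:

// The interface is the first code written; it captures only what the
// next piece of user-visible functionality needs.
public interface AccountStatement {
    String summaryFor(String customerId);
}

// The first test describes behavior from the user's point of view; a
// skeleton implementation is created afterward just to make it compile.
public class WhenCustomerViewsStatementTest {

    @Test
    public void shouldShowBalanceInSummary() {
        AccountStatement statement = new SimpleAccountStatement();
        assertTrue(statement.summaryFor("customer-42").contains("Balance"));
    }
}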

In addition, I suggest that whenever a technology element within your architecture is difficult to change, you should minimize interactions with it and abstract all access to it. Proper unit testing that focuses on a unit of code will force the use of proper interface abstractions so that the test does not depend on components external to the unit, such as a database. Minimizing interactions will reduce your application’s dependence on the component and increase its changeability for future business functionality. Business changes, so software should be able to change with it.
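To make that concrete, here is a minimal sketch (hypothetical names, in the same Mockito style as the Jitter tests above) of putting an interface between the business logic and the database so the unit test never touches the real component:

// All persistence access goes through an interface the application owns.
public interface OrderRepository {
    Order findById(String orderId);
}

public class Order {
    private final boolean paid;

    public Order(boolean paid) {
        this.paid = paid;
    }

    public boolean isPaid() {
        return paid;
    }
}

public class OrderService {
    private final OrderRepository repository;

    public OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    public boolean isShippable(String orderId) {
        // Business logic depends only on the abstraction, not on JDBC details.
        return repository.findById(orderId).isPaid();
    }
}

// The unit test mocks the repository, so no database is involved.
@Test
public void shouldTreatPaidOrdersAsShippable() {
    OrderRepository repository = mock(OrderRepository.class);
    when(repository.findById("42")).thenReturn(new Order(true));
    assertTrue(new OrderService(repository).isShippable("42"));
}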

Beat Cross-site Scripting Issue with StoryTestIQ

Posted by on 25 Aug 2008 | Tagged as: Acceptance Testing, Agile, Architecture, DotNet, Java, TDD

A few years ago I was privileged to be on a team with some excellent developers at SolutionsIQ, where I currently work. One of them saw the need to stabilize development on an incredibly unstable codebase with no tests. He came to the team with a proposed tool that he had slapped together in his free time: a mashup of Selenium and FitNesse, with some modifications to support special needs we had on the project. His name is Paul Dupuy, and the entire team, including Mickey Phoenix, Rand Huso, Jonathon Golden, and Charles Liu, spent time enhancing the tool, now called StoryTestIQ (aka ‘STIQ’), to make it what it is today. Other teams at SolutionsIQ have been using StoryTestIQ as well, helping to enhance its capabilities from time to time.

One issue we have had is that the visual testing portion of StoryTestIQ runs inside a web browser. The security model for each browser is a little different, but for the most part they restrict cross-site scripting. For most of the projects we have used StoryTestIQ on, the customer’s browser was Microsoft Internet Explorer (IE). Given this constraint, we were able to run in a special non-secured mode of IE called HTML Applications (HTA). This mode of IE also has a problem: the browser history is not retained while in HTA mode, so some JavaScript functions will not work if used in your application.

Recently I have been working on a project that must work in multiple browsers. Also, I have a Mac, so the constraint of using IE is not reasonable without some coaxing. Instead I decided to run a quick experiment with Apache2 and mod_proxy to see if I could serve both StoryTestIQ, which serves its pages from the FitNesse server inside, and the project’s web application through the same server. It worked, and so now I will share it with you.

I will assume that you are already using StoryTestIQ and have Apache2 installed. If not, please make sure each of these runs within your environment before moving on. In my first configuration, the project I was developing was running on the Jetty web application server on port 8080. I started up StoryTestIQ on port 9999 and left my project’s web application running while I configured Apache2. I used MacPorts to install Apache2 in the /opt/local/apache2 directory. In my environment, httpd.conf, Apache2’s main configuration file, was located at /opt/local/apache2/conf/httpd.conf. In your own Apache2 installation, find the default httpd.conf file for configuring the server. Inside httpd.conf, make sure that mod_proxy is installed and loaded by searching for a line similar to this:

LoadModule proxy_module modules/mod_proxy.so
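If you are unsure whether the module is loaded, you can also list the server’s modules to confirm (assuming Apache 2.2 or later, where the -M flag exists):

/opt/local/apache2/bin/httpd -M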

If you are not currently using mod_proxy, please review the Apache mod_proxy documentation to install and configure it for your environment. If mod_proxy is successfully installed, you can add the following lines to httpd.conf below all of the LoadModule entries:

#
# Proxy configuration for STIQ and local Jetty application
#
ProxyPass         /stiq  http://localhost:9999/stiq
ProxyPassReverse  /stiq  http://localhost:9999/stiq
ProxyPass         /files  http://localhost:9999/files
ProxyPassReverse  /files  http://localhost:9999/files
ProxyPass         /STIQResults  http://localhost:9999/STIQResults
ProxyPassReverse  /STIQResults  http://localhost:9999/STIQResults
ProxyPass         /myapp  http://localhost:8080/myapp
ProxyPassReverse  /myapp  http://localhost:8080/myapp

#
# Rewrite ProjectRoot, STIQ repository main content page, to STIQ server
#
RewriteEngine On
RewriteLog "/opt/local/apache2/logs/rewrite.log"
RewriteRule ^/ProjectRoot(.*) http://localhost:9999/ProjectRoot$1 [P]

There are multiple directories which can be easily proxied using basic ProxyPass and ProxyPassReverse directives: /stiq, /files, and /STIQResults. Due to the wiki page URL construction, the main content page within the STIQ repository cannot use these basic directives. Instead you must use the RewriteEngine with a rule that maps any URL starting with /ProjectRoot to the STIQ server. The problem was that the wiki would ask for a URL such as http://localhost:9999/ProjectRoot.StoryTests, which the basic proxy directives would treat as a different URL from the basic /ProjectRoot. The RewriteRule allows anything that starts with /ProjectRoot to be proxied across.

Once you have these configurations added to the httpd.conf you can restart the Apache2 server. In my environment the command was:

/opt/local/apache2/bin/httpd -k restart

After the web server is restarted you can launch StoryTestIQ inside your browser using a URL similar to this one:

http://localhost/stiq/runner.html?startPage=/ProjectRoot&suite=StoryTests

You’ll notice that this goes through the default HTTP port 80 instead of STIQ’s port 9999. We are now proxied on port 80 to port 9999 for STIQ-related URLs. You can write automated acceptance tests that open a URL starting with http://localhost/myapp, and it will proxy to port 8080 where your web application is running. Make sure all of the port numbers are correct in your configurations if your environment differs from mine.

I have also configured a Ruby on Rails application for automated acceptance testing with STIQ. In that case I had to configure mod_proxy with the following directives, adding an application name to the Apache2 URL being proxied:

ProxyPass         /myapp  http://localhost:3000
ProxyPassReverse  /myapp  http://localhost:3000

You can see the welcome page for your application using this basic configuration, but any URL locations identified in your web pages that start with ‘/’ will not be found. In order to support URLs which start with ‘/’, you must modify the appropriate environment configuration inside your Ruby on Rails application. The environment configurations are found in the ${project_root}/config/environments directory by default. I added the following to my development.rb environment configuration file, which is what I use with STIQ:

# This allows me to run using mod_proxy on Apache2
# Using StoryTestIQ for automated acceptance testing
# It runs in a browser and therefore cross-site scripting is not allowed
# which the mod_proxy allows me to get around by passing both the StoryTestIQ
# server and the application under test through the same root URL
ActionController::AbstractRequest.relative_url_root = "/myapp"

This causes all requests for URLs starting with ‘/’ to be prefixed with ‘/myapp’, which allows them to be found through the Apache2 proxy.

If you are interested in beating the cross-site scripting restrictions of most major browsers for use with StoryTestIQ, I hope this blog entry helps you out. The same mechanism could help with other tools that have browser restriction issues. Let me know through your comments if this worked for you or if you have any suggestions on modifications that could make it easier.

Technical Debt Workshop – A Perspective

Posted by on 21 Aug 2008 | Tagged as: Acceptance Testing, Agile, Architecture, DotNet, Java, Leadership, Product Owner, Scrum, TDD, XP

Last week I was invited to participate in a LAWST-style workshop on Technical Debt. I was honored to be there with such a great group of people from diverse industries and experiences.

Preface: I am writing this blog entry for myself, and therefore it may not be as useful to those reading. Also, the perspective on the discussion points is from my own understanding. I will link to other portrayals, blogs, and articles on technical debt in the future to round out these understandings. I do have positions on this topic that I have talked about in earlier posts, conference presentations, and articles, and I will continue to build upon them in the future with influence from people in the workshop and outside it, as well.

On the night before the workshop began, a large group of the participants went out to a Grand Rapids, MI watering hole to get introduced and start lively discussions. I learned some interesting nuggets such as how some roads in Texas are paved cow runs. We also discussed more of the philosophical and metaphorical underpinnings of the term debt in relation to technology artifacts. One item I took away from this was around discretionary income. Some technology shops either do not have discretionary income or choose to use it developing new capabilities rather than investing it to minimize their current technical debt.

Nancy Van Schooenderwoert provided me with many good pieces of information. While discussing our agile coaching experiences with teams she said:

“The only question that matters at the end of an iteration is ‘Did you build more or less trust with the customer?'”

Some readers may find this difficult to accept, but I have found this question important to ask each iteration. Scrum and other agile frameworks attempt to build trust between parties that may have played the blame game with each other in past projects. As a team and customer build trust, the motivation of the team and the willingness of the customer to collaborate grow stronger. These characteristics enable faster product development with higher quality.

Now for the play-by-play of the workshop itself.

Rick Hower: 10 Warning Signs of Technical Debt

Rick facilitated a brainstorming session to gather warning signs of technical debt in your code. We brainstormed far more than 10 warning signs, which I did not write down. Rick, and potentially other participants, will be writing an article choosing the top 10 warning signs, which I will link to once it comes out.

Matt Heusser: Root Causes of Technical Debt

Here are some of the notes I took on Matt’s topic:

Typical training in computer science tends not to cover testing, maintenance of your own code beyond an assignment, or team interaction.

Technical people tend to take a victim position when developing code too fast creates technical debt: for instance, “those mean business people pushed me around and I have no power to change their minds”. Non-technical people don’t always understand what they are asking for when making decisions on feature delivery. If they knew the impact of these decisions, they might decide to pay off some of the technical debt before investing in new features.

North American businesses tend to look for short-term versus long-term results. This could impact planning and delivery since the short-term goals may be hacks while long-term results show decay of software can be costly.

Engineers have observable physical grounds for qualifying assessments (e.g. “this bridge will take traffic going 40 mph”). Software development organizations do not seem to have these types of qualifying assessment tools fully figured out yet. Matt used mechanical engineering in the 1800s versus electrical engineering in the same period to support this idea. Mechanical engineering was well established in the late 1800s, yet electrical engineering was still in its infancy: useful units of measure were known for mechanical engineering, while electrical engineers did not yet have similar units. Over time the electrical engineering discipline gained new knowledge and developed the useful units of measure we use today. Maybe we in the software industry are still trying to find our useful units of measure.

David Walker: False Starts

Misuse and bad deployment of practices hurts an organization.

David mentioned an organization that may be of interest: ASQ (American Society for Quality). They have a topic on their site called ECQ (Economic Case for Quality) that may be a helpful talking point.

Chris McMahon asked the question “Why would you not do excellent work?” in a discussion on technical people asking for permission to work on technical debt in their software development efforts. I thought this was a great question so I wrote it down.

Steve Poling: Technical Debt is Fascism

Steve got our attention with the name of this topic. Steve wanted us to come up with formulas we could use to calculate technical debt. I thought he had some good ideas and I hope he is able to develop formulas that actually work for particular instances of technical debt.

Steve brought up Godwin’s Law: “As a Usenet discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.” I thought this was interesting since the term technical debt could be diluted in its usage if it covers too much of a spectrum and cannot be pinned down.

I thought Steve brought up a fairly good metaphor for technical debt revolving around his trailer hitch. He had noticed for some time that his trailer hitch needed painting to protect it from the elements. The problem was that other pressing items of business came up and he did not paint the hitch. When he eventually went to paint the trailer hitch, it had rust on it. This meant that the total effort to protect the trailer hitch from the elements had grown substantially: he now had to clean the hitch and make sure the rust was gone before he could paint it.

Ron Jeffries asked to draw a picture on the blackboard to discuss technical debt, and he came up with what I thought was an amazing representation of the issue. Here is my attempt at recreating his phenomenal chalk talk in a tool:

We can make incremental progress on a large feature area within a system, adding value with each addition. In order to make incremental progress, we should keep the code clean and easy to change so that adding more features on top of existing functionality does not slow down. In fact, we should be able to deliver features faster on top of each capability already built into the system. This does not always happen in projects (or even usually, for that matter), and instead we end up with a “big ball of mud” to work with. As a team working on such a system begins an effort to add a new feature, they get caught in a quagmire of old, crusty code that is difficult to work with. What is even worse is when there are external dependencies on this big ball of mud that make changes even more risky than they would be on their own.

David Christiansen: Where did all this $@#% come from?

David was entertaining and provided some instances of what he constituted as technical debt. Here are the sources he listed in the presentation:

  • Taking shortcuts (experienced)
  • Hacks (beginner)
  • Geniuses (overcomplicated)
  • Living in the now
  • Futurists
  • Time
  • The Iron Triangle
  • Anticipating reuse
  • Documentation
  • Innovation
  • Trying to avoid technical debt

Some people in the room thought this list went well beyond what technical debt should be; it seems to cover almost anything that causes software to become difficult to work with. It was a great list of issues, and David went on to say he wasn’t as interested in defining technical debt so much as helping to create tools and practices that would minimize bad software.

David also discussed how documentation is a gamble:

“Like a slot machine the house always wins. Sometimes you get a winner but most of the time it is a loss.”

I will probably use this quote in the future with proper acknowledgements so thank you David.

Brian Marick said the following during the questions and answers portion of the presentation:

“Debt is a property between the people and the code.”

I thought this was interesting and already have multiple thoughts on how this can be applied. I am not sure what is the best way to view this statement yet I found it important enough to write down so I will think about it some more. Also, I hope to ask Brian more about this point in the future.

Michael Feathers also brought to the group a point about the tradeoffs made in terms of “navigability versus changeability in the code”. Some technical folks like code navigation to be explicit and therefore have difficulty reading strongly object-oriented code that separates implementation from interfaces to support change.

Nancy Van Schooenderwoert: A Garden of Metaphors

Nancy proposed that we could start to explain technical debt more succinctly given a combination of a metaphor and metrics. The metaphor can help people who are non-technical understand the impact of their decisions on technical artifacts. For instance, the use of the word debt helps people visualize the problems with just making it work versus craftsmanship (a word brought up by Matt Heusser that I thought was useful). She mentioned that technical debt is about expectation setting.

NOTE: Nancy wrote to me and added the following comment – “Metrics and metaphor have opposite weaknesses so they support each other well. People can be suspicious of metrics, because there is an infinite choice of things to measure and how to measure them. Metaphor, on the other hand, rings true because of familiar experiences the listener has had. The only problem is that it depends on tech debt *truly* being like the thing in the metaphor. We have to check back with reality to know if that’s the case. That implies we measure something to see whether and how the behavior of tech debt is different from the behavior of financial debt (or whatever else we used as metaphor).

I think some of the most useful metrics to start with are:

  • value stream mapping
  • bug metrics
  • story points delivered, and remaining to be done”

Nancy also brought up a great alternative metaphor to debt based on Gerald Weinberg’s “Addiction” trigger responses. Sometimes decisions are made for short-term results without understanding their long-term effects, as with alcohol, smoking, and other drug addictions. To enable better responses to the addiction, we must set up a more appropriate environment within which proper responses can be made. Here is my portrayal of the “Addiction” metaphor drawing Nancy put up:

The “Environmentalist” as a metaphor was also brought up by Nancy. In life nobody has to pay for the air we are dirtying up. Economic systems poorly reflect environmental quality and this has helped lead to issues we are now faced with in global warming.

David Christiansen: You’ve got a lot of Technical Debt, Now What?

I don’t have any notes about David’s talk but Chris McMahon mentioned something I thought was wonderful:

JFDI – Just #%$ Do It

We got to this point when people started asking why a technical person wouldn’t just fix the issues when they see them. Chris discussed an example of how his team decided to use Bugzilla: one of the developers got tired of the old bug tracking system, and he JFDI by installing Bugzilla. It was pointed out that there are also examples of JFDI backfiring, and I can think of a situation that impacted a team for multiple days because of JFDI.

The visibility into each task that a person takes on in some organizations makes this approach difficult to follow. How can we help people decide to make the right choices while developing software in these environments?

Matt Heusser: Debt Securities

Matt brought up the term “moral hazard” to describe how technical people act based on their insulation from the long-term effects. For instance, a person may take a shortcut in developing a feature since they are not going to be working on the code 1 year from now when it must be modified to support a new feature. Matt pointed out two practices that may help minimize this effect:

  • Customer close to the Team
  • Agreement on how to work together

Chet Hendrickson pointed out that a good way to minimize the problem with moral hazard is by:

“Lowering the threshold of pain”

For instance, Chet brought up doing our taxes. Yes, he could incrementally update his taxes for 2 hours each month. Instead he waits until 2 weeks before the deadline to get his tax forms completed, because the potential headache of tax evasion is strong enough that it crosses a threshold.

Brian Marick: Market World

Brian defined the term market world as:

“Get as much for as little”

He then described the term social world as:

“Transactions are not accounted for”

Brian discussed that use of the term “debt” as a talking point may push us into a “market world”. This could be problematic since it leads to the creation of technical debt by only doing enough now to get it out the door. Maybe we could do more to introduce social aspects into the talking points for removing what we now call technical debt.

Brian is a tremendous thinker, IMHO. He brings creative and profound discussion points to the table. Here is one such point he made:

“Agile teams care deeply about business value…the problem with agile teams is they care more about business value than the business does.”

Being an agile coach has led me to believe this is true many times over. I wonder if this is something we can work on as an industry, and whether we should move more toward social world ideas to identify the right vision for our software delivery focus.

Rob V.S.: Developer in a Non-agile Environment

Rob made several observations about development in a non-agile environment. Here are some of them:

  • Many developers, for some reason, want to appease customers with their time estimates
  • Rob saw this appeasement was causing problems for him and others in the code
  • Rob decided he would start buffering estimates to incorporate refactoring into his processes
  • A problem occurred when refactoring was causing implementation times to shorten and making his estimate buffers unnecessary
  • Rob was finding that he was able to implement features in about 1/2 the time of others now that he was refactoring code
  • To his amazement the refactoring worked every time even though it felt like a risk each time he thought about it in his estimates

I thought Rob’s story was a great addition to the group. Not everybody, maybe not even a majority, of the people who participated were involved in projects that were using an agile framework.

Michael Feathers: Recovery

Michael explained that he saw the industry talking about people issues more often today than before. This seemed like a good thing, yet he wondered when we would get back to discussing technology issues. Chris McMahon said the following, which I thought was a good principle to follow:

“Make the easy things easy and the hard things possible”

I am not sure where I heard something similar before, but Chris brought it up so quickly that I attribute it to him until further notice. 😉 (NOTE: Chris said this “as far as I know originated as a design principle in the early days of Perl development. I was quoting Larry Wall.”)

David Andersen: Business Models

David pointed out something that I discuss often in training courses on Scrum:

“IT as a cost center versus a profit center”

I was quite interested in this topic and see this as potentially the most important software development environment problem to be solved. Yet the problem may be so large that finding a solution may be near impossible. Therefore I have found discussing the issue one organization at a time sometimes helps.

David expressed that companies who work on a time and materials basis tend to be cost centers: the idea is that we employ a warm body to take on the work. Companies whose approach is a service for a fee tend to think like a profit center: the right people will emerge and deliver the services based on setting up the proper relationship.

Brian Marick brought up a reference to the Winchester Mystery House that is filled with all kinds of oddities. I can’t remember why he brought this up in terms of business models but it could be something to think about when discussing technical debt and its potential ramifications.

Matt Heusser: Clean Code and the Weaker Brother

Matt presented the idea of the weaker brother, and it caused me to take another look at team composition from a different perspective. At the least, it gave a strong analogy to draw from in conversation. One thing I thought was interesting about Socialtext, where a few of the folks including Matt work, is that their team members are truly distributed. They have communication tools that help them minimize the potential issues of a fully distributed team. One of the tools and processes they use is that every commit message to source control goes to the development email list. From time to time, other people on the team challenge an implementation in response, and a healthy, respectful banter goes on that improves the quality of the overall software. I will take this away as a talking point on distributed teams and may even use it on one of our projects in the near future to see how it works for us.

Chris McMahon discussed a policy they had at a previous employer that said everyone must work on a team at least 1 week per month, even the iteration leader (similar to a ScrumMaster role). I will have to think about this policy and its ramifications but I truly believe that I must work on a team every once in a while to keep my practices up to snuff.

Michael Feathers: Topic of Unknown Origin but Greatly Appreciated (my own words)

Michael had a question that I thought was interesting:

“Would the world be better if all your code was gone in 3 months?”

The answer to this question in your particular project context may help the team decide what the effect of technical debt is today. I had a couple of comments on the subsequent discussion points but never got them out because there were so many passionate people with great points, ideas, and questions. Here are my points on taking an incremental improvement approach to these codebases, from my own experience with some horrible situations:

Abuse Stories – Mike Cohn brought these up during a presentation, although they were not in his slide materials. He has since added them, and I believe them to be greatly important and easy-to-implement types of stories for describing the cost of not addressing what are usually technical debt or, more likely, architectural features. You can follow the link to an old blog entry I posted on this subject.

“Finding a common enemy” – I find teams are not usually motivated until they have a common purpose. One way to find a common purpose quickly is to find a common enemy. This may be a competitor’s product or another team (I hope not but hey?). This can bring a team together and focus their efforts to defeat the competitor. I have heard of companies who developed this focus and cornered their marketplace in a fairly short timeframe. This could also help to address technical debt since those issues will be fixed in order to continue on the path to defeating the common enemy.

Michael did a great job of describing how companies may not consider the value of their existing software enough. Code has value and organizations who understand this can make better decisions about how they treat their software assets. The idea is to prevent devaluation of a software asset when appropriate.

Ron Jeffries & Chet Hendrickson

OK, now this was fun. Ron and Chet put on an artistic show that put technical debt into a perspective easily understood, IMHO. I will attempt to recreate their drawings and the reasoning behind each one, but I hope they write an article on this soon since they are incredibly apt at delivering a message succinctly.

As a team is building a system their velocity will either stay fairly consistent, decelerate, or accelerate.

Each feature implementation is developed using a combination of available system capabilities and newly developed capabilities.

A team can choose, within their current environmental context, to build a rat’s nest that is hard to work with later or a well-designed system that more easily changes with new business needs.

The punch line, from what I could gather, was that the use of a negative term like “debt” may be problematic from an emotional point of view. Rather than using a negative to discuss the topic, it may be better to discuss how we can build upon what we have. Thus we could call technical debt something like “liquid assets”: we can use our existing code to develop new capabilities for our business quickly and at lower cost than doing so from scratch. I am not sure if this term will stick, but I like the point of view of building upon what we have already developed.

Chet and Ron also brought up the 4 basic principles of simplicity in code by Kent Beck:

  1. Tests all run
  2. No duplication
  3. Expresses all design ideas
  4. Minimize code

* These are in order of importance, since the last 3 are potentially in conflict with each other.

Wrap Up

There is so much more that I didn’t take down, couldn’t remember, or didn’t understand the importance of. The above content may seem somewhat haphazard since I did not create a coherent overview but rather recorded what I heard from my perspective. I hope it is still something that others can get ideas from and use effectively. Let’s start reducing technical debt for the good of our organizations, our customers, and the morale of our teams.

The “Wright Model” for Describing Incremental Architecture

Posted by on 02 Jun 2008 | Tagged as: Agile, Architecture, DotNet, Java, Product Owner, Scrum, TDD, XP

One of the most common questions in teaching and coaching agile processes to groups is:

“How do we design our software while delivering iterations of potentially shippable product increments?”

Scrum, an agile process that I teach about and coach organizations on implementing, asks that each Sprint, analogous to an iteration, deliver a potentially shippable product increment. There is emphasis on potentially shippable since it is quite common to have releases that run multiple Sprints until there is enough value for your users. I usually describe a potentially shippable product increment as software of the quality a releasable version would include. This means that each Product Backlog item implemented during the Sprint is tested, coded, integrated, documented, and verified. Scrum teams gain a better understanding of what deliverables are necessary to make this happen by creating a Definition of Done.

When I ask what design decisions are difficult to make within the Scrum process, the answers usually revolve around high-level architecture decisions or data modeling. Other specific design areas get brought up, but let’s focus on these for now. In order to help a Scrum team understand, from a conceptual point of view, how incremental design works across a release of their software product, I use a diagram that I believe Mike Cohn created. Here is my interpretation of the diagram:

Incremental Design on New Product

This diagram attempts to describe how, in new software product development efforts, more emphasis in early Sprints is put into implementing architecture elements. As the architecture becomes more fully defined and implemented, later Sprints put increasing emphasis into feature development. In the final Sprint of a release there may be little to no architecture implementation left to do. The diagram demonstrates the expectations of early release deliverables in terms of the technical architecture implementation needed to support feature delivery. It also shows that each Sprint in the release should deliver features that a user can review.

A software product team that delivers more than one release may have less architecture emphasis in the early Sprints of subsequent releases. This is shown in a modified version of the diagram below:

Incremental Design in Following Releases of Product

After I described the above diagrams to a developer named David Wright, he approached me to validate his understanding. Within 10 minutes of my description of incremental architecture he had developed a new diagram perspective which I thought was brilliant. His diagram involves two axes, with the x-axis representing the surface visible to users and the y-axis representing the depth of the architecture. In Sprint 1 of a multiple-Sprint release, a portion of both the surface and the architecture depth is realized. The figure below is a visual representation of the portions implemented of a fully releasable product version. The dark blue areas of the grid show the surface and depth implemented in Sprint 1, while the empty grid elements represent what has not been implemented yet.

Sprint 1 Depth and Surface of Releasable Product

As a release progresses, the surface-visible features and architectural depth are incrementally built upon toward a fully releasable product version. The following diagram shows incremental architecture progress later in the release cycle.

Depth and Surface of Incremental Architecture Later in Release

The adoption of an incremental architecture approach comes with a couple of potential issues:

  • Less up front design of the overall release architecture is known at the beginning and while the depth of the architecture is still being implemented
  • New knowledge may impact existing implementation of architecture elements

In order to manage these types of issues, we must implement disciplined practices that allow our architecture to accept change as we gain more knowledge. This is why Extreme Programming practices such as Test-Driven Development (TDD), Continuous Integration (CI), Pair Programming (or continuous code review), and Refactoring have become so common on Scrum teams. TDD gives our design an executable way to prove we have not broken existing functionality at the component and functional level. This does not negate the need for exploratory testing by a person, but it will keep manual testing to a manageable level. CI automatically runs builds, automated tests, and deployment of our application, then provides feedback to the team about the current state of the integrated system. Pair Programming increases knowledge transfer across the team and provides coherent communication of the product’s domain into the tests, code, and support artifacts. And finally, refactoring is defined by the guy who wrote the book on it, Martin Fowler, as:

“a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior”

Refactoring is best supported by high automated test coverage, which lets a team know that external behavior has not changed in response to changes deep in the architecture. Here is a visual representation of architectural elements, in red, which could be refactored in response to a new feature:

Depth and Surface of Release with Refactored Elements

Each refactoring may be a small change, but together they lead to a larger positive impact over the span of a release. Effective use of refactoring keeps our architecture flexible and therefore able to meet evolving business needs.

I have used David Wright’s model to describe incremental architecture to other clients in conjunction with the original diagram from Mike Cohn. It has helped provide a clearer picture of incremental design and how to incorporate it into their real world projects. With David’s permission I named it the “Wright Model” and will continue to use it in the future. Thank you, David.

Big design up front (BDUF) is based on the thought that we can lock down business requirements and design our software right the first time. The problem is that business needs change almost as soon as the requirement has been developed. Not only do they change but the functionality requested is better understood once a user touches the feature and provides their feedback. Incremental design of a system throughout a release allows us to incorporate this feedback and change our architecture to satisfy the user’s needs.
