DotNet

Archived Posts from this Category

Executable Specifications – Presentation from AgilePalooza

Posted by on 06 Aug 2009 | Tagged as: Acceptance Testing, Agile, Architecture, DotNet, General, Java, Open Source, Scrum, Software Architecture, TDD, User Stories, XP

Earlier this year I did a presentation on Executable Specifications for the AgilePalooza conference. There is information about working with legacy code, commercial off-the-shelf (COTS) systems, and Acceptance Test-Driven Development (ATDD) using automated acceptance testing tools. Also, the presentation lists types of automated acceptance testing tools out there along with actual names of tools and what they are best used for on projects. Hope it is interesting to you.

My Talk @ SD West 2009 on “Managing Software Debt”

Posted by on 27 Mar 2009 | Tagged as: Acceptance Testing, Agile, Architecture, Distributed Computing, DotNet, General, IASA, Java, Jini/JavaSpaces, Leadership, Maven, Open Source, Podcasts, Product Owner, Ruby, Scrum, Software Architecture, TDD, User Stories, XP

I have uploaded the talk I did at SD West 2009 on Yahoo! Video and here it is:

It’s BeyondAgile: people making software that works

Posted by on 25 Mar 2009 | Tagged as: Acceptance Testing, Agile, Architecture, DotNet, General, Leadership, Product Owner, Scrum, TDD, User Stories, XP

The hopefully-not-anticlimactic second event of the still-pretty-new BeyondAgile is happening on Thursday:

“Agile Challenges Clinic/Swap Meet”
Thursday, 26 March 2009
6:30 to 8:30 p.m.
Locations on the Eastside and Seattle!
Read on for agenda and location information, or visit our Google Group at
http://groups.google.com/group/beyondagile

The Blurb
At the first event, everyone (over forty people) created a backlog of ideas, suggestions, and work items. Among them were two suggestions that gave rise to this event’s focus: a “bring-a-problem” session where everyone helps everyone with their challenges. We’ve extended the idea of a clinic, where you bring a problem to an expert and get help, to a swap meet (or potluck), where everyone brings something and gets something. So come if you think you’re brimming over with answers and expertise, and come if you’re wanting other perspectives or advice on how to tangle with the problems that bedevil your days.

We’ll also form a program team, a group that’ll work proactively to plan interesting and exciting events in sufficient time for the blurb-writer–that’s me–to write and distribute the blurb a few weeks before the event. If you’re wondering who’d be good at doing that, go look in a mirror—it’s you we need!

Second Event Agenda

  • Welcome and Announcements (10 min.)
  • Formation of Program Team (10 min.)
  • “Agile Clinic/Swap Meet” (85 min.)
  • Giveaway Drawing (5 min.)
  • Meeting Retrospective (10 min.)
  • Socializing and Breakout/Breakup/Beer

Event Locations

Eastside: SolutionsIQ
First Floor Training Center
10785 Willows Road NE, Suite 250
Redmond, WA
NOTE: directions and a picture of the building are at http://www.solutionsiq.com/about-us/contact-us.php. Look for the tent signs and the blue BeyondAgile logo

Seattle: Amdocs Digital Commerce Division (formerly QPass)
Greece Room
2211 Elliott Ave Suite 400
Seattle, WA

Questions?
Contact us at beyondagile@taoproductions.com, or visit our Google Group, or contribute to the nascent Wiki at beyondagile.org.

Why come?
Here’s your chance to be in on the beginning of an exciting new collaboration of the Puget-Sound area (and beyond!) agile-interested. And if that’s not enough, we’ll hold a drawing to win exciting and valuable prizes!

What’s BeyondAgile?
It’s the combination of, and follow-on to, a number of the agile-oriented user and interest groups that have operated in the Puget Sound area. Representatives of the Seattle XP User Group and Seattle Scrum came together in December to try to combine the efforts of all of us who care about and use agile methods.

Why does BeyondAgile exist?
  • Find best practices among all agile processes
  • Build a bigger agile community
  • Take advantage of overlaps between existing groups
  • Expand beyond meetings and provide a place to collaborate
  • Align software development and business
  • Explore cutting-edge ideas and techniques

What is BeyondAgile like?
That, in large part, is up to you. A primary reason for consolidating our efforts is to broaden our base of support, capability, and leadership. We envision a more active, multi-faceted organization that does more than just host talking heads. We’ll gather at least monthly, on the fourth Thursday, and probably more often once you figure out what other events you’d like.

Where do BeyondAgile events happen?
Our goal is to have many answers to this. As a result, we’ve worked to remove the impediment offered by bridges and commuting: for our monthly meetings, we hold our events in both Eastside and Seattle locations, through the semi-magic of videocasting. We’re experimenting: we’ve thought of trying to alternate the “real” physical meeting between sides of the lake. At this point, we’re only at the stage of using a semi-okayish video link between the two locations, and trying *very* hard to make the meeting balanced between locations. That’s harder than it seems; come and help us work it out! We’re still dreaming of enabling people anywhere to attend through streaming video, even after the meeting’s already happened, but we need more knowledge, resources, and volunteers before that’s going to happen.

I have a question for you.
Great! Visit the Google Group (BeyondAgile) or send a message to beyondagile@taoproductions.com

Managing Software Debt presentation @ Agile Vancouver

Posted by on 10 Nov 2008 | Tagged as: Agile, Architecture, DotNet, General, Java, Leadership, Product Owner, Scrum, Software Architecture, TDD, XP

On November 6th I presented an updated version of the Managing Software Debt talk at the Agile Vancouver “Much Ado About Agile” conference. This is a link to the presentation deck:

Managing Software Debt – Agile Vancouver (PDF)

I was honored to present at this local conference and had a great time meeting up with old friends and getting to know some new ones. I hope that I can do this again soon. If you are interested in more information and other presentations at Agile Vancouver you can go to their home page.

Defining the “Unit” in Unit Testing

Posted by on 27 Sep 2008 | Tagged as: Agile, Architecture, DotNet, Java, TDD, XP

“Hey, Ben. We just figured out a great way to manage test-driven development and good database design,” said an enthusiastic developer using Extreme Programming (XP) practices on their project. “Our application is highly data-centric. Therefore, in the first iteration we design the database schema modifications and create tests to validate all of its implementation characteristics. In the next iteration we build the data access layer and business services on top of the database modifications. This has allowed us to design the database correctly before developing code around the wrong data model. What do you think about this approach?”

Ben is an agile coach who checks in with this team from time to time. He likes that this team has continually looked for improvements in the way they develop software. In this case Ben sees a potential issue, so he asks a question: “What do you deliver to the customer at the end of your first iteration?”

“We show the customer our database design and tests executing.”

“What does the customer think about this?” Ben probes further.

“He doesn’t seem to care about this part of our product review. He told us that we should just show him the finished feature after the next iteration.”

“Didn’t we set up a Definition of Done for the team to assess the quality of our delivery each iteration? If we don’t have customer-accepted functionality at the end of the iteration, didn’t we decide that the work is not done?”

“Yeah, but we found out that the data design and automated test case development for it takes too long to fit into an iteration along with feature development on top of it.”

“Hmmm, that sounds like we may be working around our Definition of Done, which seems like a ‘smell’ in our process. Let’s sit down and find the root cause of this new process that extends development of functionality over two iterations rather than one.”

Relational databases have proven themselves to be great persistence platforms. As their use in software development has increased, they have gone beyond their intended purpose into the application server realm with stored procedures, functions, and triggers, not to mention parsing and other functionality added recently in the marketplace. Applications become highly dependent on a relational database and more difficult to change over time. The more difficult the relational database becomes to change, the more we baby it and look for ways to avoid modifying it for long periods of time. This leads to designing up front and a “getting it right the first time” mentality with our data models.

When we start in the bowels of our system with design, tests, and implementation, we tend to “gold plate” our implementation, developing more code and tests than are actually needed. Many times this approach violates the YAGNI (“you ain’t gonna need it”) guideline for agile software development.

During coaching engagements I speak with team members about starting from the user’s point of view, even in the development of their unit tests. What is the next piece of functionality that will support the user’s feature request? Many developers immediately comment that these are no longer “unit tests” as they have previously defined them. I ask what they characterize as a unit test, and it is usually not easy for them to verbalize. If we think of a unit as part of an existing design, we will tend to write tests for all potential ways the design could be used. If we always drive our next test from what the next unit of functionality should do for the user, then we will implement only what is necessary.

Behavior-Driven Development (BDD) describes a process for developing from the outside in. The first piece of code the developer implements in this approach is the interface. From the interface a developer drives out the rest of the functionality. This ensures that all code is directly related to functionality valuable to the customer or to other code already written. Although there are great frameworks out there to support BDD in your development environment, you can begin by thinking about your xUnit tests in this manner. Start from the interface and make sure each additional capability implemented to satisfy a test adds or supports value from the user’s point of view. Please read the Wikipedia entry on BDD for actual test case and code examples.
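
To make this concrete, here is a rough sketch in JUnit of driving out an implementation from the outside in. The Order class and its methods are hypothetical names invented for this example, not part of any framework:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Written first: this test describes the next unit of user-visible
// behavior ("an order totals its line items") before any code exists.
public class OrderTotalTest {
    @Test
    public void totalIsSumOfLineItemPrices() {
        Order order = new Order();
        order.addLineItem("SKU-1", 10.00);
        order.addLineItem("SKU-2", 5.50);
        assertEquals(15.50, order.total(), 0.001);
    }
}

// Driven out by the test above: only the methods the behavior
// requires exist so far, which keeps YAGNI intact.
class Order {
    private double total = 0.0;

    public void addLineItem(String sku, double price) {
        total += price;
    }

    public double total() {
        return total;
    }
}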

In addition, I will make a suggestion that whenever a technology element within your architecture is difficult to change, you should minimize interactions with it and abstract all access. Proper unit testing that focuses on a unit of code will force the use of proper interface abstractions so that the test does not depend on components external to the unit, such as a database. Minimizing interactions will reduce your application’s dependence on the hard-to-change element and increase its changeability for future business functionality. Business changes, so software should be able to change with it.
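
As a minimal sketch of that kind of abstraction (OrderRepository and both implementations are hypothetical names for illustration), all database access hides behind a narrow interface, and an in-memory stub keeps the unit test independent of the database:

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

// The unit's only view of persistence; the production implementation
// (entity beans, JDBC, etc.) lives behind this same interface.
interface OrderRepository {
    void save(String orderId);
    List<String> findAll();
}

// In-memory stand-in so the unit test never touches a database.
class InMemoryOrderRepository implements OrderRepository {
    private final List<String> orderIds = new ArrayList<String>();

    public void save(String orderId) {
        orderIds.add(orderId);
    }

    public List<String> findAll() {
        return orderIds;
    }
}

public class OrderRepositoryTest {
    @Test
    public void savedOrdersCanBeListed() {
        OrderRepository repository = new InMemoryOrderRepository();
        repository.save("ORDER-42");
        assertEquals(1, repository.findAll().size());
    }
}

Swapping in the real database-backed implementation changes nothing in the code that depends on the interface, which is exactly the changeability we are after.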

Beat Cross-site Scripting Issue with StoryTestIQ

Posted by on 25 Aug 2008 | Tagged as: Acceptance Testing, Agile, Architecture, DotNet, Java, TDD

A few years ago I was privileged to be on a team with some excellent developers where I currently work, SolutionsIQ. One of them saw the need to stabilize development on an incredibly unstable codebase with no tests. He came to the team with a proposed tool that he slapped together in his free time. It was a mashup of Selenium and FitNesse along with some modifications to support some special needs we had on the project. His name was Paul Dupuy, and the entire team, including Mickey Phoenix, Rand Huso, Jonathon Golden, and Charles Liu, spent some time enhancing the tool, now called StoryTestIQ (aka ‘STIQ’), to make it what it is today. Other teams at SolutionsIQ have been using StoryTestIQ as well, and helping to enhance its capabilities from time to time.

One issue that we have had is that the visual testing portion of StoryTestIQ runs inside a web browser. The security model for each browser is a little different, but for the most part they restrict cross-site scripting. For most of the projects we have used StoryTestIQ on, the customer’s browser was always Microsoft Internet Explorer (IE). Given this constraint, we were able to run in a special non-secured mode of IE called HTML Applications (HTA). This mode of IE also has a problem: browser history is not retained while in HTA mode, therefore some JavaScript functions will not work if used in your application.

Recently I have been working on a project that must work in multiple browsers. Also, I have a Mac, so the constraint of using IE is not necessarily reasonable without some coaxing. Instead I decided to take on a quick Apache2 with mod_proxy experiment to see if I could serve both StoryTestIQ, which serves its pages from the FitNesse server inside, and the project’s web application server. It worked, and so now I will share it with you.

I will assume that you are already using StoryTestIQ and have Apache2 installed. If not, please make sure each of these runs within your environment before moving on. In my first configuration the project I was developing was running on the Jetty web application server on port 8080. I started up StoryTestIQ on port 9999 and left my project’s web application up and running while I configured Apache2. I used MacPorts to install Apache2 in the /opt/local/apache2 directory. In my environment httpd.conf, the Apache2 main configuration file, was located at /opt/local/apache2/conf/httpd.conf. In your own Apache2 installation, find the default httpd.conf file for configuring the server. Inside the httpd.conf configuration file, make sure that mod_proxy is installed and loaded by searching for a line similar to this:

LoadModule proxy_module modules/mod_proxy.so

If you are not currently using mod_proxy then please review the Apache web site to install and configure mod_proxy for your environment. If mod_proxy is successfully installed, you can add the following lines to the httpd.conf file below all of the LoadModule entries:

#
# Proxy configuration for STIQ and local Jetty application
#
ProxyPass         /stiq  http://localhost:9999/stiq
ProxyPassReverse  /stiq  http://localhost:9999/stiq
ProxyPass         /files  http://localhost:9999/files
ProxyPassReverse  /files  http://localhost:9999/files
ProxyPass         /STIQResults  http://localhost:9999/STIQResults
ProxyPassReverse  /STIQResults  http://localhost:9999/STIQResults
ProxyPass         /myapp  http://localhost:8080/myapp
ProxyPassReverse  /myapp  http://localhost:8080/myapp

#
# Rewrite ProjectRoot, STIQ repository main content page, to STIQ server
#
RewriteEngine On
RewriteLog "/opt/local/apache2/logs/rewrite.log"
RewriteRule ^/ProjectRoot(.*) http://localhost:9999/ProjectRoot$1 [P]

There are multiple directories which can be easily proxied using basic ProxyPass and ProxyPassReverse directives: /stiq, /files, and /STIQResults. Due to the wiki page URL construction, the main content page within the STIQ repository cannot use these basic directives. Instead you must use the RewriteEngine with a rule to map any URL starting with /ProjectRoot to the STIQ server. The problem was that the wiki page would ask for a URL such as http://localhost:9999/ProjectRoot.StoryTests, and the basic proxy directives would see this as a different URL than the basic /ProjectRoot. The use of the RewriteRule allows anything that starts with /ProjectRoot to get proxied across.

Once you have these configurations added to the httpd.conf you can restart the Apache2 server. In my environment the command was:

/opt/local/apache2/bin/httpd -k restart

After the web server is restarted you can launch StoryTestIQ inside your browser using a URL similar to this one:

http://localhost/stiq/runner.html?startPage=/ProjectRoot&suite=StoryTests

You’ll notice that this goes through the default HTTP port 80 instead of STIQ’s port 9999. We are now proxied from port 80 to port 9999 for STIQ-related URLs. You can write automated acceptance tests that open a URL starting with http://localhost/myapp, and it will proxy to port 8080 where your web application is running. Make sure to have all of the port numbers correct in your configurations if your environment differs from mine.

I have also configured a Ruby on Rails application for automated acceptance testing with STIQ. In that case I had to configure mod_proxy with the following directives, adding an application name to the Apache2 URL to proxy:

ProxyPass         /myapp  http://localhost:3000
ProxyPassReverse  /myapp  http://localhost:3000

You can see the welcome page for your application using this basic configuration, but any URL locations identified in your web pages that start with ‘/’ will not be found. In order to support URLs which start with ‘/’, you must modify the appropriate environment configuration inside your Ruby on Rails application. The environment configurations are found in the ${project_root}/config/environments directory by default. I added the following to my development.rb environment configuration file, which is what I use with STIQ:

# This allows me to run using mod_proxy on Apache2
# Using StoryTestIQ for automated acceptance testing
# It runs in a browser and therefore cross-site scripting is not allowed
# which the mod_proxy allows me to get around by passing both the StoryTestIQ
# server and the application under test through the same root URL
ActionController::AbstractRequest.relative_url_root = "/myapp"

This will cause all requests for URLs starting with ‘/’ to be prefixed with ‘/myapp’, which allows them to be found through the Apache2 proxy.

If you are interested in beating the cross-site scripting restrictions of most major browsers for use with StoryTestIQ, I hope this blog entry helps you out. This same mechanism could help with other tools that have browser restriction issues. Let me know through your comments if this worked for you or if you have any suggestions on modifications that could make this easier.

Technical Debt Workshop – A Perspective

Posted by on 21 Aug 2008 | Tagged as: Acceptance Testing, Agile, Architecture, DotNet, Java, Leadership, Product Owner, Scrum, TDD, XP

Last week I was invited to participate in a LAWST-style workshop on Technical Debt. I was honored to be there with such a great group of people from diverse industries and experiences.

Preface: I am writing this blog entry for myself, and therefore it may not be as useful to those reading. Also, the perspectives on discussion points are from my own understanding. I will link to other portrayals, blogs, and articles on technical debt in the future to round out these understandings. I do have positions on this topic that I have talked about in earlier posts, conference presentations, and articles, and I will continue to build upon them in the future with influence from people in the workshop and outside it as well.

On the night before the workshop began, a large group of the participants went out to a Grand Rapids, MI watering hole to get introduced and start lively discussions. I learned some interesting nuggets such as how some roads in Texas are paved cow runs. We also discussed more of the philosophical and metaphorical underpinnings of the term debt in relation to technology artifacts. One item I took away from this was around discretionary income. Some technology shops either do not have discretionary income or choose to use it developing new capabilities rather than investing it to minimize their current technical debt.

Nancy Van Schooenderwoert provided me with many good pieces of information. While discussing our agile coaching experiences with teams she said:

“The only question that matters at the end of an iteration is ‘Did you build more or less trust with the customer?'”

Some people who read this may find it difficult to believe, but I have found this question is important to ask each iteration. Scrum and other agile frameworks attempt to build trust between parties that may have played the blame game with each other in past projects. As a team and customer build trust, the motivation of the team and the willingness of the customer to collaborate grow stronger. These characteristics enable faster product development with higher quality.

Now for the play-by-play of the workshop itself.

Rick Hower: 10 Warning Signs of Technical Debt

Rick facilitated a brainstorming session to gather warning signs of technical debt in your code. We brainstormed far more than 10 warning signs, most of which I did not write down. Rick and potentially other participants will be writing an article choosing the top 10 warning signs, which I will link to once it comes out.

Matt Heusser: Root Causes of Technical Debt

Here are some of the notes I took on Matt’s topic:

Typical training in computer science tends to not entail testing, maintenance of your own code beyond an assignment, or team interaction.

Technical people tend to take a victim position when developing code too fast creates technical debt. For instance: “those mean business people pushed me around and I have no power to change their minds.” Non-technical people don’t always understand what they are asking for when making decisions on feature delivery. If they knew the impact of these decisions, they might decide to pay off some of the technical debt before investing in new features.

North American businesses tend to look for short-term rather than long-term results. This can impact planning and delivery, since the short-term goals may be hacks, while the long-term results show that decay of software can be costly.

Engineers have observable physical reasons for qualifying assessments (i.e., “this bridge will take traffic going 40 mph”). Software development organizations do not seem to have these types of qualifying assessment tools fully figured out yet. Matt used mechanical engineering in the 1800s versus electrical engineering during the same time period to support this idea. Mechanical engineering was well established in the late 1800s, yet electrical engineering was still in its infancy. Useful units of measure were known for mechanical engineering, yet the electrical engineering folks did not have similar units. Over time the electrical engineering discipline gained new knowledge and developed the useful units of measure we use today. Maybe we in the software industry are still trying to find our useful units of measure.

David Walker: False Starts

Misuse and bad deployment of practices hurt an organization.

David mentioned an organization that may be of interest: ASQ (American Society for Quality). They have a topic on their site called ECQ (Economic Case for Quality) that may be a helpful talking point.

Chris McMahon asked the question “Why would you not do excellent work?” in a discussion on technical people asking for permission to work on technical debt in their software development efforts. I thought this was a great question so I wrote it down.

Steve Poling: Technical Debt is Fascism

Steve got our attention with the name of this topic. Steve wanted us to come up with formulas we could use to calculate technical debt. I thought he had some good ideas and I hope he is able to develop formulas that actually work for particular instances of technical debt.

Steve brought up Godwin’s Law: “As a Usenet discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.” I thought this was interesting, since the term technical debt could be diluted in its usage if it covers too much of a spectrum and cannot be pinned down.

I thought Steve brought up a fairly good metaphor for technical debt revolving around his trailer hitch. He had noticed for some time that his trailer hitch needed painting to protect it from the elements. The problem was that other pressing items of business came up, and he did not paint the hitch. When he eventually went to paint the trailer hitch, it had rust on it. This meant that the total effort to protect the trailer hitch from the elements had grown exponentially: now he had to clean the hitch and make sure the rust was gone before he could paint it.

Ron Jeffries asked to draw a picture on the blackboard to discuss technical debt, and he came up with what I thought was an amazing representation of the issue. Here is my attempt at recreating his phenomenal chalk talk in a tool:

We can make incremental progress on a large feature area within a system, adding value with each addition. In order to make incremental progress we should keep code clean and easy to change so that the addition of more features on top of existing functionality does not slow down. In fact, we should be able to deliver features faster on top of each capability already built into the system. This does not always (or even usually, for that matter) happen in projects, and instead we end up with a “big ball of mud” to work with. As a team working on this system begins an effort to add a new feature, they get caught in a quagmire of old, crusty code that is difficult to work with. What is even worse is when there are external dependencies on this big ball of mud that make changes even more risky than they would be on their own.

David Christiansen: Where did all this $@#% come from?

David was entertaining and provided some instances of what he considered technical debt. Here are the sources he listed in the presentation:

  • Taking shortcuts (experienced)
  • Hacks (beginner)
  • Geniuses (overcomplicated)
  • Living in the now
  • Futurists
  • Time
  • The Iron Triangle
  • Anticipating reuse
  • Documentation
  • Innovation
  • Trying to avoid technical debt

Some people in the room thought this list went well beyond what technical debt should be. This list seems to cover almost anything that causes software to become difficult to work with. It was a great list of issues, and David went on to say he wasn’t as interested in defining technical debt so much as helping to create tools and practices that would minimize bad software.

David also discussed how documentation is a gamble:

“Like a slot machine the house always wins. Sometimes you get a winner but most of the time it is a loss.”

I will probably use this quote in the future with proper acknowledgements so thank you David.

Brian Marick said the following during the questions and answers portion of the presentation:

“Debt is a property between the people and the code.”

I thought this was interesting and already have multiple thoughts on how it can be applied. I am not sure of the best way to view this statement yet, but I found it important enough to write down, so I will think about it some more. Also, I hope to ask Brian more about this point in the future.

Michael Feathers also brought to the group a point about the tradeoffs made in terms of “navigability versus changeability in the code”. Some technical folks like code navigation to be explicit and therefore have difficulty reading strongly object-oriented code that separates implementation from interfaces to support change.

Nancy Van Schooenderwoert: A Garden of Metaphors

Nancy proposed that we could start to explain technical debt more succinctly given a combination of a metaphor and metrics. The metaphor can help people who are non-technical understand the impact of their decisions on technical artifacts. For instance, the use of the word debt helps people visualize the problems with just making it work versus craftsmanship (a word brought up by Matt Heusser that I thought was useful). She mentioned that technical debt is about expectation setting.

NOTE: Nancy wrote to me and added the following comment – “Metrics and Metaphor have opposite weaknesses so they support each other well. People can be suspicious of metrics, because there is an infinite choice of things to measure and how to measure them. Metaphor, on the other hand rings true because of familiar experiences the listener has had. The only problem is that it depends on tech debt *truly* being like the thing in the metaphor. We have to check back with reality to know if that’s the case. That implies we measure something to see whether and how the behavior of tech debt is different from the behavior of financial debt (or whatever else we used as metaphor).

I think some of the most useful metrics to start with are:
* value stream mapping
* bug metrics
* story points delivered, and remaining to be done”

Nancy also brought a great alternative metaphor to debt based on Gerald Weinberg’s “Addiction” trigger responses. Sometimes decisions are made for short-term results without understanding their long-term effects, as with alcohol, smoking, and other drug addictions. To enable better responses to the addiction, we must set up a more appropriate environment that allows proper responses to be made within it. Here is my portrayal of the “Addiction” metaphor drawing Nancy put up:

The “Environmentalist” as a metaphor was also brought up by Nancy. In life, nobody has to pay for the air we are dirtying up. Economic systems poorly reflect environmental quality, and this has helped lead to the issues we now face with global warming.

David Christiansen: You’ve got a lot of Technical Debt, Now What?

I don’t have any notes about David’s talk but Chris McMahon mentioned something I thought was wonderful:

JFDI – Just #%$ Do It

We got to this point when people started asking why a technical person wouldn’t just fix the issues when they see them. Chris discussed an example of how they decided to use Bugzilla: one of the developers got tired of the old bug tracking system, and he JFDI by installing Bugzilla. It was pointed out that there are also examples of JFDI backfiring, and I can think of a situation that impacted a team for multiple days because of JFDI.

The visibility into each task that a person takes on in some organizations makes this approach difficult to follow. How can we help people decide to make the right choices while developing software in these environments?

Matt Heusser: Debt Securities

Matt brought up the term “moral hazard” to describe how technical people act when insulated from the long-term effects of their decisions. For instance, a person may take a shortcut in developing a feature since they are not going to be working on the code 1 year from now when it must be modified to support a new feature. Matt pointed out two practices that may help minimize this effect:

  • Customer close to the Team
  • Agreement on how to work together

Chet Hendrickson pointed out that a good way to minimize the problem with moral hazard is by:

“Lowering the threshold of pain”

For instance, Chet brought up doing our taxes. Yes, he could incrementally update his taxes 2 hours each month. Instead he waits until 2 weeks before the deadline to get his tax forms completed, because the potential headache of tax evasion is strong enough that it crosses a threshold.

Brian Marick: Market World

Brian defined the term market world as:

“Get as much for as little”

He then described the term social world as:

“Transactions are not accounted for”

Brian discussed how use of the term “debt” as a talking point may push us into a “market world”. This could be problematic since it leads to the creation of technical debt by doing only enough now to get it out the door. Maybe we could do more to introduce social aspects into the talking points for removing what we now call technical debt.

Brian is a tremendous thinker, IMHO. He brings creative and profound discussion points to the table. Here is one such point he made:

“Agile teams care deeply about business value…the problem with agile teams is they care more about business value than the business does.”

Being an agile coach has led me to believe this is true many times over. I wonder if this is something we can work on as an industry, and whether we should move more towards social world ideas to identify the right vision for our software delivery focus.

Rob V.S.: Developer in a Non-agile Environment

Rob shared some observations about development in a non-agile environment. Here are a few of them:

  • Many developers, for some reason, want to appease customers with their time estimates
  • Rob saw this appeasement was causing problems for him and others in the code
  • Rob decided he would start buffering estimates to incorporate refactoring into his processes
  • A problem occurred when refactoring was causing implementation times to shorten and making his estimate buffers unnecessary
  • Rob was finding that he was able to implement features in about 1/2 the time of others now that he was refactoring code
  • To his amazement the refactoring worked every time even though it felt like a risk each time he thought about it in his estimates

I thought Rob’s story was a great addition to the group. Not everybody, maybe not even a majority, of the people who participated were involved in projects that were using an agile framework.

Michael Feathers: Recovery

Michael explained that he saw the industry talking about people issues more often today than before. This seemed like a good thing, yet he wondered when we would get back to discussing technology issues. Chris McMahon said the following, which I thought was a good principle to follow:

“Make the easy things easy and the hard things possible”

I am not sure where I heard something similar before, but Chris brought it up so quickly that I attribute it to him until further notice. 😉 (NOTE: Chris said this “as far as I know originated as a design principle in the early days of Perl development. I was quoting Larry Wall.”)

David Andersen: Business Models

David pointed out something that I discuss often in training courses on Scrum:

“IT as a cost center versus a profit center”

I was quite interested in this topic and see it as potentially the most important software development environment problem to be solved. Yet the problem may be so large that finding a solution may be near impossible. Therefore I have found that discussing the issue one organization at a time sometimes helps.

David expressed that companies who work in a time and materials approach tend to be cost centers. The idea is that we employ a warm body to take on the work. Those companies whose approach is a service for a fee tend to think like a profit center. The right people will emerge and deliver the services based on setting up the proper relationship.

Brian Marick brought up a reference to the Winchester Mystery House that is filled with all kinds of oddities. I can’t remember why he brought this up in terms of business models but it could be something to think about when discussing technical debt and its potential ramifications.

Matt Heusser: Clean Code and the Weaker Brother

Matt presented the idea of the weaker brother, and it caused me to look at team composition from another perspective. At least it gave a strong analogy to draw from for conversation about it. One thing that I thought was interesting about Socialtext, where a few of the folks including Matt work, is that their team members are truly distributed. They have communication tools that help them minimize the potential issues with a fully distributed team. One of the tools and processes they use is that every commit message to source control goes to the development email list. In response, from time to time other people on the team challenge the implementation, and a healthy, respectful banter goes on that improves the quality of the overall software. I will take this away as a talking point on distributed teams and may even use it on one of our projects in the near future to see how it works for us.

Chris McMahon discussed a policy they had at a previous employer that said everyone must work on a team at least 1 week per month, even the iteration leader (similar to a ScrumMaster role). I will have to think about this policy and its ramifications but I truly believe that I must work on a team every once in a while to keep my practices up to snuff.

Michael Feathers: Topic of Unknown Origin but Greatly Appreciated (my own words)

Michael had a question that I thought was interesting:

“Would the world be better if all your code was gone in 3 months?”

The answer to this question for your particular project context may help the team decide what the effect of technical debt is today. I had a couple of comments on the subsequent discussion points but never got them out because there were so many passionate people with great points, ideas, and questions. Here are some points, from my own experience with some horrible situations, about taking an incremental approach to improving these codebases:

Abuse Stories – Mike Cohn brought these up during a presentation, and they were not in his slide materials. He has since added them, and I believe them to be a greatly important and easy-to-implement type of story for describing the cost of not addressing what are usually technical debt or, more likely, architectural features. You can follow the link to an old blog entry I posted on this subject.

“Finding a common enemy” – I find teams are not usually motivated until they have a common purpose. One way to find a common purpose quickly is to find a common enemy. This may be a competitor’s product or another team (I hope not but hey?). This can bring a team together and focus their efforts to defeat the competitor. I have heard of companies who developed this focus and cornered their marketplace in a fairly short timeframe. This could also help to address technical debt since those issues will be fixed in order to continue on the path to defeating the common enemy.

Michael did a great job of describing how companies may not consider the value of their existing software enough. Code has value and organizations who understand this can make better decisions about how they treat their software assets. The idea is to prevent devaluation of a software asset when appropriate.

Ron Jeffries & Chet Hendrickson

OK, now this was fun. Ron and Chet put on an artistic show that put technical debt into a perspective easily understood, IMHO. I will attempt to recreate their drawings and the reasoning behind each one, but I hope they write an article on this soon since they are incredibly adept at delivering a message succinctly.

As a team is building a system their velocity will either stay fairly consistent, decelerate, or accelerate.

Each feature implementation is developed using a combination of available system capabilities and newly developed capabilities.

A team can choose, within their current environmental context, to build a rat’s nest that is hard to work with later or a well-designed system that more easily changes with new business needs.

The punch line, from what I could gather, was that the use of a negative term like “debt” may be problematic from an emotional point of view. Rather than using a negative to discuss the topic, it may be better to discuss how we can build upon what we have. Thus we could call technical debt something like “liquid assets”. We can use our existing code to develop new capabilities for our business quickly and at lower cost than doing so from scratch. I am not sure if this term will stick, but I like the point of view of building upon what we have already developed.

Chet and Ron also brought up the 4 basic principles of simplicity in code by Kent Beck:

  1. Tests all run
  2. No duplication
  3. Expresses all design ideas
  4. Minimize code

* These are in order of importance, since each of the last 3 is potentially in conflict with the others.

Wrap Up

There is so much more that I didn’t write down, don’t remember, or didn’t understand the importance of at the time. The above content may seem somewhat haphazard since I did not create a coherent overview but rather just recorded what I heard from my perspective. I hope it is still something that others can get some ideas from and use effectively. Let’s start reducing technical debt for the good of our organizations, our customers, and the morale of our teams.

The “Wright Model” for Describing Incremental Architecture

Posted by on 02 Jun 2008 | Tagged as: Agile, Architecture, DotNet, Java, Product Owner, Scrum, TDD, XP

One of the most common questions in teaching and coaching agile processes to groups is:

“How do we design our software while delivering iterations of potentially shippable product increments?”

Scrum, an agile process that I teach and coach organizations on implementing, asks that each Sprint, analogous to an iteration, delivers a potentially shippable product increment. There is emphasis on potentially shippable since it is quite common to have releases that involve running multiple Sprints until there is enough value for your users. I usually describe a potentially shippable product increment as software of the quality that a releasable version would include. This means that each Product Backlog item implemented during the Sprint is tested, coded, integrated, documented, and verified. Scrum teams gain a better understanding of what deliverables are necessary to make this happen by creating a Definition of Done.

When I ask which design decisions are difficult to deliver within the Scrum process, the answers usually revolve around high-level architecture decisions or data modeling. There are other specific design areas that get brought up, but let’s focus on these for now. In order to help a Scrum team understand from a conceptual point of view how incremental design works across a release of their software product, I use a diagram that I believe Mike Cohn created. Here is my interpretation of the diagram:

Incremental Design on New Product

This diagram attempts to describe how, in new software product development efforts, more emphasis in early Sprints is put into implementation of architecture elements. As the architecture is more fully defined and implemented, emphasis in later Sprints increasingly shifts to feature development. In the final Sprint of a release there may be little to no architecture implementation left to do. This diagram demonstrates the expectations of early release deliverables in terms of technical architecture implementation to support feature delivery. It also shows that each Sprint in the release should deliver features that a user can review.

A software product team that delivers more than one release may have less architecture emphasis in the early Sprints of following releases. This is shown in a modified version of the diagram below:

Incremental Design in Following Releases of Product

After I described the above diagrams to a developer named David Wright, he approached me to validate his understanding. Within 10 minutes of my description of incremental architecture, he had developed a new diagram perspective which I thought was brilliant. His diagram involved two axes, with the x-axis representing the surface visible to users and the y-axis representing the depth of the architecture. In Sprint 1 of a multiple-Sprint release, a portion of both the surface and the architectural depth are realized. The figure below is a visual representation of the portions implemented of a fully releasable product version. The dark blue areas of the grid show implementation of the surface and depth in Sprint 1, while the empty grid elements represent what has not been implemented yet.

Sprint 1 Depth and Surface of Releasable Product

As a release progresses, the surface-visible features and architectural depth implemented are incrementally built upon towards a fully releasable product version. The following diagram shows the incremental architecture progress later in the release cycle.

Depth and Surface of Incremental Architecture Later in Release

The adoption of an incremental architecture approach comes with a couple of potential issues:

  • Less up-front design of the overall release architecture is known, both at the beginning and while the depth of the architecture is still being implemented
  • New knowledge may impact existing implementation of architecture elements

In order to manage these types of issues we must implement disciplined practices that allow our architecture to accept change as we gain more knowledge. This is why Extreme Programming practices such as Test-Driven Development (TDD), Continuous Integration (CI), Pair Programming (or continuous code review), and Refactoring have become so common on Scrum teams. TDD gives our design an executable way to prove we have not broken existing functionality at the component and functional level. This does not negate the need for exploratory testing by a person, but it will keep manual testing to a manageable level. CI automatically runs builds, automated tests, and deployment of our application, then provides feedback to the team about the current state of the integrated system. Pair Programming increases knowledge transfer across the team and provides coherent communication of the product’s domain into the tests, code, and support artifacts. And finally, refactoring is defined by the guy who wrote the book on it, Martin Fowler, as:

“a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior”

Refactoring is best supported by high automated test coverage, which allows a team to know that external behavior has not changed in response to changes deep in the architecture. Here is a visual representation of architectural elements, in red, which could be refactored in response to a new feature.

Depth and Surface of Release with Refactored Elements

Each of the refactorings may be a small change, but together they lead to a larger positive impact over the span of a release. Effective use of refactoring keeps our architecture flexible and therefore able to meet evolving business needs.
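
As a tiny illustrative sketch of a behavior-preserving refactoring (the Invoice class and its figures are invented for this example), the test pins down external behavior while a duplicated tax calculation is extracted into its own method:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// This test pins down external behavior; it passes unchanged both
// before and after the internal restructuring below.
public class InvoiceTest {
    @Test
    public void totalIncludesTax() {
        Invoice invoice = new Invoice(100.00, 0.10);
        assertEquals(110.00, invoice.totalWithTax(), 0.001);
    }
}

class Invoice {
    private final double amount;
    private final double taxRate;

    Invoice(double amount, double taxRate) {
        this.amount = amount;
        this.taxRate = taxRate;
    }

    // The tax calculation used to be repeated inline wherever it was
    // needed; extracting tax() changes internal structure only, so the
    // test above still passes.
    double totalWithTax() {
        return amount + tax();
    }

    private double tax() {
        return amount * taxRate;
    }
}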

I have used David Wright’s model to describe incremental architecture to other clients in conjunction with the original diagram from Mike Cohn. It has helped provide a clearer picture of incremental design and how to incorporate it into their real world projects. With David’s permission I named it the “Wright Model” and will continue to use it in the future. Thank you, David.

Big design up front (BDUF) is based on the thought that we can lock down business requirements and design our software right the first time. The problem is that business needs change almost as soon as the requirements have been developed. Not only do they change, but the functionality requested is better understood once a user touches the feature and provides feedback. Incremental design of a system throughout a release allows us to incorporate this feedback and change our architecture to satisfy the user’s needs.

Inferior Tracks Lead to Superior Locomotives

Posted by on 05 Aug 2007 | Tagged as: Agile, Architecture, DotNet, Java, Leadership, Scrum, TDD, XP

Larry L. Peterson, professor and Chair of Computer Science at Princeton University, gave a great talk on PlanetLab: Evolution vs. Intelligent Design, which I believe is interesting to people involved in emerging architecture. One of the principles of the Agile Manifesto for Software Development is:

The best architectures, requirements, and designs emerge from self-organizing teams.

Remember this principle while listening to the talk.  Larry points out many examples of when this is true and the issues that can occur as a result.  These issues are all resolvable but the resolution may not be what you initially expect.

One of the great discussion points, in my opinion, was on how inferior tracks lead to superior locomotives. The story is that track standards across the American West were much more liberal than those previously defined on the East Coast and in Europe. Therefore, European trains could not run well on American tracks, and the locomotive industry in America had to cope with this issue. The American locomotive industry then created much more robust locomotives which dealt with the real-world issues of running trains on these tracks. Coming from the Jini community of yore, this reminds me of the 8 Fallacies of Distributed Computing and the question of how much abstraction can be designed up front before the missing details become overhead in software development.

Managing Unit and Acceptance Tests Effectively

Posted by on 23 Jul 2007 | Tagged as: Acceptance Testing, Agile, Architecture, DotNet, Java, Scrum, TDD

In my experience, the use of Test-Driven Development (TDD) and automated acceptance testing on software projects makes for a powerful tool for flexible code and architectural management. When coaching teams on the use of TDD and acceptance testing, there are some foundational test management techniques which I believe are essential for successful adoption. As a project progresses, the number of unit and acceptance tests grows tremendously, and this can cause teams to become less effective with their test-first strategy. A couple of these test management techniques are categorizing types of unit tests to run at different intervals and in different environments, and structuring acceptance tests for isolation, iterative, and regression usage.

In 2003, I was working on a team developing features for a legacy J2EE application with extensive use of session and entity beans on IBM WebSphere. The existing code base lacked automated unit tests and had many performance issues. In order to tackle these performance issues, the underlying architecture had to be reworked. I had been doing TDD for some time and had become quite proficient with JUnit. The team discussed and agreed on using JUnit as an effective way to incrementally migrate the application to our new architecture. After a few weeks the unit tests started to take too long to run for some of the developers, including myself.

One night I decided to figure out if there was a way to get these tests to run more quickly. I drew a picture of the current and proposed architecture on the whiteboard, and it hit me: we could separate concerns between interfacing layers by categorizing unit tests for business logic and integration points. Upon realizing this basic idea, I came up with the following naming convention that would be used by our Ant scripts for running different categories of unit tests:

  • *UnitTest.java – These would be fast-running tests that did not need database, JNDI, EJB, J2EE container configuration, or any other external connectivity. In order to support this ideal, we would need to stub out foundational interfaces such as the session and entity bean implementations.
  • *PersistanceTest.java – These unit tests would need access to the database for testing the configuration of entity bean to schema mappings.
  • *ContainerTest.java – These unit tests would run inside the container using a library called JUnitEE and test the container mappings for controller access to session beans and JNDI.

In our development environments we could run all of the tests ending with UnitTest.java when saving a new component implementation. These tests would run fast: anywhere from 3-5 seconds for the entire project. The persistence and container unit tests were run on an individual basis in a team member’s environment, and the entire suite of these tests would be run by our Continuous Integration server each time we checked in code. These took a few minutes to run, and our build server had to be configured with an existing WebSphere application server instance and a DB2 relational database that worked with the application.
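
Here is a rough sketch of how Ant can select these categories by filename pattern. The target names, directory layout, and classpath reference are assumptions for illustration rather than our actual build files:

<!-- Fast tests: run constantly in each developer environment -->
<target name="unit-test" depends="compile-tests">
  <junit haltonfailure="true">
    <classpath refid="test.classpath"/>
    <formatter type="plain"/>
    <batchtest todir="reports">
      <fileset dir="build/test-classes" includes="**/*UnitTest.class"/>
    </batchtest>
  </junit>
</target>

<!-- Slower tests: run individually, or as a full suite on the CI server -->
<target name="integration-test" depends="compile-tests">
  <junit haltonfailure="true">
    <classpath refid="test.classpath"/>
    <formatter type="plain"/>
    <batchtest todir="reports">
      <fileset dir="build/test-classes"
               includes="**/*PersistanceTest.class, **/*ContainerTest.class"/>
    </batchtest>
  </junit>
</target>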

In “The Psychology of Build Times”, Jeff Nielsen presented on the maximum amount of time builds, unit tests, and integration tests should take for a project. If builds and tests take too long, a team will be less likely to continue the discipline of TDD and Continuous Integration best practices. At Digital Focus, where Jeff worked, they had a unit test naming convention similar to the one I described above:

  • *Test.java – unit tests
  • *TestDB.java – database integration tests
  • *TestSRV.java – container integration tests

Another good source of information, from Michael Feathers, sets out unit testing rules to help developers understand what unit tests are and are not. Here is his list of “a test is not a unit test if”:

  • It talks to the database
  • It communicates across the network
  • It touches the file system
  • It can’t run at the same time as any of your other unit tests
  • You have to do special things to your environment (such as editing config files) to run it

Automated acceptance tests also have effective structures based upon the tools that you are using to capture and run them. For those using Fit, I have found that structuring my tests into two categories, regression and iteration, supports near- and long-term development needs. The Fit tests which reside in the iteration directory can be run each time code is updated to meet acceptance criteria for functionality being developed in the iteration. The regression Fit tests are run in the Continuous Integration environment to give feedback to the team on any existing functionality which has broken with recent changes.

If you are using StoryTestIQ, a best practices structure for your automated acceptance tests has been defined by Paul Dupuy, creator of the tool. That structure looks like the following:

  • Integration Tests Tag Suite
  • Iteration Tests
    • Iteration 1 Tag Suite
    • Iteration 2 Tag Suite
  • Story Tests
    • User Story 1 Suite
    • User Story 2 Suite
  • Test Utilities
    • Login
      • Login As Steve
      • Login As Bob
    • Shopping Cart

You might have noticed a couple of descriptions above: Tag Suite and Suite. A tag suite is a collection of all test cases which have been tagged with a particular tag. This is helpful for providing multiple views of the test cases for different environments such as development, QA, and Continuous Integration. A suite is usually used to collect a user story’s acceptance tests together. There are other acceptance testing tools such as Selenium, Watir, and Canoo WebTest, each with their own best practices on structuring.

Teams can effectively use TDD and automated acceptance tests without accruing overhead as the implemented functionality grows larger. It takes high levels of discipline to arrange your tests into effective categories for daily development, feedback, and regression. Tests should be treated as first-class citizens along with the deliverable code. In order to do this they must be coded and refactored with care. The payoff for the effective use of automated unit, integration, and acceptance tests is tremendous in the quest for zero bugs and a flexible codebase. Happy coding.