Published on O'Reilly Network (http://www.oreillynet.com/)


Bug Trackers: Do They Really All Suck?

by Matt Doar, author of Practical Development Environments
12/09/2005

The most complained-about development tool is often the bug tracking system. This fact prompted a friend to suggest the title for this article. To be fair, while they don't all suck, they are annoying. I've listed some of the most common frustrations with tracking bugs; you may have others to share, or even suggestions for fixing some of these annoyances.

More than most tools, bug trackers serve lots of different groups of people. Developers want to know which bugs need to be fixed. Testers want to know which bugs have been fixed in each build. Managers want answers to very different questions: "What kinds of bugs are there?" "Who should work on this bug?" and, "Is the number of critical bugs increasing or decreasing?"

There is some overlap between these different groups, but not that much. So one tool tries to please everyone and ends up never quite making anyone happy. Even so, some specific frustrations with most bug trackers pop up frequently enough to be worth describing:

Names for Bugs

It's hard work to sell products that have known bugs, faults, or failures, but products with "unresolved issues," "anomalies," "artifacts," or even "potential defects" somehow sound better. You may even come across products that contain "design side effects." Be careful which terms you use, because "defect" and "incident" can have unforeseen legal consequences.

Many bug tracking systems use the word "issue" because it can refer to feature requests or support tickets as well as bugs. Colloquially, "bug" still seems to be the most common way to refer to all of these things, so that's what this article uses.

One Bug, Multiple Releases

Different releases of a product typically use branches in the source control system. Often, there's a branch for each released version of a product (and its associated patch releases), and a mainline for developing the next release. This is shown in the figure below. Other kinds of graphs exist, but this seems to be the most common one.

[Figure 1: Branches for each released version and a mainline, with a bug present in only some of the releases]

When a bug is found in one version of the product, it may well exist in both earlier and later versions of the product. Code inspection and testing can confirm when the bug was introduced and whether it was unintentionally fixed later on. The figure also shows a bug that exists only in some parts of the tree of releases.

The problem is how to keep track of all of the releases that a bug exists in. Bugs often have a field named something like Found In, to record the release in which the bug was originally found, but a single value like that cannot describe every release the bug affects.

It seems to me that some tool, maybe the bug tracker, needs to keep track of releases and how the source files that were used to build them are related to each other. Then the information about exactly where the bug has been confirmed as existing, or confirmed as fixed, can be added. The key benefit of storing this information is that the answer to the question "Which bugs exist in release 2.0?" can include releases where the existence of the bug is only implied, not confirmed. The opposite question, "Which releases does this bug exist in?" can also be answered with greater confidence.
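To make the idea concrete, here is a minimal sketch of how presence in releases could be implied from a release graph plus the confirmed found-in and fixed-in information. The release graph, function names, and data are all invented for illustration; they are not taken from any real bug tracker.

```python
# Each release maps to the releases derived from it (the branch graph).
RELEASES = {
    "1.0": ["1.0.1", "2.0"],
    "1.0.1": [],
    "2.0": ["2.0.1", "3.0"],
    "2.0.1": [],
    "3.0": [],
}

def implied_releases(found_in, fixed_in):
    """Walk forward from each confirmed release; stop where the bug is fixed."""
    suspect = set()
    stack = list(found_in)
    while stack:
        release = stack.pop()
        if release in fixed_in or release in suspect:
            continue
        suspect.add(release)
        stack.extend(RELEASES[release])
    return suspect

# A bug confirmed in 1.0 and fixed in 3.0 is implied in everything
# derived from 1.0 except 3.0.
print(sorted(implied_releases({"1.0"}, {"3.0"})))
# ['1.0', '1.0.1', '2.0', '2.0.1']
```

With the graph stored somewhere, "Which bugs exist in release 2.0?" becomes a membership test over each bug's implied set, rather than a manual inspection of every branch.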

Integration with Source Control

Another question that is often hard to answer with existing bug tracking systems is: "Which files were affected by this bug?" The integration needed to answer this question is becoming more commonplace in bug trackers. It usually happens by parsing commit messages. For instance, when a developer is committing changes for bug 2345, he enters a commit message such as "Bug #2345, partial fix in the bounds checking," and this message is connected in the bug tracking system to bug 2345.

There are two approaches to parsing such commit messages. The faster, but less reliable, approach is to scan them at commit time, and to record the information in the bug tracker using whatever API is available. This approach runs into problems if the bug tracker is not available at commit time, or the wrong bug number was used in the comment.
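The commit-time scan can be as simple as a regular expression in a hook script. This hypothetical sketch only extracts the bug numbers; a real hook would then pass them to whatever API the bug tracker offers:

```python
import re

# Matches "Bug #2345", "bug 2345", and similar forms in a commit message.
BUG_PATTERN = re.compile(r"[Bb]ug\s*#?(\d+)")

def bug_ids(commit_message):
    """Return every bug number mentioned in a commit message."""
    return [int(n) for n in BUG_PATTERN.findall(commit_message)]

print(bug_ids("Bug #2345, partial fix in the bounds checking"))
# [2345]
```

Note that nothing here can tell whether 2345 was the bug the developer actually meant, which is exactly the wrong-number problem mentioned above.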

The second approach is to periodically download the history and commit messages of all possible source files, and to scan the commit messages in the downloaded file. This approach means that errors in the messages only have to be corrected in one place (the source repository), but the time between committing a change and the information appearing is longer. The amount of information for a large or long-lived project can also be substantial.
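The batch approach can be sketched the same way. Here the downloaded history is faked as an in-memory list of commit messages and changed files, since no particular source control tool is assumed; the point is that the scan builds a bug-to-files map in one place:

```python
import re
from collections import defaultdict

BUG_PATTERN = re.compile(r"[Bb]ug\s*#?(\d+)")

# Hypothetical downloaded history: (commit message, changed files).
history = [
    ("Bug #2345, partial fix in the bounds checking", ["src/checks.c"]),
    ("Bug #2345, finish the fix", ["src/checks.c", "src/limits.c"]),
    ("Tidy whitespace", ["src/main.c"]),
]

files_for_bug = defaultdict(set)
for message, files in history:
    for bug in BUG_PATTERN.findall(message):
        files_for_bug[int(bug)].update(files)

print(sorted(files_for_bug[2345]))
# ['src/checks.c', 'src/limits.c']
```

Rerunning the scan after a corrected commit message automatically corrects the map, which is the one-place-to-fix benefit described above.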

It seems to me that bug numbers ought to be associated with source files, however they are kept. Then, bug tracking systems could query the source control system for the information as needed.

Cleaning Up

As with many a filesystem, removing information is harder than adding it. Adding a new user or release version to a bug tracking system is an obvious decision: when it has to happen, you just do it. However, knowing when the same information can be safely removed means defining a removal policy and taking regular action to enforce it.

One common place where this kind of problem appears is with web-based tools and their drop-down lists of release versions. It only takes a few new test releases per week to soon create lists that are too long to use without scrolling awkwardly to look for the right release. Removing releases entirely from the system isn't going to work, since there may well be bugs that refer to the older release.

A similar problem is removing users from a system. The information about who has worked on a bug is useful, even if that person is no longer part of the project. So making it appear as though they had never been involved is not a good idea, much as you might want to do this with some people.

What is needed is a way to deactivate these values. They should continue to exist, but new or changed bugs can't use them. They don't appear in drop-down lists or as valid entries in text fields. Deactivating a user should also guide an administrator or manager through the process of reassigning active bugs, recording a departure date, and making sure that no more email is sent to the user.
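One way to model deactivation, sketched with an invented in-memory structure rather than any real tracker's schema (a real system would likely use an "active" column in the database):

```python
users = [
    {"name": "alice", "active": True},
    {"name": "bob", "active": False},  # left the project
]

def choices(values):
    """Only active values appear in drop-down lists."""
    return [v["name"] for v in values if v["active"]]

def is_valid_assignee(name, values):
    """Old bugs may still mention bob, but new bugs cannot use him."""
    return any(v["name"] == name and v["active"] for v in values)

print(choices(users))                   # ['alice']
print(is_valid_assignee("bob", users))  # False
```

The same flag works for release versions: a deactivated release disappears from the drop-down lists but keeps its meaning in every old bug that mentions it.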

Most bug tracking systems provide ways to search bugs, but no way to do a "search and replace." Sure, you can probably do it in the back-end database, but that's not usually easy. Removing text from lots of bugs at once can be useful when someone starts overloading a field by adding "standard" text such as "CRITICAL" to the description, instead of using the field provided for that purpose. Public systems are also vulnerable to having spam added as comments to bugs, which then sends email containing the spam to the people associated with the bugs.
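A bulk clean-up for that kind of field overloading might look like this sketch, which strips the pasted marker and records it in the proper field. The field names are invented, and a real clean-up would go through the tracker's API rather than plain dictionaries:

```python
bugs = [
    {"id": 1, "description": "CRITICAL crash on startup", "severity": "normal"},
    {"id": 2, "description": "Typo in the help text", "severity": "normal"},
]

for bug in bugs:
    if "CRITICAL" in bug["description"]:
        # Move the marker out of the free text and into the right field.
        bug["description"] = bug["description"].replace("CRITICAL", "").strip()
        bug["severity"] = "critical"

print(bugs[0])
# {'id': 1, 'description': 'crash on startup', 'severity': 'critical'}
```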

Automation Woes

The process of creating a release involves much more than just building the product from source files. One common requirement is an automatically produced list of the bugs that have been fixed in the new release, so that QA knows what to retest. Producing such a list typically means searching the bug data for the fixed bugs, adding metadata such as the new release version to each of them, and then formatting the results.

Automated steps such as these have to use the tool's API to search the bug data, to change the returned bugs, and to add metadata such as release versions. However, many bug tracking tools' APIs seem to be less full-featured and less well-tested than their graphical interfaces. This is somewhat ironic, since UIs are usually harder to test than APIs. Bug tracking APIs should also be available on multiple platforms and work with multiple languages.
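Sketched against a deliberately fake tracker object, since no real tool's API is assumed here, the release-notes step might look like this:

```python
class FakeTracker:
    """Stand-in for a real bug tracker API; all names are invented."""
    def __init__(self, bugs):
        self.bugs = bugs

    def search(self, **criteria):
        return [b for b in self.bugs
                if all(b.get(k) == v for k, v in criteria.items())]

    def update(self, bug_id, **fields):
        for b in self.bugs:
            if b["id"] == bug_id:
                b.update(fields)

def bugs_fixed_in(tracker, release):
    """Find fixed bugs not yet tagged with a release, and tag them."""
    fixed = tracker.search(status="fixed", fixed_in=None)
    for bug in fixed:
        tracker.update(bug["id"], fixed_in=release)
    return sorted(bug["id"] for bug in fixed)

tracker = FakeTracker([
    {"id": 1, "status": "fixed", "fixed_in": None},
    {"id": 2, "status": "open", "fixed_in": None},
])
print(bugs_fixed_in(tracker, "2.0"))  # [1]
```

A script this small is exactly what release automation needs, which is why a weak or untested API hurts so much in practice.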

Nitpicks

Then there are the nitpicky things, the minor annoyances that keep on reappearing in different bug tracking tools.

Entering XML into a comment field for many web-based bug trackers will produce odd effects the next time that you try to view the bug. Sometimes you can't even see enough of the problem to delete the offending XML! There are plenty of ways to recognize and display XML as text rather than interpreting it as part of the HTML for the page. Telling users that they have to add their few lines of XML data to a bug as an attachment is a poor substitute for doing it right in the first place.
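Doing it right is usually a single library call: escape user-entered text before embedding it in the page, so the markup is displayed rather than interpreted. In Python, for instance:

```python
import html

comment = "<config><timeout>30</timeout></config>"

# Escaped text renders as literal angle brackets in the browser,
# instead of being swallowed as part of the page's HTML.
print(html.escape(comment))
# &lt;config&gt;&lt;timeout&gt;30&lt;/timeout&gt;&lt;/config&gt;
```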

Another problem with web-based bug trackers is when a button is placed in the middle of a column in an HTML table. When a long line of text without spaces is entered in the column, the column width may expand off the edge of the screen, and the button will go with it. This leads to cries of "Where's that button gone?" and "I have to scroll sideways every time just to click one button?" Buttons aligned to the left may not be as pleasing, but at least you can find them.

Finally, tools with a fixed color scheme irritate me and at least five percent of all men. That's the percentage of men who have non-standard color vision. This "color-blindness" simply means that we find certain colors hard to distinguish. This is one reason why UI designers recommend that color should only be used to emphasize differences, not as the only way of detecting differences. For instance, mail from a bug tracking tool that shows what changed in a bug by coloring the changed fields red is just not good enough: the tool should make the changed text (or field names) italic, and use a color such as blue that is more distinguishable from black in small numbers of pixels. If in doubt, ask around, and you'll soon find someone qualified to test your color choices.

Conclusion

Rather than just leaving this article as a free-standing rant about bug tracking tools, why not speak up about what other bugs you've found in the bug tracking tools that you use? Even better, tell us your ideas about how some of the problems with bug tracking systems could be fixed.

Matt Doar is the author of Practical Development Environments.



Copyright © 2009 O'Reilly Media, Inc.