(Reposted from my blog, at Steve Mallet’s request)
I was IMing with a friend today, and asked him if he was paying attention to the Coverity scan. He told me he wasn’t, and I jokingly commented that it had the potential to be as addictive as fantasy baseball.
A little bit later, I realized that I was speaking more truly than I’d thought. I often catch myself looking at different projects and their defect rates, comparing one against another. I’m registered to look at the Ruby results, and occasionally find myself wondering what the other projects look like.
It is possible to see some of the rolled-up data on both a per-project and a sitewide basis. For example, although Coverity identifies eight different classes of defects, the three most common account for over 60% of the total tracked defects.
There are some things I'd like to see, such as a project's velocity in defects corrected over time, and the average lifetime of a defect after identification (by project, sitewide, or by class of problem). Some of the data could probably be harvested from their website, but doing that would mean admitting that I'd crossed the line and was addicted to Coverity Scan data.
I guess I should be glad there isn't a fantasy Coverity project league out there.