After the introduction of Test-Driven Development (TDD), there was only one "testing-related mantra" for developers:
write test, write code, refactor
and that was the point where all further discussion stopped. Of course, this is only an introduction to the topic, as there are many other factors that should be considered in order to deliver quality software to your customers.
In the past few months I have run across a few articles (and blog entries) that broaden the discussion on this very important topic. In this post I'll collect them in one place and summarize their content, hoping that they will be as valuable a read to you as they were to me.
For starters, in his Untested code is the dark matter of software post, Cedric Beust questions the common agile-development claim that untested code is broken code. He points out that missing a deadline, or shipping a product that doesn't implement everything that was asked of you, is much worse than shipping a product that is not 90% covered with test cases.
In his Randomizing testing post, Cameron Purdy discusses how you can improve the coverage of your test cases by randomizing test inputs. He provides a few very valuable tips that can indeed make your testing process much more efficient, such as:
- Always test near zero and near overflow
- Test random operations
- Multi-thread the test
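The first two tips above can be sketched as a simple randomized test. The saturating-add method under test and the 64-bit oracle are made-up examples of mine, not from Purdy's post; only the strategy of combining boundary values with random operands comes from the article.

```java
import java.util.Random;

// Sketch of randomized testing: probe values near zero and near overflow,
// then mix in purely random operands checked against a trusted oracle.
public class RandomizedAddTest {

    // Method under test: saturating addition (clamps instead of wrapping).
    static int saturatingAdd(int a, int b) {
        long sum = (long) a + (long) b;
        if (sum > Integer.MAX_VALUE) return Integer.MAX_VALUE;
        if (sum < Integer.MIN_VALUE) return Integer.MIN_VALUE;
        return (int) sum;
    }

    // Oracle: compute in 64 bits and clamp, so any 32-bit mistake shows up.
    static int oracle(int a, int b) {
        return (int) Math.max(Integer.MIN_VALUE,
                              Math.min(Integer.MAX_VALUE, (long) a + (long) b));
    }

    static void check(int a, int b) {
        int got = saturatingAdd(a, b), want = oracle(a, b);
        if (got != want)
            throw new AssertionError(a + " + " + b + ": got " + got + ", want " + want);
    }

    public static void main(String[] args) {
        // 1. Always test near zero and near overflow.
        int[] edges = { 0, 1, -1, Integer.MAX_VALUE, Integer.MAX_VALUE - 1,
                        Integer.MIN_VALUE, Integer.MIN_VALUE + 1 };
        for (int a : edges)
            for (int b : edges)
                check(a, b);

        // 2. Test random operations (fixed seed so failures are reproducible).
        Random rnd = new Random(42);
        for (int i = 0; i < 100_000; i++)
            check(rnd.nextInt(), rnd.nextInt());

        System.out.println("all checks passed");
    }
}
```

Note the fixed random seed: when a randomized test fails, you want to be able to rerun the exact same inputs.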
But as Brian Goetz explains in his Testing Concurrent Programs article, testing concurrent programs is a subject of its own.
Testing concurrent programs is also harder than testing sequential ones. This is trivially true: tests for concurrent programs are themselves concurrent programs. But it is also true for another reason: the failure modes of concurrent programs are less predictable and repeatable than for sequential programs. Failures in sequential programs are deterministic; if a sequential program fails with a given set of inputs and initial state, it will fail every time. Failures in concurrent programs, on the other hand, tend to be rare probabilistic events.
Briefly, his advice can be distilled into two basic rules:
- Structure programs to limit concurrent interactions
- Test concurrent building blocks
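The second rule can be sketched as a stress test that hammers one small building block from many threads at once and then checks an invariant. The counter here is a stand-in example of mine for whatever component your program isolates its concurrency in; the pattern of releasing all threads together to maximize interleaving follows Goetz's general advice.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Stress-test a single concurrent building block: N threads each perform
// M increments, and afterwards the total must be exactly N * M.
public class CounterStressTest {

    static final int THREADS = 8;
    static final int INCREMENTS = 100_000;

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        CountDownLatch start = new CountDownLatch(1);     // release all threads together
        CountDownLatch done = new CountDownLatch(THREADS);

        for (int t = 0; t < THREADS; t++) {
            new Thread(() -> {
                try {
                    start.await();                        // maximize interleaving
                    for (int i = 0; i < INCREMENTS; i++)
                        counter.incrementAndGet();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    done.countDown();
                }
            }).start();
        }

        start.countDown();                                // fire the starting gun
        done.await();                                     // wait for every worker

        int expected = THREADS * INCREMENTS;
        if (counter.get() != expected)
            throw new AssertionError("lost updates: " + counter.get() + " != " + expected);
        System.out.println("counter = " + counter.get());
    }
}
```

Swap the `AtomicInteger` for a plain `int` field and the same test will (probabilistically) expose lost updates, which is exactly the rare, non-deterministic failure mode Goetz describes.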
Of course, test cases are not the only way to assure yourself of the quality of your code base. The Fuzz testing article by Elliotte Harold explains how to create test data for your tests.
In fuzz testing, you attack a program with random bad data (aka fuzz), then wait to see what breaks. The trick of fuzz testing is that it isn’t logical: Rather than attempting to guess what data is likely to provoke a crash (as a human tester might do), an automated fuzz test simply throws as much random gibberish at a program as possible. The failure modes identified by such testing usually come as a complete shock to programmers because no logical person would ever conceive of them.
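A minimal fuzzing harness in that spirit might look like the sketch below. The toy `parseVersion` method is hypothetical; a real harness would target your own input-handling code. The point is the shape of the loop: generate gibberish, and treat anything other than the documented failure mode as a finding.

```java
import java.util.Random;

// Minimal fuzz harness: throw random printable-ASCII gibberish at a parser
// and flag any failure mode the parser does not document.
public class VersionFuzzer {

    // Parses "major.minor"; documented to throw IllegalArgumentException on
    // bad input. Anything else escaping it is a bug the fuzzer should catch.
    static int[] parseVersion(String s) {
        int dot = s.indexOf('.');
        if (dot < 0) throw new IllegalArgumentException("no dot: " + s);
        try {
            return new int[] { Integer.parseInt(s.substring(0, dot)),
                               Integer.parseInt(s.substring(dot + 1)) };
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("not a number: " + s, e);
        }
    }

    public static void main(String[] args) {
        Random rnd = new Random(1234); // fixed seed: reruns reproduce failures
        int rejected = 0;
        for (int i = 0; i < 50_000; i++) {
            // Build a random string of printable ASCII "fuzz".
            StringBuilder sb = new StringBuilder();
            int len = rnd.nextInt(12);
            for (int j = 0; j < len; j++)
                sb.append((char) (' ' + rnd.nextInt(95)));
            String input = sb.toString();
            try {
                parseVersion(input);
            } catch (IllegalArgumentException expected) {
                rejected++;                  // the documented failure mode: fine
            } catch (RuntimeException bug) { // anything else is a real finding
                throw new AssertionError("unexpected " + bug + " for: " + input);
            }
        }
        System.out.println("fuzzed 50000 inputs, " + rejected + " rejected cleanly");
    }
}
```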
He also suggests a few "defensive coding" techniques that can help you build more robust solutions. These techniques include:
- Grammar based formats such as XML
- Verified code such as Java
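The first technique above can be sketched as a simple gate: when input must be XML, let a real XML parser enforce the grammar up front, so malformed fuzz is rejected before it reaches your application logic. The sample documents here are made up; the standard JAXP `DocumentBuilder` does the actual checking.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import org.xml.sax.SAXException;

// Grammar-based defense: reject anything that is not well-formed XML before
// any application code ever sees it.
public class XmlGate {

    static boolean isWellFormed(String input) {
        try {
            DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                    input.getBytes(StandardCharsets.UTF_8)));
            return true;
        } catch (SAXException | java.io.IOException
                 | ParserConfigurationException e) {
            return false; // malformed input never reaches application code
        }
    }

    public static void main(String[] args) {
        System.out.println(isWellFormed("<order><id>42</id></order>")); // well-formed
        System.out.println(isWellFormed("<order><id>42</order>"));      // mismatched tag
        System.out.println(isWellFormed("random gibberish"));           // not XML at all
    }
}
```

With this gate in place, the fuzz tester's random gibberish can only exercise one narrow code path: the rejection branch.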
Finally, in his Code reviews article, Srivaths Sankaran talks about good old code reviews. The article introduces code reviews and explains the basic purposes of the process: checking that code adheres to coding standards, verifying that it addresses the requirements of the solution, and using the process as a pedagogical aid.
As code review is a time-consuming process, he also offers some advice on how to practice code reviews in your company. He focuses on two crucial questions: what to review and when to review.
As all these articles imply, code testing is all about balancing your time and effort to deliver quality code, and certainly no silver-bullet solution will do this for you.
So the question is: How do you approach testing and what are your experiences with different techniques?