Testing Web Apps Effectively with twill

by Michele Simionato
11/03/2005

You have just finished your beautiful web application, with lots of pages, links, forms, and buttons; you have spent weeks making sure that everything works fine, that it handles the special cases correctly, that the user cannot crash your system no matter what she does.

Now you are happy and ready to ship, but at the last minute the customer asks for a change. You have the time to apply the change, but you lack the time--and the will--to go through another testing ordeal. You ship anyway, hoping that your last little fix didn't break some other part of the application. The result is that the hidden bug shows up on the first day of use.

If you recognize yourself in this situation, then this article is for you. If not, I am sure you will find something interesting among the following topics:

  • How to separate unit tests from functional tests
  • How to test web applications (written in any language) using standard Python libraries
  • How to use twill, a nice, easy-to-learn web-testing tool
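As a taste of the second point, here is a minimal sketch of checking a page with nothing but the standard library. The Python 3 module names are used here (the original 2005 code would have used urllib/urllib2), and the tiny in-process server and its page content are made up purely for illustration:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    """A stand-in for the application under test."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>Welcome</body></html>")
    def log_message(self, *args):  # keep the test output quiet
        pass

# Port 0 asks the OS for any free port, so the test never collides.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The actual check: fetch the page and look for the expected text.
url = "http://127.0.0.1:%d/" % server.server_port
body = urlopen(url).read().decode()
assert "Welcome" in body
server.shutdown()
```

In a real test suite the server would be your application, not a stub; the fetch-and-check part stays the same.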

To Test or Not to Test

Let me begin with a brief recollection of how I became interested in testing and what I have learned in the last couple of years.


I have been aware of the importance of testing from the beginning, and I have heard about automatic testing for years. However, having heard about automatic testing is not the same as doing automatic testing, nor is it the same as doing automatic testing well. It takes some time and experience to get into the testing mood, as well as to be able to challenge some widespread misconceptions.

For instance, when I began studying test-driven methods, I had gathered two wrong ideas:

  • Testing was all about unit testing.
  • The more you test, the better.

After some experience, I quickly realized that unit tests were not the only tool, nor were they the best tool to test my application effectively. (I am an early adopter and supporter of doctests. See, for instance, my talk at the ACCU conference.) To overcome the second misconception, I needed some help.

The help came from an XP seminar I attended last year, where I actually asked the question, "How do I test the user interface of a web application, so that when the user clicks on a given page, she gets the expected result?"

The answer was, "You don't. Why do you want to test that your browser is working?"

The case for not testing everything

The answer made me rethink many things. I was well aware from the beginning that full test coverage is a myth, but I still thought a programmer should try to test as much as he can.

This isn't the right approach. Instead, it is important to discriminate among the infinite number of things that could be tested, and to focus on the things that are your responsibility.

If your customer wants feature x, you must be sure feature x is there. If, in order to get feature x, you need to rely on features x1, x2, ... xn, you don't need to test all of them. Test only the feature that you need to implement. Don't test that the browser is working--it's not your job.

For instance, in the case of a web application, you can interact with it indirectly via the HTTP protocol, or directly via the internal API. If you check that when the user clicks on button b the application calls method m and displays result r, you are testing both your application and the correctness of the HTTP protocol implementation, in both the browser and the server. This is way too much. You may rely on the HTTP protocol and test only the API: check that calling method m returns the right result r.
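A minimal sketch of this idea, with hypothetical names (the Application class and place_order method are invented, not part of any real framework):

```python
class Application:
    """The logic layer of a hypothetical web application."""
    def __init__(self):
        self.orders = []

    def place_order(self, item, quantity):
        """The method m that the web layer calls when button b is clicked."""
        order = {"item": item, "quantity": quantity}
        self.orders.append(order)
        return order

# Test the method m / result r contract directly: no browser, no server.
app = Application()
result = app.place_order("book", 2)
assert result == {"item": "book", "quantity": 2}
assert app.orders == [result]
```

The HTTP plumbing between the button and the method is somebody else's responsibility; these asserts cover only yours.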

Of course, a similar view can apply to GUIs. In the same vein, you must test that the interface to the database you wrote is working, but you don't need to test that the database itself is working--it's not your responsibility.

The basic point is to separate the indirect testing of the user interface--via the HTTP protocol--from the testing of the inner API. To this aim, it's important to write your application in such a way that you can test the logic independently of the user interface. Working in this way, you have the additional bonus of being able to change the user interface later, without having to change a single test for the logic part.

The problem is that typically a customer will give his specifications in terms of the user interface. He will tell you, "There must be a page where the user will enter her order, then she will enter her credit card number, and then the system must send a confirmation email ..."

This kind of specification is a very high-level test--a functional test--that you must convert into low-level tests. For example, you may have unit tests telling you that the ordered item has been registered in the database, that the send_confirmation_email method has been called, and so on.
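Such low-level tests can be written against hand-rolled stubs, with no web machinery involved. OrderSystem, FakeDB, and FakeMailer below are invented names, sketching one way to check both facts from the specification:

```python
class OrderSystem:
    """The logic under test; the db and mailer are injected dependencies."""
    def __init__(self, db, mailer):
        self.db = db
        self.mailer = mailer

    def submit_order(self, item, email):
        self.db.register(item)                      # "the order is registered"
        self.mailer.send_confirmation_email(email)  # "a confirmation is sent"

class FakeDB:
    """Stub database: merely records what was registered."""
    def __init__(self):
        self.items = []
    def register(self, item):
        self.items.append(item)

class FakeMailer:
    """Stub mailer: records addresses instead of sending mail."""
    def __init__(self):
        self.sent_to = []
    def send_confirmation_email(self, address):
        self.sent_to.append(address)

# The low-level tests derived from the customer's functional spec:
db, mailer = FakeDB(), FakeMailer()
OrderSystem(db, mailer).submit_order("book", "user@example.com")
assert db.items == ["book"]
assert mailer.sent_to == ["user@example.com"]
```

Because the database and the mailer are passed in rather than hard-wired, the same logic runs against real implementations in production and cheap fakes in the tests.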

The conversion requires some thinking and practice; it is an art more than a science. Actually, I think that the art of testing lies not in how to test but in what to test. The best answer to "How do I test a web application?" is probably "Make a priority list of the things you would like to test, and test as little as possible."

For instance, never test the details of the implementation. If you make this mistake (as I did at the beginning), your tests will get in your way at refactoring time, having exactly the opposite of the intended effect. Generally speaking: don't spend time testing third-party software; don't waste time testing code whose API is likely to change; and separate UI testing from application-logic testing.

Ideally, you should be able to determine the minimal set of tests you need to make your customer happy, and then restrict yourself to those tests.
