The authors are right when they say:
"Writing automated tests is harder than writing the code itself, in many cases."
Sometimes it can seem that you would be better off spending your creative energies (plus blood, sweat and tears) on the actual problem rather than on test code. And sometimes this is absolutely true. After all, if you re-architect fundamentally enough, a lot of those tests have to be thrown out anyway...
That said, writing tests can help you focus on the result to be achieved rather than getting prematurely caught up in how to achieve it. Gaining this perspective can save you a *lot* of time. But there are other ways to gain it.
The idea of *automating* tests is an excellent one - repetitive checking is exactly what computers are good at, isn't it? For some cases, how to test is obvious; for others, working it out is very challenging, particularly as the test must be simple enough not to contain bugs itself (such bugs can give *very* time-wasting false alarms!).
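As a minimal sketch of the point (the function and test names here are hypothetical, not from any particular framework): a good automated test is so simple that it is obviously correct, because a wrong expected value would raise exactly the kind of false alarm described above.

```python
def median(values):
    """Return the median of a non-empty list of numbers."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

def test_median():
    # Checks simple enough to verify by eye: cheap to write,
    # and hard for the test itself to be the buggy part.
    assert median([3, 1, 2]) == 2
    assert median([1, 2, 3, 4]) == 2.5
    # A mistaken expectation here (say, expecting 3 above) would
    # fail forever and waste time chasing a bug that isn't there.

test_median()
```

Plain assertions like these also run unchanged under common Python test runners, so the "automation" costs almost nothing once the expected values are right.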
Like everything else, whether to test and how to test involve a tradeoff. Such compromises can only be made by understanding what is and is not important for your particular problem - and this is often something you don't know when you start.
Extreme Programming includes other ideas, such as "do the simplest thing that could possibly work" and "you ain't gonna need it". Yet this very pragmatism is not applied to testing, which is given a special ideological and dogmatic status.
To say that you should "always test" is simply dogma, and like all dogma, it misleads. Testing is a form of infrastructure, and infrastructure is not always worth its cost.
The truth is you should "sometimes test" - but you will have to work out when to do it and when not to.