It hurts things in several ways:
1) It means you now have to load balance across more web servers, which means more IP addresses mapped to your DNS entry (assuming you don't have a hardware-based web load balancer, which is expensive), which means changing your DNS entry every time you need more horsepower, which gets to be a pain to maintain.
2) You now have ever more webservers to maintain, rather than a constant N webservers fronting many more "generic" application machines. If you've ever tried to maintain more than 10 webservers at a time, you'll realize this is a royal pain.
3) If you have a DMZ (which you probably do if you're in an enterprise situation), you don't really want the machines running your business logic sitting in the DMZ (after all, we all know a DMZ is untrusted, right?). You especially don't want to multiply the number of DMZ machines just because you're not getting the performance you need.
4) The assumption here is that the business logic is fairly non-trivial, so the hit you take from the extra network hop is trivial compared to the performance increase you get from being able to amortize the load across N machines.
5) It's MUCH easier to load balance back-end machines than webservers, simply because clients discover webservers through DNS, whereas your front end can use something like a location service to find back-end machines that can service a request. Even better, this means you can add machines ON THE FLY in the backend without your front-end presence having to change at all. It's truly plug-and-play.
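To make point 5 concrete, here's a toy sketch of that back-end location service in Python. Everything here is hypothetical (the class name, the `.internal` hostnames, the round-robin policy) — a real deployment would use an actual naming/directory service — but it shows the key property: back-end machines register themselves, and the front end never has to change when capacity is added.

```python
import threading

class LocationService:
    """Toy in-memory location service: back-end machines register
    themselves, and front-end webservers ask for a machine to handle
    each request. Hypothetical sketch, not a production design."""

    def __init__(self):
        self._lock = threading.Lock()   # registrations can race with lookups
        self._backends = []
        self._next = 0

    def register(self, host):
        # A new back-end announces itself -- no DNS change required.
        with self._lock:
            if host not in self._backends:
                self._backends.append(host)

    def deregister(self, host):
        # A back-end leaving the pool is just as invisible to the front end.
        with self._lock:
            if host in self._backends:
                self._backends.remove(host)

    def pick(self):
        # Simple round-robin over whatever happens to be registered right now.
        with self._lock:
            if not self._backends:
                raise RuntimeError("no back-ends available")
            host = self._backends[self._next % len(self._backends)]
            self._next += 1
            return host

svc = LocationService()
svc.register("app1.internal")
svc.register("app2.internal")
print(svc.pick())               # round-robins across registered machines
svc.register("app3.internal")   # added ON THE FLY; front end is unchanged
print(svc.pick())
```

Contrast this with the DNS situation in point 1: adding a webserver means editing a DNS entry and waiting for caches to expire, while adding a back-end here is a single `register()` call.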
Folks, this is the reason for three-tiered architecture. Believe it or not, people didn't just make it up to sell more machines. It's a real solution to a real problem: how do you scale an application where the display processing is relatively trivial, but the business logic is where most of the processing occurs?