I think it's a matter of perspective, with perhaps a bit of C-programmer conceit thrown in.
C's strong point was primarily that it was a 'high-level assembler language'. It let one manipulate the underlying hardware while not having to bother with some of the details of a native assembler, and without being tied to a particular instruction set. Understanding the hardware is, indeed, an important part of writing in assembler or other languages that map closely to a particular hardware instruction set.
One still needs to understand the constraints of the execution environment in higher level languages. These days, however, that is often a virtual machine rather than a physical one.
Java programmers, for instance, have little need to understand the hardware they're running on, but can benefit from knowledge of the Java Runtime Environment. The JRE is to Java what the hardware is to C.
Programmers using a functional language have almost no need to understand the underlying hardware except in general terms.
Most commercial-quality compilers are quite good at optimizing code for their target, whether virtual or real. As long as the code is reasonably well structured, the compiler will make the best use of the target's resources (e.g. register vs. memory operations, load/store sequencing, etc.). Modern CISC processors will also reorder instructions on the fly in the pipeline (out-of-order execution), further reducing the payoff of hand-tuning.
So... I guess the point of all this is to say that, yes, it can be helpful to understand your target environment when using certain languages, but it's more important to understand the problem at hand and the tools (languages, algorithms, non-computer solutions) available to solve it.
And yes, Premature Optimization is t3h 3vil ;)