Thanks for the reply.
Yes, by 'low mem' I meant ZONE_NORMAL, i.e., the kernel's directly mapped memory area: roughly the first 896MB of physical RAM on a 32-bit system (the kernel's 1GB address range minus the ~128MB reserved for vmalloc and highmem mappings). I'm just learning this stuff, so apologies in advance if some of my questions are a bit daft.
When you say that the kernel 'always trusts itself', what do you mean exactly?
Is it the case that (theoretically) the kernel will never allocate more RAM than is available in ZONE_NORMAL? You mentioned that it is possible, and indeed normal, for the glibc memory allocator to overcommit user-space allocations: an application could allocate 3GB on a machine with only 80MB available (physical RAM + swap) and get away with it, because it may never actually *use* that memory. So: does the allocator or the kernel apply a stricter policy to ZONE_NORMAL? Could it ever over-allocate memory there?

Assuming all 896MB were allocated, would new kernel-space allocations then simply fail (e.g., loading a module), or would user-space applications fail too, since they also need some kernel memory? If the kernel does overcommit, I imagine it could crash the machine once it used up all the physical low RAM. Part of my interest is in whether low-memory conditions can actually crash the machine, as opposed to merely causing user-space applications to fail.

I also wonder whether swap can assist system (as opposed to application) stability. If only user-space pages can be swapped out, and a user-space process running out of memory kills only that application rather than the machine, then from a system-stability point of view swap is unnecessary. It would, however, help application stability, since it gives anonymous pages a backing store, much as file pages already have. Only under heavy swapping or very large overcommitment would relying on swap become a problem. As you point out, memory allocation and commitment levels can be tuned.