When Linux Runs Out of Memory
Subject:   kernel memory
Date:   2007-07-04 09:45:19
From:   mulyadi_santosa
Response to: kernel memory


Big sorry for the late reply. Hmm, "low mem"? I'm not sure exactly what you're referring to here. Maybe the lowmem memory zone, a.k.a. ZONE_NORMAL?

But anyway, in general, memory can be used up, down to the last drop. This is especially true when you allocate it in kernel mode: nothing stops you there, because the (Linux) kernel always trusts itself. Of course, any sane kernel developer should catch this kind of quirk in the first place, before making a stable release.

And about user processes allocating kernel-mode memory: actually, you already do that implicitly all the time. When you start a program, the kernel allocates a small amount of memory to store its task descriptor. When you make a system call, some user memory content is usually copied into a kernel memory area before being processed further.

There is a more explicit example, assuming you know a bit about sound programming: IIRC, if you prepare a PCM channel and ask for some amount of buffer, you are actually requesting kernel-mode pages.

I hope this clarifies your doubts.




  • kernel memory
    2007-07-09 04:56:17  campbellmc [View]

    Hi Mulyadi,

    Thanks for the reply.

    Yes, by 'low mem' I meant ZONE_NORMAL or the kernel memory area, i.e., the first 1GB (or 896MB) if you have up to 4GB on a 32-bit system (I'm just learning this stuff, so apologies in advance if some of my questions are a bit daft).

    When you say that the kernel 'always trusts itself', what do you mean exactly?

    Is it the case that (theoretically) the kernel will never allocate more RAM than is available in ZONE_NORMAL? You mentioned that it is possible/normal for the glibc memory allocator to overcommit memory allocation for user-space processes. Thus an application could allocate 3GB but the machine only has 80MB memory available (phys RAM + SWAP), and it can get away with this because the application may not actually end up *using* the memory. Now: does the memory allocator or the kernel apply a stricter policy on ZONE_NORMAL? Could it ever over-allocate memory in ZONE_NORMAL? Assuming that all of the 896MB of memory was allocated, would new kernel-space processes then not be able to start (e.g., loading a module) OR userspace applications would also fail, since they need some kernel memory too. If the kernel does overcommit, I imagine it could crash the machine if it used up all the phys low RAM (896MB). Part of my interest is to see whether low memory conditions can cause the machine to actually crash, as opposed to just causing user-space applications to fail. Also, I wonder if swap can assist system (not application) stability. If only user-space processes can be swapped out, and a user-space process running out of memory will not cause the machine to crash, just the application, then from a system stability point of view, swap is unnecessary, but it would help application stability, since it can allow more memory pages to be file-backed. Only in the case of heavy swapping or very large overcommitment would relying on swap be an issue. As you point out, memory allocation and commitment levels can be tuned.