When Linux Runs Out of Memory
Subject:   kernel memory
Date:   2007-06-28 07:25:56
From:   campbellmc
Hi Mulyadi,

Excellent article - many thanks! Just curious: is it ever possible that low mem could get used up? I am guessing it would have to be bad coding in a driver or some other piece of kernel code, causing a memory leak or something, which is (hopefully) highly unlikely. I've heard that user-space processes also need some kernel memory, but I am guessing the kernel's memory manager would deny any requests that it could not fulfil, and the application would simply fail.

Thanks again,



  • kernel memory
    2007-07-04 09:45:19  mulyadi_santosa


    Big sorry for the late reply. Uhm, "low mem"? I'm not sure exactly what you're referring to here. Maybe the lowmem memory zone, a.k.a. ZONE_NORMAL?

    But anyway, in general, memory can be used up (down to the last drop). This is especially true when you do it in kernel mode. Nothing stops you in that case, because the (Linux) kernel always trusts itself. Of course, any sane kernel developer should catch such a quirk before any stable release ships.

    And about user processes allocating kernel-mode memory: implicitly, you already do that all the time. When you start a program, the kernel also allocates a small amount of memory to store its task descriptor. When you make a system call, some user memory content is usually copied into a kernel memory area before being processed further.

    There is a more explicit example, assuming you know a bit about sound programming. IIRC, if you prepare a PCM channel and ask for some amount of buffer, you are actually requesting kernel-mode pages.

    I hope it clarifies your doubts.



    • kernel memory
      2007-07-09 04:56:17  campbellmc

      Hi Mulyadi,

      Thanks for the reply.

      Yes, by 'low mem' I meant ZONE_NORMAL or the kernel memory area, i.e., the first 1GB (or 896MB) if you have up to 4GB on a 32-bit system (I'm just learning this stuff, so apologies in advance if some of my questions are a bit daft).

      When you say that the kernel 'always trusts itself', what do you mean exactly?

      Is it the case that (theoretically) the kernel will never allocate more RAM than is available in ZONE_NORMAL? You mentioned that it is possible/normal for the glibc memory allocator to overcommit memory for user-space processes: an application could allocate 3GB even though the machine only has 80MB of memory available (physical RAM + swap), and it can get away with this because the application may never actually end up *using* the memory.

      Now: does the memory allocator or the kernel apply a stricter policy to ZONE_NORMAL? Could it ever over-allocate memory there? Assuming all of the 896MB were allocated, would new kernel-space allocations then fail (e.g., loading a module), or would user-space applications also fail, since they need some kernel memory too? If the kernel does overcommit, I imagine it could crash the machine once all the physical low RAM (896MB) is used up. Part of my interest is to see whether low-memory conditions can cause the machine to actually crash, as opposed to just causing user-space applications to fail.

      I also wonder whether swap can assist system (not application) stability. If only user-space processes can be swapped out, and a user-space process running out of memory will not crash the machine, just the application, then from a system-stability point of view swap is unnecessary. It would still help application stability, since it allows more memory pages to be backed by storage. Only in the case of heavy swapping or very large overcommitment would relying on swap be an issue. As you point out, memory allocation and commitment levels can be tuned.


      • kernel memory
        2007-07-13 04:52:51  mulyadi_santosa

        OK, to answer your first question. "The kernel trusts itself" means the kernel won't do any complicated checks when it asks for memory. For example, you ask for a 256 MB memory block (using kmalloc(), the kernel-space version of malloc()). The allocator will give it to you if that many free pages exist, with no allocation delay at all. Another example: you can allocate a big chunk and forget to free it later. There is no garbage collector in kernel land, so that chunk will stay marked as used until the end of the kernel's life.

        Now the second question: could the kernel over-allocate? In practice, no. What you see as overcommit actually exists only in user space. Recall that the actual page allocation only happens at page-fault time (be it a soft or a hard fault; "hard" means data must be read from backing storage). In kernel space, when you ask for RAM pages, you either get them all at once or get nothing (in case of low free pages or heavily fragmented memory).

        About the policy, I can't recall anything specific here. I just remember that in each zone (DMA, normal, and highmem), some percentage of free pages is reserved. No user-mode allocation is allowed to drain these reserved pages unless its effective user ID is root. Another policy I recall is the way the allocator prioritizes the zones: IIRC, it first tries to grab pages from the highmem zone, then normal, and as a last resort the DMA zone.

        About the importance of swap: this is a somewhat subjective answer. Theoretically, you won't need swap if you own a very big amount of RAM, say 64GB (which can be addressed in 32-bit mode using PAE). But that's rare. Nowadays, most PCs own 256 MB - 2 GB of RAM. Sure, that's big, but applications also keep growing and consume more RAM, so 2 GB is likely eaten fast under certain workloads. If you don't own swap, once that 2 GB is used, you're out of luck: no more allocation is possible. Swap acts as a life saver here, allowing you to allocate a bit more without being rejected. It also permits the kernel to swap out inactive pages, so RAM pages are freed up for more important jobs.

        Does this clear your doubts?