28

The only relevant program I know of is pmap, but it only prints the memory map of a single process.

I would like to see how the entire physical RAM is occupied, and by which processes/libraries, including the kernel (as opposed to the single-process point of view that pmap gives).

Ideally also with a graphical interface.

Do you know if there's any such tool?

I know about the ambiguity introduced by shared libraries. Where that's the case, the tool could display a 1-pixel-wide line and an arrow to the real location of that library.

What do I need this for? To view the RAM fragmentation.

  • Why would you want to see that? Unless you are developing kernel code or have a NUMA system it should not be of importance. Are you perhaps confusing physical memory fragmentation with memory fragmentation introduced by userspace memory allocators? – thkala Feb 01 '11 at 14:31
  • 4
    No I'm not confusing anything. I have say 2Gb of RAM on my system, I want to see how this chip is really used by the kernel. I just want to see it, who knows what questions may be raised afterwards. – Flavius Feb 01 '11 at 14:32
  • 1
    Ah, learning for learning's sake :-) +1 – thkala Feb 01 '11 at 14:33
  • 1
    @thkala it's called exploring :-) – Flavius Jul 24 '12 at 10:54

1 Answer

35

Memory Fragmentation

When a Linux system has been running for a while, memory fragmentation can increase; how quickly depends heavily on the nature of the applications running on it. The more processes allocate and free memory, the faster memory becomes fragmented, and the kernel may not always be able to defragment enough memory for a requested size in time. When that happens, applications may be unable to allocate large contiguous chunks of memory even though there is enough free memory available. Starting with the 2.6 kernel, i.e. RHEL 4 and SLES 9, memory management has improved tremendously and memory fragmentation has become less of an issue.

To see memory fragmentation you can use the magic SysRq key. Simply execute the following command:

# echo m > /proc/sysrq-trigger

This command will dump current memory information to /var/log/messages. Here is an example of a RHEL3 32-bit system:

Jul 23 20:19:30 localhost kernel: 0*4kB 0*8kB 0*16kB 1*32kB 0*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 1952kB)
Jul 23 20:19:30 localhost kernel: 1395*4kB 355*8kB 209*16kB 15*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 12244kB)
Jul 23 20:19:31 localhost kernel: 1479*4kB 673*8kB 205*16kB 73*32kB 21*64kB 847*128kB 473*256kB 92*512kB 164*1024kB 64*2048kB 28*4096kB = 708564kB)

The first line shows DMA memory fragmentation, the second line shows Low Memory fragmentation, and the third line shows High Memory fragmentation. The output shows memory fragmentation in the Low Memory area, but there are many large memory chunks available in the High Memory area, e.g. 28 chunks of 4 MB.
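
Each total is just the sum of count × block size. For instance, the DMA line above can be verified with shell arithmetic:

```shell
# Sum the non-zero terms of the DMA line:
# 1*32kB + 1*128kB + 1*256kB + 1*512kB + 1*1024kB
echo $(( 32 + 128 + 256 + 512 + 1024 ))   # prints 1952, matching "= 1952kB"
```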

If memory information was not dumped to /var/log/messages, then SysRq was not enabled. You can enable SysRq by setting sysrq to 1:

# echo 1 > /proc/sys/kernel/sysrq

Starting with the 2.6 kernel, i.e. RHEL4 and SLES9, you don’t need SysRq to dump memory information. You can simply check /proc/buddyinfo for memory fragmentation.

Here is the output of a 64-bit server running the 2.6 kernel:

# cat /proc/buddyinfo

Node 0, zone DMA 5 4 3 4 3 2 1 0 1 1 2
Node 0, zone Normal 1046 527 128 36 17 5 26 40 13 16 94
# echo m > /proc/sysrq-trigger
# grep Normal /var/log/messages | tail -1
Jul 23 21:42:26 localhost kernel: Normal: 1046*4kB 529*8kB 129*16kB 36*32kB 17*64kB 5*128kB 26*256kB 40*512kB 13*1024kB 16*2048kB 94*4096kB = 471600kB
#

In this example I used SysRq again to show what each number in /proc/buddyinfo refers to.
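
The same per-zone totals can be computed directly from /proc/buddyinfo. Below is a hypothetical awk helper (not part of the original answer), assuming the usual 4 kB page size; column i of the counts holds the number of free blocks of 2^i pages:

```shell
# Hypothetical helper: convert one /proc/buddyinfo line into total free kB.
# Fields 5..NF are the counts of free blocks at buddy order 0..10; a block
# at order o spans 2^o pages, assumed to be 4 kB each.
buddy_kb='{
    kb = 0
    for (i = 5; i <= NF; i++)
        kb += $i * 4 * 2 ^ (i - 5)
    printf "%s %s zone %s: %d kB free\n", $1, $2, $4, kb
}'

# The Normal-zone line from the 64-bit server above:
echo "Node 0, zone Normal 1046 527 128 36 17 5 26 40 13 16 94" | awk "$buddy_kb"
# prints: Node 0, zone Normal: 471568 kB free

# On a live system:  awk "$buddy_kb" /proc/buddyinfo
```

The 471568 kB here is close to, but not identical to, the 471600 kB reported by SysRq, because a few counts drifted between the two snapshots (527 vs. 529 at order 1, 128 vs. 129 at order 2).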

Source: http://www.puschitz.com/pblog/

  • 2
    One important detail: _applications_ do not allocate contiguous blocks of memory anyway - they have access to virtual memory *only*. Only the kernel needs contiguous physical blocks - and that is where physical RAM fragmentation may be an issue. – thkala Feb 01 '11 at 14:39
  • 2
    Here's a little test to check this: `char *p = malloc(256 * 1024 * 1024);` It should succeed quite nicely on most modern systems, as long as they are not on the brink of memory exhaustion. – thkala Feb 01 '11 at 14:42
  • @Vlad Lazarenko: Thanks, +1. @thkala: Thanks +1 – Flavius Feb 01 '11 at 14:45
  • 2
    Hmmm... technically speaking you also need a `memset()` call or something to avoid overcommitting... – thkala Feb 01 '11 at 14:46
  • 1
    @thkala: some userspace requests may end up needing contiguous memory, an example are AF_UNIX dgram/seqpacket sends, see http://stackoverflow.com/questions/4729315/what-is-the-max-size-of-af-unix-datagram-message-that-can-be-sent-in-linux/4822037#4822037 – ninjalj Feb 01 '11 at 20:06
  • 1
    @ninjalj: true, there are quite a few kernel services that need large contiguous allocations - which is why memory compaction was introduced in linux-2.6.35. Some of them are even due to broken code - some time ago I had to debug a V4L driver that was using kmalloc instead of vmalloc for its video buffer... – thkala Feb 01 '11 at 20:17
  • 1
    @thkala: O_O so it was probably crashing very often, wasn't it? – ninjalj Feb 01 '11 at 20:20
  • 1
    @ninjalj: let's just say that I hadn't seen so many Oops messages for quite some time... – thkala Feb 01 '11 at 20:43