How to Check Whether a System is Under Memory Pressure
In this Document
Goal
Fix
1. Check memory that is cached/buffered
2. Check if swap space is being used heavily
3. Check /proc/meminfo
4. Check ulimit
5. Check if OOM ("Out of Memory") Killer has been invoked
References
APPLIES TO:
Linux OS - Version 2.6.18-8 and later
Linux x86-64
Linux x86
GOAL
To establish whether a system is really under memory pressure by correct interpretation of reported usage.
FIX
Note: this is not a comprehensive tuning document; it serves to highlight some frequently raised issues.
1. Check memory that is cached/buffered
Some performance-monitoring utilities (for example, sar) may report that memory is almost used up, or that there is almost no free memory. For example:
# free
             total       used       free     shared    buffers     cached
Mem:     131830528  106777580   25052948          0      15288   87108520
-/+ buffers/cache:   19653772  112176756
Swap:     30703608   14763544   15940064
A naive calculation suggests only 19% of memory is free:
free/total = 25052948/131830528 = 19%
But this is misleading - the buffers and cached memory exist to speed up memory access, and the expected behavior of Linux is to use as much memory as possible for caching.
Linux memory management can quickly reclaim memory used for caching when other processes need it, so this must be taken into account when calculating free memory.
The correct calculation is:
(free+buffers+cached)/total = (25052948+15288+87108520)/131830528 = 85%
In the above example, the system still has 85% of its memory available, so there is no memory pressure at all.
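The same arithmetic can be scripted against /proc/meminfo. This is a minimal sketch assuming the classic field names; newer kernels (3.14 and later) also report a ready-made "MemAvailable" figure that can be used directly:
# awk '/^MemTotal:/ {total=$2}
       /^MemFree:/  {free=$2}
       /^Buffers:/  {buffers=$2}
       /^Cached:/   {cached=$2}
       END { printf("available: %.0f%%\n", (free+buffers+cached)*100/total) }' /proc/meminfo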
Although the cached memory can be freed manually, this operation is not recommended and should only be carried out if instructed by a Support engineer:
# sync; sync; sync;
# echo 3 > /proc/sys/vm/drop_caches
This will likely impair performance - any cached information that is still needed will have to be read in and cached again. It is better to let the Linux kernel manage the memory.
2. Check if swap space is being used heavily
Swap space is allocated on disk but managed by the kernel's virtual memory mechanisms. Access to swap space is vastly slower than access to RAM. In most situations, the system should not use much swap space.
If swap space is heavily used, this might indicate memory pressure.
A common question is "why has swap space been used when the system is not short of memory?"
The answer is that, in many situations, there was a transient peak in memory use that caused some pages to be swapped out.
After the peak, the swapped-out pages have not been referenced again, so there has been no need to swap them back into RAM. This is normal behavior.
The kernel parameter "vm.swappiness" (set via /etc/sysctl.conf) adjusts the swapping behavior.
This control defines how aggressively the kernel swaps out memory pages - this occurs even if there is no memory pressure.
Higher values increase the likelihood of LRU ("least recently used") pages being swapped out; lower values decrease it.
The default value is 60. Any adjustment should only be made as a result of a performance tuning exercise.
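For example, to inspect the current value and try a lower one (the value 10 below is purely illustrative):
# cat /proc/sys/vm/swappiness
60
# sysctl -w vm.swappiness=10
# echo "vm.swappiness = 10" >> /etc/sysctl.conf
The sysctl -w change takes effect immediately but is lost at reboot; the /etc/sysctl.conf entry makes the setting persistent.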
The file /proc/<pid>/smaps can be checked to see whether a particular process references swapped-out pages.
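As a sketch - on kernels whose smaps output includes per-mapping "Swap:" lines (not all 2.6 kernels do) - the swapped-out total for one process can be summed like this, with <pid> replaced by the process ID of interest:
# awk '/^Swap:/ {sum += $2} END {print sum " kB in swap"}' /proc/<pid>/smaps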
3. Check /proc/meminfo
Use the following command to display the information:
# cat /proc/meminfo
Pay attention to the following items. The values shown are examples:
AnonPages: 2245380 kB
High "AnonPages" means too much memory been allocated (mostly by malloc call) but not released yet. Check whether there are excessive processes or threads using up the memory. Another cause is "memory leak" - process(es) that allocate but never free memory.
HugePages_Total: 5321
HugePages_Free: 5321
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
The above example shows the system has altogether 2048 kB * 5321 = 10897408 kB (almost 11 GB) of HugePage memory allocated but not used.
HugePages are resident in RAM and not eligible to be swapped out, so this represents potentially wasted RAM. They are allocated manually via the /etc/sysctl.conf parameter "vm.nr_hugepages".
Check whether the application (e.g. an Oracle database instance) for which the allocation is intended has been started. If it has, it is likely some change was made such that the allocation is no longer sufficient and memory has been allocated from non-HugePage memory instead. This will lead to severe memory pressure if the HugePages allocation is a large proportion of available RAM.
For further information on hugepages allocation refer to "Shell Script to Calculate Values Recommended Linux HugePages / HugeTLB Configuration [Document 401749.1]" and the documents it refers to.
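As a minimal sketch, the current HugePages state and the unused amount can be read straight from /proc/meminfo:
# grep ^Huge /proc/meminfo
# awk '/^HugePages_Free:/ {free=$2}
       /^Hugepagesize:/   {size=$2}
       END { print free*size " kB of HugePage memory is unused" }' /proc/meminfo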
4. Check ulimit
Pay attention to the following values:
# ulimit -a
......
max locked memory (kbytes, -l) 6400
This value affects processes that issue the "mlock" call to make pages unswappable. If it is set high and too many processes make use of it, locked memory can lead to memory pressure.
The limit applies per user ID, not per process.
See the setrlimit man page (section "RLIMIT_MEMLOCK") for further information.
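To check how much memory a particular process has actually locked, look at the "VmLck" line in its status file (again, <pid> is a placeholder for the process ID):
# grep VmLck /proc/<pid>/status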
5. Check if OOM ("Out of Memory") Killer has been invoked
Refer to the following documents:
Linux: Out-of-Memory (OOM) Killer [Document 452000.1]
32bit EL5 Running With 64G RAM Invoked OOM Killer [Document 1083551.1]
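In addition to those documents, a quick check of the kernel log usually suffices; the exact message text varies between kernel versions, but patterns like the following are typical:
# dmesg | grep -i "out of memory"
# grep -i "killed process" /var/log/messages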