Analysis of Linux memory exhaustion

Linux memory usage needs to be kept within a reasonable ratio. The system can still run when memory usage is very high, but performance suffers. This article walks through how to analyze memory exhaustion on Linux.

While running a long fstest write workload to test NAS performance, we investigated the causes of the poor results and found that memory usage on the host server was very high.

1. First, look at overall memory usage: # top -M

top - 14:43:12 up 14 days, 6 min, 1 user, load average: 8.36, 8.38, 8.41

Tasks: 419 total, 1 running, 418 sleeping, 0 stopped, 0 zombie

Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.0%id, 0.7%wa, 0.0%hi, 0.0%si, 0.0%st

Mem: 63.050G total, 62.639G used, 420.973M free, 33.973M buffers

Swap: 4095.996M total, 0.000k used, 4095.996M free, 48.889G cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND

111 root 20 0 0 0 0 S 2.0 0.0 0:25.52 ksoftirqd/11

5968 root 20 0 15352 1372 828 R 2.0 0.0 0:00.01 top

13273 root 20 0 0 0 0 D 2.0 0.0 25:54.02 nfsd

17765 root 0 -20 0 0 0 S 2.0 0.0 0:11.89 kworker/5:1H

1 root 20 0 19416 1436 1136 S 0.0 0.0 0:01.88 init

. . . . .

Memory is almost completely used up, but which process is holding it? Sorting by %MEM in top shows that even the top-ranked process accounts for only a fraction of a percent.
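When no user process shows a large %MEM, the memory is usually held in kernel space. One quick check is the Slab fields in /proc/meminfo; the sketch below parses an illustrative excerpt (the numbers are made up for this example, not taken from the host above), and the commented-out grep shows how you would read the live file:

```shell
# On a live system you would read /proc/meminfo directly:
#   grep -E '^(Slab|SReclaimable|SUnreclaim):' /proc/meminfo
# Illustrative excerpt (hypothetical values):
meminfo='Slab:           50862000 kB
SReclaimable:   50340000 kB
SUnreclaim:       522000 kB'

# If Slab is large while no process has a high %MEM,
# kernel slab caches are holding the memory.
slab_kb=$(printf '%s\n' "$meminfo" | awk '/^Slab:/ {print $2}')
echo "Slab: ${slab_kb} kB (~$((slab_kb / 1024 / 1024)) GB)"
```

A large SReclaimable share means most of that slab memory (dentries, inodes, and similar caches) can be reclaimed by the kernel under pressure.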

2. Use vmstat -m to view kernel-space (slab cache) memory usage: # vmstat -m

Cache Num Total Size Pages

xfs_dqtrx 0 0 384 10

xfs_dquot 0 0 504 7

xfs_buf 91425 213300 384 10

fstrm_item 0 0 24 144

xfs_mru_cache_elem 0 0 32 112

xfs_ili 7564110 8351947 224 17

xfs_inode 7564205 8484180 1024 4

xfs_efi_item 257 390 400 10

xfs_efd_item 237 380 400 10

xfs_buf_item 1795 2414 232 17

xfs_log_item_desc 830 1456 32 112

xfs_trans 377 490 280 14

xfs_ifork 0 0 64 59

xfs_da_state 0 0 488 8

xfs_btree_cur 342 437 208 19

xfs_bmap_free_item 89 288 24 144

xfs_log_ticket 717 966 184 21

xfs_ioend 726 896 120 32

rbd_segment_name 109 148 104 37

rbd_obj_request 1054 1452 176 22

rbd_img_request 1037 1472 120 32

ceph_osd_request 548 693 872 9

ceph_msg_data 1041 1540 48 77

ceph_msg 1197 1632 232 17

nfsd_drc 19323 33456 112 34

nfsd4_delegations 0 0 368 10

nfsd4_stateids 855 1024 120 32

nfsd4_files 802 1050 128 30

nfsd4_lockowners 0 0 384 10

nfsd4_openowners 15 50 392 10

rpc_inode_cache 27 30 640 6

rpc_buffers 8 8 2048 2

rpc_tasks 8 15 256 15

fib6_nodes 22 59 64 59

pte_list_desc 0 0 32 112
ext4_allocation_context 0 0 0 28

ext4_prealloc_space 42 74 104 37

ext4_system_zone 0 0 40 92


ext4_io_end 0 0 64 59

ext4_extent_status 1615 5704 40 92

jbd2_transaction_s 30 30 256 15

jbd2_inode 254 539 48 77

. . . . . . . .

Two caches stand out: xfs_ili and xfs_inode are occupying a large amount of memory.
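To estimate how much memory each of those caches actually holds, multiply the Total column (allocated objects) by the Size column (bytes per object). A small sketch using the two rows copied from the vmstat -m output above:

```shell
# Per-cache memory = Total allocated objects x object size in bytes.
# The two rows are copied verbatim from the vmstat -m output above.
printf '%s\n' \
  'xfs_ili 7564110 8351947 224 17' \
  'xfs_inode 7564205 8484180 1024 4' |
awk '{ printf "%-10s %.1f GB\n", $1, $3 * $4 / 1024^3 }'
```

That is roughly 1.7 GB for xfs_ili and 8.1 GB for xfs_inode. Since these objects back cached XFS inodes, they are reclaimable: on a live host, running (as root) `sync; echo 2 > /proc/sys/vm/drop_caches` asks the kernel to free reclaimable slab objects such as dentries and inodes, which should release this memory.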
