Go K8s pod OOM-killed due to an apparent memory leak: where did the memory go?

rfbsl7qr, asked on 2022-12-07, tagged: Go

I have a problem with a K8s pod getting OOM-killed, but under some strange conditions and with some odd observations.
The pod is a REST service built on golang 1.15.6, running on an x86 64-bit architecture. When the pod runs on a VM-based cluster, everything is fine and the service behaves normally. When the same service runs on nodes provisioned directly on bare metal, it appears to leak memory and eventually gets OOM-killed.
When running on the problematic configuration, "kubectl top pod" reports steadily increasing memory utilization until the defined limit (64 MiB) is reached, at which point the OOM killer is invoked.
Observing from inside the pod with "top" shows that the memory usage of the various processes in the pod is stable at around 40 MiB RSS. The VIRT, RES and SHR values reported by top stay steady over time, with only minor fluctuations.
I have profiled the golang code extensively, including taking memory profiles (pprof). There is no sign of a leak in the actual golang code, which is consistent with the correct behaviour in the VM-based environment and with the observations from top.
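A minimal sketch of one common way such heap profiles are taken, via the standard net/http/pprof endpoints (the side port and listener below are illustrative, not the service's actual configuration):

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Expose the profiling endpoints on a side port next to the REST service.
	// localhost:6060 is an arbitrary choice for illustration.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... actual REST service setup elided ...
	select {}
}

A heap profile can then be pulled with "go tool pprof http://localhost:6060/debug/pprof/heap".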
The OOM message below also shows that the total RSS used by the pod is only 38.75 MiB (summing the rss column of the task table: 1 + 4 + 869 + 6452 + 224 + 2369 = 9919 pages × 4 KiB ≈ 38.75 MiB).

kernel: [651076.945552] xxxxxxxxxxxx invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=999
kernel: [651076.945556] CPU: 35 PID: 158127 Comm: xxxxxxxxxxxx Not tainted 5.4.0-73-generic #82~18.04.1
kernel: [651076.945558] Call Trace:
kernel: [651076.945567]  dump_stack+0x6d/0x8b
kernel: [651076.945573]  dump_header+0x4f/0x200
kernel: [651076.945575]  oom_kill_process+0xe6/0x120
kernel: [651076.945577]  out_of_memory+0x109/0x510
kernel: [651076.945582]  mem_cgroup_out_of_memory+0xbb/0xd0
kernel: [651076.945584]  try_charge+0x79a/0x7d0
kernel: [651076.945585]  mem_cgroup_try_charge+0x75/0x190
kernel: [651076.945587]  __add_to_page_cache_locked+0x1e1/0x340
kernel: [651076.945592]  ? scan_shadow_nodes+0x30/0x30
kernel: [651076.945594]  add_to_page_cache_lru+0x4f/0xd0
kernel: [651076.945595]  pagecache_get_page+0xea/0x2c0
kernel: [651076.945596]  filemap_fault+0x685/0xb80
kernel: [651076.945600]  ? __switch_to_asm+0x40/0x70
kernel: [651076.945601]  ? __switch_to_asm+0x34/0x70
kernel: [651076.945602]  ? __switch_to_asm+0x40/0x70
kernel: [651076.945603]  ? __switch_to_asm+0x34/0x70
kernel: [651076.945604]  ? __switch_to_asm+0x40/0x70
kernel: [651076.945605]  ? __switch_to_asm+0x34/0x70
kernel: [651076.945606]  ? __switch_to_asm+0x40/0x70
kernel: [651076.945608]  ? filemap_map_pages+0x181/0x3b0
kernel: [651076.945611]  ext4_filemap_fault+0x31/0x50
kernel: [651076.945614]  __do_fault+0x57/0x110
kernel: [651076.945615]  __handle_mm_fault+0xdde/0x1270
kernel: [651076.945617]  handle_mm_fault+0xcb/0x210
kernel: [651076.945621]  __do_page_fault+0x2a1/0x4d0
kernel: [651076.945625]  ? __audit_syscall_exit+0x1e8/0x2a0
kernel: [651076.945627]  do_page_fault+0x2c/0xe0 
kernel: [651076.945628]  page_fault+0x34/0x40
kernel: [651076.945630] RIP: 0033:0x5606e773349b 
kernel: [651076.945634] Code: Bad RIP value.
kernel: [651076.945635] RSP: 002b:00007fbdf9088df0 EFLAGS: 00010206
kernel: [651076.945637] RAX: 0000000000000000 RBX: 0000000000004e20 RCX: 00005606e775ce7d
kernel: [651076.945637] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007fbdf9088dd0
kernel: [651076.945638] RBP: 00007fbdf9088e48 R08: 0000000000006c50 R09: 00007fbdf9088dc0
kernel: [651076.945638] R10: 0000000000000000 R11: 0000000000000202 R12: 00007fbdf9088dd0
kernel: [651076.945639] R13: 0000000000000000 R14: 00005606e7c6140c R15: 0000000000000000
kernel: [651076.945640] memory: usage 65536kB, limit 65536kB, failcnt 26279526
kernel: [651076.945641] memory+swap: usage 65536kB, limit 9007199254740988kB, failcnt 0
kernel: [651076.945642] kmem: usage 37468kB, limit 9007199254740988kB, failcnt 0
kernel: [651076.945642] Memory cgroup stats for /kubepods/burstable/pod34ffde14-8e80-4b3a-99ac-910137a04dfe:
kernel: [651076.945652] anon 25112576
kernel: [651076.945652] file 0
kernel: [651076.945652] kernel_stack 221184
kernel: [651076.945652] slab 41406464
kernel: [651076.945652] sock 0
kernel: [651076.945652] shmem 0
kernel: [651076.945652] file_mapped 2838528
kernel: [651076.945652] file_dirty 0
kernel: [651076.945652] file_writeback 0 
kernel: [651076.945652] anon_thp 0
kernel: [651076.945652] inactive_anon 0
kernel: [651076.945652] active_anon 25411584
kernel: [651076.945652] inactive_file 0
kernel: [651076.945652] active_file 536576
kernel: [651076.945652] unevictable 0
kernel: [651076.945652] slab_reclaimable 16769024
kernel: [651076.945652] slab_unreclaimable 24637440
kernel: [651076.945652] pgfault 7211542
kernel: [651076.945652] pgmajfault 2895749
kernel: [651076.945652] workingset_refault 71200645
kernel: [651076.945652] workingset_activate 5871824
kernel: [651076.945652] workingset_nodereclaim 330
kernel: [651076.945652] pgrefill 39987763
kernel: [651076.945652] pgscan 144468270 
kernel: [651076.945652] pgsteal 71255273 
kernel: [651076.945652] pgactivate 27649178
kernel: [651076.945652] pgdeactivate 33525031
kernel: [651076.945653] Tasks state (memory values in pages):
kernel: [651076.945653] [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name   
kernel: [651076.945656] [ 151091]     0 151091      255        1    36864        0          -998 pause  
kernel: [651076.945675] [ 157986]     0 157986       58        4    32768        0           999 dumb-init  
kernel: [651076.945676] [ 158060]     0 158060    13792      869   151552        0           999 su  
kernel: [651076.945678] [ 158061]  1234 158061    18476     6452   192512        0           999 yyyyyy
kernel: [651076.945679] [ 158124]  1234 158124     1161      224    53248        0           999 sh  
kernel: [651076.945681] [ 158125]  1234 158125   348755     2369   233472        0           999 xxxxxxxxxxxx
kernel: [651076.945682] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=a0027a4fe415aa7a6ad54aa3fbf553b9af27c61043d08101931e985efeee0ed7,mems_allowed=0-3,oom_memcg=/kubepods/burstable/pod34ffde14-8e80-4b3a-99ac-910137a04dfe,task_memcg=/kubepods/burstable/pod34ffde14-8e80-4b3a-99ac-910137a04dfe/a0027a4fe415aa7a6ad54aa3fbf553b9af27c61043d08101931e985efeee0ed7,task=yyyyyy,pid=158061,uid=1234
kernel: [651076.945695] Memory cgroup out of memory: Killed process 158061 (yyyyyy) total-vm:73904kB, anon-rss:17008kB, file-rss:8800kB, shmem-rss:0kB, UID:1234 pgtables:188kB oom_score_adj:999
kernel: [651076.947429] oom_reaper: reaped process 158061 (yyyyyy), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

The OOM message clearly shows usage = 65536 kB and limit = 65536 kB, but I cannot figure out where the roughly 25 MiB of memory that is not accounted for under RSS has gone.
I see slab_unreclaimable = 24637440 (24 MiB), which is approximately the amount of memory that appears to be unaccounted for, but I am not sure whether there is any significance in that.
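For reference, the counters behind the OOM report can also be watched from inside the pod by reading the cgroup memory controller files directly. A rough sketch, assuming a cgroup v1 node with the controller visible at the usual /sys/fs/cgroup/memory path (adjust for cgroup v2):

package main

import (
	"fmt"
	"io/ioutil"
	"strings"
	"time"
)

// read returns the trimmed contents of a cgroup file, or "n/a" on error.
func read(path string) string {
	b, err := ioutil.ReadFile(path)
	if err != nil {
		return "n/a"
	}
	return strings.TrimSpace(string(b))
}

func main() {
	// Periodically dump the cgroup-level counters so their growth can be
	// compared against the stable per-process RSS reported by top.
	const base = "/sys/fs/cgroup/memory/"
	for {
		fmt.Println("memory.usage_in_bytes:      ", read(base+"memory.usage_in_bytes"))
		fmt.Println("memory.kmem.usage_in_bytes: ", read(base+"memory.kmem.usage_in_bytes"))
		fmt.Println(read(base + "memory.stat"))
		fmt.Println("----")
		time.Sleep(10 * time.Second)
	}
}

If memory.kmem.usage_in_bytes grows while the processes' RSS stays flat, the growth is in kernel memory (e.g. slab) rather than in the application itself.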
Looking for any suggestions as to where the memory is being used. Any input is most welcome.

ttcibm8c1#

I see slab_unreclaimable = 24637440, (24MiB), which is approximately the amount of memory that appears to be unaccounted for...
For details on the slab usage you can try the slabinfo tool, or run cat /proc/slabinfo. That table may point you to where the memory is going.
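If the raw table is hard to read, here is a small sketch that ranks the slab caches by approximate size, assuming the standard "slabinfo - version: 2.1" format (note that /proc/slabinfo is host-wide, not per-cgroup, and usually requires root):

package main

import (
	"bufio"
	"fmt"
	"os"
	"sort"
	"strconv"
	"strings"
)

type cache struct {
	name  string
	bytes int64
}

func main() {
	// Parse /proc/slabinfo and rank caches by num_objs * objsize.
	f, err := os.Open("/proc/slabinfo")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var caches []cache
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		// Skip the "slabinfo - version" and "# name ..." header lines.
		if strings.HasPrefix(line, "slabinfo") || strings.HasPrefix(line, "#") {
			continue
		}
		fields := strings.Fields(line)
		if len(fields) < 4 {
			continue
		}
		numObjs, _ := strconv.ParseInt(fields[2], 10, 64) // <num_objs>
		objSize, _ := strconv.ParseInt(fields[3], 10, 64) // <objsize>
		caches = append(caches, cache{fields[0], numObjs * objSize})
	}
	sort.Slice(caches, func(i, j int) bool { return caches[i].bytes > caches[j].bytes })
	for i := 0; i < 10 && i < len(caches); i++ {
		fmt.Printf("%-28s %8d KiB\n", caches[i].name, caches[i].bytes/1024)
	}
}

Whichever caches dominate (dentry, kmalloc-*, etc.) give a hint about which kernel path is consuming the unreclaimable slab.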

j0pj023g2#

This is happening on my side as well. Mine is a Python web service that runs perfectly well on a VM node, but in a pod I see the memory intermittently and suddenly start climbing, continuously, until it is killed by the OOM signal. I load-tested it on a bare server and found no memory leak. My guess is that something is going on outside the application itself when it runs in a pod.
