=Problem=
<pre>
A Java developer reported that one of his processes keeps disappearing.

[root@prodo-java03 log]# cat messages | grep oom
Sep 20 05:32:10 prod-hello-java03 kernel: java invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Sep 20 05:32:10 prod-hello-java03 kernel: [<ffffffffbd1ba4e4>] oom_kill_process+0x254/0x3d0
Sep 20 05:32:10 prod-hello-java03 kernel: [<ffffffffbd1b9f8d>] ? oom_unkillable_task+0xcd/0x120
Sep 20 05:32:10 prod-hello-java03 kernel: [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name

Sep 24 00:01:22 prod-hello-java03 kernel: tuned invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Sep 24 00:01:22 prod-hello-java03 kernel: [<ffffffffbd1ba4e4>] oom_kill_process+0x254/0x3d0
Sep 24 00:01:22 prod-hello-java03 kernel: [<ffffffffbd1b9f8d>] ? oom_unkillable_task+0xcd/0x120
Sep 24 00:01:22 prod-hello-java03 kernel: [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name

Sep 24 00:01:22 prod-hello-java03 kernel: java invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Sep 24 00:01:22 prod-hello-java03 kernel: [<ffffffffbd1ba4e4>] oom_kill_process+0x254/0x3d0
Sep 24 00:01:22 prod-hello-java03 kernel: [<ffffffffbd1b9f8d>] ? oom_unkillable_task+0xcd/0x120
Sep 24 00:01:22 prod-hello-java03 kernel: [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name

The kill record shows that the victim is java (PID 46473, anon-rss roughly 2 GB):

Sep 24 00:01:22 prod-hello-java03 kernel: Out of memory: Kill process 46473 (java) score 250 or sacrifice child
Sep 24 00:01:22 prod-hello-java03 kernel: Killed process 46473 (java) total-vm:5333504kB, anon-rss:2000260kB, file-rss:0kB, shmem-rss:0kB
Sep 24 00:01:22 prod-hello-java03 kernel: java invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Sep 24 00:01:22 prod-hello-java03 kernel: java cpuset=/ mems_allowed=0
Sep 24 00:01:22 prod-hello-java03 kernel: CPU: 1 PID: 54533 Comm: java Kdump: loaded Not tainted 3.10.0-957.1.3.el7.x86_64 #1
Sep 24 00:01:22 prod-hello-java03 kernel: Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
Sep 24 00:01:22 prod-hello-java03 kernel: Call Trace:
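
# Sketch (not part of the original notes): pull the complete OOM report
# (memory summary, per-process table and the final kill decision) out of the
# kernel ring buffer. The -A value is only a rough guess at the report length.
dmesg -T | grep -i -A 40 'invoked oom-killer'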

## 2021 redis oom

Nov 16 02:27:34 prod--db kernel: redis-server invoked oom-killer: gfp_mask=0x10200da, order=0, oom_score_adj=0
Nov 16 02:27:34 prod--db kernel: [<ffffffffc02e271a>] ? virtballoon_oom_notify+0x2a/0x70 [virtio_balloon]
Nov 16 02:27:34 prod--db kernel: [<ffffffff8c7ba524>] oom_kill_process+0x254/0x3d0
Nov 16 02:27:34 prod--db kernel: [<ffffffff8c7b9fcd>] ? oom_unkillable_task+0xcd/0x120
Nov 16 02:27:34 prod--db kernel: [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
Nov 24 02:31:

4. Summary
This incident is a typical case of running out of memory and triggering OOM errors. Traces of it could be seen in the system state, in the routine monitoring data and in the errors in the system logs, but they did not get enough attention, and the machine eventually hung.
In addition, mrtg ran into errors and sent out large amounts of junk mail, and the resulting problem of amavisd getting stuck while calling clamscan also left the system overloaded.
For problems like this, the system needs to be monitored and the cause analysed before effective measures can be taken. At the same time, administrators must not neglect routine maintenance work, or an unrecoverable failure may be the result.
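
# Sketch (not from the original notes): the kind of routine memory checks the
# summary refers to. sar requires the sysstat package to be installed.
free -h       # current memory and swap usage
vmstat 1 5    # five one-second samples of memory, swap and paging activity
sar -r        # memory utilisation history collected by sysstat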

※ Disabling the OOM killer mechanism
In certain situations we may not want the kernel to run the OOM killer at all, for example while troubleshooting.
In that case, edit /etc/sysctl.conf and add:

vm.oom-kill = 0

Reboot, or run sysctl -p, for it to take effect.
For a temporary change only, run:
echo 0 > /proc/sys/vm/oom-kill
</pre>
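Note that the vm.oom-kill switch above is quoted from a fairly old article; on current kernels (including the 3.10.0-957 el7 kernel in the logs above) /proc/sys/vm/oom-kill does not appear to exist, so check for that file before relying on it. A minimal sketch of the more common per-process alternative, assuming the service is the single java process found by pgrep:

<pre>
# oom_score_adj ranges from -1000 (never OOM-kill this process) to 1000 (kill it first).
pid=$(pgrep -o java)                    # assumption: one long-lived java service
echo -1000 > /proc/$pid/oom_score_adj   # lasts only until the process restarts
cat /proc/$pid/oom_score_adj            # verify the new value
</pre>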

=Linux process killed by the OOM killer=
<pre>
grep "Out of memory" /var/log/messages
Nov 14 11:35:56 VM-0-9 kernel: Out of memory: Kill process 27506 (mongod) score 303 or sacrifice child

egrep -i -r 'killed process' /var/log
Binary file /var/log/journal/d858b31d95a446e491fe879388912c40/system.journal matches
/var/log/messages:Nov 14 11:35:56 VM-0-9- kernel: Killed process 27506 (mongod), UID 995, total-vm:2779764kB, anon-rss:1173456kB, file-rss:0kB, shmem-rss:0kB
</pre>
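The recursive egrep also hits a binary systemd journal file, which grep cannot read directly. A sketch of querying the journal instead (adjust the match strings as needed):

<pre>
# Kernel messages from the current boot, filtered for OOM kills:
journalctl -k | egrep -i 'killed process|out of memory'

# Or read the specific binary journal file that grep flagged:
journalctl --file=/var/log/journal/d858b31d95a446e491fe879388912c40/system.journal -k | grep -i 'killed process'
</pre>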

=see also=

[http://www.linuxfly.org/post/166/ Handling an OOM system hang]

[https://blog.csdn.net/ggh5201314/article/details/105053545 OOM killer log analysis]

[https://www.cnblogs.com/duanxz/p/10185946.html Linux process killed (OOM killer): checking the system log]

[https://cloud.tencent.com/developer/article/1403389 Linux out of memory (OOM) analysis]

[[category:shell]] [[category:devops]]

[[category:ops]]