On 6/18/20 7:36 PM, Mark Sapiro wrote:
Check your system logs to see if there's anything about the OS killing the process. Also check the Postfix log for deliveries to mailman just before it died. Just stabbing in the dark, but maybe some huge message caused it to grow beyond some memory limit and get killed by the OS.
I am replying off-list at this point. It is definitely an OOM issue. Here are the log entries right before lmtp crashed:
Jun 18 00:00:12 mm4 kernel: [19548549.158382] postgres invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=0
Jun 18 00:00:12 mm4 kernel: [19548549.160719] postgres cpuset=/ mems_allowed=0
Jun 18 00:00:12 mm4 kernel: [19548549.161524] CPU: 0 PID: 7243 Comm: postgres Not tainted 4.19.0-6-amd64 #1 Debian 4.19.67-2
Jun 18 00:00:12 mm4 kernel: [19548549.163047] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.0-0-ga698c8995f-prebuilt.qemu.org 04/01/2014
Jun 18 00:00:12 mm4 kernel: [19548549.164885] Call Trace:
Jun 18 00:00:12 mm4 kernel: [19548549.165402]  dump_stack+0x5c/0x80
Jun 18 00:00:12 mm4 kernel: [19548549.166097]  dump_header+0x6b/0x283
Jun 18 00:00:12 mm4 kernel: [19548549.167019]  ? do_try_to_free_pages+0x2ec/0x370
Jun 18 00:00:12 mm4 kernel: [19548549.167864]  oom_kill_process.cold.30+0xb/0x1cf
Jun 18 00:00:12 mm4 kernel: [19548549.168760]  ? oom_badness+0x23/0x140
Jun 18 00:00:12 mm4 kernel: [19548549.169534]  out_of_memory+0x1a5/0x430
Jun 18 00:00:12 mm4 kernel: [19548549.170174]  __alloc_pages_slowpath+0xbd8/0xcb0
Jun 18 00:00:12 mm4 kernel: [19548549.171060]  __alloc_pages_nodemask+0x28b/0x2b0
Jun 18 00:00:12 mm4 kernel: [19548549.171973]  filemap_fault+0x3bd/0x780
Jun 18 00:00:12 mm4 kernel: [19548549.172643]  ? alloc_set_pte+0xf2/0x560
Jun 18 00:00:12 mm4 kernel: [19548549.173369]  ? filemap_map_pages+0x1ed/0x3a0
Jun 18 00:00:12 mm4 kernel: [19548549.174404]  ext4_filemap_fault+0x2c/0x40 [ext4]
Jun 18 00:00:12 mm4 kernel: [19548549.175259]  __do_fault+0x36/0x130
Jun 18 00:00:12 mm4 kernel: [19548549.175930]  __handle_mm_fault+0xe6c/0x1270
Jun 18 00:00:12 mm4 kernel: [19548549.176765]  handle_mm_fault+0xd6/0x200
Jun 18 00:00:12 mm4 kernel: [19548549.177644]  __do_page_fault+0x249/0x4f0
Jun 18 00:00:12 mm4 kernel: [19548549.178407]  ? async_page_fault+0x8/0x30
Jun 18 00:00:12 mm4 kernel: [19548549.179156]  async_page_fault+0x1e/0x30
Jun 18 00:00:12 mm4 kernel: [19548549.179915] RIP: 0033:0x55d875a3fe10
Jun 18 00:00:12 mm4 kernel: [19548549.180569] Code: Bad RIP value.
Jun 18 00:00:12 mm4 kernel: [19548549.181181] RSP: 002b:00007ffcad32f248 EFLAGS: 00010206
Jun 18 00:00:12 mm4 kernel: [19548549.182132] RAX: 000055d877d512b0 RBX: 000055d877d60b93 RCX: 00007ffcad32f2c0
Jun 18 00:00:12 mm4 kernel: [19548549.183365] RDX: 0000000000000005 RSI: 000055d877d60b93 RDI: 000055d877d512b0
Jun 18 00:00:12 mm4 kernel: [19548549.184712] RBP: 00007ffcad32f270 R08: 0000000000000001 R09: 00007ffcad32f388
Jun 18 00:00:12 mm4 kernel: [19548549.186338] R10: 00007f2585002d40 R11: 0000000000000000 R12: 0000000000000005
Jun 18 00:00:12 mm4 kernel: [19548549.187583] R13: 000055d877d4b3a0 R14: 00007f2581ad05b8 R15: 0000000000000000
Jun 18 00:00:12 mm4 kernel: [19548549.188952] Mem-Info:
Jun 18 00:00:12 mm4 kernel: [19548549.189468] active_anon:197899 inactive_anon:206498 isolated_anon:0
Jun 18 00:00:12 mm4 kernel: [19548549.189468]  active_file:253 inactive_file:278 isolated_file:23
Jun 18 00:00:12 mm4 kernel: [19548549.189468]  unevictable:0 dirty:0 writeback:0 unstable:0
Jun 18 00:00:12 mm4 kernel: [19548549.189468]  slab_reclaimable:21867 slab_unreclaimable:55017
Jun 18 00:00:12 mm4 kernel: [19548549.189468]  mapped:17008 shmem:24450 pagetables:4890 bounce:0
Jun 18 00:00:12 mm4 kernel: [19548549.189468]  free:13184 free_pcp:340 free_cma:0
Jun 18 00:00:12 mm4 kernel: [19548549.195739] Node 0 active_anon:791596kB inactive_anon:825992kB active_file:1012kB inactive_file:1112kB unevictable:0kB isolated(anon):0kB isolated(file):92kB mapped:68032kB dirty$
Jun 18 00:00:12 mm4 kernel: [19548549.200559] Node 0 DMA free:8152kB min:352kB low:440kB high:528kB active_anon:1056kB inactive_anon:5048kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB prese$
Jun 18 00:00:12 mm4 kernel: [19548549.204993] lowmem_reserve[]: 0 1950 1950 1950 1950
Jun 18 00:00:12 mm4 kernel: [19548549.205877] Node 0 DMA32 free:44584kB min:44700kB low:55872kB high:67044kB active_anon:790540kB inactive_anon:820944kB active_file:1012kB inactive_file:1112kB unevictable:0kB wri$
Jun 18 00:00:12 mm4 kernel: [19548549.211109] lowmem_reserve[]: 0 0 0 0 0
Jun 18 00:00:12 mm4 kernel: [19548549.211826] Node 0 DMA: 8*4kB (UE) 31*8kB (UME) 20*16kB (UME) 22*32kB (UME) 13*64kB (UME) 5*128kB (UME) 5*256kB (UM) 4*512kB (UM) 2*1024kB (UM) 0*2048kB 0*4096kB = 8152kB
Jun 18 00:00:12 mm4 kernel: [19548549.214414] Node 0 DMA32: 378*4kB (UMEH) 472*8kB (UEH) 412*16kB (UEH) 196*32kB (UMEH) 63*64kB (UMEH) 63*128kB (UE) 16*256kB (UE) 4*512kB (ME) 0*1024kB 2*2048kB (M) 1*4096kB (M) =$
Jun 18 00:00:12 mm4 kernel: [19548549.217336] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Jun 18 00:00:12 mm4 kernel: [19548549.218937] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Jun 18 00:00:12 mm4 kernel: [19548549.220513] 39729 total pagecache pages
Jun 18 00:00:12 mm4 kernel: [19548549.221286] 14715 pages in swap cache
Jun 18 00:00:12 mm4 kernel: [19548549.222022] Swap cache stats: add 10809560, delete 10794845, find 18524127383/18527544787
Jun 18 00:00:12 mm4 kernel: [19548549.223482] Free swap  = 0kB
Jun 18 00:00:12 mm4 kernel: [19548549.224081] Total swap = 524284kB
Jun 18 00:00:12 mm4 kernel: [19548549.224712] 524154 pages RAM
Jun 18 00:00:12 mm4 kernel: [19548549.225222] 0 pages HighMem/MovableOnly
Jun 18 00:00:12 mm4 kernel: [19548549.225894] 13385 pages reserved
Then there was this little bit of info:
Jun 18 00:00:12 mm4 kernel: [19548549.442555] Out of memory: Kill process 29676 (python3) score 37 or sacrifice child
Jun 18 00:00:12 mm4 kernel: [19548549.444508] Killed process 29676 (python3) total-vm:192224kB, anon-rss:93084kB, file-rss:0kB, shmem-rss:0kB
Jun 18 00:00:12 mm4 kernel: [19548549.464907] oom_reaper: reaped process 29676 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
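For what it's worth, here is the quick grep I used to pull these events out of the kernel log. The paths are assumptions for a Debian box (`/var/log/kern.log`; `journalctl -k` works too); the demo below runs against a sample file built from the lines above so it can be tried anywhere:

```shell
# Build a small sample from the kernel log lines quoted above
# (on a live system, grep /var/log/kern.log or `journalctl -k` instead).
cat > /tmp/oom-sample.log <<'EOF'
Jun 18 00:00:12 mm4 kernel: [19548549.158382] postgres invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=0
Jun 18 00:00:12 mm4 kernel: [19548549.442555] Out of memory: Kill process 29676 (python3) score 37 or sacrifice child
Jun 18 00:00:12 mm4 kernel: [19548549.444508] Killed process 29676 (python3) total-vm:192224kB, anon-rss:93084kB, file-rss:0kB, shmem-rss:0kB
EOF

# The trigger ("invoked oom-killer") and the victim ("Killed process")
# are different processes; this shows both in one pass.
grep -E 'invoked oom-killer|Killed process' /tmp/oom-sample.log
```

Note the distinction the grep surfaces: postgres *triggered* the OOM killer, but python3 (PID 29676) is what got killed.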
I suspect the above process (29676, python3) was the lmtp runner.
So the server has 2 GB of RAM with about a dozen MM3 lists on it, and the log shows swap was fully exhausted (Free swap = 0kB) when the kill happened. Is this a matter of tuning the PostgreSQL server's memory use, or do I need to add more memory? I am curious how much RAM your server has and whether you have run into any OOM problems.
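If it helps frame the tuning question: on a 2 GB box that also runs Mailman 3 and Postfix, PostgreSQL's memory footprint is mostly governed by a handful of postgresql.conf settings. The values below are only illustrative starting points I'd try, not measured recommendations:

```
# postgresql.conf -- hypothetical starting points for a shared 2 GB box.
shared_buffers = 256MB        # keep modest; the OS cache serves the rest
work_mem = 4MB                # per sort/hash, multiplied across connections
maintenance_work_mem = 64MB   # used by VACUUM, CREATE INDEX, etc.
effective_cache_size = 512MB  # planner hint only; allocates nothing
max_connections = 20          # Mailman/Postorius need relatively few
```

Lowering max_connections matters more than it looks: each backend can use up to work_mem per sort, so the worst case scales with both settings.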
If you want me to post this to the list, I will.
--
Please let me know if you need further assistance.
Thank you for your business. We appreciate our clients.

Brian Carpenter
EMWD.com

--
EMWD's Knowledgebase: https://clientarea.emwd.com/index.php/knowledgebase
EMWD's Community Forums: http://discourse.emwd.com/