[erlang-questions] Garbage Collection, BEAM memory and Erlang memory

Anton Lebedevich mabrek@REDACTED
Wed Jan 28 15:08:07 CET 2015

> The number of reductions stay constant:
> https://cldup.com/SiwAVf1VT5-3000x3000.png
> So does the number of GC:
> https://cldup.com/9Q5ac7Q5eK-3000x3000.png
> The process count is always the same:
> https://cldup.com/tsuorbj06u-3000x3000.png
> And the maximum message box length of any process in the system is always
> between 1-3 (despite the peak at the start):
> https://cldup.com/x4KYsykOrH-3000x3000.png
> ...Any other ideas? I'm getting appalled.

The time scales of the graphs differ, so it's not possible to correlate
process memory with the number of reductions or the number of GCs.

It seems that some process (or processes) starts allocating memory
much faster than before, and the Linux OOM killer kills beam.smp when
the box runs out of memory. You could set up a watchdog process
(something like "while true; check memory usage and kill -USR1
beam.smp when it's close to the limit; sleep 1") to get a crash dump
before the OOM killer kills beam.smp.
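A rough sketch of such a watchdog is below. It assumes Linux (it reads
MemAvailable from /proc/meminfo), and the 512 MB threshold and the
function names are made up for illustration; tune the limit for the
box. Sending SIGUSR1 makes the emulator write erl_crash.dump and exit,
which is the point of catching it before the OOM killer does.

```shell
#!/bin/sh
# Hypothetical watchdog: poll available memory once a second and
# SIGUSR1 beam.smp when it drops below a limit, so the runtime can
# write erl_crash.dump before the Linux OOM killer strikes.

LIMIT_KB=$((512 * 1024))   # assumed threshold: 512 MB available

should_kill() {
  # $1 = available memory in kB; succeeds when below the limit
  [ "$1" -lt "$LIMIT_KB" ]
}

watchdog() {
  while true; do
    avail=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
    if should_kill "$avail"; then
      # SIGUSR1 tells the emulator to dump erl_crash.dump and exit
      pkill -USR1 -f beam.smp
      break
    fi
    sleep 1
  done
}
```

Run watchdog in the background (e.g. under nohup) on the affected box
and inspect the resulting erl_crash.dump with the crashdump viewer.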

Is there anything unusual in the logs at the moment when memory usage
jumps? Maybe something gets printed to stdout.


More information about the erlang-questions mailing list