[erlang-questions] Huge erl_crash.dump (2 gigs) - looking for advice

David Welton davidnwelton@REDACTED
Thu Jul 3 16:08:43 CEST 2014


>> The kind people in #erlang have given me some suggestions, but I'm
>> going to write here to appeal to a wider audience.  I've got a huge
>> erl_crash.dump, that's larger than 2 gigs
>
> Assuming for a second that this doesn't contain actual dumps of data it
> feels like it might contain a long list of _somethings_. Can you say (by
> looking at a few random points) what something is ?

I'm not sure I follow...  I don't know what the proc_heap section
contains, or what the individual lines in it mean.
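For anyone else staring at one of these: as far as I understand the dump
format, each `=proc_heap:<pid>` section is a raw listing of that process's
heap, roughly one heap cell per line (an address followed by a tagged
value), which is why it is unreadable by hand.  A less painful route is the
crashdump_viewer tool that ships with OTP - a sketch, assuming the
observer/webtool applications are installed:

```erlang
%% Sketch: start the crash dump viewer bundled with OTP (2014-era releases
%% open a webtool-based web interface; newer releases use a GUI).  It parses
%% the dump incrementally, so a 2 GB file does not have to fit in RAM.
crashdump_viewer:start().
%% Then load erl_crash.dump in the interface it opens and sort the
%% Processes view by heap size to see which process dominates the dump.
```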

Robert writes:
> one thing to be aware of when using sasl and error_logger is that process crashes get logged by sasl with the complete state of the process that just died, and error_logger tries to pretty print that state; this can take huge amounts of memory if the crashed process state is large.

> I would recommend monitoring processes with high memory usage and figuring out the usage patterns. In general it is a good idea to try and limit the amount of state a process holds. But this is obviously application dependent.

I don't think any of the processes grew so much that it was anywhere
near the size needed to trigger an out-of-memory error were its state
dumped and pretty-printed.  We'll keep an eye on them just in case one
of them is stealthily growing, but for the moment we have not been able
to reproduce the error.
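For the keeping-an-eye-on-them part, a quick way to spot a stealthily
growing process is to rank live processes by memory from a shell on the
node.  A minimal sketch (the function name `top_by_memory/1` is just
illustrative):

```erlang
%% Sketch: return the N processes currently using the most memory,
%% as {Bytes, Pid} tuples, largest first.
top_by_memory(N) ->
    Sized = lists:filtermap(
              fun(P) ->
                      case erlang:process_info(P, memory) of
                          {memory, Bytes} -> {true, {Bytes, P}};
                          undefined       -> false  % process exited meanwhile
                      end
              end,
              erlang:processes()),
    lists:sublist(lists:reverse(lists:sort(Sized)), N).
```

Running something like this periodically - or using the recon library's
recon:proc_count(memory, N), which does the same thing more carefully -
makes growth visible long before the VM hits its allocation limit.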

Thanks
-- 
David N. Welton

http://www.welton.it/davidw/

http://www.dedasys.com/
