[erlang-questions] memory leak.

Richard Andrews <>
Sun Apr 19 12:59:01 CEST 2009


If you have processes which create large (or many) binaries, then it may be that the generational garbage collector is taking too long to do a full sweep. If this is the case, some small number of processes should grow very large. A combination of processes() and process_info() (eg. sorting by heap_size) might provide some insight. You can interactively and harmlessly call garbage_collect(Pid) on suspects to find out - they should release what is unneeded, and the change will show up in the Erlang node's VM size.
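As a rough shell sketch of that triage (the TopHeaps name is invented here; heap_size is measured in words, not bytes):

```erlang
%% Sketch: list the N processes with the largest heaps.
%% process_info/2 returns undefined for a process that has already
%% died; the generator pattern below silently skips those.
TopHeaps = fun(N) ->
    Sizes = [{Size, P} || P <- erlang:processes(),
                          {heap_size, Size} <- [erlang:process_info(P, heap_size)]],
    lists:sublist(lists:reverse(lists:sort(Sizes)), N)
end,
TopHeaps(10).
```

For each suspect Pid in that list, calling erlang:garbage_collect(Pid) and then process_info(Pid, heap_size) again shows how much was actually reclaimed.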

If you find some class of process releases a lot after garbage_collect(Pid), try spawning those with {fullsweep_after,0}. See the hints in the documentation for spawn_opt/4.
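A sketch of passing the option at spawn time (worker_loop/0 is a placeholder for your own process body, not anything from the original post):

```erlang
%% {fullsweep_after, 0} makes every collection a full sweep, so
%% old-heap binaries are released promptly at some extra CPU cost.
Pid = spawn_opt(fun() -> worker_loop() end, [{fullsweep_after, 0}]).
```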

appmon might also be useful if the processes generally have long lifetimes. appmon allows you to use a GUI to examine the processes as they run.
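Assuming your OTP release ships appmon, the GUI can be started from the shell:

```erlang
%% Starts the appmon GUI, monitoring the local node.
appmon:start().
```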




----- Original Message ----
> I don't think the problem is a process leak.
> Process count at first restart: 8400, next: 13082, next: 11110
> 
> > Chances are, a programming error on your part.
> >
> > The fact that you're getting "system_limit" errors when creating new
> > processes leads me to believe that you might not have a memory leak
> > per se, but a process leak.  Meaning you are dynamically creating
> > processes, but don't shut them down.
> >
> > You can find the number of processes on your node using
> >
> > length(processes()).
> >
> > Try to see if this number increases constantly during operation.





More information about the erlang-questions mailing list