Michael, thanks for the quick reply. We have not changed the priority of our processes. It's probably also worth including this output, since it seems to imply that binaries are not the culprit, as we originally suspected they might be:

(jsweb@prod-ws000)25> memory().
[{total,7382580240},
 {processes,7289606750},
 {processes_used,7289603405},
 {system,92973490},
 {atom,248121},
 {atom_used,216648},
 {binary,46419248},
 {code,5018541},
 {ets,370224}]
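
For anyone who wants to cross-check the binary theory on their own node, a rough shell snippet along these lines ranks processes by the total size of the off-heap binaries they reference (just a sketch, not exactly what we ran; process_info(P, binary) returns {Ptr, Size, RefCount} tuples):

%% Rank processes by total referenced off-heap binary bytes (top 10).
BinBytes = fun(P) ->
                case erlang:process_info(P, binary) of
                    {binary, Bins} ->
                        lists:sum([Size || {_Ptr, Size, _RefCount} <- Bins]);
                    undefined ->
                        0
                end
           end,
Ranked = lists:reverse(lists:keysort(2, [{P, BinBytes(P)} || P <- erlang:processes()])),
lists:sublist(Ranked, 10).

Swapping the binary item for the {memory, Bytes} item of process_info/2 gives the same kind of ranking for process heap usage.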
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><u></u>
<div bgcolor="#ffffff" text="#000000">
If you have changed some of your busy Erlang processes to run with
priority "high", that seems like it could cause the behavior you are
seeing.

On 10/16/2012 07:50 PM, Kris Rasmussen wrote:
Hi All,

We are observing error_logger processes that use many gigabytes of memory
over time. Unfortunately, I don't know how much time it takes for them to
get into this state yet, but the memory drops to virtually nothing when we
trigger a manual GC cycle. Here is some information about one such process
that was using 2+ GB of memory before we GC'd it:
(jsweb@prod-ws000)23> process_info(whereis(error_logger),
                          [memory, heap_size, total_heap_size, messages,
                           binary, message_queue_len, stack_size,
                           garbage_collection]).
[{memory,2586762616},
 {heap_size,38263080},
 {total_heap_size,323345205},
 {messages,[]},
 {binary,[{139749309541344,25907,2},
          {139749309541344,25907,2},
          {139749250739928,26114,2},
          {139749250739928,26114,2},
          {139749309588072,793,2},
          {139749309588072,793,2},
          {139749292946832,4927,2},
          {139749292946832,4927,2},
          {139749308657216,122,2},
          {139749308657216,122,2},
          {139749266244472,122,2},
          {139749266244472,122,2},
          {139749258694744,1123,2},
          {139749258694744,1123,2},
          {139749310288824,123,2},
          {139749310288824,123,2},
          {139749310160072,122,2},
          {139749310160072,122,2},
          {139749309587840,122,2},
          {139749309587840,122,...},
          {139749258436600,...},
          {...}|...]},
 {message_queue_len,0},
 {stack_size,8},
 {garbage_collection,[{min_bin_vheap_size,46368},
                      {min_heap_size,233},
                      {fullsweep_after,65535},
                      {minor_gcs,3}]}]

As you can see, the heap size is much smaller than the total memory. We've
read elsewhere that binaries can create problems like this, but I'll admit
I don't fully understand the reason. Does anyone have any idea how the
error_logger could be using so much memory, and what the best approach is
to ensure it runs a GC cycle more often? We could simply GC the
error_logger from another process more frequently, but I'd rather
understand what we are doing to put it in this state first.

In case it's significant, we have a custom report handler that forwards
error_logger messages to log4erl.

Thanks,
Kris
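
PS: regarding the question in the quoted message about GC'ing the error_logger from another process, the helper we had in mind is roughly the sketch below (the module name and the one-minute interval are made up; we would still rather find the root cause than lean on this):

-module(error_logger_gc).
-export([start/0]).

%% Periodically force a garbage collection of the registered
%% error_logger process. The interval is arbitrary.
start() ->
    spawn(fun() -> loop(60000) end).

loop(IntervalMs) ->
    timer:sleep(IntervalMs),
    case whereis(error_logger) of
        undefined -> ok;
        Pid -> erlang:garbage_collect(Pid)
    end,
    loop(IntervalMs).

A one-off erlang:garbage_collect(whereis(error_logger)) from the shell does the same thing on demand.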
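
Also, for context on the report handler mentioned above: it is installed with error_logger:add_report_handler/1, and a gen_event skeleton for that kind of handler looks roughly like the following (the module name and the exact log4erl calls are illustrative only, not our actual code):

-module(my_log4erl_handler).
-behaviour(gen_event).
-export([init/1, handle_event/2, handle_call/2, handle_info/2,
         terminate/2, code_change/3]).

init(_Args) ->
    {ok, []}.

%% error_logger delivers events such as
%%   {error, Gleader, {Pid, Format, Args}} and
%%   {error_report, Gleader, {Pid, Type, Report}}.
handle_event({error, _Gleader, {_Pid, Format, Args}}, State) ->
    log4erl:error(Format, Args),
    {ok, State};
handle_event({error_report, _Gleader, {_Pid, _Type, Report}}, State) ->
    log4erl:error("~p", [Report]),
    {ok, State};
handle_event(_Event, State) ->
    {ok, State}.

handle_call(_Request, State) ->
    {ok, ok, State}.

handle_info(_Info, State) ->
    {ok, State}.

terminate(_Reason, _State) ->
    ok.

code_change(_OldVsn, State, _Extra) ->
    {ok, State}.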