[erlang-questions] Garbage Collection, BEAM memory and Erlang memory
Fri Jan 23 10:09:19 CET 2015
In our case, a key insight is something Robert mentioned earlier.
To paraphrase it, "Your memory is not going to be reclaimed till *every*
process that touched it has either been GC'd, or sleeps with the fishes".
And our code had a few router processes that, quite literally, did nothing
but pass binaries on from point A to point B - no reason to worry about
*those*, right? (hint: wrong)
In the short run, walk the process chain and do full sweeps on
*every* process that might (!!!) touch the binary. (we did that)
In the longer run, see if you can swap out the routing processes for
versions that respawn themselves in some form or other (we did that
too. eventually. it was harder)
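For reference, the short-run fix described above can be sketched as forcing a full-sweep GC on every process on the node. This is a blunt instrument and the module name here is hypothetical, but the calls are standard BIFs:

```erlang
-module(gc_all).
-export([sweep/0]).

%% Force a garbage collection on every process on the node. Refc
%% binaries are reclaimed only once every process that ever held a
%% reference has collected the reference from its heap, so sweeping
%% all processes releases binaries pinned by long-lived ones.
sweep() ->
    [erlang:garbage_collect(Pid) || Pid <- erlang:processes()],
    ok.
```

In practice you would restrict the list to the processes known to handle the large binaries rather than sweeping the whole node.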
On Fri, Jan 23, 2015 at 12:14 AM, Dan Gudmundsson <dangud@REDACTED> wrote:
> On Thu, Jan 22, 2015 at 6:38 PM, Roberto Ostinelli <roberto@REDACTED>
>> Here's something I've tried which is successful in avoiding the memory
>> increase for binaries.
>> Inside a loop, I used to have:
>> <<Body:Len/binary, "\r\n", Rest/binary>> = Data,
>> loop(Body, Rest);
>> Now I force a binary copy to ensure that the reference to the original
>> full binary is easily removed:
>> <<Body0:Len/binary, "\r\n", Rest0/binary>> = Data,
>> Body = binary:copy(Body0),
>> Rest = binary:copy(Rest0),
>> loop(Body, Rest);
>> This seems to have stabilized the memory usage reported by
>> - I believe this can only work if the copied binaries are *heap* binaries
>> and not *ref-c*, is this correct?
> binary:copy/1 makes a new copy of the binary regardless of size, but it is
> only useful on ref-c binaries, and should be used to avoid keeping
> references to large binaries when you keep and use only a small part
> of the original binary. But in this case you make a copy of Data (-2
> bytes) and thus will double the memory until Data is gc'ed and the refc
> reaches zero. So I do not see the point of the above, nor of making it a
> list, which will only explode your memory even more.
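Dan's point is that binary:copy/1 pays off only when you keep a small slice of a large refc binary: the matched sub-binary is just a pointer into the original, so holding it pins the whole thing, while a copy detaches it. A minimal sketch (module and function names are illustrative, not from the thread):

```erlang
-module(bin_slice).
-export([keep_header/1]).

%% Matching out a 4-byte header of a large refc binary yields a
%% sub-binary that references the whole original, keeping it alive.
%% binary:copy/1 makes an independent heap binary, so the large
%% original can be reclaimed as soon as nothing else holds it.
keep_header(<<Header:4/binary, _Rest/binary>>) ->
    binary:copy(Header).
```

Copying the whole payload, as in the loop above, buys nothing: both copies are full-size, so memory doubles until the original's refcount drops to zero.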
>> - Unfortunately, the BEAM process reported RES memory still keeps
>> Any other ideas?
>> On Thu, Jan 22, 2015 at 6:11 PM, Roberto Ostinelli <roberto@REDACTED>
>>> Thank you Robert.
>>> I'm going to try a selective fullsweep_after.
>>> Could this also justify the process memory increase (which is more
>>> On Thu, Jan 22, 2015 at 6:00 PM, Robert Virding <rvirding@REDACTED>
>>>> One thing you can see is that the size of the binary data is growing.
>>>> This space contains the large binaries (> 64 bytes) which are sent in
>>>> messages between processes. While this means that the messages become
>>>> (much) smaller and faster to send, it takes a much longer time to detect
>>>> that they are no longer alive and can be reclaimed. Basically it takes
>>>> until all the processes they have passed through do a full garbage
>>>> collection. Setting fullsweep_after to 0 and doing explicit garbage
>>>> collects speeds up reclaiming the binaries.
>>>> You could be much more selective in which processes you set
>>>> fullsweep_after to 0 and which ones you explicitly garbage collect.
>>>> I don't know if this is *the* problem but it is *a* problem you have.
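Robert's suggestion can be applied per process at spawn time, which is how you stay selective about it. A sketch of a router-style process spawned with fullsweep_after set to 0 (the module and message shapes are assumptions, not from the thread):

```erlang
-module(router).
-export([start/1]).

%% Spawn the router with fullsweep_after = 0 so every GC is a full
%% sweep, releasing references to refc binaries promptly instead of
%% letting them age into the old heap.
start(Dest) ->
    spawn_opt(fun() -> loop(Dest) end, [{fullsweep_after, 0}]).

loop(Dest) ->
    receive
        {data, Bin} ->
            Dest ! {data, Bin},
            %% Optionally force a collection after forwarding, so this
            %% process never pins the binary it just passed along.
            erlang:garbage_collect(),
            loop(Dest);
        stop ->
            ok
    end.
```

The same flag can also be set node-wide with the `+hmqd`-style emulator options or per process via process_flag/2, but doing it at spawn_opt/2 keeps the cost confined to the few processes that actually touch large binaries.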