Tracing large binary allocations

Dániel Szoboszlay dszoboszlay@REDACTED
Wed Apr 8 20:53:18 CEST 2020


Hi,

Even though you cannot trace the allocation of large binaries, you may try
tracing garbage collections and look for GCs that clean up a lot of
off-heap binary data. This would at least narrow down the search to some
processes, although it won't tell you where the allocation happens. But
once you know which processes are the culprits, you should be able to add
more targeted tracing until you find the root cause.
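
For illustration, a rough sketch of how that could look (my own example,
not part of the original suggestion and untested; the module and function
names are made up, and it assumes OTP 19 or later, where the GC trace
events are gc_minor_start/gc_major_start, plus a 64-bit VM when converting
bytes to words):

    %% Sketch: trace garbage collections on all processes and report those
    %% whose virtual binary heap is large when a GC starts. Sizes in the
    %% trace info are in words, so on a 64-bit VM a 30 MB threshold is
    %% roughly (30 bsl 20) div 8 words.
    -module(gc_bin_trace).
    -export([start/1, stop/0]).

    start(ThresholdWords) ->
        Tracer = spawn(fun() -> loop(ThresholdWords) end),
        erlang:trace(all, true, [garbage_collection, {tracer, Tracer}]),
        Tracer.

    stop() ->
        erlang:trace(all, false, [garbage_collection]).

    loop(Threshold) ->
        receive
            {trace, Pid, GcStart, Info}
              when GcStart =:= gc_minor_start; GcStart =:= gc_major_start ->
                BinVHeap = proplists:get_value(bin_vheap_size, Info, 0),
                case BinVHeap > Threshold of
                    true ->
                        %% current_function gives a hint about what the
                        %% process is doing at that point
                        io:format("~p holds ~w words of binary vheap (~p)~n",
                                  [Pid, BinVHeap,
                                   erlang:process_info(Pid, current_function)]);
                    false ->
                        ok
                end,
                loop(Threshold);
            _Other ->
                loop(Threshold)
        end.

Then something like gc_bin_trace:start((30 bsl 20) div 8) would start
reporting, and gc_bin_trace:stop() turns the tracing off again.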

Cheers,
Daniel

On Wed, 8 Apr 2020 at 16:14, Lukas Larsson <lukas@REDACTED> wrote:

> Hello,
>
> On Tue, Apr 7, 2020 at 9:26 AM Devon Estes <devon.c.estes@REDACTED>
> wrote:
>
>> Hi all,
>>
>> I’m seeing some cases in my application where our off-heap binary
>> allocation jumps by several orders of magnitude and then drops right
>> after. I’m sure this is just our app loading dozens of huge binaries into
>> memory at once, not a binary leak or a bug in anything underlying, but my
>> attempts to find where these allocations happen, so I can change the code
>> to avoid them, have so far not yielded any results. Ideally I’d like to
>> set a trace with something like erlang:trace/3 that sends a tracer
>> message whenever a binary over 30 MB is allocated, including the call
>> stack or at least the calling function that allocated the binary.
>>
>
>> Going through the binary vheap and getting a list of the processes that
>> have references to those binaries won’t help in this case.
>>
>> Is such a trace possible? Is there some flag I can set when starting my
>> BEAM process to give me some kind of debug output that would give me this
>> information? I’d imagine this is all in C, so it might be a bit tricky...
>>
>
> No, it is not possible without modifying the VM. I can't think of any good
> way to get this information without scanning the processes' binary vheaps,
> which, as you say, would not help much in this case.
>
>
>> Thanks in advance for the help!
>>
>> Cheers,
>> Devon
>> --
>>
>> _________________
>> Devon Estes
>> 203.559.0323
>> www.devonestes.com
>>
>>