Much larger old_heap than heap
Loïc Hoguin
essen@REDACTED
Thu Mar 18 20:15:55 CET 2021
Right, let me be more accurate about this part. Sorry about the lack of
detail in the previous messages.
When I do a garbage collect it does free up memory. I was not able to
see it at first because the memory fills up again very quickly. I managed
to see the effect of erlang:garbage_collect/1 by running the calls
directly one after the other. But in the grand scheme of things the
memory is only reclaimed very temporarily.
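
For reference, this is roughly how I observed it (a simplified sketch;
QPid is the queue process pid):

    %% Force a full sweep on the process, then read the heap sizes
    %% immediately after, before it has had time to fill up again.
    Before = erlang:process_info(QPid, total_heap_size),
    true = erlang:garbage_collect(QPid),
    After = erlang:process_info(QPid, total_heap_size),
    io:format("before: ~p after: ~p~n", [Before, After]).
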
The process also does its own forced erlang:garbage_collect/0 calls from
time to time. But it is the same situation here: the memory usage goes
back up almost immediately afterwards.
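
Something along these lines, simplified and with made-up names, not the
actual RabbitMQ code:

    %% The process schedules a message to itself and forces a full
    %% sweep whenever that message arrives.
    handle_info(force_gc, State) ->
        garbage_collect(),
        erlang:send_after(5000, self(), force_gc),
        {noreply, State};
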
This is a queue process in RabbitMQ of a specific type that I am
optimizing for low memory usage, in order to allow having more of them on
the same node (I am also looking into making the memory usage more
predictable).
I am testing further to confirm, but it seems that in those cases
fullsweep_after=0 would work just fine and could be enabled only for
users that have concerns about memory.
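
As far as I can tell it can currently only be set upfront, for example
(Mod, Fun and Args being placeholders):

    %% Per-process, at spawn time:
    Pid = spawn_opt(Mod, Fun, Args, [{fullsweep_after, 0}]),
    %% Node-wide default for all newly spawned processes:
    erlang:system_flag(fullsweep_after, 0).
    %% (Or the ERL_FULLSWEEP_AFTER environment variable at startup.)
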
The problem I am faced with, however, is that it does not seem possible
to configure fullsweep_after AFTER the process has already started. In
RabbitMQ, the queue processes only know what type of queue they are
after they've started, so I cannot easily set fullsweep_after=0 only
for those queue types.
Is there a technical reason why fullsweep_after cannot be set after the
process has started? Would it be possible to add? For some reason the
heap flags can be set via process_flag/2 but not this one.
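
To illustrate, with arbitrary values:

    %% These can be changed at any time from within the process:
    process_flag(min_heap_size, 233),
    process_flag(min_bin_vheap_size, 46422),
    process_flag(max_heap_size, 1000000),
    %% ...but as far as I can tell there is no
    %% process_flag(fullsweep_after, N); it is only accepted as a
    %% spawn_opt option:
    spawn_opt(fun() -> ok end, [{fullsweep_after, 0}]).
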
Eager to try a patch or help test one if it can be done.
Cheers,
On 18/03/2021 19:53, Dan Gudmundsson wrote:
> You said in the first mail that you had done a garbage collect; at
> least I assumed that you ran erlang:garbage_collect(), which does a
> full sweep, and after that you should only have live data left?
>
>
> On Thu, Mar 18, 2021 at 7:13 PM Loïc Hoguin <essen@REDACTED> wrote:
>
> Hello,
>
> I was able to dig deeper today. It's not that there was living data
> (first thing I checked of course). It's that the process is processing
> so much data that the old heap quickly gets full of no longer useful
> data. That data sits long enough in memory to make it to the old heap,
> but not very long in the grand scheme of things.
>
> Setting fullsweep_after to 0 reduces the heap size by 2-10 times
> depending on the current state size.
>
> Cheers,
>
> On 18/03/2021 18:56, Björn-Egil Dahlberg wrote:
> > Ehum?
> >
> > total_heap_size = heap_size + old_heap_size, meaning 1st gen heap +
> > 2nd gen heap. So total_heap_size /should/ be equal to, or more
> > probably higher than, the heap_size.
> >
> > The reason you don't see it shrink during a garbage collect is of
> > course that there's still living data on the heap.
> >
> > On Wed, Mar 17, 2021 at 9:53 PM Loïc Hoguin <lhoguin@REDACTED> wrote:
> >
> > Hello,
> >
> > I am trying to understand why the total_heap_size of a few processes
> > is so much higher than heap_size. As can be seen in the following
> > snippet, the old_heap is responsible for the discrepancy:
> >
> > > erlang:process_info(QPid, garbage_collection_info).
> > {garbage_collection_info,[{old_heap_block_size,1439468},
> >                           {heap_block_size,196650},
> >                           {mbuf_size,289},
> >                           {recent_size,11674},
> >                           {stack_size,35},
> >                           {old_heap_size,940791},
> >                           {heap_size,86028},
> >                           {bin_vheap_size,36483},
> >                           {bin_vheap_block_size,46422},
> >                           {bin_old_vheap_size,34148},
> >                           {bin_old_vheap_block_size,46422}]}
> >
> > Why?
> >
> > How can I reduce this? Garbage collecting does nothing.
> >
> > Cheers,
> >
> > --
> > Loïc Hoguin
> >
>
> --
> Loïc Hoguin
> https://ninenines.eu
>
--
Loïc Hoguin
https://ninenines.eu