lies, damn lies and erlang memory use
Rickard Green
rickard.green@REDACTED
Mon Oct 21 15:51:49 CEST 2002
Matthias Lang wrote:
>
> mml> The underlying problem is that the Erlang VM appears to grow, slowly,
> mml> without apparent bound. As always, I have no idea why.
> mml> If I run the instrumented VM on it, it seems to show me rather more
> mml> holes than I expected.
>
> Mats> If I add up erlang:system_info(allocated_areas), total heap memory
> Mats> ([process_info(P,memory)||P<-processes()]) and total ets memory
> Mats> ([ets:info(T,memory)||T<-ets:all()]), I get between 99% and 0% of
> Mats> what the OS reports, depending on the history of the node. I would
> Mats> expect that Matthias's "holes" account for much (all?) of the
> Mats> rest. What do you get if you do that addition, Matthias? We have
> Mats> tried many different mallocs; none has been outstanding.
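
That addition can be written out as follows; a rough sketch only, noting
that ets:info(T, memory) reports words rather than bytes, and that
process_info/2 and ets:info/2 can return undefined for processes or
tables that die during the scan:

    %% Sum the VM's statically allocated areas, all process memory,
    %% and all ets table memory. allocated_areas entries are tuples
    %% whose second element is the allocated size in bytes.
    accounted_memory(WordSize) ->
        Areas = lists:sum([element(2, A)
                           || A <- erlang:system_info(allocated_areas)]),
        Procs = lists:sum([M || P <- processes(),
                                {memory, M} <- [process_info(P, memory)]]),
        Ets   = WordSize * lists:sum([W || T <- ets:all(),
                                           W <- [ets:info(T, memory)],
                                           W =/= undefined]),
        Areas + Procs + Ets.

Comparing the result with what the OS reports for the emulator process
gives the size of the "holes".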
>
> The odd thing is that I measured the memory use while hitting the node
> with requests; the 55% holes appeared to be steady-state.
>
> There are several levels of memory management involved; as far as I
> can tell, all combinations of slab allocator (+Sr2, +Sr1, -Se false)
> and malloc allocator (+m elib, +m libc) are reasonable.
>
> In our particular case, the defaults (+Sr1, +m libc) give us really
> awful memory use characteristics. Using +Sr2 +m libc works really
> well.
>
> Someone on the inside care to comment? Is there some gotcha with
> +Sr2? Otherwise, why isn't it the default?
>
> What do I mean by "good" and "awful"? Here's what the memory use looks
> like while running the system stress test on R8B-2:
>
> Elapsed time    Memory use with +Sr1     Memory use with +Sr2
> -------------------------------------------------------------
> 0               5.8M                     5.6M
> 1 minute        6.3M                     6.2M
> 10 minutes      9.6M                     6.2M
> 1 hour          13.8M                    6.4M
> 5 hours         (out-of-memory crash)    6.2M
>
> Matthias
Hi,
sl_alloc was introduced precisely to address the type of memory
problem you have experienced.

The idea is to separate relatively short-lived memory blocks from more
long-lived ones, in order to reduce memory fragmentation and thereby
the overall memory consumption (the number of mapped pages). sl_alloc
manages the short-lived blocks; malloc() manages the long-lived ones.
* sl_alloc 1 (+Sr1) is very simple (and will be removed sometime in
  the future). It places large blocks in separate (mmap()ped) memory
  segments (one block per segment) and allocates all other blocks via
  malloc().

* sl_alloc 2 (+Sr2) places all blocks in (mmap()ped) memory segments.
  Small blocks coexist in "multiblock segments", while large blocks
  get memory segments of their own. sl_alloc 2 is the default in
  R9B.
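
For example, to select an allocator combination explicitly on R8B
(taking the flag syntax from Matthias's mail above):

    erl +Sr1 +m libc    (sl_alloc 1 with libc malloc; the current default)
    erl +Sr2 +m libc    (sl_alloc 2 with libc malloc; what worked well above)
    erl -Se false       (sl_alloc disabled entirely)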
Today, mainly Erlang process heaps and message buffers are classified
as short-lived memory blocks; almost everything else is classified as
long-lived.
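
To get a feel for where the short-lived load comes from, one can look
at process heap sizes; a minimal sketch (heap_size is in words, and
process_info/2 returns undefined for processes that die mid-scan):

    %% The N processes with the largest heaps; these heaps are the
    %% main consumers of the memory sl_alloc manages.
    top_heaps(N) ->
        Sizes = [{H, P} || P <- processes(),
                           {heap_size, H} <- [process_info(P, heap_size)]],
        lists:sublist(lists:reverse(lists:sort(Sizes)), N).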
For more info see the sl_alloc(3) man page.
Regards,
Rickard Green (Erlang/OTP)