lies, damn lies and erlang memory use
Matthias Lang
matthias@REDACTED
Mon Oct 21 10:43:38 CEST 2002
mml> The underlying problem is that the Erlang VM appears to grow, slowly,
mml> without apparent bound. As always, I have no idea why.
mml> If I run the instrumented VM on it, it seems to show me rather more
mml> holes than I expected.
Mats> if i add up erlang:system_info(allocated_areas), total heap memory
Mats> ([process_info(P,memory)||P<-processes()]) and total ets memory
Mats> ([ets:info(T,memory)||T<-ets:all()]) i get between 99% and 0% of
Mats> what the os reports, depending on the history of the node. i would
Mats> expect that the "holes" of matthias accounts for much (all?) of the
Mats> rest. what do you get if you do that addition, matthias? we have
Mats> tried many different mallocs, none has been outstanding.
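Mats's accounting can be sketched roughly like this (a sketch only; the
helper name is made up, and the word size and the exact shape of the
allocated_areas tuples vary between OTP releases):

    %% Sum the memory the VM knows about and compare it with what
    %% the OS reports; the difference is the "holes".
    %% allocated_areas entries are {Name, Size} or {Name, Size, Used}.
    accounted_memory() ->
        Areas = lists:sum([element(2, A)
                           || A <- erlang:system_info(allocated_areas)]),
        Heaps = lists:sum([M || P <- processes(),
                                {memory, M} <- [process_info(P, memory)]]),
        %% ets:info(T, memory) reports words, not bytes.
        WordSize = 4,   % assuming 32-bit; 8 on a 64-bit emulator
        Ets = WordSize * lists:sum([ets:info(T, memory) || T <- ets:all()]),
        Areas + Heaps + Ets.

Note the generator pattern {memory, M} quietly skips processes that die
mid-traversal, since process_info/2 returns undefined for those.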
The odd thing is that I measured the memory use while hitting the node
with requests; the 55% holes appeared to be steady-state.
There are several levels of memory management involved; as far as I
can tell, all combinations of slab allocator (+Sr2, +Sr1, -Se false)
and malloc allocator (+m elib, +m libc) are reasonable.
In our particular case, the defaults (+Sr1, +m libc) give us really
awful memory use characteristics. Using +Sr2 +m libc works really
well.
Would someone on the inside care to comment? Is there some gotcha with
+Sr2? If not, why isn't it the default?
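For reference, the combination that works well for us is given on the
erl command line when starting the node (flag spellings as in the
R8B-2 system documentation):

    erl +Sr2 +m libc
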
What do I mean by "good" and "awful"? Here's what the memory use looks
like while running the system stress test on R8B-2:
  Elapsed time   Memory use with +Sr1      +Sr2
  ------------------------------------------------------------
  0              5.8M                      5.6M
  1 minute       6.3M                      6.2M
  10 minutes     9.6M                      6.2M
  1 hour         13.8M                     6.4M
  5 hours        (out-of-memory crash)     6.2M
Matthias