[erlang-questions] simple question about list memory consumption

CGS <>
Mon Jul 9 16:12:21 CEST 2012

On Mon, Jul 9, 2012 at 2:13 PM, Ulf Wiger <> wrote:

> On 9 Jul 2012, at 12:34, CGS wrote:
>>>> About the two lists you were speaking about (I suppose you were
>>>> referring to the output of lists:map/2 and lists:seq/2), the second is
>>>> destroyed as soon as lists:map/2 exits.
>>> No it isn't.  It is only reclaimed when the garbage collector runs.
>> That is a bit puzzling. Looking at the code for lists:map/2:
>>
>> map(F, [H|T]) ->
>>     [F(H)|map(F, T)];
>> map(F, []) when is_function(F, 1) -> [].
>>
>> I can see that when lists:map/2 exits, the only remaining list is an
>> empty list. I might be wrong, but I see only one word remaining for the
>> garbage collector to take care of. That means, at the end, there is only
>> one full list remaining (one word can really be neglected compared with
>> the 214 MB I got for a 10^7-character list).
> I believe Richard was referring to:
> a) the list created using lists:seq(1,10000000)
> b) the list that results from the call to lists:map/2
> Actually, when you do this in the shell, the second list will be
> remembered in the command history (as well as in the variable bindings).
> The first list will, as Richard said, disappear when the garbage collector
> runs.

Agreed. It's just that I have the impression that the first list is already
an empty list when lists:map/2 exits. As I have no way to stabilize my OS
RAM consumption at the byte level, I don't really care about an error of a
few words, which is why I went for lists of 10^6 to 10^7 characters. The
fact that I got between 2 and 3 words per character supports my first
impression. I might be wrong, though.
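For what it's worth, one rough way to see the dead intermediate list being
reclaimed is to watch the current process's memory around a forced
collection. This is only a sketch (the exact numbers vary from run to run
and include shell bookkeeping), not a precise measurement:

```erlang
%% Sketch: watch the current process's memory around a forced GC.
%% The intermediate lists:seq/2 list is dead once lists:map/2 returns,
%% but its words are only reclaimed when the collector actually runs.
1> L = lists:map(fun(_) -> 107 end, lists:seq(1, 1000000)), ok.
ok
2> process_info(self(), memory).     %% heap may still hold the dead seq list
3> erlang:garbage_collect().         %% force a collection of this process
true
4> process_info(self(), memory).     %% typically smaller: only L survives
```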

> The quickest and most precise way of finding out how much RAM is needed
> for a particular data structure is:


> Eshell V5.9  (abort with ^G)
> 1> L = lists:map(fun(_) -> 107 end,lists:seq(1,10000000)), ok.
> ok
> 2> erts_debug:flat_size(L).
> 20000000

I am a bit skeptical about using erts_debug:flat_size/1, as it computes the
size from the internal representation and does not report the actual
physical memory consumption (see lib/hipe/cerl/erl_bif_types.erl and the
list type definition in erl_types.erl in the same path), while `free'
reports the RAM usage as it is. Using erts_debug:flat_size(List) is much
the same as length(List)*(2 words), because it adds 2 words for each list
element ([Type,Term], i.e. 2*t_integer()) without checking the actual
memory consumption.
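That said, the two-words-per-cons-cell accounting is easy to check on small
lists; a quick sanity check (of the word count, not of physical RAM):

```erlang
%% erts_debug:flat_size/1 returns the size in words of the flattened term.
%% A list of N immediates (e.g. small integers) costs N cons cells, 2 words
%% each; the integers themselves are stored in the cells, not on the heap.
1> erts_debug:flat_size([1,2,3]).
6
2> erts_debug:flat_size(lists:seq(1, 100)).
200
3> erts_debug:flat_size([]).      %% nil is an immediate: zero heap words
0
```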

> In this case, 20M words, which is very much in line with the documentation.

Obviously. :)

> If you want to figure out how much memory is actually used by the VM, this
> is more complicated. This will depend on how the data structure was built -
> i.e. how it exercised the garbage collector - and how much temporary
> fragmentation that caused. The Erlang VM is not designed to keep the memory
> footprint as small as possible, but, in a sense, to maintain a reasonably
> low level of fragmentation in a non-stop system without ruining the
> soft-real-time characteristics. If you perform an operation such as
> creating a list with 10M elements, and from that creating another list of
> similar size, this can be pretty hard on Erlang's garbage collector, since
> it can't predict when you are going to quit accumulating data.

That is what I was referring to as lower-level memory management.

> Let's try a very unscientific experiment:
> 3> exit(foo).
> ** exception exit: foo
> 4> timer:tc(fun() -> lists:map(fun(_) -> 107 end, lists:seq(1,10000000)), ok end).
> {10025590,ok}
> %% create a fun that evaluates F() in a fresh process with a really big heap
> 5> RPC = fun(F) ->
>        {_, R} = spawn_opt(fun() -> exit({ok, F()}) end,
>                           [{min_heap_size, 50000000}, monitor]),
>        receive {'DOWN', R, _, _, Res} -> {ok, Return} = Res, Return end
>    end.
> #Fun<erl_eval.6.111823515>
> 6> exit(foo).
> ** exception exit: foo
> 7> timer:tc(fun() -> RPC(fun() -> lists:map(fun(_) -> 107 end, lists:seq(1,10000000)), ok end) end).
> {5378929,ok}
> (I did, in fact, run the timing operations several times and kept the best
> result.)

Yeah, it's not easy to get an accurate execution time. I usually run the
code several times and take a weighted average (discarding extrema beyond a
certain error) to get an idea of the execution time.
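A small helper along those lines (my own sketch, not from the thread) that
runs a fun N times under timer:tc/1 and keeps the minimum, which is usually
the least noisy estimate for CPU-bound code:

```erlang
-module(bench).
-export([best_of/2]).

%% Run F (a zero-arity fun) N times under timer:tc/1 and return the best
%% (minimum) wall-clock time in microseconds.
best_of(N, F) when N > 0 ->
    Times = [element(1, timer:tc(F)) || _ <- lists:seq(1, N)],
    lists:min(Times).
```

Usage: `bench:best_of(5, fun() -> lists:sum(lists:seq(1, 100000)) end).`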

> Of course, by creating a slave process, we add some processing overhead,
> compared to calling the function directly, but in this case it's
> insignificant, due to the long execution time. We almost reduce the time by
> half by eliminating a lot of expensive GC.
> (If we actually return the big list from the slave process, this increases
> the cost a bit, but not by nearly as much, since the VM knows how much it
> needs to grow the heap in order to make room for the result.)
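Written out as a plain function, Ulf's trick might look like this. This is
a sketch of the same idea; I have made the heap size a parameter (Ulf's
shell session hard-coded 50000000 words):

```erlang
%% Sketch: evaluate F() in a fresh process pre-sized with a large heap, so
%% the slave never has to grow (and garbage-collect) while building a big
%% result. The result travels back in the exit reason and is copied once
%% into the caller's heap when the 'DOWN' message arrives.
rpc_big_heap(F, MinHeapWords) ->
    {_Pid, Ref} = spawn_opt(fun() -> exit({ok, F()}) end,
                            [{min_heap_size, MinHeapWords}, monitor]),
    receive
        {'DOWN', Ref, process, _, {ok, Result}} -> Result
    end.
```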


