2011/6/1 Richard Carlsson <carlsson.richard@gmail.com>:

> Garbage collection is not triggered by any particular event (except an explicit call to garbage_collect()), but happens when the code tries to allocate something, e.g., a tuple or a cons cell, that requires more memory than is currently easily available on the heap. It then calls the garbage collector to try to reclaim free space from the newest generation; this moves the used memory to one end and all the free memory to the other end. If this creates enough contiguous space, the code can continue with the allocation. Otherwise, the system will try to garbage collect the next older generation, and so on. If all generations have been garbage collected and there is still not enough memory for the allocation, the Erlang runtime system will enlarge the process's heap (by allocating more memory from the operating system).
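For example, one way to watch this on a live process is erlang:process_info/2; a rough sketch (the module, function and list size are only illustrative, and heap sizes are reported in machine words and vary between emulator versions):

    -module(gc_sketch).
    -export([heap_growth/0]).

    %% Heap size before allocating a largish list, right after allocating
    %% it, and after an explicit collection once the list is no longer
    %% referenced.
    heap_growth() ->
        {heap_size, Before} = erlang:process_info(self(), heap_size),
        Big = lists:seq(1, 100000),
        {heap_size, During} = erlang:process_info(self(), heap_size),
        _Len = length(Big),                 %% last use of Big
        erlang:garbage_collect(),           %% explicit collection
        {heap_size, After} = erlang:process_info(self(), heap_size),
        {Before, During, After}.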
>
> Thus, how often garbage collection is triggered depends on how quickly you create tuples and other data structures, and the size of the process heap depends on whether the process allocates new data faster than it releases old data. If it releases data at the same rate or faster, the heap will stay the same size (or even shrink), because garbage collection will always be able to reclaim enough space from the existing heap.
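One rough way to see how often a particular process is collecting is to poll its garbage_collection statistics; for instance, a helper like this (the name is mine) added to the sketch module above, where minor_gcs is roughly the number of minor collections done since the last full sweep:

    %% Add to gc_sketch and export gc_snapshot/1.
    %% Snapshot of a process's GC counter and heap sizes (in words).
    gc_snapshot(Pid) ->
        Items = [garbage_collection, heap_size, total_heap_size],
        [{garbage_collection, GC}, {heap_size, H}, {total_heap_size, T}] =
            erlang:process_info(Pid, Items),
        {proplists:get_value(minor_gcs, GC), H, T}.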
>
> In your example above, the original list has no more references to it after the call to lists:keyreplace(), so it might get collected at that point, or at any later point, depending on whether your program needs to allocate more data structures and how much space is currently free on the process heap. A process that does not try to allocate more data does not waste time doing garbage collection either.
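Concretely, for the keyreplace case (the variable names here are just for illustration):

    %% Add to gc_sketch.  After keyreplace/4 returns, only NewList is
    %% referenced below, so whatever parts of OldList are not shared with
    %% NewList become garbage; they stay on the heap until some later
    %% allocation triggers a collection (or garbage_collect/0 is called).
    keyreplace_example() ->
        OldList = [{a, 1}, {b, 2}, {c, 3}],
        NewList = lists:keyreplace(b, 1, OldList, {b, 20}),
        %% OldList is never used again from this point on.
        NewList.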
>
> /Richard

Thank you Richard and Jesper for these very helpful insights.

r.