Garbage collection

Patrick Logan patrickdlogan@REDACTED
Fri Sep 3 08:38:42 CEST 1999


David Gould writes:
 > > 
 > > A nice benefit of a separate heap per process is that you can
 > > just throw out all the data for that process when it's killed.
 > > Sharing a single heap would potentially result in more frequent
 > > collections.
 > 
 > This should not be a problem for a global heap garbage
 > collector. Real programs do not allocate memory in a steady way or
 > even in any kind of random distribution. They tend to have strong
 > "phase" behaviour, allocating large numbers of similar objects,
 > working for a bit, and then freeing most of the previous allocations
 > and allocating some new kind of object.
 > 
 > So any realistic memory manager has to cope with this kind of
 > thing. Process deallocation is just one kind of bursty
 > allocator/collector workload.

Sorry for jumping into this conversation without having followed it
all too closely. But I have a few thoughts.

A problem with a heap per process seems to be deciding how much heap
to allocate to each process, and handling the processes that outgrow
their heap.
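
Just to make that concrete, here is a rough sketch of one naive
sizing policy: start each process with a small heap and double it
when an allocation won't fit. All the names are made up for
illustration, and a real collector would of course try a GC before
growing the heap.

    #include <stdlib.h>

    typedef struct {
        char  *base;   /* start of this process's heap */
        char  *top;    /* next free byte               */
        size_t size;   /* current heap size in bytes   */
    } proc_heap;

    void *proc_alloc(proc_heap *h, size_t n)
    {
        size_t used = (size_t)(h->top - h->base);
        if (used + n > h->size) {
            size_t new_size = h->size;
            while (used + n > new_size)
                new_size *= 2;                 /* grow policy: double   */
            char *new_base = realloc(h->base, new_size);
            if (new_base == NULL)
                return NULL;                   /* out of memory         */
            h->base = new_base;
            h->top  = new_base + used;
            h->size = new_size;
        }
        void *p = h->top;
        h->top += n;
        return p;
    }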

Would there be a way to implement a "Big Bag of Pages" (BIBOP) scheme
where each "page" is dedicated to a single process, but any process's
pages could be scattered around the heap? When a process goes away
you'd want to give those pages back to the system somehow without too
much effort.
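
Something like the following is what I have in mind -- purely a
sketch with invented names, not code from any actual runtime. Each
page records which process owns it, so reclaiming a dead process's
memory is just a sweep over the page table rather than a traversal of
its objects.

    #include <stdint.h>

    #define PAGE_SHIFT 12                /* 4 KB pages, say            */
    #define NPAGES     (1u << 18)        /* pages covered by the heap  */

    static uint32_t page_owner[NPAGES];  /* 0 = free, else owning pid  */

    /* When a process dies, sweep the table and hand every page it
     * owned back to the page allocator's free list. */
    void release_process_pages(uint32_t pid, void (*free_page)(uint32_t page))
    {
        for (uint32_t i = 0; i < NPAGES; i++) {
            if (page_owner[i] == pid) {
                page_owner[i] = 0;
                free_page(i);
            }
        }
    }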

There was a paper out of Indiana University a few years ago called
something like "Don't Stop the BIBOP", which described using BIBOP
pages to record information about the kinds of things allocated on
any given page. Per-process ownership information might fit into the
same scheme.
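
As I remember it, the trick is roughly this (again just an
illustrative sketch with assumed names and layout, not the paper's
actual code): keep one descriptor per page, and recover an object's
kind, and here its owner too, from its address alone, with no header
word on the object itself.

    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12

    enum page_kind { PAGE_FREE, PAGE_CONS, PAGE_TUPLE, PAGE_BINARY };

    typedef struct {
        enum page_kind kind;   /* what sort of objects live on this page */
        uint32_t       owner;  /* which process the page belongs to      */
    } page_info;

    extern page_info page_table[];   /* one entry per heap page        */
    extern char     *heap_base;      /* start of the page-aligned heap */

    /* The collector looks up an object's page descriptor from its
     * address: kind tells it how to scan, owner ties it to a process. */
    static inline page_info *info_of(void *p)
    {
        size_t idx = (size_t)((char *)p - heap_base) >> PAGE_SHIFT;
        return &page_table[idx];
    }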

I don't know. Just a thought. I am not a GC implementor.

-- 
Patrick Logan    patrickdlogan@REDACTED


