how to run out of memory: let binaries leverage the heap

Matthias Lang matthias@REDACTED
Thu Sep 26 16:59:00 CEST 2002


Hi,

A problem that bites me a couple of times a year is this: some piece
of code involving binaries causes the Erlang VM to expand its memory
footprint enormously for no particularly good reason. The Erlang VM
then dies because it can't allocate more RAM.

Here's how to do it (a rough code sketch follows the list):

      1. Make the heap grow by doing something memory-intensive. Cross
         your fingers and hope that you get left with a half-full heap.

      2. Do something tail-recursive which involves creating a lot
         binary garbage.

      3. Watch the VM expand to many times the original heap size.
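
Here is a minimal sketch of those three steps. It is only an
illustration of the recipe above, not the attached eatmem.erl; the
module name, function names and loop counts are made up, and the exact
memory figures will vary from machine to machine. Run it and watch the
emulator's memory use from the outside (top, ps, ...):

    -module(binary_bloat).
    -export([run/0]).

    run() ->
        grow_heap(),
        churn(100000).

    %% Step 1: something memory-intensive, so the process is left with a
    %% large, mostly empty heap once the list below becomes garbage.
    grow_heap() ->
        length(lists:seq(1, 500000)).

    %% Step 2: a tail-recursive loop creating lots of binary garbage.
    %% Each binary is 1000 bytes, but the reference left on the process
    %% heap is tiny, so the loop runs a long time before the next GC
    %% and the dead binaries pile up in the meantime.
    churn(0) ->
        done;
    churn(N) ->
        _Garbage = list_to_binary(lists:duplicate(1000, N rem 256)),
        churn(N - 1).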

The problem at the bottom of this is that the size of the binaries
created (or referenced) by a process has very little effect on when
that process will next be GCed: large binaries live outside the
process heap, so only the small references to them count towards
filling it up.
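
To make that concrete, here is a rough illustration (made-up module
name, not part of the attached program; the exact word count depends
on the VM): a process can hold a megabyte-sized binary while its heap
stays tiny by comparison, because only the reference lives on the heap.

    -module(binref).
    -export([demo/0]).

    %% Bind a ~1 MB binary, collect the temporary list used to build it,
    %% then compare the binary's size (bytes) with the process heap size
    %% (words).
    demo() ->
        Big = list_to_binary(lists:duplicate(1024 * 1024, 0)),
        erlang:garbage_collect(),
        {heap_size, HeapWords} = erlang:process_info(self(), heap_size),
        {binary_bytes, size(Big), heap_words, HeapWords}.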

I've attached a little program which demonstrates this. On my machine
(Linux, x86), it'll happily eat 60M before being GCed back down to
6M. On an embedded system with just 32M, this is a bit of a bummer.

What's the solution? Specifying {fullsweep_after, 0} in the spawn
options doesn't help much (as the attached example
demonstrates). Forcing a GC immediately before a "binary-intensive"
series of operations helps a lot. Any cleaner or better ideas?
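
For the record, the "force a GC first" workaround looks roughly like
this (illustrative module and function names, not from the attached
program):

    -module(force_gc).
    -export([binary_intensive/1]).

    %% Force a full sweep immediately before the binary-intensive work,
    %% so the loop starts out with a small, freshly collected heap.
    binary_intensive(Chunks) ->
        erlang:garbage_collect(),
        make_binaries(Chunks).

    make_binaries([]) ->
        ok;
    make_binaries([Chunk | Rest]) ->
        _Bin = list_to_binary(Chunk),
        make_binaries(Rest).

    %% For comparison, the spawn-time variant that doesn't help much:
    %%
    %%     spawn_opt(fun() -> binary_intensive(Chunks) end,
    %%               [{fullsweep_after, 0}]).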

Does the (HiPE) unified heap put binaries on that heap?

Matthias

-------------- next part --------------
[Attachment scrubbed by the list archive: eatmem.erl (2012 bytes), available at
<http://erlang.org/pipermail/erlang-questions/attachments/20020926/0d080dcd/attachment.obj>]

