how to run out of memory: let binaries leverage the heap

Laszlo Varga Laszlo.Varga@REDACTED
Thu Sep 26 17:37:11 CEST 2002

Sorry for my stupidity, but I've been taught that
tail recursion needs constant space (as it is a loop).

Then why does binary garbage make the VM grow?
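Tail recursion does keep the *stack* constant, but each iteration can still allocate. A minimal sketch of the effect being discussed (module and function names are illustrative, not from the original mail): binaries larger than 64 bytes are stored off the process heap as reference-counted ("refc") binaries, so only a small handle lands on the heap, the collector sees little heap pressure, and many dead megabyte-sized binaries can pile up before a GC runs.

```erlang
-module(binloop).
-export([run/1]).

%% Tail-recursive loop: stack usage is constant, but every
%% iteration creates a fresh 1 MB binary. The binary itself is
%% an off-heap refc binary; only its small handle counts toward
%% the process heap, so GC is triggered late and the dead
%% binaries accumulate in the meantime.
run(0) ->
    ok;
run(N) ->
    _Garbage = binary:copy(<<0>>, 1024 * 1024),  % 1 MB refc binary, immediately dead
    run(N - 1).
```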


mailto: ethvala@REDACTED

> Date: Thu, 26 Sep 2002 16:59:00 +0200
> From: Matthias Lang <matthias@REDACTED>
> To: erlang-questions@REDACTED
> Subject: how to run out of memory: let binaries leverage the heap
> Hi,
>
> A problem that bites me a couple of times a year is this: some piece
> of code involving binaries causes the Erlang VM to expand its memory
> footprint enormously for no particularly good reason. The Erlang VM
> then dies because it can't allocate more RAM.
>
> Here's how to do it:
>
>       1. Make the heap grow by doing something memory-intensive. Cross
>          your fingers and hope that you get left with a half-full heap.
>       2. Do something tail-recursive which involves creating a lot of
>          binary garbage.
>       3. Watch the VM expand to many times the original heap size.
>
> The problem at the bottom of this is that the size of binaries created
> (or referenced) by a process has very little effect on when that
> process will next be GCed. Binaries can be large; the references to
> them are small.
>
> I've attached a little program which demonstrates this. On my machine
> (Linux, x86), it'll happily eat 60M before being GCed back down to
> 6M. On an embedded system with just 32M, this is a bit of a bummer.
>
> What's the solution? Specifying {fullsweep_after, 0} in the spawn
> options doesn't help much (as the attached example
> demonstrates). Forcing a GC immediately before a "binary-intensive"
> series of operations helps a lot. Any cleaner or better ideas?
>
> Does the (HIPE) unified heap put binaries on that heap?
>
> Matthias
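The forced-GC workaround Matthias mentions could look roughly like this (a sketch; the module and helper names are made up, only erlang:garbage_collect/0 and the fullsweep_after spawn option come from the mail):

```erlang
-module(binwork).
-export([run/0]).

%% Sketch of the workaround: force a full collection immediately
%% before a "binary-intensive" series of operations, so the process
%% starts from a compacted heap and dead refc binaries are released
%% promptly. ({fullsweep_after, 0} can also be set via spawn_opt,
%% but as noted above it doesn't help much on its own.)
run() ->
    erlang:garbage_collect(),   % shrink the heap before the binary work
    binary_intensive(100),
    ok.

%% Illustrative binary-heavy loop: each step makes 1 MB of garbage.
binary_intensive(0) -> ok;
binary_intensive(N) ->
    _Garbage = binary:copy(<<0>>, 1024 * 1024),
    binary_intensive(N - 1).
```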
