Per process memory limits?

Thomas Lindgren thomasl_erlang@REDACTED
Fri Jul 16 21:33:33 CEST 2004


--- Vance Shipley <vances@REDACTED> wrote:
> On Thu, Jul 15, 2004 at 02:56:08PM -0700, Thomas
> Lindgren wrote:
> }  
> }  should the process be permitted to GC when it
> }  runs out?
> 
> I would say it should just exit with {'EXIT',
> out_of_memory}.

This has the unintuitive consequence that once the
max. limit has been reached (perhaps after many
garbage collections), the process will be killed when
memory runs out, regardless of whether it is actually
using that memory or not (a garbage collection might
well have freed enough of it). Nor is it obvious to
the programmer when this death spiral will occur.
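For illustration, here is roughly how a linked process
might observe that exit. The out_of_memory reason is the
one suggested above, not an existing one; trapping exits
and the {'EXIT', Pid, Reason} message are standard Erlang:

    %% Sketch: observing the suggested out_of_memory exit
    %% from a linked process. The exit reason is
    %% hypothetical; trap_exit and 'EXIT' messages are real.
    watch(Fun) ->
        process_flag(trap_exit, true),
        Pid = spawn_link(Fun),
        receive
            {'EXIT', Pid, out_of_memory} ->
                %% restart, shed load, log, etc.
                {error, out_of_memory};
            {'EXIT', Pid, Reason} ->
                {error, Reason}
        end.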

If a process size restriction were introduced, I would
suggest doing it as follows:
- specify a max. heap size when the process is spawned
- when a garbage collection would occur, the process
is instead killed.

That way, the programmer can at least know how much
memory can be allocated before the jig's up. The
drawback is potential overallocation of memory.
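As a sketch of what the API might look like (the
max_heap option name is made up for illustration;
spawn_opt/2 itself is real, with options such as
min_heap_size):

    %% Hypothetical API: spawn a process that is killed
    %% rather than garbage collected once it has allocated
    %% Words words. The max_heap option is an assumption,
    %% not an existing spawn_opt option.
    start_worker(Fun, Words) ->
        spawn_opt(Fun, [link, {max_heap, Words}]).

A worker started as start_worker(Fun, 100000) would then
have a known 100000-word budget before the jig's up.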

There are other approaches; one I once favoured was to
split the VM heap into a number of regions and provide
"isolation" by spawning processes to run in specified
regions and killing everyone in the region if it got
too full. (You could, for example, have long-lived
servers in region A and workers in region B, so
runaway workers couldn't kill the system.) But Per's
suggestion of using many nodes seemed to get most of
the benefit of that without having to do the work.
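For comparison, the many-nodes variant needs no new
machinery at all; the node names below are made up:

    %% Sketch of the many-nodes isolation: each node is a
    %% separate VM with its own memory, so a runaway worker
    %% can only take down 'workers@host', not the servers.
    run_job(M, F, A) ->
        spawn('workers@host', M, F, A).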

> }  How do we estimate size in a shared heap? Should
> }  we?)
> 
> I would say no, if it's shared it would behave as it
> does now.

In that case, the problem of runaway processes
remains?

Anyway, as you might guess (:-) I think the issue is a
bit thorny at present.

Best,
Thomas



		