<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">I dug out what I wrote a year ago...<br>
<br>
eep-draft:<br>
<a class="moz-txt-link-freetext" href="https://github.com/psyeugenic/eep/blob/egil/system_limits/eeps/eep-00xx.md">https://github.com/psyeugenic/eep/blob/egil/system_limits/eeps/eep-00xx.md</a><br>
<br>
Reference implementation:<br>
<a class="moz-txt-link-freetext" href="https://github.com/psyeugenic/otp/commits/egil/limits-system-gc/OTP-9856">https://github.com/psyeugenic/otp/commits/egil/limits-system-gc/OTP-9856</a><br>
Remember, this is a prototype and a reference implementation.<br>
<br>
There are a couple of issues that are not addressed, or at least left open-ended.<br>
<br>
* Should processes be able to set limits on other processes? I
think not, though my draft argues for it. It introduces unnecessary
constraints on erts and hinders performance. 'save_calls' is such
an option.<br>
<br>
* ets - if a table grows beyond some limit, whom should we
punish? The inserter? The owner? What would be the rationale? We
cannot just punish the inserter: the ets table would still be there,
consuming a lot of memory, and no other process could insert into the
table without being killed as well. Remove the owner, and hence
the table (and any potential heir)? What kinds of problems would arise
then? Limits should be tied into a supervision strategy that
restarts the whole thing.<br>
<br>
* In my draft and reference implementation I use soft limits. Once
a process reaches its limit it is marked for termination by
an exit signal. The trouble here is that there is no real guarantee of
how long this will take. A process can continue appending to a binary
for a short while and still end the beam with an OOM. (If I
remember correctly, under SMP you have to schedule a process out to
terminate it, so you need to bump all reductions. But not all
things handle return values from the garbage collector, most
notably the append_binary instruction.) There may be other
issues as well.<br>
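For comparison, here is a sketch of what a soft limit looks like when approximated from the outside with existing APIs only: a watchdog polling process_info(Pid, memory) and sending an exit signal. It has exactly the weakness described above: between polls, and even after the exit signal is sent, the process can keep growing. Function names are hypothetical.<br>

```erlang
%% Sketch: external watchdog approximating a per-process soft limit.
%% Polls the process's total memory and sends an exit signal when the
%% limit is exceeded. Note the race: the beam can still OOM before the
%% target is actually terminated.
watch(Pid, LimitBytes) ->
    case erlang:process_info(Pid, memory) of
        {memory, Bytes} when Bytes > LimitBytes ->
            exit(Pid, {memory_limit_exceeded, Bytes});
        {memory, _Bytes} ->
            receive after 100 -> ok end,   %% poll every 100 ms
            watch(Pid, LimitBytes);
        undefined ->
            ok                             %% process already gone
    end.
```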
<br>
* Message queues. In the current implementation of message queues
we have two queues: an inner one, which is locked by the receiving
process while it is executing, and an outer one, which other processes
use so that they do not compete for a message-queue lock with the
executing process. When the inner queue is depleted, the receiving
process locks the outer queue and moves the entire thing to the
inner one. Rinse and repeat. The only guarantee our implementation
has to ensure is signal order between two processes.
So, in the future we might have several queues to improve
performance. If you introduce monitoring of the total number of
messages in the abstracted queue (all the queues), this will most
probably kill any sort of scalability. For instance, a sender would
not be allowed to check the inner queue for this reason. Would a
"fast" counter check in the inner queue be allowed? Perhaps, if it
is fast enough, but any sort of bookkeeping costs performance. If
we introduce even more queues for scalability reasons, this will
cost even more.<br>
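To make the cost concrete, here is a sketch of what sender-side bookkeeping would look like today, using process_info/2. The message_queue_len item has precisely the problem described above: it must account for the whole abstracted queue, so obtaining it synchronizes with the receiver. Function and error names are hypothetical.<br>

```erlang
%% Sketch: a sender refusing to send when the receiver's queue is over
%% a threshold. The process_info/2 call itself is the bookkeeping cost
%% being discussed -- it cannot be answered without touching the
%% receiver's queue state.
send_limited(Pid, Msg, MaxLen) ->
    case erlang:process_info(Pid, message_queue_len) of
        {message_queue_len, Len} when Len < MaxLen ->
            Pid ! Msg,
            ok;
        {message_queue_len, _Len} ->
            {error, queue_full};
        undefined ->
            {error, noproc}
    end.
```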
<br>
* What about other memory users? Drivers? NIFs?<br>
<br>
I do believe in incremental development, as long as it is on a path to
the envisioned goal.<br>
And to reiterate, I'm not convinced that limits on just processes
are the way to go. I think a complete monitoring system should be
envisioned, not just for processes.<br>
<br>
// Björn-Egil<br>
<br>
On 2013-02-06 23:03, Richard O'Keefe wrote:<br>
</div>
<blockquote
cite="mid:85819AB4-AD04-401F-B814-88157B71BE07@cs.otago.ac.nz"
type="cite">
<pre wrap="">Just today, I saw Matthew Evans'
This pertains to a feature I would like to see
in Erlang. The ability to set an optional
"memory limit" when a process and ETS table is
created (and maybe a global optional per-process
limit when the VM is started). I've seen a few
cases where, due to software bugs, a process size
grows and grows; unfortunately as things stand
today the result is your entire VM crashing -
hopefully leaving you with a crash_dump.
Having such a limit could cause the process to
terminate (producing a OOM crash report in
erlang.log) and the crashing process could be
handled with supervisor rules. Even better you
can envisage setting the limits artificially low
during testing to catch these types of bugs early on.
in my mailbox. I have seen too many such e-mail messages.
Here's a specific proposal. It's time _something_ was done
about this kind of problem. I don't expect that my EEP is
the best way to deal with it, but at least there's going to
be something for people to point to.
</pre>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
eeps mailing list
<a class="moz-txt-link-abbreviated" href="mailto:eeps@erlang.org">eeps@erlang.org</a>
<a class="moz-txt-link-freetext" href="http://erlang.org/mailman/listinfo/eeps">http://erlang.org/mailman/listinfo/eeps</a>
</pre>
</blockquote>
<br>
</body>
</html>