<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Apr 28, 2016 at 1:27 AM, Richard A. O'Keefe <span dir="ltr"><<a href="mailto:ok@cs.otago.ac.nz" target="_blank">ok@cs.otago.ac.nz</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><br>
<br>
> On 27/04/16 7:12 PM, Lukas Larsson wrote:
>> https://github.com/erlang/otp/pull/1032
>
> Where is the documentation?

The documentation is part of the pull request; the options and behavior are described here:
https://github.com/erlang/otp/pull/1032/commits/629c6c0a4aea094bea43a74ca1c1664ec1041e43#diff-0fc9fb0d3e12721dd1574a543916e8c6R4349

> - what are the units in which 'size' is measured?
> It would be *very* nasty if a program that worked fine in a 32-bit
> environment stopped working in a 64-bit environment because the
> size was in bytes.

It is measured in the same unit as min_heap_size and all other options that affect the process heap: the internal word size, i.e. erlang:system_info({wordsize,internal}).
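
To make the unit concrete, here is a minimal sketch of setting a limit on the current process, assuming the option keeps the shape described in the pull request; the 10 MB budget and the module name are just illustrative:

    -module(heap_limit_example).
    -export([limit_self/0]).

    %% max_heap_size is given in words, not bytes, so convert a rough byte
    %% budget using the internal word size.  The 10 MB figure is made up;
    %% the map keys follow the options described in the pull request.
    limit_self() ->
        WordSize = erlang:system_info({wordsize, internal}),
        MaxWords = (10 * 1024 * 1024) div WordSize,
        process_flag(max_heap_size, #{size => MaxWords,
                                      kill => true,
                                      error_logger => true}).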

> - are size, kill, and error_logger the only suboptions?

At the moment, yes.

> - what is the performance cost of this?

If not enabled, it costs one branch per GC. If enabled, most of the calculations have to be done anyway in order to allocate the new to-space for the collector, so not much extra calculation is needed there either. It also costs one machine word of memory per process.

> (Presumably it gets checked only after a garbage collection,
> prior to increasing the size of a process.)

It gets checked after what may be called the initialization phase of the GC. The first thing the GC does is calculate how large a to-space is needed for it to do its job. After this calculation is done, the new code checks whether the total heap size during collection will exceed the max heap size; if it does, the appropriate action is taken before the collector starts. If that action is that the process should be killed, the collection does not start at all.
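
To spell out that sequence, here is a rough Erlang-flavoured sketch of the decision. The real check is implemented in C inside the garbage collector, and every name below is hypothetical:

    %% Illustrative pseudocode only -- the actual logic lives in the C
    %% garbage collector.  All function names and atoms are made up.
    check_before_gc(CurrentHeapWords, NeededToSpaceWords,
                    #{size := Max, kill := Kill}) when Max > 0 ->
        case CurrentHeapWords + NeededToSpaceWords > Max of
            true when Kill -> kill_process;      %% collection never starts
            true           -> log_and_collect;   %% report, then let the GC run
            false          -> collect
        end;
    check_before_gc(_CurrentHeapWords, _NeededToSpaceWords, _Opts) ->
        collect.                                 %% no limit configured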

> - I get twitchy when a parameter that can result in the death of
> a process is defined as the sum of something that *is* under
> the process's control and something that is *not*, and further,
> how much of that uncontrolled stuff is counted is determined
> by yet another flag that wasn't there in 18.2. As it is, the
> effect is that a process can be killed because *another* process
> is doing something bad. What, if anything, can be done to
> prevent that needs to be explained.

Yes indeed. This is one of the reasons I'm skeptical about the usefulness of the option. It can effectively protect against heap growth caused by bad code in the process, for instance calling binary_to_term on something unexpectedly large that someone on the internet sent you. It may catch some cases where the message queue grows huge, but if you have processes that may grow huge message queues you probably want to use the new `off_heap` message queue data option anyway; it guarantees that the messages in the queue are not part of the heap, which in turn means they will not be counted towards the max_heap_size.
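
For what it's worth, a sketch of combining the two options when spawning a process; the worker body and the 2,000,000-word limit are placeholders, and the option names assume the message_queue_data work lands as currently proposed:

    %% With message_queue_data set to off_heap, queued messages are kept
    %% outside the process heap and so should not count towards the
    %% max_heap_size limit.  The limit value here is arbitrary.
    spawn_opt(fun() -> timer:sleep(infinity) end,
              [{message_queue_data, off_heap},
               {max_heap_size, #{size => 2000000,
                                 kill => true,
                                 error_logger => true}}]).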

> - Guidelines about how to choose sensible sizes would be valuable.
> No, wait. They are *indispensable*.
>
> Of *course* this is useful, but it's starting to smell like
> pthread_attr_setstacksize(), where there is *no* way, given even
> complete knowledge of the intended behaviour of the code and
> sensible bounds on the amount of data to be processed, that
> you can determine a *safe* stack size by anything other than
> experiment. You are entirely at the mercy of the implementation,
> and the C implementation has no mercy.

The main reason that setstacksize is so hard to get right is that you don't want to put the limit too high, as you would then waste that memory. So you want to put it as close as possible to your actual maximum stack size, but you have to make very sure that you don't give too little. The analogy to ulimit seems more appropriate: you can put the limit well above (one or two orders of magnitude) what you expect the process to use and still catch it before the VM is brought down due to running out of memory.
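
As a concrete (and purely suggestive) way of applying that rule of thumb: observe a process under realistic load and derive the limit from what you see, rather than trying to compute it exactly.

    %% Measure the live heap of a process under realistic load and pad it
    %% by two orders of magnitude.  The 100x factor is an arbitrary safety
    %% margin, not an official guideline.
    suggest_max_heap_size(Pid) ->
        {total_heap_size, Words} = erlang:process_info(Pid, total_heap_size),
        100 * Words.

The right margin obviously depends on how spiky the process's memory use is; the point is only that, unlike a stack size, the limit does not have to be tight to be useful.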

> I'd personally be happier with something like ulimit(), where there
> is a hard limit (the one where you kill the process) and a soft
> limit (where you raise() an exception in the process to let it know
> there's going to be a problem soon).

We talked quite a lot about having the option of raising an exception when the max heap size is reached, but decided against it as it would mean that all code that you write and all libraries you use have to expect the exception. Any old code with a catch-all would catch the max_heap_size exception and possibly hide that the error ever happened. The semantics also became very convoluted once we started looking at the details of how such an exception might work.

I'm unsure how useful having a soft limit that sends a message would be. It is, however, something that we may add in the future if a good use case for it is presented.

Lukas