<div dir="ltr">One use I can see for this option is during stress testing where if one was left scratching one's head about where a problem lay, adding this option into some of the suspect processes might give some clues. Or, adding this option to some long lived processes might highlight hotspots in the system where one wasn't expecting a hotspot.<div><br></div><div>I must admit I'm finding it difficult to imagine using this option sensibly. Partly because in a language where a user is not expected to do any memory management, it is hard to really understand how the heap size is growing. And as memory management in beam becomes more and more complex (quite rightly), it will be harder to keep track of this.</div><div><br></div><div>Admittedly, for high performance systems there is no shying away from understanding all this but still this option seems a bit 'artificial'. In the ticket on Github I raised the issue of bounded message queues (again). I think for the average Erlang programmer, that is a more intuitive thing to understand. Too many messages in a message queue means that my process is a bottleneck and I need to redesign. Off-heap message queues which are coming in R19 don't really help with this - regardless of how efficient we make the behaviour of a process with large message queues, it is not helping us fix the root cause of the problem which is lack of flow control for message passing.</div><div><br></div><div>Again, I understand that there is no easy way to deliver this feature, but in my view that would be more useful.</div><div><br></div><div>regards,</div><div>Chandru</div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 28 April 2016 at 10:56, Valentin Micic <span dir="ltr"><<a href="mailto:v@pharos-avantgard.com" target="_blank">v@pharos-avantgard.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><br><div><div>On 27 Apr 2016, at 3:14 PM, Ulf Wiger wrote:</div><br><blockquote type="cite"><div style="word-wrap:break-word"><span class=""><br><div><blockquote type="cite"><div>On 27 Apr 2016, at 14:23, Valentin Micic <<a href="mailto:v@pharos-avantgard.com" target="_blank">v@pharos-avantgard.com</a>> wrote:</div><br><div><div style="font-family:Helvetica;font-size:12px;font-style:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px">Are you saying that a process will be terminated regardless of how much actual memory we have within a system as a whole?</div><div style="font-family:Helvetica;font-size:12px;font-style:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px"><br></div><div style="font-family:Helvetica;font-size:12px;font-style:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px">Well, how useful would it be to kill a mission critical process just because it is using slightly more memory that we have originally envisaged?</div><br></div></blockquote></div><br></span><div>Note that you are in full control of this - per process!</div><div><br></div><div>If you have a mission-critical process that shouldn’t be killed, you simply don’t add this process flag.</div><div><br></div><div>A great future addition would be allowing multiple tracers, so that you could e.g. 
>
> * * *
>
> I had two reasons for writing the above introduction.
>
> First, I wanted to show a specific situation where a trigger-happy, kill-it approach would not do us any justice (and, yeah, yeah... I know, I don't have to use it).
>
> Second, I would really like to learn about a specific situation where killing a process for exceeding its memory quota would actually make sense.
> In my current view:
>
>  - if you can afford to kill a process, maybe you shouldn't be running it in the first place;
>  - nobody ever learned much from fixing problems by switching things on and off;
>  - if you don't know what you're doing, maybe you should learn how to do it before doing it.
>
> My suggestion to let the process know and decide what to do next was motivated by these views.
>
> Ulf, for what it's worth, your suggestion to add additional tracers resonates quite well with me.
> Using error_logger for that purpose presumes that reaching a memory quota is an actual error, and from there it takes no leap of faith to conclude that it would be logical to terminate the "offender".
> In my view, this situation should rather be classified as an operating condition, and if we are going to spend any system resources tracking that condition, maybe we should consider a scenario that has a chance to provide the most bang for the CPU buck.
>
> Kind regards
>
> V/
>
> (*) To be fair, I did this testing about 8 years ago and I may have missed something at the time; there is also a strong possibility that the current VM handles memory management differently.
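To make the option under discussion concrete: below is a minimal sketch of how a process might opt in, assuming the map-based interface from the OTP 19 proposal (the quota is given in words; the quota_demo module and the numbers are made up for illustration). Per Ulf's point, a mission-critical process would simply never set the flag.

    -module(quota_demo).
    -export([start/0]).

    %% Spawn a worker that the VM kills, with an error_logger report,
    %% once its total heap exceeds roughly one million words.
    start() ->
        spawn(fun() ->
                  process_flag(max_heap_size,
                               #{size => 1000000,
                                 kill => true,
                                 error_logger => true}),
                  grow([])
              end).

    %% Accumulate data indefinitely to force heap growth, so the
    %% quota is eventually hit and the process is terminated.
    grow(Acc) ->
        grow([lists:seq(1, 1000) | Acc]).

If the proposal is taken as shipped, setting kill => false with error_logger => true would turn the same quota into a logged warning rather than a kill, which is closer to the "operating condition" Valentin argues for.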