On 27 Apr 2016, at 3:14 PM, Ulf Wiger wrote:

> On 27 Apr 2016, at 14:23, Valentin Micic <v@pharos-avantgard.com> wrote:
>
>> Are you saying that a process will be terminated regardless of how much actual memory we have within a system as a whole?
>>
>> Well, how useful would it be to kill a mission-critical process just because it is using slightly more memory than we had originally envisaged?
>
> Note that you are in full control of this - per process!
>
> If you have a mission-critical process that shouldn't be killed, you simply don't add this process flag.
>
> A great future addition would be allowing multiple tracers, so that you could e.g. put a GC tracer on processes that may need to be allowed to exceed a max size and not just be brutally killed for it.
>
> BR,
> Ulf W

A while back I was testing the behavior of Erlang's queue module "under stress conditions": I would enqueue, say, 500,000 elements, then remove them all, and subsequently enqueue another 100 elements.
My prediction was that memory would grow up to the point required by the 500,000 entries and then stop growing; that is to say, after dequeueing all 500,000 entries, the subsequent enqueueing of 100 elements would not cause new memory to be allocated.

This turned out to be a naive expectation, as the memory just kept growing (and, in some way, that made sense, but let me stick to the topic). (*)
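For illustration, a minimal sketch of that kind of test - the module name, the element shape, and the reporting are made up for this example; only the 500,000/100 figures come from the description above:

    -module(queue_mem_test).
    -export([run/0]).

    %% Fill the queue, drain it completely, then add a few more elements,
    %% printing the process memory footprint at each step.
    run() ->
        Q0 = fill(queue:new(), 500000),
        report("after enqueueing 500,000 elements"),
        Q1 = drain(Q0),
        report("after dequeueing them all"),
        _Q2 = fill(Q1, 100),
        report("after enqueueing another 100 elements"),
        ok.

    fill(Q, 0) -> Q;
    fill(Q, N) -> fill(queue:in({element, N}, Q), N - 1).

    drain(Q) ->
        case queue:out(Q) of
            {{value, _}, Q1} -> drain(Q1);
            {empty, Q1}      -> Q1
        end.

    report(Label) ->
        {memory, Bytes} = erlang:process_info(self(), memory),
        io:format("~s: ~p bytes~n", [Label, Bytes]).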
As a solution to this problem, I ended up doing something similar to what Lukas suggested in his reply to my earlier comment on this thread ("You can build this using erlang:system_monitor(self(), [{large_heap, Sz}])").

As I knew (more or less) what the size of an individual element was, I was able to track memory consumption as a function of the number of elements in the queue, and thus spot anomalies where an extraordinary amount of memory was held relative to the size of the queue, and induce a GC in order to free the excess memory. (A rough sketch of this approach is at the end of this message.)

* * *

I had two reasons for writing the above introduction.

First, I wanted to show a specific situation where a "trigger-happy, kill-it" approach would not do us any justice (and, yeah, yeah... I know. I don't have to use it).

Second, I would really like to learn about a specific situation where killing a process for exceeding its memory quota would actually make sense.
In my current view:

 - if you can afford to kill a process, maybe you shouldn't be running it in the first place;
 - nobody ever learned much from fixing problems by switching things on and off;
 - if you don't know what you're doing, maybe you should learn how to do it before doing it.

My suggestion to let the process know and decide what to do next was motivated by these views.

Ulf, for what it's worth, your suggestion to allow multiple tracers resonates quite well with me.
Using error_logger for this purpose presumes that reaching a memory quota is an actual error, and it doesn't take much of a leap of faith from there to conclude that it would be logical to terminate the "offender".
In my view, this situation should rather be classified as an "operating condition", and if we are going to spend system resources on tracking that condition, maybe we should consider a scenario that has a chance of providing the most bang for the CPU buck.

Kind regards

V/


(*) To be fair, I did this testing about 8 years ago; I may have missed something back then, and there is a strong possibility that the current VM handles memory management differently.
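P.S. For completeness, a minimal sketch of the large_heap/forced-GC idea described above. It uses a dedicated monitor process rather than self(); the threshold, the module name, and the unconditional garbage_collect are illustrative assumptions for this example, not the code I actually used:

    -module(heap_monitor).
    -export([start/1]).

    %% Install a system monitor that reports garbage collections leaving a
    %% process with a heap of at least SizeWords words, and force another
    %% GC on the offending process.
    start(SizeWords) ->
        Pid = spawn(fun loop/0),
        erlang:system_monitor(Pid, [{large_heap, SizeWords}]),
        Pid.

    loop() ->
        receive
            {monitor, GcPid, large_heap, Info} ->
                io:format("large_heap from ~p: ~p~n", [GcPid, Info]),
                %% In the real test the decision was based on the expected
                %% memory for the current number of queued elements; here
                %% we simply force a GC unconditionally.
                erlang:garbage_collect(GcPid),
                loop();
            stop ->
                ok
        end.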