[erlang-questions] Max heap size

Valentin Micic v@REDACTED
Thu Apr 28 11:56:34 CEST 2016


On 27 Apr 2016, at 3:14 PM, Ulf Wiger wrote:

> 
>> On 27 Apr 2016, at 14:23, Valentin Micic <v@REDACTED> wrote:
>> 
>> Are you saying that a process will be terminated regardless of how much actual memory we have within a system as a whole?
>> 
>> Well, how useful would it be to kill a mission-critical process just because it is using slightly more memory than we had originally envisaged?
>> 
> 
> Note that you are in full control of this - per process!
> 
> If you have a mission-critical process that shouldn’t be killed, you simply don’t add this process flag.
> 
> A great future addition would be allowing multiple tracers, so that you could e.g. put a GC tracer on processes that may need to be allowed to exceed a max size and not just be brutally killed for it.
> 
> BR,
> Ulf W
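
For reference, the flag under discussion is set per process, roughly like this (a sketch; the size is arbitrary and is counted in words, not bytes):

    %% Kill the process if its total heap size exceeds ~1 M words,
    %% and report the event through error_logger.
    process_flag(max_heap_size, #{size         => 1000000,
                                  kill         => true,
                                  error_logger => true}).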


A while back, I was testing the behavior of Erlang's queue module "under stress conditions": I would enqueue, say, 500,000 elements, then remove them all, and subsequently enqueue another 100 elements.
My prediction was that memory would grow up to a point (as required by 500,000 entries) and then stop growing; that is to say, after dequeueing all 500,000 entries, the subsequent enqueueing of 100 elements would not cause new memory to be allocated.

This turned out to be a naive expectation, as the memory just kept growing (and, in some way, that made sense, but let me stick to the topic). (*)
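
In rough terms, the test looked like this (a sketch from memory, not the original code):

    %% Fill the queue, drain it, then add a few more elements while
    %% watching this process's heap.
    test_queue() ->
        Q0 = lists:foldl(fun(N, Q) -> queue:in(N, Q) end,
                         queue:new(), lists:seq(1, 500000)),
        Q1 = drain(Q0),
        _Q2 = lists:foldl(fun(N, Q) -> queue:in(N, Q) end,
                          Q1, lists:seq(1, 100)),
        erlang:process_info(self(), total_heap_size).

    drain(Q) ->
        case queue:out(Q) of
            {{value, _}, Q1} -> drain(Q1);
            {empty, Q1}      -> Q1
        end.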

As a solution to this problem, I ended up doing something similar to what Lukas suggested in his reply to my earlier comment on this thread ("You can build this using erlang:system_monitor(self(), [{large_heap, Sz}])").
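
A minimal sketch of that approach (the threshold is arbitrary and given in words):

    %% Receive a message whenever any process garbage-collects with a
    %% heap larger than the given threshold.
    start_monitor() ->
        spawn(fun() ->
                  erlang:system_monitor(self(), [{large_heap, 1000000}]),
                  monitor_loop()
              end).

    monitor_loop() ->
        receive
            {monitor, Pid, large_heap, Info} ->
                %% Decide here: log it, force a GC, notify the process...
                io:format("large_heap: ~p ~p~n", [Pid, Info]),
                monitor_loop()
        end.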

As I knew (more or less) what the size of an individual element was, I was able to track memory consumption as a function of the number of elements in the queue, and thus detect anomalies where the process held an extraordinary amount of memory relative to the size of the queue, and induce GC in order to free the excess memory.
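
The corrective step amounted to something like this (the helper name, the element size parameter, and the factor of 2 are illustrative):

    %% If the queue owner's heap holds far more than QueueLen elements
    %% of ~ElemSize words each should require, force a collection.
    maybe_gc(Pid, QueueLen, ElemSize) ->
        case erlang:process_info(Pid, total_heap_size) of
            {total_heap_size, Words} when Words > 2 * QueueLen * ElemSize ->
                erlang:garbage_collect(Pid);
            _ ->
                ok
        end.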

* * *

I had two reasons for writing the above introduction.

First, I wanted to show a specific situation where a "trigger-happy-kill-it" approach would not do us any justice (and, yeah, yeah... I know, I don't have to use it).

Second, I would really like to learn about a specific situation where killing a process for exceeding its memory quota would actually make sense.
In my current view:

   - if you can afford to kill a process, maybe you shouldn't be running it in the first place;
   - nobody learned much from fixing problems by switching things on and off;
   - if you don't know what you're doing, maybe you should learn how to do it before doing it.
   
My suggestion to let the process know and decide what to do next was motivated by these views.

Ulf, for what it's worth, your suggestion to add additional tracers resonates quite well with me.
Using error_logger for that purpose presumes that reaching a memory quota is an actual error, and it takes no great leap of faith from there to conclude that it would be logical to terminate the "offender".
In my view, this situation should rather be classified as an "operating condition", and if we are to spend any system resources tracking that condition, maybe we should consider a scenario that has a chance of providing the most bang for the CPU buck.

Kind regards

V/


(*) To be fair, I did this testing about 8 years ago, so I may have missed something back then, and there is a strong possibility that the current VM handles memory management in a different way.

