[erlang-questions] Max heap size

Chandru chandrashekhar.mullaparthi@REDACTED
Thu Apr 28 12:34:02 CEST 2016


One use I can see for this option is during stress testing: if one is left
scratching one's head about where a problem lies, adding this option to some
of the suspect processes might give some clues. Or, adding it to some
long-lived processes might highlight hotspots in the system where one wasn't
expecting any.
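
For what it's worth, this is roughly how I imagine it being used in such a
test (a sketch only; I'm assuming the flag takes a per-process size limit in
words, plus switches for killing the process and/or logging an error):

    %% Sketch: put a heap limit on a suspect worker while stress testing.
    %% Both the option shape and the 2,000,000-word limit are assumptions.
    start_suspect_worker(Fun) ->
        spawn_opt(Fun, [{max_heap_size, #{size => 2000000,
                                          kill => true,
                                          error_logger => true}}]).

    %% Or, in a long-lived process, log when the limit is exceeded but
    %% keep the process alive, to highlight unexpected hotspots:
    %% process_flag(max_heap_size, #{size => 2000000,
    %%                               kill => false,
    %%                               error_logger => true}).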

I must admit I'm finding it difficult to imagine using this option sensibly,
partly because in a language where the user is not expected to do any memory
management, it is hard to really understand how the heap size grows. And as
memory management in the BEAM becomes more and more complex (quite rightly),
it will only get harder to keep track of.

Admittedly, for high-performance systems there is no shying away from
understanding all this, but this option still seems a bit 'artificial'. In
the ticket on GitHub I raised the issue of bounded message queues (again).
I think that is a more intuitive thing for the average Erlang programmer to
understand: too many messages in a message queue means that my process is a
bottleneck and I need to redesign. Off-heap message queues, which are coming
in R19, don't really help with this - regardless of how efficient we make
the behaviour of a process with a large message queue, it does not help us
fix the root cause of the problem, which is the lack of flow control for
message passing.
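
To illustrate what I mean, the kind of back-pressure we end up hand-rolling
today looks something like this (a sketch only; maybe_send/3 and the
queue-length threshold are made up for illustration):

    %% Sketch of ad-hoc flow control: peek at the consumer's queue length
    %% and shed load instead of casting blindly. This is exactly the sort
    %% of thing a built-in bound on message queues would make unnecessary.
    maybe_send(Consumer, Msg, MaxQLen) ->
        case erlang:process_info(Consumer, message_queue_len) of
            {message_queue_len, N} when N < MaxQLen ->
                Consumer ! Msg,
                ok;
            {message_queue_len, _TooMany} ->
                {error, overloaded};
            undefined ->
                {error, noproc}
        end.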

Again, I understand that there is no easy way to deliver this feature, but
in my view that would be more useful.

regards,
Chandru



On 28 April 2016 at 10:56, Valentin Micic <v@REDACTED> wrote:

>
> On 27 Apr 2016, at 3:14 PM, Ulf Wiger wrote:
>
>
> On 27 Apr 2016, at 14:23, Valentin Micic <v@REDACTED> wrote:
>
> Are you saying that a process will be terminated regardless of how much
> actual memory we have within the system as a whole?
>
> Well, how useful would it be to kill a mission-critical process just
> because it is using slightly more memory than we originally envisaged?
>
>
> Note that you are in full control of this - per process!
>
> If you have a mission-critical process that shouldn’t be killed, you
> simply don’t add this process flag.
>
> A great future addition would be allowing multiple tracers, so that you
> could e.g. put a GC tracer on processes that may need to be allowed to
> exceed a max size and not just be brutally killed for it.
>
> BR,
> Ulf W
>
>
> A while back I was testing the behaviour of Erlang's queue module
> "under stress conditions": I would enqueue, say, 500,000 elements, then
> remove them all, and subsequently enqueue another 100 elements.
> My prediction was that the memory would grow up to a point (as required
> by the 500,000 entries) and then stop growing; that is to say, after
> dequeueing all 500,000 entries, the subsequent enqueueing of 100 elements
> would not cause new memory to be allocated.
>
> This turned out to be a naive expectation, as the memory just kept growing
> (and, in some way, that made sense, but let me stick to the topic). (*)
>
> To solve this problem I ended up doing something similar to what
> Lukas suggested in his reply to my earlier comment on this thread ("You
> can build this using erlang:system_monitor(self(), [{large_heap, Sz}])").
>
> As I knew (more or less) what the size of an individual element was, I
> was able to track memory consumption as a function of the number of elements
> in the queue, and thus spot anomalies where I had an extraordinary amount of
> memory relative to the size of the queue, and induce a GC in order to free
> the excess memory.
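>
> Roughly, the shape of that workaround was something like this (a sketch
> from memory; the actual threshold and the comparison against the queue
> length are elided):
>
>     %% Ask the runtime to send this process a message whenever any
>     %% process garbage-collects into a heap larger than Words words.
>     install_heap_monitor(Words) ->
>         erlang:system_monitor(self(), [{large_heap, Words}]),
>         monitor_loop().
>
>     monitor_loop() ->
>         receive
>             {monitor, Pid, large_heap, _Info} ->
>                 %% In the real version, the reported heap size was
>                 %% compared with what the queue length justified, and a
>                 %% collection was forced only when the excess looked
>                 %% unreasonable.
>                 erlang:garbage_collect(Pid),
>                 monitor_loop();
>             _Other ->
>                 monitor_loop()
>         end.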
>
> * * *
>
> I had two reasons for writing the above introduction.
>
> First, I wanted to show a specific situation where a "trigger-happy,
> kill-it" approach would not do us any justice (and, yeah, yeah... I know, I
> don't have to use it).
>
> Second, I would really like to learn about a specific situation where
> killing a process for exceeding its memory quota would actually make
> sense.
> In my current view:
>
>    - if you can afford to kill a process, maybe you shouldn't be running
> it in the first place;
>    - nobody learned much from fixing problems by switching things on and
> off;
>    - if you don't know what you're doing, maybe you should learn how to do
> it before doing it.
>
> My suggestion to let the process know and decide what to do next was
> motivated by these views.
>
> Ulf, for what it's worth, your suggestion to add additional tracers
> resonates quite well with me.
> Using error_logger for that purpose presumes that reaching a
> memory quota is an actual error, and it doesn't take any leap of faith
> from there to conclude that it would be logical to terminate the "offender".
> In my view, this situation should rather be classified as an "operating
> condition", and if we are going to spend system resources tracking that
> condition, maybe we should consider the scenario which has a chance to
> provide the most bang for the CPU buck.
>
> Kind regards
>
> V/
>
>
> (*) To be fair, I did this testing about 8 years ago and I may have missed
> something back then, and there is a strong possibility that the current VM
> handles memory management in a different way.
>
>
>
>

