[erlang-questions] Supervisor post start update of restart intensity and period

Michael Wright <>
Tue Oct 20 17:15:14 CEST 2015

I think most of the time the isolation is, as you say, exactly what one wants.

The supervisor is great at allowing you to structure a supervision tree
(supervisors supervising other supervisors), and great at letting you
define appropriate behaviour for a set of related / interacting /
interoperating / dependent processes (by way of the different restart
strategies), but in both these cases the number of children is fixed.

Supervisor is also great for many simple_one_for_one cases where the number
of children is dynamic, but the ability to set an ideal (at least
ideologically ideal) restart intensity is weakened when one doesn't know how
many children there will be. When the other conditions from my original
email are met, I'm stuck with either a real compromise (fine as long as not
too many children crash) or supervising the children another way.
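For reference, the intensity and period in question are fixed in the
supervisor's init/1 return and cannot be changed afterwards. A minimal
simple_one_for_one sketch (module and child names are invented for
illustration; the map form of sup flags assumes OTP 18+):

```erlang
%% Minimal simple_one_for_one supervisor sketch. The restart
%% intensity (10 restarts) and period (5 seconds) are fixed here
%% at init time, regardless of how many children are later started
%% dynamically with supervisor:start_child/2.
-module(worker_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    SupFlags = #{strategy => simple_one_for_one,
                 intensity => 10,   %% more than 10 restarts...
                 period => 5},      %% ...within 5 seconds kills the sup
    ChildSpec = #{id => worker,
                  start => {worker, start_link, []},
                  restart => transient},
    {ok, {SupFlags, [ChildSpec]}}.
```

Whether 10 restarts in 5 seconds is "too many" clearly depends on whether
the supervisor ends up with 10 children or 10,000, which is the compromise
described above.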

Where the children are not homogeneous, they should probably be split into 2
simple_one_for_one supervisors supervised by another supervisor with a
strategy appropriate to the relationship between the 2 dynamic sets of
children, so then the supervisor as it is MAY be optimal. Where the
criticality is not spread (i.e. 10 children have similar overall value, in
terms of service provision, to 100 children), another solution may be
appropriate (probably less variation in the number of children).
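That split might look like the following sketch: a parent supervisor owning
two simple_one_for_one supervisors, one per homogeneous set. The module
names and the choice of one_for_all are invented for illustration; the
parent's strategy is where the relationship between the two dynamic sets
gets encoded.

```erlang
%% Sketch: a parent supervisor with two simple_one_for_one child
%% supervisors (a_sup, b_sup), each managing one homogeneous set of
%% dynamic children. If either set's supervisor dies, one_for_all
%% restarts both, reflecting a dependency between the two sets.
-module(pair_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    SupFlags = #{strategy => one_for_all, intensity => 3, period => 10},
    Children = [#{id => a_sup,
                  start => {a_sup, start_link, []},
                  type => supervisor},
                #{id => b_sup,
                  start => {b_sup, start_link, []},
                  type => supervisor}],
    {ok, {SupFlags, Children}}.
```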

It wouldn't be terribly difficult to write a module to supervise precisely
as I want, but since supervisor would do what I wanted with the proposed
modification, I considered it worth gauging interest in the addition. No one
as yet seems greatly troubled by the absence of the feature though I must
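From the caller's side, the proposed modification might look something like
this. To be clear: supervisor:set_flags/2 does NOT exist in OTP; both the
function name and its shape are invented here purely to illustrate the
feature being discussed.

```erlang
%% HYPOTHETICAL API -- supervisor:set_flags/2 is not part of OTP.
%% This only sketches what a post-start update of restart intensity
%% and period could look like if the proposed modification existed.
%% (A workaround today would be sys:replace_state/2, but that reaches
%% into the supervisor's internal state record and is therefore
%% version-fragile and unsupported.)
update_intensity(Sup, NewIntensity, NewPeriod) ->
    supervisor:set_flags(Sup, #{intensity => NewIntensity,
                                period => NewPeriod}).
```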


On Tue, Oct 20, 2015 at 2:01 PM, zxq9 <> wrote:

> On Tuesday, 20 October 2015 12:33:34, Michael Wright wrote:
> > Hi Torben,
> >
> > I did wonder about this as a solution, but I'm not terribly keen.
> >
> > Take the case of 10 sup_10 supervisors with a restart intensity of 10,
> each
> > with 10 children. If there are 11 child deaths for children concentrated
> on
> > one of those supervisors, it will trigger a sup_10 restart, but if the 11
> > children that die are distributed across 2 or more sup_10 supervisors, it
> > won't... The sup_10 restart probably isn't a problem of course, but the
> > number of total deaths in a period of time that will cause a sup_sup to
> > restart is now variable, depending on exactly which of the children
> across
> > the sup_10 supervisors die.
> >
> > In fact, in this situation, 11 child deaths could cause a sup_10 death,
> or
> > 100 child deaths could just about cause no sup_10 to die.
> With your initial post I thought "hrm, that is sort of odd that it isn't
> dynamically configurable" but the only scenarios I could think of off-hand
> for actual systems I would maybe actually use this were ones where I want
> precisely the sort of isolation you view as problematic.
> As it stands, Torben's suggestion where a sup_sup can spawn dynamically
> configurable supervisors seems ideal -- especially considering that I could
> retire an existing sup (with the "wrong" configuration) and direct all new
> child creation to the new one (with the "right" configuration) -- and, hot
> updates aside, probably smoothly transition a running process' state to a
> new process under the new supervisor. There could easily be edge cases
> where that wouldn't work, but the general case seems straightforward.
> It would be nice to abstract this all away for the general case, of
> course, and that doesn't seem to require making any adjustments to OTP.
> But I lack imagination. In what case would this not work?
> -Craig
> _______________________________________________
> erlang-questions mailing list
> http://erlang.org/mailman/listinfo/erlang-questions
