[erlang-questions] Why system_flag(scheduler_bind_type) is deprecated?

Rickard Green <>
Tue Jan 8 10:32:55 CET 2013

The following is the idea of how it should work some time in the future. Note that this won't happen in R16, that it has not been implemented yet, and that it may be subject to change.

When the runtime system boots, a fixed cpu-topology is set. The cpu-topology is either automatically determined or configured by the user on the command line. Schedulers are then bound to logical processors according to the bind type argument passed. After this, the mapping between schedulers and logical processors is fixed and will never change.
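For context, the bind type is already chosen at emulator start-up with the +sbt flag, and the resulting mapping can be inspected from the shell. A sketch (the concrete outputs depend entirely on the host's topology):

```erlang
%% Started with, e.g.:  erl +sbt db   (default_bind)
%% Illustrative shell session; outputs are machine dependent.
1> erlang:system_info(scheduler_bind_type).
thread_no_node_processor_spread
2> erlang:system_info(scheduler_bindings).
{0,1,2,3}   %% logical processor id each scheduler is bound to
```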

This does, however, *not* mean that from this point on every logical processor must remain online. It will still be possible to take processors offline and bring them back online. The state of the scheduler mapped to a logical processor should normally follow the state of the logical processor; that is, when a processor is taken offline, so is its scheduler. The cpu-topology used should match the actual hardware topology of the machine when it is fully equipped, and the number of schedulers started should equal the number of logical processors in that cpu-topology.
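Taking schedulers offline at runtime is already possible via the schedulers_online system flag; under the model above, this is the knob whose state would follow processor hot-plug events. An illustrative session (the counts are machine dependent):

```erlang
1> erlang:system_info(schedulers).            %% schedulers started at boot
8
2> erlang:system_flag(schedulers_online, 4).  %% take four schedulers offline
8                                             %% returns the previous value
3> erlang:system_info(schedulers_online).
4
```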

This way we will be able to handle reasonable changes of the cpu-topology while keeping a fixed mapping between schedulers and logical processors. This fixed mapping is important since it simplifies the implementation of things like better NUMA support, load balancing that takes cpu-topology into account, etc. My guess is that using this strategy we will be able to handle hot-plug and power management scenarios on real hardware for quite some time without running into trouble.

As it is today, when the cpu-topology and/or scheduler bind type is changed at runtime, the mapping between schedulers and logical processors can completely transform into something that looks nothing like what it was before. This unnecessarily complicates things a lot. I don't think that we will see hardware that can be physically transformed this way in the near future; when that happens, we will just have to deal with it. Such transformations might perhaps occur in virtualized environments, but you don't want to bind schedulers if the cpu-topology doesn't match the actual physical hardware topology. This will only become more important as the runtime system utilizes more information about the cpu-topology.
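Whether binding makes sense can be judged by comparing the topology the emulator detected with the one actually in use; in a virtualized guest, for instance, the detected topology may not reflect the physical hardware. A sketch (output format and content are machine dependent):

```erlang
%% Illustrative output for a machine with one processor and two cores.
1> erlang:system_info({cpu_topology, detected}).
[{processor,[{core,{logical,0}},{core,{logical,1}}]}]
2> erlang:system_info({cpu_topology, used}).
[{processor,[{core,{logical,0}},{core,{logical,1}}]}]
```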

Another scenario where you typically don't want to bind schedulers is when the schedulers aren't guaranteed to get the major part of the CPU time of the processors they are bound to. We have in some cases seen severe performance degradation in such scenarios (which is why we no longer bind schedulers by default). In a virtualized environment it might also be hard to get such guarantees.
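The current unbound default can be confirmed at runtime; binding is strictly opt-in via the +sbt flag. A sketch:

```erlang
%% No +sbt flag given: schedulers are unbound by default.
1> erlang:system_info(scheduler_bind_type).
unbound
%% Opting in at start-up, e.g.:  erl +sbt db
```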

Rickard Green, Erlang/OTP, Ericsson AB

On Jan 4, 2013, at 12:54 PM, Angel J. Alvarez Miguel <> wrote:

> Hi,
> I looked at this:
> "...since we do not want to change this configuration in runtime..."
> This sounds a bit limiting.
> Current platforms can (and will) shut down CPU cores in response to power management policies,
> and daemonized Erlang software must adapt to these changes without needing a restart.
> While this may seem an improbable scenario on the server side right now, it becomes very relevant when
> you run Erlang on laptop/mobile platforms.
> Even some virtualization products are planning to use hot-plug facilities to make resource management easier on current
> platforms, so it's not really crazy today to think that CPU cores will come and go at unexpected times in typical cloud scenarios.
> IMHO the ability to sense this from user code, and facilities to control or alter the scheduler behaviour, will be more important
> in the near future; having to restart the entire VM for this seems overkill and a step backward from the current setup.
> I recently experienced this during the summer: when I unplugged my laptop, my power management script
> shut down half of the cores as the battery started to run out and lowered the CPU frequency, and Erlang responded
> on stderr with something about a scheduler failing to bind to the dead cores. I then managed to make some tests trying to
> monitor this from Erlang code and to put schedulers offline until the cores reappear again.
> I can even depict scenarios where, depending on licensing features, application code controls the amount of parallelism
> the VM exhibits by tweaking scheduler affinity or just shutting schedulers down to limit execution to the licensed cores...
> You should think about this even if you plan to make other improvements.
> Regards, Angel
> On Thursday, 27 December 2012 at 00:45:05, Rickard Green wrote:
>> On Dec 26, 2012, at 5:21 PM, Max Lapshin <> wrote:
>>> Why is system_flag(scheduler_bind_type, How) deprecated in favor of the +sbt
>>> flag?
>>> The +sbt flag has to be passed differently when launching via escript versus
>>> erl, which is why it is less convenient than the system_flag call.
>>> Also, the system_flag call can be made according to some system
>>> configuration file, whereas +sbt requires a full restart.
>> Both the 'cpu_topology' and the 'scheduler_bind_type' arguments of the
>> system_flag/2 BIF are deprecated and will be removed, since we do not want
>> this configuration to be changed at runtime. With the support we have
>> today, the runtime configuration change isn't that problematic, but it
>> prevents planned future improvements from being implemented.
>> Regards,
>> Rickard Green, Erlang/OTP, Ericsson AB
>> _______________________________________________
>> erlang-questions mailing list
>> http://erlang.org/mailman/listinfo/erlang-questions
