[erlang-questions] About Erlang SMP scheduler
Emilio De Camargo Francesquini
francesquini@REDACTED
Thu May 31 13:29:04 CEST 2012
Hello,
> check_balance() sets up
> migration paths and migration limits that are used in order to balance
> the load between schedulers. Without balancing, processes with the
> same assigned priority can get very different effective priorities.
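Just so I am sure I understand the behaviour you describe, here is a minimal
sketch of my own (module and function names are made up) that spawns a number
of CPU-bound processes so the balancing can be observed:

  %% Hypothetical sketch: spawn N CPU-bound processes and look at the
  %% total run-queue length. With balancing enabled the work should end
  %% up spread evenly over the online schedulers.
  -module(balance_probe).
  -export([run/1, busy/0]).

  busy() ->
      _ = lists:seq(1, 1000),          %% burn some reductions
      busy().

  run(N) ->
      Pids = [spawn(?MODULE, busy, []) || _ <- lists:seq(1, N)],
      timer:sleep(1000),
      %% statistics(run_queue) gives the total length of all run queues
      Total = erlang:statistics(run_queue),
      io:format("~p processes, total run queue length ~p~n",
                [length(Pids), Total]),
      Pids.

(The spawned processes never terminate, so they have to be killed afterwards
with exit(Pid, kill).)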
I have some questions about this balancing.
First, after the decision to rebalance has been made, what are the
criteria for selecting which processes from each queue will be migrated?
The last ones in the queue, the first ones, processes picked at random, ...?
Second, how are these migration paths and limits defined? Does the
definition of the migration paths take the architecture of the machine
(SMP, multi-socket SMP, NUMA, ...) into consideration? If we consider
that the lifespan of the majority of the processes is quite short, and
if we are running on a NUMA machine for example, migrating processes at
random to another NUMA node might hurt performance more than leaving
the run queues unbalanced (see the shell sketch below for how the
emulator reports the topology it detected).
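As far as I can tell, the emulator at least reports the topology it detected
and how the schedulers are currently bound. A rough shell sketch, with results
that of course depend on the machine and the emulator flags:

  1> erlang:system_info(cpu_topology).        % detected topology (processor/core/logical tuples)
  2> erlang:system_info(scheduler_bind_type). % how schedulers are bound ('unbound' unless set)
  3> erlang:system_flag(scheduler_bind_type, thread_spread).

What I am asking is whether the migration paths set up by check_balance()
take this information into account.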
Thanks!
Best Regards
Emilio Francesquini
2012/5/8 Rickard Green <rickard@REDACTED>:
> The work stealing is there to quickly distribute work between
> schedulers, but it doesn't balance the load. check_balance() sets up
> migration paths and migration limits that are used in order to balance
> the load between schedulers. Without balancing, processes with the
> same assigned priority can get very different effective priorities.
>
> Example: With 4 schedulers and 400 cpu bound processes executing the
> same code, each scheduler will eventually end up with 100 processes to
> manage. That is, all processes will have the same access to the cpu.
> If you disable check_balance(), you may end up with three schedulers
> only having one process each, and one scheduler with 397 processes.
> These 397 processes will effectively have a much lower priority than
> the other three processes.
>
> Regards,
> Rickard Green, Erlang/OTP, Ericsson AB
>
> 2012/5/2 Siyao Zheng(郑思遥) <zhengsyao@REDACTED>:
>> Hi,
>>
>> Yes, that's true. But the "try_steal_task()" afterwards is also controlled
>> by the ERTS_SMP macro. It doesn't make any sense to talk about workload
>> balance outside a multi-processor environment. I'm just wondering why
>> "check_balance" is needed when work stealing (try_steal_task) already exists.
>>
>> Cheers
>>
>> On May 2, 2012, at 12:04 PM, xu yu wrote:
>>
>> Hi,
>>
>> "check_balance" controlled by "ifdef ERTS_SMP", so...
>>
>>
>> 2012/4/27 "Siyao Zheng(郑思遥)" <zhengsyao@REDACTED>
>>>
>>> Hi,
>>>
>>> check_balance() is only called from schedule().
>>>
>>> In schedule(), when check_balance_reds reaches zero, the scheduler decides
>>> to "check_balance". But later in schedule(), when the scheduler finds its
>>> run queue empty, it will try to steal a task from other schedulers (by
>>> calling try_steal_task()), which is also a load-balancing mechanism. I'm
>>> just wondering what the purpose of check_balance() is.
>>>
>>> Cheers
>>> Siyao
>>>
>>> On Apr 27, 2012, at 9:40 PM, Zabrane Mickael wrote:
>>>
>>> > Hi Siyao,
>>> >
>>> > Which "check_balance" have you commented out?
>>> >
>>> > Here is what I found so far:
>>> >
>>> > $ find /otp_src_R15B01 -type f | xargs grep check_balance | grep -v matches
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: int forced_check_balance;
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c:check_balance(ErtsRunQueue *c_rq)
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c:# error check_balance() assumes ERTS_MAX_PROCESS < (1 << 27)
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: c_rq->check_balance_reds = INT_MAX;
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: c_rq->check_balance_reds = INT_MAX;
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: rq->check_balance_reds = ERTS_RUNQ_CALL_CHECK_BALANCE_REDS;
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: * check_balance() is never called in more threads
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: forced = balance_info.forced_check_balance;
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: balance_info.forced_check_balance = 0;
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: c_rq->check_balance_reds = INT_MAX;
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: rq->check_balance_reds = INT_MAX;
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: rq->check_balance_reds = ERTS_RUNQ_CALL_CHECK_BALANCE_REDS;
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: rq->check_balance_reds = ERTS_RUNQ_CALL_CHECK_BALANCE_REDS;
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: balance_info.forced_check_balance = 0;
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: (RQ)->check_balance_reds = ERTS_RUNQ_CALL_CHECK_BALANCE_REDS; \
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: balance_info.forced_check_balance = 1;
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: ERTS_RUNQ_IX(0)->check_balance_reds = 0;
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: if (rq->check_balance_reds <= 0)
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.c: check_balance(rq);
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.h: int check_balance_reds;
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.h: (RQ)->check_balance_reds -= (REDS); \
>>> > /opt/otp_src_R15B01/erts/emulator/beam/erl_process.h: (RQ)->check_balance_reds -= (REDS);
>>> > [...]
>>> >
>>> > For the OTP team: is this dangerous?
>>> >
>>> > Regards,
>>> > Zabrane
>>> >
>>> >
>>> > On Apr 27, 2012, at 1:23 PM, Siyao Zheng(郑思遥) wrote:
>>> >
>>> >> Hi,
>>> >>
>>> >> In the SMP version of Erlang, every scheduler periodically calls
>>> >> "check_balance()" to check the load balance among all schedulers; it might
>>> >> then migrate some processes to balance the load, or shut down some
>>> >> schedulers with low load. Does anyone know why a scheduler should do this?
>>> >> check_balance() is quite a big function, and it has to lock every run queue
>>> >> while it inspects them, which I think is quite a big cost. The work stealing
>>> >> at each scheduling step afterwards actually balances the workload very well.
>>> >> After I commented out the check_balance() step and ran the BigBang and
>>> >> Hackbench benchmarks, they were really faster!
>>> >>
>>> >> So, I wonder what the purpose of the check_balance() step is here, whether
>>> >> it is related to scalability on many-core processors or something else, and
>>> >> whether there is any other benchmark I can run to prove its usefulness.
>>> >>
>>> >> Cheers!
>>> >>
>>> >> Siyao
>>> >
>>> >
>>>
>>
>>
>>
>>
>>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions