[erlang-patches] Pollset per scheduler and bind port to scheduler

Lukas Larsson lukas@REDACTED
Tue Aug 14 09:56:07 CEST 2012


On 13/08/12 08:46, Wei Cao wrote:
> 2012/8/8 Lukas Larsson <lukas@REDACTED>:
>> Hello!
>>
>> Sorry for the lack of replies, I've been away from the office the last two
>> weeks.
>>
>> We have done some changes in the master which could affect the performance
>> of the work you did:
>> https://github.com/erlang/otp/commit/8457bb3aeb335733d22ab6d517fe173cd90b4f55
>> Could you check and see if these changes affect any of your benchmarks?
> Awesome: in my load test against a non-keepalive HTTP server
> (reaching 15k requests per second), the number of times schedulers
> are woken up from poll wait to process aux work dropped from 205k
> per second to 83k per second, a dramatic reduction.
>
> According to my profiling, the big saving primarily comes from using
> thread progress instead of scheduling misc aux work
> https://github.com/erlang/otp/commit/88126e785de24f5f41068c610bc13840dcab4a7d.
>
> However, compared to the number of times schedulers are woken up to
> handle actual I/O tasks (about 10k), 83k is still a big number, and I
> observed that about half of them, roughly 40k, come from
> erts_alloc_notify_delayed_dealloc. Can this be reduced further?
It is definitely possible to reduce this further. For instance, you could 
introduce a reduction counter to limit the number of times 
handle_delayed_aux_work_wakeup is called.

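One way to approximate that (batching the notifications with a plain 
counter rather than counting reductions; every name below is 
hypothetical, not the actual erl_alloc internals):

    #include <stdatomic.h>

    /* Hypothetical throttle: pay for a wakeup (the pipe write) only
     * once DD_WAKEUP_BATCH delayed-dealloc notifications have piled
     * up, instead of on every single one. */
    #define DD_WAKEUP_BATCH 64

    typedef struct scheduler Scheduler; /* stand-in for ErtsSchedulerData */
    void wake_scheduler(Scheduler *s);  /* the expensive part */

    static _Atomic int dd_pending;      /* per scheduler in a real patch */

    void notify_delayed_dealloc(Scheduler *s)
    {
        if (atomic_fetch_add(&dd_pending, 1) + 1 == DD_WAKEUP_BATCH) {
            atomic_store(&dd_pending, 0);
            wake_scheduler(s);
        }
    }
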
Another idea might be to let the schedulers sleep in select and only 
wake them after a timeout or a certain number of reductions have happened.

Or a combination of both.

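As a rough illustration of the timeout variant (plain poll(2) rather 
than the actual erts_check_io loop; the two handler names are assumed):

    #include <poll.h>

    /* Sleep with a bounded timeout so that low-priority aux work is
     * swept up when the timeout fires, without anyone writing to the
     * wakeup pipe.  Real I/O readiness still wakes us immediately. */
    #define AUX_SWEEP_MS 1           /* upper bound on aux-work latency */

    void handle_io_events(struct pollfd *fds, nfds_t nfds, int nready);
    void handle_pending_aux_work(void);

    void scheduler_wait(struct pollfd *fds, nfds_t nfds)
    {
        int n = poll(fds, nfds, AUX_SWEEP_MS);
        if (n > 0)
            handle_io_events(fds, nfds, n);
        /* Timeout (n == 0) or not, sweep the aux-work queues now. */
        handle_pending_aux_work();
    }

The balance is that an idle scheduler now wakes up to once per 
millisecond, trading some idle CPU for a bounded aux-work latency.
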
Unfortunately this is not prioritized work for us right now, but we 
would welcome any patches in the area. If you have any questions let me 
know and I'll do my best to dig out answers.

Lukas

>
>> It would indeed be better to interrupt the poll less often if possible.
>> However, measuring time is generally quite expensive, so we want to
>> avoid it as much as possible; maybe use a scheduler-specific counter?
>> It's a fine balance though, as we do not want to delay dealloc of
>> memory for too long.
>>
>> Lukas
>>
>>
>>
>> On 24/07/12 11:57, Wei Cao wrote:
>>> 2012/7/11 Lukas Larsson <lukas@REDACTED>:
>>>> Hi,
>>>>
>>>> The reason I'm skeptical about anything which binds processes/ports
>>>> to a scheduler is that it feels like a temporary solution; I would
>>>> much rather do a proper solution where the scheduler takes care of
>>>> these things for you. But as I said, internally we need to talk this
>>>> over when it is not in the middle of summer vacation.
>>>>
>>>> I did some benchmarking using ab and found basically the same figures
>>>> as you. The table below is with keep-alive; the values are requests
>>>> per second:
>>>>
>>>>                not-bound    bound
>>>> R15B01            44k        37k
>>>> master            44k        35k
>>>> master+mp         48k        49k
>>>> master+mp+pb      49k        55k
>>>>
>>>> [mp]:    multi-poll patch
>>>> [pb]:    port bind patch
>>>> [bound]: used {scheduler,I} to spread load
>>>>
>>>> Unfortunately I also found that the performance is seriously
>>>> degraded in the non-keepalive benchmark:
>>>>
>>>> R15B01        not-bound    8255
>>>> master+mp+pb  not-bound    7668
>>>> master+mp+pb  bound        5765
>>>>
>>> I found out why performance degrades in the non-keepalive benchmark:
>>> it's caused by waking schedulers up too frequently to do aux work.
>>>
>>> After applying the pollset-per-scheduler patch, each scheduler tends
>>> to wait in the poll operation when there is no heavy load. So if an
>>> aux work item arrives at this time, erts_check_io_interrupt() is
>>> called and writes a byte to a pipe fd to wake the scheduler up from
>>> the poll operation.
>>>
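>>> For illustration, this boils down to the classic self-pipe trick (a
>>> hypothetical sketch, not the actual erts_check_io_interrupt() code):
>>>
>>>     #include <fcntl.h>
>>>     #include <unistd.h>
>>>
>>>     /* The read end of a pipe sits in the scheduler's pollset, so
>>>      * writing one byte to the write end makes the blocked poll
>>>      * call return immediately. */
>>>     static int wakeup_pipe[2];  /* [0] read end, [1] write end */
>>>
>>>     int init_wakeup_pipe(void)
>>>     {
>>>         if (pipe(wakeup_pipe) < 0)
>>>             return -1;
>>>         /* Non-blocking, so a full pipe never stalls the waker. */
>>>         fcntl(wakeup_pipe[0], F_SETFL, O_NONBLOCK);
>>>         fcntl(wakeup_pipe[1], F_SETFL, O_NONBLOCK);
>>>         return 0;
>>>     }
>>>
>>>     void interrupt_poll(void)       /* called from another thread */
>>>     {
>>>         char b = '!';
>>>         (void)write(wakeup_pipe[1], &b, 1);
>>>     }
>>>
>>>     void drain_wakeups(void)        /* run by the woken scheduler */
>>>     {
>>>         char buf[64];
>>>         while (read(wakeup_pipe[0], buf, sizeof buf) > 0)
>>>             ;                       /* empty the pipe, then do aux work */
>>>     }
>>>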
>>> Unfortunately, there are quite a lot of aux work items in the Erlang
>>> VM, such as delayed dealloc, so schedulers are frequently woken up
>>> from poll to process these tasks (say, deallocate a memory block),
>>> only to go into poll again.
>>>
>>> In a non-keepalive benchmark with 15k QPS, schedulers are woken up
>>> about 300k times per second (I hacked the code, adding an atomic
>>> counter to record it), which is really time consuming.
>>>
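>>> A counter like the one just described could look like this (a
>>> hypothetical sketch using C11 atomics rather than the VM's own
>>> atomics API):
>>>
>>>     #include <stdatomic.h>
>>>
>>>     /* Count scheduler wakeups; read and reset once per second to
>>>      * get a wakeups-per-second figure. */
>>>     static _Atomic unsigned long wakeup_count;
>>>
>>>     void on_scheduler_wakeup(void)  /* call where poll returns */
>>>     {
>>>         atomic_fetch_add_explicit(&wakeup_count, 1,
>>>                                   memory_order_relaxed);
>>>     }
>>>
>>>     unsigned long wakeups_last_second(void)  /* e.g. from a timer */
>>>     {
>>>         return atomic_exchange_explicit(&wakeup_count, 0,
>>>                                         memory_order_relaxed);
>>>     }
>>>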
>>> After commenting out the code which wakes schedulers up to process
>>> aux work, the QPS increased to 20k, which confirms the suspicion
>>> above.
>>>
>>> I think it's a real waste of CPU time to wake a scheduler up from
>>> polling just to deallocate some memory blocks and the like; maybe it
>>> would be more suitable to wake schedulers up periodically (say, every
>>> millisecond) to process them, or to do aux tasks in the aux_thread
>>> only. Any ideas on how to fix this?
>>>
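>>> A rough sketch of the aux_thread-only variant (hypothetical names and
>>> a plain pthread queue; the real VM's aux thread and work queues look
>>> different): low-priority items are handed to a dedicated thread, so
>>> polling schedulers are never interrupted for them.
>>>
>>>     #include <pthread.h>
>>>     #include <stddef.h>
>>>
>>>     typedef struct aux_item {
>>>         struct aux_item *next;
>>>         void (*run)(void *);
>>>         void *arg;
>>>     } AuxItem;
>>>
>>>     static AuxItem *aux_head;
>>>     static pthread_mutex_t aux_mtx = PTHREAD_MUTEX_INITIALIZER;
>>>     static pthread_cond_t  aux_cnd = PTHREAD_COND_INITIALIZER;
>>>
>>>     void aux_enqueue(AuxItem *it)   /* cheap: wakes only the aux thread */
>>>     {
>>>         pthread_mutex_lock(&aux_mtx);
>>>         it->next = aux_head;
>>>         aux_head = it;
>>>         pthread_cond_signal(&aux_cnd);
>>>         pthread_mutex_unlock(&aux_mtx);
>>>     }
>>>
>>>     void *aux_thread_main(void *arg)
>>>     {
>>>         (void)arg;
>>>         for (;;) {
>>>             pthread_mutex_lock(&aux_mtx);
>>>             while (aux_head == NULL)
>>>                 pthread_cond_wait(&aux_cnd, &aux_mtx);
>>>             AuxItem *it = aux_head;
>>>             aux_head = it->next;
>>>             pthread_mutex_unlock(&aux_mtx);
>>>             it->run(it->arg);       /* e.g. the delayed dealloc */
>>>         }
>>>         return NULL;
>>>     }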
>>>
>>>> I did some gprof runs but could not find anything obvious that was
>>>> going wrong.
>>>>
>>>> Lukas
>>>>
>>>>
>>>> On 11/07/12 04:21, Wei Cao wrote:
>>>>> I added a macro to conditionally compile the patch because I think
>>>>> it makes the patch more selectable; I can remove the macro, fix the
>>>>> compilation error, and test on the MinGW platform in a later
>>>>> version.
>>>>>
>>>>> How about providing another BIF named port_flag (like process_flag)
>>>>> to let users bind a port to a given scheduler?
>>>>>
>>>>>
>>>
>
>
