[erlang-questions] [ANN] Erlang/OTP 17.0-rc1 has been released.
Daniel Goertzen
daniel.goertzen@REDACTED
Sat Feb 1 18:33:39 CET 2014
Thank you for all the details. I think I have a handle on how dirty
schedulers work now.
You mentioned that you are working on non-SMP support for dirty
schedulers. I note that enif_send() currently requires SMP when used
from a non-NIF-calling thread; would that requirement also be dropped
by your work, or is that a completely separate area of code?
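For context, the pattern I rely on is roughly the following sketch (a
minimal sketch only; receiver_pid and the thread setup stand in for my
real code): a thread created by the NIF library itself, rather than a
scheduler thread, sends a message back into Erlang, and that
enif_send() call is the one that requires SMP today:

    /* Sketch: sending to a process from a thread the NIF library
     * created itself (not an ERTS scheduler thread).                */
    #include "erl_nif.h"

    static ErlNifPid receiver_pid; /* captured earlier via enif_self() */

    static void* worker_thread(void* arg)
    {
        ErlNifEnv* msg_env = enif_alloc_env();
        ERL_NIF_TERM msg = enif_make_atom(msg_env, "work_done");
        /* From a non-NIF-calling thread the first argument must be
         * NULL; this is the call that currently requires SMP.       */
        enif_send(NULL, &receiver_pid, msg_env, msg);
        enif_free_env(msg_env);
        return NULL;
    }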
I run Erlang on smaller single-core embedded systems, and I currently
have to explicitly force use of SMP to make my NIFs work (Erlang picks
the non-SMP scheduler by default on single-core systems). Not a big
deal in my case, but it could cause portability problems for other
NIF-using apps: an app developed on a multicore system breaks on a
single-core system without extra VM flags.
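(Concretely, forcing it means starting the VM with something like

    erl -smp enable

or putting that flag into vm.args for a release, rather than letting
the VM auto-detect.)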
Cheers,
Dan.
On Sat, Feb 1, 2014 at 10:40 AM, Steve Vinoski <vinoski@REDACTED> wrote:
>
>
>
> On Sat, Feb 1, 2014 at 10:58 AM, Daniel Goertzen <
> daniel.goertzen@REDACTED> wrote:
>
>> Excellent!
>>
>> I have been using maps from Egil's branch for a while now and map pattern
>> matching has proven to be very useful. Going back to not having maps would
>> be *very* hard.
>>
>> Regarding dirty schedulers:
>>
>> - Just to confirm my understanding: when I use the dirty scheduler API
>> I can write NIFs that grind the CPU or wait on I/O for minutes at a
>> time, right?
>>
>
> Yes. The dirty schedulers don't run any jobs that aren't specifically
> marked as dirty, nor do idle dirty schedulers steal work from the
> regular schedulers. Please test this long-running job aspect and let us
> know if you hit any problems.
>
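> To make that concrete, here's a rough sketch of how a NIF hands work
> off to a dirty CPU scheduler with the experimental API as it stands in
> RC1 (the names and signatures may still change, and
> heavy_computation() below is just a placeholder for your own code):
>
>     #include "erl_nif.h"
>
>     /* Runs on a dirty CPU scheduler: it's OK to grind the CPU here
>      * for a long time without stalling the normal schedulers.      */
>     static ERL_NIF_TERM do_heavy_work(ErlNifEnv* env, int argc,
>                                       const ERL_NIF_TERM argv[])
>     {
>         ERL_NIF_TERM result = heavy_computation(env, argc, argv);
>         /* Schedule the finalizer back onto a normal scheduler; the
>          * built-in finalizer just returns the result as-is.        */
>         return enif_schedule_dirty_nif_finalizer(
>             env, result, enif_dirty_nif_finalizer);
>     }
>
>     /* Called on a normal scheduler; immediately reschedules the
>      * real work onto a dirty CPU scheduler.                        */
>     static ERL_NIF_TERM heavy_nif(ErlNifEnv* env, int argc,
>                                   const ERL_NIF_TERM argv[])
>     {
>         return enif_schedule_dirty_nif(env, ERL_NIF_DIRTY_JOB_CPU_BOUND,
>                                        do_heavy_work, argc, argv);
>     }
>
> For an I/O-bound job you'd pass ERL_NIF_DIRTY_JOB_IO_BOUND instead.
>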
> Be aware of the following:
>
> 1) Dirty jobs are queued on dirty run queues, where there's one run queue
> for each dirty scheduler type (CPU or I/O). If you have so many
> long-running jobs that all your dirty scheduler threads are busy, new dirty
> jobs will sit in the appropriate run queue until a dirty scheduler thread
> becomes available.
>
> 2) To keep dirty CPU schedulers from interfering too much with the
> normal schedulers, you can't have more dirty CPU schedulers than normal
> schedulers. In RC1 there's no support for changing the number of dirty
> CPU schedulers online at run-time, but I believe I'll have that
> available in time for RC2. (See the flag example after this list.)
>
> 3) You can have as many dirty I/O schedulers as you like, but, just
> like the async thread pool, their number is fixed at boot time; unlike
> dirty CPU schedulers, there will be no support for changing it online
> at run-time. Basically, the dirty I/O schedulers come up at boot time
> and stay up. (The flag example after this list covers these too.)
>
> 4) Dirty schedulers in RC1 do not suspend when multi-scheduling is
> blocked. (But that's OK, as nobody in their right mind blocks
> multi-scheduling on purpose for real apps anyway.) Hopefully I'll have
> this working in time for RC2.
>
> 5) Currently, dirty schedulers are tied together with SMP support. In a
> future release this restriction will be removed, so that you can have dirty
> schedulers even if normal SMP schedulers are not configured at build time.
>
> 6) In the future, drivers and BIFs will also be able to use dirty
> schedulers, but this isn't part of RC1, and likely won't be in RC2 either.
>
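> As a concrete example for 2) and 3), both thread counts are set with
> emulator flags at boot time, along the lines of:
>
>     erl +SDcpu 4:4 +SDio 16
>
> where +SDcpu sets the number of dirty CPU scheduler threads (and how
> many are online) and +SDio the number of dirty I/O scheduler threads.
> You can check what you ended up with via
> erlang:system_info(dirty_cpu_schedulers) and
> erlang:system_info(dirty_io_schedulers). Also note that in 17 the
> emulator has to be built with the --enable-dirty-schedulers configure
> option for dirty schedulers to be available at all.
>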
>> - When using the API you have to indicate whether you are I/O or CPU
>> bound. What is the consequence of getting that wrong (i.e., you are
>> CPU bound when you said you would be I/O bound, or vice versa)?
>>
>
> If you have a lot of dirty I/O schedulers and you put CPU-bound jobs on
> them, you might interfere with normal schedulers, but to what extent I
> don't know. If you put I/O-bound jobs on dirty CPU schedulers, and they
> take up those threads for a long time waiting for I/O, it might cause your
> CPU-bound jobs to back up in the dirty CPU scheduler run queue.
>
>
>> - Are dirty schedulers a stepping stone to native processes?
>>
>
> I'd say yes, but Rickard Green is the best one to answer that. Either way,
> see
> http://www.erlang-factory.com/upload/presentations/377/RickardGreen-NativeInterface.pdf
> for more info.
>
> --steve
>