<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Sat, Feb 1, 2014 at 10:58 AM, Daniel Goertzen <span dir="ltr"><<a href="mailto:daniel.goertzen@gmail.com" target="_blank">daniel.goertzen@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr">Excellent!<div><br></div><div>I have been using maps from Egil's branch for a while now and map pattern matching has proven to be very useful. Going back to not having maps would be *very* hard.</div>
<div>
<br></div><div>Regarding dirty schedulers:</div><div><br></div><div>- Just to confirm my understanding: When I use the dirty scheduler API I can write NIFs that grind the CPU or wait on IO for minutes at a time, right?</div>
Yes. Dirty schedulers don't run any jobs that aren't specifically marked as
dirty, nor do they steal work from the regular schedulers when idle. Please
test this long-running-job aspect and let us know if you hit any problems.
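For anyone following along, here's roughly what marking a NIF as dirty looks
like on the C side. This is only a minimal sketch: the module and function
names are made up, and the ERL_NIF_DIRTY_JOB_CPU_BOUND flag and four-field
ErlNifFunc initializer follow the erl_nif documentation as it later
stabilized, so the exact spelling in RC1 may differ; check the erl_nif.h
that ships with your build.

    /* Sketch of a CPU-heavy NIF registered as dirty.  burn/1 and burn_demo
     * are illustrative names only. */
    #include "erl_nif.h"

    static ERL_NIF_TERM burn_nif(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
    {
        unsigned long n, i, acc = 0;
        if (argc != 1 || !enif_get_ulong(env, argv[0], &n))
            return enif_make_badarg(env);

        /* This loop may run for a long time.  Because the function is
         * registered below as dirty CPU bound, it runs on a dirty CPU
         * scheduler thread instead of tying up a normal scheduler. */
        for (i = 0; i < n; i++)
            acc += i;

        return enif_make_ulong(env, acc);
    }

    static ErlNifFunc nif_funcs[] = {
        /* The fourth field marks the NIF as dirty and selects the dirty
         * CPU scheduler type. */
        {"burn", 1, burn_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND}
    };

    ERL_NIF_INIT(burn_demo, nif_funcs, NULL, NULL, NULL, NULL)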
Be aware of the following:

1) Dirty jobs are queued on dirty run queues, one run queue per dirty
   scheduler type (CPU or I/O). If you have so many long-running jobs that
   all your dirty scheduler threads are busy, new dirty jobs sit in the
   appropriate run queue until a dirty scheduler thread becomes available.
2) You can't have more dirty CPU schedulers than normal schedulers; this
   limit keeps them from interfering too much with the normal schedulers.
   RC1 has no support for changing the number of dirty CPU schedulers online
   at run-time, but I believe I'll have that available in time for RC2.
3) You can have as many dirty I/O schedulers as you like, but, just like the
   async thread pool, that number is fixed at boot time; there will be no
   support for changing it online at run-time the way there will be for
   dirty CPU schedulers. Basically, the dirty I/O schedulers come up at boot
   time and stay up.
4) Dirty schedulers in RC1 do not suspend when multi-scheduling is blocked.
   (That's OK in practice, since nobody in their right mind blocks
   multi-scheduling on purpose for a real application anyway.) Hopefully
   I'll have this working in time for RC2.
5) Currently, dirty schedulers are tied to SMP support. In a future release
   this restriction will be removed, so that you can have dirty schedulers
   even when normal SMP schedulers are not configured at build time. (See
   the sketch after this list for one way a NIF library can cope with builds
   that lack dirty scheduler support.)
6) In the future, drivers and BIFs will also be able to use dirty
   schedulers, but that isn't part of RC1 and likely won't be in RC2 either.
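On 5), here is a hedged sketch of how a NIF library might register the same
function as dirty only when the emulator build supports it. I'm assuming
erl_nif.h defines ERL_NIF_DIRTY_SCHEDULER_SUPPORT when dirty schedulers are
compiled in; verify that macro name against your RC1 headers before relying
on it.

    /* Sketch: use a dirty CPU NIF when the build supports it, otherwise
     * fall back to a plain NIF registration.  The guard macro name is an
     * assumption; check erl_nif.h. */
    #include "erl_nif.h"

    static ERL_NIF_TERM burn_nif(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
    {
        /* Long-running work as in the earlier sketch; without dirty
         * scheduler support you'd want to keep this short or chunk it. */
        return enif_make_atom(env, "ok");
    }

    static ErlNifFunc nif_funcs[] = {
    #ifdef ERL_NIF_DIRTY_SCHEDULER_SUPPORT
        {"burn", 1, burn_nif, ERL_NIF_DIRTY_JOB_CPU_BOUND},
    #else
        {"burn", 1, burn_nif},
    #endif
    };

    ERL_NIF_INIT(burn_demo, nif_funcs, NULL, NULL, NULL, NULL)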
<div dir="ltr">
<div>- When using the API you have to indicate if you are IO or CPU bound. What is the consequence of getting that wrong (ie, you are CPU bound when you said you would be IO bound or vice versa.)<br></div></div></blockquote>
If you have a lot of dirty I/O schedulers and you put CPU-bound jobs on
them, you might interfere with the normal schedulers, though to what extent
I don't know. If you put I/O-bound jobs on dirty CPU schedulers and they
occupy those threads for a long time waiting on I/O, your CPU-bound jobs can
back up in the dirty CPU run queue.
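So the flag is essentially a hint about which dirty thread pool the work
should occupy; pick whichever matches the dominant cost. As a hedged sketch
(the enif_schedule_nif call and flag constants are taken from the erl_nif
documentation as it later stabilized and may not match RC1 exactly, and all
module/function names are made up), one pattern is a thin entry-point NIF
that reschedules the real work onto the appropriate dirty scheduler type:

    /* Sketch: reschedule the heavy work onto a dirty scheduler, choosing
     * the CPU-bound or I/O-bound flag from an argument. */
    #include <string.h>
    #include "erl_nif.h"

    static ERL_NIF_TERM do_work(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
    {
        /* The long-running work (heavy computation or blocking I/O) goes
         * here; it now runs on a dirty scheduler thread. */
        return enif_make_atom(env, "done");
    }

    static ERL_NIF_TERM work(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
    {
        char kind[8];
        int flags;

        if (argc != 1 ||
            enif_get_atom(env, argv[0], kind, sizeof(kind), ERL_NIF_LATIN1) == 0)
            return enif_make_badarg(env);

        /* 'io' selects a dirty I/O scheduler; anything else a dirty CPU
         * scheduler. */
        flags = (strcmp(kind, "io") == 0) ? ERL_NIF_DIRTY_JOB_IO_BOUND
                                          : ERL_NIF_DIRTY_JOB_CPU_BOUND;

        /* Hand the same arguments over to do_work on the chosen dirty
         * scheduler type. */
        return enif_schedule_nif(env, "do_work", flags, do_work, argc, argv);
    }

    static ErlNifFunc nif_funcs[] = {
        {"work", 1, work}
    };

    ERL_NIF_INIT(dirty_demo, nif_funcs, NULL, NULL, NULL, NULL)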
> - Are dirty schedulers a stepping stone to native processes?

I'd say yes, but Rickard Green is the best person to answer that. Either
way, see
http://www.erlang-factory.com/upload/presentations/377/RickardGreen-NativeInterface.pdf
for more info.
--steve