<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On 31 October 2014 10:49, Lukas Larsson <span dir="ltr"><<a href="mailto:lukas@erlang.org" target="_blank">lukas@erlang.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><div class="h5">On Fri, Oct 31, 2014 at 11:06 AM, Chandru <span dir="ltr"><<a href="mailto:chandrashekhar.mullaparthi@gmail.com" target="_blank">chandrashekhar.mullaparthi@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div><div><div class="gmail_extra"><br><div class="gmail_quote">On 31 October 2014 10:01, Lukas Larsson <span dir="ltr"><<a href="mailto:lukas@erlang.org" target="_blank">lukas@erlang.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><div>On Fri, Oct 31, 2014 at 10:51 AM, Chandru <span dir="ltr"><<a href="mailto:chandrashekhar.mullaparthi@gmail.com" target="_blank">chandrashekhar.mullaparthi@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr">Thank you Lukas.<br><div class="gmail_extra"><br><div class="gmail_quote"><div><div>On 31 October 2014 09:37, Lukas Larsson <span dir="ltr"><<a href="mailto:lukas@erlang.org" target="_blank">lukas@erlang.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra">Hello,</div><div class="gmail_extra"><br></div><div class="gmail_extra"><div class="gmail_quote"><span>On Fri, Oct 31, 2014 at 10:20 AM, Chandrashekhar Mullaparthi <span dir="ltr"><<a href="mailto:chandrashekhar.mullaparthi@gmail.com" target="_blank">chandrashekhar.mullaparthi@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div lang="EN-GB" link="#0563C1" vlink="#954F72"><div><p class="MsoNormal"> <br></p><p class="MsoNormal"><u></u></p><p class="MsoNormal">I have a question about beam’s GC implementation. When an erlang process is being GCed, is the processing required to do the GC taken out of the process’s 2000 reduction quota, or is it done after a process has been scheduled out?<u></u><u></u></p><p class="MsoNormal"><u></u> </p></div></div></blockquote><div><br></div></span><div>The GC work is taken out of the process' reductions. The GC is never triggered when it is scheduled out, but it can be triggered before being scheduled in, in which case the newly allotted reductions will be reduced by the GC work.</div></div></div></div></blockquote><div><br></div></div></div><div>So what happens if the process has a large heap? Can the GC end up taking more time than to execute 2000 reductions? Or is it somehow time bounded? 
>>>> So what happens if the process has a large heap? Can the GC end up taking more time than it takes to execute 2000 reductions? Or is it somehow time-bounded? If it is not time-bounded, that would explain a lot of the problems I'm seeing on a system.
>>>
>>> The current GC is not incremental, so once it has started doing work it cannot be interrupted. This means that if a process has a large heap, it will block all other execution on that scheduler for the duration of the GC.
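
(A minimal sketch of how such long, uninterruptible GC pauses can be spotted at runtime with erlang:system_monitor/2; the 500 ms threshold is an arbitrary example value.)

    %% Ask the runtime to notify the calling process whenever any
    %% process's garbage collection takes longer than 500 ms.
    erlang:system_monitor(self(), [{long_gc, 500}]).
    %% Offending processes then show up as messages of the form
    %%   {monitor, Pid, long_gc, Info}
    %% which, in a shell, can be read with flush().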
>> Can it also block other schedulers by any chance? Robert Virding's presentation [1] says that every 20-40k reductions a new master scheduler is chosen. I'm wondering: if this transition of master scheduler has to happen while one of the schedulers is stuck in a long GC, will it potentially block other schedulers?
>
> It can result in schedulers not waking up properly, for the same reasons that long-running NIFs/BIFs cause this. So if this is your problem, I would have a look at which processes are using a lot of heap space and try to reduce it, or make sure that they do not GC :)

Thanks for the confirmation, and yes, of course. I'm trying to convince someone that it isn't GC that is the problem, but the system design ;-) Knowing exactly how it works helps in the argument.

> Scott has collected a bunch of his observations on the long-running NIFs/BIFs issue here: https://github.com/slfritchie/nifwait/blob/md5/README.md. Sometime in the not-too-distant future I hope to have the time to write an incremental GC for large Erlang heaps, but in the meantime I believe Scott recommends using something like "+sfwi 500 +scl false" to avoid this problem. Try them out and see if the options work for you.

Thanks, I'll give them a try.

cheers
Chandru
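
(For finding the heap-heavy processes mentioned above, a minimal sketch using only erlang:processes/0 and erlang:process_info/2; the cut-off of ten is arbitrary and sizes are in words.)

    %% Collect {HeapSizeInWords, Pid} for every live process and keep the
    %% ten largest. Dead processes (process_info returns undefined) are
    %% skipped by the pattern in the second generator.
    Sizes = [{Size, Pid} ||
                Pid <- erlang:processes(),
                {total_heap_size, Size} <-
                    [erlang:process_info(Pid, total_heap_size)]],
    lists:sublist(lists:reverse(lists:sort(Sizes)), 10).

The +sfwi/+scl options are ordinary emulator flags, so they can go on the erl command line or into a release's vm.args, e.g.

    erl +sfwi 500 +scl false

where +sfwi sets the scheduler forced-wakeup interval in milliseconds and +scl false disables scheduler compaction of load; whether 500 is the right interval for a particular system is something to measure rather than assume.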