<div dir="ltr">Hi,<div><br></div><div>I applied the following patch:</div><div><br></div><div><div>; diff -u test_fsm5.orig test_fsm5.erl</div><div>--- test_fsm5.orig<span class="" style="white-space:pre"> </span>2015-05-27 13:54:34.381978128 +0200</div><div>+++ test_fsm5.erl<span class="" style="white-space:pre"> </span>2015-05-27 13:51:38.521826422 +0200</div><div>@@ -10,7 +10,7 @@</div><div> -define(MINCOUNT, 300). % iters. Will not record jitter until this many timeouts have passed. Crude attempt to give schedulers settle time.</div><div> </div><div> init([TickInterval,NotificationPid]) -></div><div>- State = #state{ last_time = get_os_time(), tickinterval = TickInterval, pid = NotificationPid },</div><div>+ State = #state{ last_time = erlang:monotonic_time(), tickinterval = TickInterval, pid = NotificationPid },</div><div> {ok, s_firststate, State, TickInterval}.</div><div> </div><div> handle_event(_Event, StateName, StateData) -></div><div>@@ -29,10 +29,10 @@</div><div> {ok, StateName, StateData}.</div><div> </div><div> s_firststate(timeout, #state{ last_time = LastTime, count = Count , tickinterval = TickInterval, pid = NotificationPid, jitters = Jitters } = StateData) -></div><div>- NewTime = get_os_time(),</div><div>- TimeDiff = NewTime - LastTime,</div><div>- Jitter = TimeDiff - (TickInterval * 1000), % microseconds</div><div>+ NewTime = erlang:monotonic_time(),</div><div> gen_fsm:start_timer(TickInterval, timeout),</div><div>+ TimeDiff = erlang:convert_time_unit(NewTime - LastTime, native, micro_seconds),</div><div>+ Jitter = TimeDiff - (TickInterval * 1000), % microseconds</div><div> case {(Count > ?MINCOUNT), (Count < ?MAXCOUNT)} of</div><div> {false, true} -></div><div> { next_state, s_firststate, StateData#state{ last_time = NewTime, count = Count + 1, pid = NotificationPid, jitters = Jitters } };</div><div>@@ -81,10 +81,6 @@</div><div> report(TickFrom,NumFSMs),</div><div> go_run(NumFSMs, TickFrom + TickStep, TickTo, TickStep).</div><div> </div><div>-get_os_time() -></div><div>- {MegaS, S, MicroS} = os:timestamp(),</div><div>- (MegaS * 1000000 * 1000000 + S * 1000000 + MicroS).</div><div>-</div><div> await(0) -> ok;</div><div> await(N) -></div><div> receive _ -></div><div>@@ -93,6 +89,7 @@</div><div> </div><div> report(Tick, NumFSMs) -></div><div> X = lists:sort([A || {_, A} <- ets:lookup(metrics,jitter)]),</div><div>+ file:write_file("observations.txt", [[integer_to_binary(E), $\n] || E <- X]),</div><div> Avg = lists:sum(X)/length(X),</div><div> Max = lists:max(X),</div><div> Min = lists:min(X),</div></div><div><br></div><div>which switches to Erlang/OTP 18-rc2 and also uses the new time API. Machine is a Core i7-4900MQ CPU, ArchLinux, fairly recent linux kernel (Linux lady-of-pain 4.0.4-2-ARCH #1 SMP PREEMPT Fri May 22 03:05:23 UTC 2015 x86_64 GNU/Linux). It also dumps the jitter to "observations.txt" so we can look at the observations.</div><div><br></div><div>We run `erl +sbt db +C multi_time_warp` to bind schedulers to cores and request a timing mode which is able to maximize precise/accurate monotonic time.</div><div><br></div><div>Then I ran the data through a bit of R:</div><div><br></div><div><div>> x <- read.csv("observations.txt", header=FALSE)</div><div>> require(ggplot2)</div><div>> p <- ggplot(x, aes(x = V1))</div><div>> png("observations.png")</div><div>> p + geom_density()</div><div>> dev.off()</div><div>X11cairo </div><div> 2 </div><div>> summary(x)</div><div> V1 </div><div> Min. 
We run `erl +sbt db +C multi_time_warp` to bind schedulers to cores and to request the timing mode which maximizes the precision/accuracy of monotonic time.

Then I ran the data through a bit of R:

> x <- read.csv("observations.txt", header=FALSE)
> require(ggplot2)
> p <- ggplot(x, aes(x = V1))
> png("observations.png")
> p + geom_density()
> dev.off()
X11cairo 
       2 
> summary(x)
       V1      
 Min.   : 153  
 1st Qu.: 973  
 Median : 998  
 Mean   :1001  
 3rd Qu.:1028  
 Max.   :8018  

One may wonder what the spread of the upper quantiles looks like:

> quantile(x$V1, c(.9, .95, .99, .999, .9999, .99999))
     90%      95%      99%    99.9%   99.99%  99.999% 
1065.000 1083.000 1142.000 1818.001 5045.000 7945.010 

The kernel density plot is attached, and it places itself around 1000 microseconds quite precisely.

This is actually somewhat to be expected. When we request a timeout of 40ms, we are somewhere inside a millisecond T: perhaps close to its "start", i.e., T.001, or close to its "end", i.e., T.997. If we define T = 0 as the starting point of our time, then we clearly can't wake up at P = 40, because 40 - 0.997 = 39.003, which would violate the requested wait time by waking up too early. Hence the wake-up time is rounded up to the next full millisecond, 41, which adds just under one millisecond of apparent jitter. This tracks extremely well with our numbers.

But it is highly unlikely that all 1000 processes started their timers in the *same* millisecond, so a few of them round up to a different wake-up point. At least that is a plausible explanation for the observations below 1000 microseconds.

As for the 99.99th and 99.999th percentiles, I think these can be attributed to something else happening in the system: the OS, garbage collection, and so on. I'm not sure the culprit there is the timer wheels.

Another point worth mentioning: when 1000 processes wake up at the same millisecond edge, they queue over N cores, so invariably some of them will see extra jitter. This workload is very "spiky", even though the average load on the system is very low. Do the math, assuming perfect spread: each of the 8 cores gets 1000 div 8 = 125 processes, all awakening at the same instant. Even if we can handle each process in 1 microsecond, the last process on a core sees a 125-microsecond latency. That is, because we pack so many processes into such a short window, the system becomes more sensitive to small fluctuations. Our numbers suggest we actually handle each process in less than a microsecond.
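To make that back-of-envelope bound concrete, here it is as a tiny Erlang expression (the fun and its parameter names are mine, purely illustrative):

%% Worst-case extra latency when NumProcs timers fire on the same
%% millisecond edge and spread perfectly over Cores schedulers: the
%% last process on each core waits behind all those dispatched first.
WorstCaseUs = fun(NumProcs, Cores, PerProcUs) ->
    (NumProcs div Cores) * PerProcUs
end.
WorstCaseUs(1000, 8, 1).  %% => 125

So even at 1 microsecond per process, a 125-microsecond tail latency is built into the spike itself.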
Going even faster, down into the low nanoseconds, requires a change of system: Erlang isn't the right tool for the job at that scale. You need to pack data in arrays to get better cache-access patterns, since a DRAM hit costs roughly 30ns (or more!), and the functional nature of Erlang will hurt here. This is usually a job for OCaml, C, Go, GPUs, FPGAs or ASICs.

On Tue, May 26, 2015 at 11:03 PM, Felix Gallo <felixgallo@gmail.com> wrote:

> Innovative thinking, Jesper! But in this case, in this testbed, the FSMs
> aren't getting any messages other than the ones they deliver to themselves.
> Which adds to the intrigue.
>
> I took your suggestion and tried using gen_fsm:start_timer/2. Interestingly,
> it slightly increased the jitter variance, and the negative jitter issue is
> still present. It's possible that my, ah, rapidly-and-pragmatically-built
> testbed suffers from some flaw, but I'm not seeing it.
>
> Here's my code:
>
> https://gist.github.com/anonymous/47cde5e60a619319053f
>
> Here's sample output on this small but moderately modern non-cloud OS X
> machine:
>
> > test_fsm5:go(1000,40,40,10).
> waiting for 1000 FSMs, tickrate 40
> avg: 1324.1012703862662
> max: 50219
> min: -184
> median: 1018
> 95th: 2615
> 99th: 9698
>
> Note that the max is 50ms of jitter, the min is negative 184us of jitter,
> and the median jitter is about 1ms, which correlates well with my beliefs
> about scheduler wakeup timers...
>
> F.
>
> On Tue, May 26, 2015 at 12:09 PM, Jesper Louis Andersen
> <jesper.louis.andersen@gmail.com> wrote:
>
> > On Tue, May 26, 2015 at 8:52 PM, Felix Gallo <felixgallo@gmail.com> wrote:
> >
> > > {next_state,NextStateName,NewStateData,Timeout}
> >
> > This explains why you sometimes get less than 30ms sleep times. If an
> > event reaches the process before Timeout, then the timeout is not
> > triggered. It may also explain the jitter you are seeing, because an
> > early event will reset the timeout. Try using gen_fsm:start_timer/2 or
> > erlang:send_after...
> >
> > If the problem persists, check lcnt. If you are locked on the timer
> > wheel, then consider release 18 :)
> >
> > --
> > J.
--
J.