On Fri, Aug 14, 2015 at 2:28 PM, Loïc Hoguin <essen@ninenines.eu> wrote:

> The time spent to run X is the worst kind of benchmark value you can
> rely on though.

This would be a worry of mine as well. The Go benchmark runner is better than nothing, but it is highly naive. One, it doesn't try to account for garbage collector interference. Two, it doesn't warm benchmarks up to a steady state before measuring. Three, it doesn't carry out proper statistics: a simple bootstrap analysis of the mean could tell you whether the mean is stable or is being perturbed by something internal to the system.
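Roughly what I have in mind, as a sketch in Go (the timings, the resample count, and the 95% interval below are made up for the example; this is not something the Go runner does for you):

	// A minimal bootstrap check of a benchmark mean: resample the
	// observed per-iteration timings with replacement, take the mean
	// of each resample, and look at the spread of those means.
	package main

	import (
		"fmt"
		"math/rand"
		"sort"
	)

	// mean returns the arithmetic mean of xs.
	func mean(xs []float64) float64 {
		sum := 0.0
		for _, x := range xs {
			sum += x
		}
		return sum / float64(len(xs))
	}

	// bootstrapMeans draws n resamples of xs (with replacement) and
	// returns the mean of each resample, sorted ascending.
	func bootstrapMeans(xs []float64, n int, rng *rand.Rand) []float64 {
		means := make([]float64, n)
		resample := make([]float64, len(xs))
		for i := 0; i < n; i++ {
			for j := range resample {
				resample[j] = xs[rng.Intn(len(xs))]
			}
			means[i] = mean(resample)
		}
		sort.Float64s(means)
		return means
	}

	func main() {
		// Hypothetical ns/op figures from repeated runs of one
		// benchmark; note the one outlier at 951.
		timings := []float64{412, 405, 398, 420, 951, 407, 401, 415, 399, 410}

		rng := rand.New(rand.NewSource(1))
		means := bootstrapMeans(timings, 10000, rng)

		// Empirical 95% bootstrap confidence interval for the mean.
		lo := means[len(means)*25/1000]
		hi := means[len(means)*975/1000]
		fmt.Printf("mean=%.1f ns/op, 95%% CI [%.1f, %.1f]\n",
			mean(timings), lo, hi)
	}

If the interval is tight around the mean, the average is probably meaningful; if a single outlier drags it around, something in the system is perturbing the measurement and the one number the runner prints shouldn't be trusted.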
The problem may not even be tied to the Erlang node. When you are measuring highly sensitive benchmarks at the nanosecond scale, a single interrupt can be enough to throw the measurement off, and in a complex environment you can't in general control for that.

--
J.