<html><body bgcolor="#FFFFFF"><div><br></div><div>On Jul 6, 2011, at 5:45 AM, Zabrane Mickael <<a href="mailto:zabrane3@gmail.com">zabrane3@gmail.com</a>> wrote:<br></div><blockquote type="cite"><div><span></span><br><span>I disagree. That's not what the challenge is about.</span><br><span>Why don't just try it and avoid (useless) questions!</span><br></div></blockquote><br><div>Because your "benchmark" is meaningless without proper controls and variables. This supposed challenge characterizes nothing, because you haven't defined any meaningful constraints on the system.</div><div><br></div><div>A meaningful benchmark measures changes in response to one or more environmental pressures. For example, you can measure the effect of OS process starvation on TCP backlog across each server while memory, I/O, CPU load, and payload all remain constant.</div><div><br></div><div>Similarly, you can look at performance with respect to slow connections, high packet loss, CPU starvation, memory starvation, file descriptor starvation, I/O blocking, and just about any other variable you can control for.</div><div><br></div><div>Throughput will vary as a function of all these factors, and that variation will reveal important issues in each system's design.</div><div><br></div><div>So to answer your question: the best reason not to try the challenge is that it is a terrible waste of time that could be better spent properly characterizing a production system.</div><div><br></div><div>Dave</div></body></html>