<div><span class="gmail_quote">On 10/23/07, <b class="gmail_sendername">Anders Nygren</b> <<a href="mailto:anders.nygren@gmail.com">anders.nygren@gmail.com</a>> wrote:</span><blockquote class="gmail_quote" style="margin:0;margin-left:0.8ex;border-left:1px #ccc solid;padding-left:1ex">
On 10/23/07, Steve Vinoski <<a href="mailto:vinoski@ieee.org">vinoski@ieee.org</a>> wrote:<br>> On 10/23/07, Anders Nygren <<a href="mailto:anders.nygren@gmail.com">anders.nygren@gmail.com</a>> wrote:<br>> > To summarize my progress on the widefinder problem
<br>> > A few days ago I started with Steve Vinoski's tbray16.erl<br>> > As a baseline on my 1.66 GHz dual core Centrino<br>> > laptop, Linux,<br>> > tbray16<br>> > real 0m7.067s<br>> > user
0m12.377s<br>> > sys 0m0.584s<br>><br>> Anders, thanks for collecting and posting these. I've just performed a set<br>> of new timings for all of them, as listed below. For each, I just ran this<br>
> command:<br>><br>> time erl -smp -noshell -run <test_case> main o1000k.ap >/dev/null<br>><br>> where "<test_case>" is the name of the tbray test case file. All were looped<br>> ten times, and I took the best timing for each. All tests were done on my
<br>> 8-core 2.33 GHz dual Intel Xeon with 2 GB RAM Linux box, in a local<br>> (non-NFS) directory.<br>><br><br>I don't keep track of the finer details of different CPUs, but I have<br>a vague memory that the 8-core Xeon is really two 4-core CPUs
<br>on one chip, is that correct?</blockquote><div><br class="webkit-block-placeholder"></div><div>Yes, I believe so. </div><br><blockquote class="gmail_quote" style="margin:0;margin-left:0.8ex;border-left:1px #ccc solid;padding-left:1ex">
The reason I am asking is that I cannot figure out why your<br>measurements have shorter real times than mine, but more<br>than twice the user time.</blockquote><div><br class="webkit-block-placeholder"></div><div>It's because the user time includes CPU time on all the cores. More cores, and more things happening on those cores, means more CPU time and thus more user time. Tim saw the same phenomenon on his T5120 and blogged about it here:
</div><div><br class="webkit-block-placeholder"></div><div><<a href="http://www.tbray.org/ongoing/When/200x/2007/10/09/Niagara-2-T2-T5120">http://www.tbray.org/ongoing/When/200x/2007/10/09/Niagara-2-T2-T5120</a>></div>
<br><blockquote class="gmail_quote" style="margin:0;margin-left:0.8ex;border-left:1px #ccc solid;padding-left:1ex">Also, it does not seem to scale so well up to 8 cores.<br>Steve's best time was 0m1.546s and mine was 0m1.992s.</blockquote><div><br class="webkit-block-placeholder"></div><div>The default settings in the code are probably not ideal for the 8-core box.</div><br><blockquote class="gmail_quote" style="margin:0;margin-left:0.8ex;border-left:1px #ccc solid;padding-left:1ex">
Steve, can you also do some tests on tbray_blockread using<br>different numbers of worker processes? A smaller block<br>size means that we start using all the cores earlier.</blockquote><div><br class="webkit-block-placeholder">
</div><div>I ran a series of tests with different block sizes, and I found that for the 8-core box, dividing the file into 1024 chunks (for this file, a block size of 230606 bytes) produced the best time:</div><div><br class="webkit-block-placeholder">
</div><div><div>real 0m1.103s</div><div>user 0m6.651s</div><div>sys 0m0.492s</div></div><div><br class="webkit-block-placeholder"></div><div>That's pretty darn fast. :-) Smaller chunks are probably slower because there's more result collecting and merging to do, while larger chunks are slower because parallelism is reduced.
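To make the arithmetic concrete, here is a small sketch of how a file can be carved into fixed-size blocks for the workers. This is in Python purely for illustration (the actual tbray_blockread code is Erlang, and a real implementation also has to cope with log lines that straddle block boundaries); the function name and shape are my own, not taken from the test case:

```python
def chunk_plan(file_size, num_chunks):
    """Divide file_size bytes into num_chunks (offset, length) pairs.

    Each chunk gets file_size // num_chunks bytes; the last chunk
    absorbs any remainder so the whole file is covered.
    """
    block = file_size // num_chunks
    plan = []
    offset = 0
    for i in range(num_chunks):
        # Last chunk takes whatever is left over.
        length = block if i < num_chunks - 1 else file_size - offset
        plan.append((offset, length))
        offset += length
    return plan

# A file of 236140544 bytes split 1024 ways gives exactly the
# 230606-byte block size quoted above.
plan = chunk_plan(236140544, 1024)
print(plan[0])    # (0, 230606)
print(len(plan))  # 1024
```

Each (offset, length) pair maps naturally onto a worker doing a positioned read of its own block, which is what lets all the cores start working at once instead of waiting on a single sequential reader.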
</div><div><br class="webkit-block-placeholder"></div><div>I can't wait to see this thing run on Tim's T5120.</div><div><br class="webkit-block-placeholder"></div><div>BTW, I got a comment on my blog today from someone who essentially said I was making Erlang look bad by applying it to a problem for which it's not a good fit. My response was that I didn't agree: Tim's original goal was to maximize the use of a multicore system for solving the Wide Finder, and Erlang now does that better than anything else I've seen so far. Does anyone in the Erlang community agree with that commenter that the Wide Finder project has made Erlang look bad?
</div><div><br class="webkit-block-placeholder"></div><div>--steve</div></div>