<div><span class="gmail_quote">On 10/21/07, <b class="gmail_sendername">Anders Nygren</b> <<a href="mailto:anders.nygren@gmail.com">anders.nygren@gmail.com</a>> wrote:</span><blockquote class="gmail_quote" style="margin:0;margin-left:0.8ex;border-left:1px #ccc solid;padding-left:1ex">
On 10/21/07, Steve Vinoski <<a href="mailto:vinoski@ieee.org">vinoski@ieee.org</a>> wrote:<br>> On 21 Oct 2007 11:41:12 +0200, Bjorn Gustavsson <<a href="mailto:bjorn@erix.ericsson.se">bjorn@erix.ericsson.se</a>
><br>> wrote:<br>> > "Anders Nygren" <<a href="mailto:anders.nygren@gmail.com">anders.nygren@gmail.com</a>> writes:<br>> > ><br>> > > My only idea is that this is caused by some pathological CPU cache
<br>> > > behaviour. Does anyone have a better explanation?<br>> ><br>> > Could be. I have another GUESS: using a tuple instead of dict will<br>> > result in a smaller heap size for the spawned process, which will lead
<br>> > to more frequent garbage collections. You could use the spawn_opt()<br>> > BIF with the min_heap_size option.<br>> ><br>> > To get a similar heap size as when using a dict, use<br>> >
<br>> > erts_debug:size(wfbm3:init()).<br>> ><br>> > which is 1120.<br>> ><br>><br>> I didn't try this or Bob's suggestion of simulating a tuple, but I did<br>> eliminate a dict lookup in the code that finds shift values, and also
<br>> changed the macros to use hard-coded string lengths (since the strings are<br>> fixed) rather than recalculating them with length/1, and got another 25%<br>> speedup:<br>><br>> <<a href="http://steve.vinoski.net/blog">
http://steve.vinoski.net/blog</a>/2007/10/21/faster-wf-still/><br>><br>><br><br>So I then took Steve's latest and<br>- changed to using a tuple instead of dict and VICTORY :)</blockquote><div><br class="webkit-block-placeholder">
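For anyone who hasn't been following the code, the dict-to-tuple change
amounts to something like the sketch below. This is illustrative only,
not the actual wfbm3 code: the function names and shift values are made
up, and the real table is built from the search pattern.

    %% Sketch of the dict-to-tuple change (illustrative only; not the
    %% actual wfbm3 code -- the shift values below are made up).

    default_shift() -> 18.                        %% e.g. the pattern length
    shifts() -> [{$G, 17}, {$E, 16}, {$T, 15}].   %% made-up example shifts

    %% dict version: a hash lookup per character examined
    %% (the real table covers every byte; dict:fetch/2 assumes the key
    %% is present)
    init_dict() ->
        lists:foldl(fun({C, S}, D) -> dict:store(C, S, D) end,
                    dict:new(), shifts()).

    lookup_dict(C, Tbl) -> dict:fetch(C, Tbl).

    %% tuple version: index the character code directly. element/2 is
    %% 1-based, so character C lives in slot C+1 of a 256-slot tuple.
    init_tuple() ->
        T0 = erlang:make_tuple(256, default_shift()),
        lists:foldl(fun({C, S}, T) -> setelement(C + 1, T, S) end,
                    T0, shifts()).

    lookup_tuple(C, Tbl) -> element(C + 1, Tbl).

The point is that element/2 on a flat tuple is a constant-time lookup
with no hashing, which matters in a loop that runs for nearly every
byte of input. The hard-coded string lengths I mentioned in the quoted
mail are the same idea applied at compile time: the patterns never
change, so their lengths can be literal integers instead of length/1
calls.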
Anders sent me his code and I ran it on my 8-core Linux box, with the
following performance results. VICTORY is right! :-)
real    0m1.904s
user    0m7.917s
sys     0m1.185s

As I mentioned to Anders in private email, it's nice to have someone
more experienced with Erlang finally taking a look at this; I'm still
a relative newbie.
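For reference, Bjorn's min_heap_size suggestion from the quoted thread
would look something like this. It's an untested sketch: worker/1
stands in for the real per-chunk function, and 1120 is the figure from
erts_debug:size(wfbm3:init()).

    %% Sketch of Bjorn's spawn_opt suggestion (untested; worker/1 is a
    %% stand-in for the real per-chunk function).
    start_worker(Chunk) ->
        spawn_opt(fun() -> worker(Chunk) end,
                  [{min_heap_size, 1120}]).   %% erts_debug:size(wfbm3:init())

    worker(Chunk) ->
        %% the real code would scan Chunk and send its match counts back;
        %% here it just returns the chunk untouched
        Chunk.

Starting each worker with a heap already large enough for its shift
table should avoid the burst of early garbage collections that the
small default heap would otherwise trigger; perhaps that's part of the
GC tweaking Anders has in mind.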
<div><br class="webkit-block-placeholder"></div><div>One thing I've liked about this entire exercise is that the early attempts at solving Tim Bray's Wide Finder in Erlang were taking minutes to execute and were providing only partial answers. Several of us then started whittling away at it, and because of the richness of the language, we had a variety of different avenues to explore. Over time, we've vastly increased the performance of our solutions. Anders's solution now beats Ruby on the same machine by about
0.3s, and because of the way it uses multiple cores, it will likely execute extremely quickly and efficiently when Tim gets a chance to try it on his T5120.</div><div><br class="webkit-block-placeholder"></div><div>Yes, fast solutions in other languages were quickly found, but those had almost nowhere to go beyond their initial forms in terms of improvement, not because they were already so fast, but because the languages ran out of alternatives. This is especially true when it comes to taking advantage of the T5120's many cores. I'm a fan of many languages, including Ruby, Python, Perl, and C++, all of which have figured prominently in the collection of various Wide Finder solutions. But for my money, Erlang has fulfilled Tim's original wishes the best, which is to take the best possible advantage of all those cores.
</div><div><br class="webkit-block-placeholder"></div><div>Anders, I look forward to even more speed when you finish those GC tweaks! :-)</div><div><br><div>thanks,</div><div>--steve</div></div>