<div dir="ltr"><div class="gmail_quote"><div dir="ltr"><div class="gmail_quote"><div class="Ih2E3d"><div>Sorry, this email got sent before it was complete, so if you get it twice, my apologies. I never knew that, in Gmail, if you reply to an email while busy with another, it seems to send the one you were busy with first. I would have expected it to save to draft, but you live and learn.<br>
> It is not at all surprising that the SMP version runs much slower than
> the non-SMP version.
> I looked at the program source and what I found there is an
> implementation that does not allow for much parallel execution.
> The broker process is clearly a bottleneck since it is involved in
> everything. Every other process must wait for the broker process
> before it can continue its execution.

Agreed.
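For anyone who has not looked at the program: the broker is essentially a single meeting place along these lines (a simplified sketch, not the exact shootout code), so every meeting has to pass through one mailbox:

%% Simplified sketch of the single-broker pattern: every meeting is arranged
%% by this one process, so the chameneos can only be paired as fast as the
%% broker can drain its mailbox.
broker(0) ->
    done;
broker(MeetingsLeft) ->
    receive
        {Pid1, Color1} ->
            receive
                {Pid2, Color2} ->
                    Pid1 ! {meet, Color2},
                    Pid2 ! {meet, Color1},
                    broker(MeetingsLeft - 1)
            end
    end.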
> The other processes are also doing so little useful work that the
> task-switching and locking around the run-queue will become the
> dominating thing.

I also thought so - but more about that later.
> When you have benchmarks with parallel processes that hardly perform
> any work and all processes are highly dependent on
> other processes, you can expect results like this; there is nothing
> wrong or bad with the Erlang SMP implementation because of that.

You are absolutely right that the benchmark is not representative of most real-world situations. It does, however, show up a weakness in the current Erlang SMP implementation. Other languages managed to run the same benchmark in a reasonable amount of time, so the benchmark itself cannot be totally invalid. Two orders of magnitude difference between running non-SMP and SMP is excessive under any circumstances.
I tried some further experiments and got surprising results. I'd like to hear your opinion on this.

I first hypothesized the following:

1. If the locking of the shared run queue is the problem, then running multiple VMs (nodes), each with a non-shared run queue (i.e. non-SMP), should make the program run faster.
2. If the broker is the bottleneck, then having multiple brokers should give a large improvement in performance.

First, I modified the chameneos benchmark to allow more flexibility:

* I changed it into a distributed system that lets you choose how many separate Erlang nodes to run on. The nodes have to be pre-started manually, unfortunately, as I haven't yet gotten around to starting them from within the program (though see the second sketch after this list).
* I increased the number of brokers to one broker per node.
* I changed the benchmark code to spawn processes evenly across the chosen nodes (roughly as in the first sketch after this list). I also arranged for there to be an even number of "chameneos" processes per node, because one of the scenarios I wanted to test is the case with no cross-node communication, where the broker on a given node only communicates with chameneos on the same node. This is to test how inter-node communication compares to intra-node communication.
* The benchmark now has an extra step that gathers the intermediate results from all the brokers to present the final result.
* I put lots of print statements in the code to show progress.
* Finally, I removed the initial test that runs with only 3 chameneos, because I only wanted to test the worst-case scenario (10 chameneos). The benchmark figures are therefore no longer directly comparable to the results on the alioth web site, and may only be compared to each other.

The code I modified is now a bit of a mess because (a) I am not a highly experienced Erlang programmer and (b) I was rushing to try a whole lot of different things, so I was hacking it, but it does the intended job.
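Roughly, the modified benchmark has this shape (a simplified sketch with assumed module and function names - broker:loop/2, chameneos:run/2 and pair_up/1 are stand-ins, not my exact code):

%% One broker per node; chameneos are dealt out two at a time so every node
%% gets an even number of them; intermediate counts are gathered at the end.
start(Meetings, NumNodes) ->
    Nodes   = [list_to_atom("cwork_" ++ integer_to_list(I) ++ "@ender")
               || I <- lists:seq(1, NumNodes)],
    Brokers = [spawn(Node, broker, loop, [Meetings div NumNodes, self()])
               || Node <- Nodes],
    Colors  = [blue, red, yellow, red, yellow, blue, red, yellow, red, blue],
    Pairs   = pair_up(Colors),
    lists:foreach(
      fun({PairNo, Pair}) ->
              Idx    = ((PairNo - 1) rem NumNodes) + 1,
              Node   = lists:nth(Idx, Nodes),
              Broker = lists:nth(Idx, Brokers),
              %% Intra-node case: each chameneos runs on the same node as the
              %% broker it talks to. The inter-node case simply assigns it a
              %% broker from a different node.
              [spawn(Node, chameneos, run, [Broker, Color]) || Color <- Pair]
      end,
      lists:zip(lists:seq(1, length(Pairs)), Pairs)),
    %% Gather the intermediate results from all brokers for the final total.
    [receive {Broker, Count} -> Count end || Broker <- Brokers].

pair_up([A, B | Rest]) -> [[A, B] | pair_up(Rest)];
pair_up([])            -> [].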
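On pre-starting the nodes: something like the following might work to start cwork_1 .. cwork_N from the controlling node instead of by hand, using the standard slave module. I have not tried it, so treat it as an assumption; it requires the controlling node to have been started with -sname and all nodes to share a cookie.

%% Untested sketch: start N local nodes named cwork_1 .. cwork_N.
%% ExtraArgs is passed on the command line, e.g. "-smp disable" or "".
start_local_nodes(N, ExtraArgs) ->
    {ok, Host} = inet:gethostname(),
    [begin
         Name = list_to_atom("cwork_" ++ integer_to_list(I)),
         {ok, Node} = slave:start(list_to_atom(Host), Name, ExtraArgs),
         Node
     end || I <- lists:seq(1, N)].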
I started four Erlang shells (nodes), all on the same 4-core Q6600 system. I first started the nodes with SMP enabled, then re-ran the tests with SMP disabled.

I decided to test the following scenarios (the parameters are [number of iterations, number of nodes]):
1. One Erlang SMP node, 1 broker, 10 chameneos - timer:tc(broker, start, [6000000, 1]) - 353.18 secs
2. Four Erlang SMP nodes, 4 brokers, 10 chameneos - timer:tc(broker, start, [6000000, 4]) - 12.42 secs
3. One Erlang non-SMP node, 1 broker, 10 chameneos - timer:tc(broker, start, [6000000, 1]) - 6.08 secs
4. Four Erlang non-SMP nodes, 4 brokers, 10 chameneos - timer:tc(broker, start, [6000000, 4]) - 1.55 secs

Please note that the above results are for brokers that are constrained to be on the *same* node as the chameneos with which they are communicating. When this constraint is removed, that is, when brokers communicate with chameneos on different nodes, it is much slower, even with SMP disabled. To quantify that:
Four Erlang non-SMP nodes, 4 brokers, 10 chameneos, *intra-node* communication - timer:tc(broker, start, [6000000, 4]) - 1.55 secs
Four Erlang non-SMP nodes, 4 brokers, 10 chameneos, *inter-node* communication - timer:tc(broker, start, [6000000, 4]) - 192.6 secs
Maybe this is to be expected, but if so, why? Should it really be 124 times slower (192.6 / 1.55 ≈ 124) to communicate between Erlang nodes *on the same physical system* than it is to communicate within the nodes only?
Sample output of intra-node communication:

Started broker <6364.44.0> on cwork_4@ender expecting 1500000 messages; collector pid = <0.39.0>
Started broker <6363.44.0> on cwork_3@ender expecting 1500000 messages; collector pid = <0.39.0>
Started broker <6362.44.0> on cwork_2@ender expecting 1500000 messages; collector pid = <0.39.0>
Started broker <0.64.0> on cwork_1@ender expecting 1500000 messages; collector pid = <0.39.0>
Started chameneos <0.65.0> for color blue on cwork_1@ender for broker <0.64.0>
Started chameneos <0.66.0> for color red on cwork_1@ender for broker <0.64.0>
Started chameneos <6362.45.0> for color yellow on cwork_2@ender for broker <6362.44.0>
Started chameneos <6362.46.0> for color red on cwork_2@ender for broker <6362.44.0>
Started chameneos <6363.45.0> for color yellow on cwork_3@ender for broker <6363.44.0>
Started chameneos <6363.46.0> for color blue on cwork_3@ender for broker <6363.44.0>
Started chameneos <6364.45.0> for color red on cwork_4@ender for broker <6364.44.0>
Started chameneos <6364.46.0> for color yellow on cwork_4@ender for broker <6364.44.0>
Started chameneos <0.67.0> for color red on cwork_1@ender for broker <0.64.0>
Started chameneos <0.68.0> for color blue on cwork_1@ender for broker <0.64.0>
Sample output showing inter-node communication:

Started broker <0.41.0> on cwork_1@ender expecting 1500000 messages; collector pid = <0.39.0>
Started broker <6346.44.0> on cwork_2@ender expecting 1500000 messages; collector pid = <0.39.0>
Started broker <6347.44.0> on cwork_3@ender expecting 1500000 messages; collector pid = <0.39.0>
Started broker <6348.44.0> on cwork_4@ender expecting 1500000 messages; collector pid = <0.39.0>
Started chameneos <0.53.0> for color blue on cwork_1@ender for broker <6348.44.0>
Started chameneos <0.54.0> for color red on cwork_1@ender for broker <6348.44.0>
Started chameneos <6346.45.0> for color yellow on cwork_2@ender for broker <6347.44.0>
Started chameneos <6346.49.0> for color red on cwork_2@ender for broker <6347.44.0>
Started chameneos <6347.45.0> for color yellow on cwork_3@ender for broker <6346.44.0>
Started chameneos <6347.49.0> for color blue on cwork_3@ender for broker <6346.44.0>
Started chameneos <6348.45.0> for color red on cwork_4@ender for broker <0.41.0>
Started chameneos <6348.46.0> for color yellow on cwork_4@ender for broker <0.41.0>
Started chameneos <0.55.0> for color red on cwork_1@ender for broker <6348.44.0>
Started chameneos <0.56.0> for color blue on cwork_1@ender for broker <6348.44.0>

Again, please excuse the ugly code.

To compile, just run erlc on broker.erl and chameneos.erl. No HiPE was used in these measurements.
To run, start nodes with the sname cwork_1, cwork_2, ..., e.g.

$ erl +K true -sname cwork_1 -smp disable
- OR -
$ erl +K true -sname cwork_1    # to enable SMP

Then in cwork_1, enter:

> timer:tc(broker, start, [6000000, 4]).
The 4 is the number of nodes expected (cwork_1 .. cwork_4).

Regards,
Edwin Fine