<div dir="ltr">Joel,<br><br>I have mentioned this before on this list, but I'm getting the feeling that folks on this list must think it's a really stupid idea, or not worth looking into because I am new to Erlang or some other reason - maybe I say lots of dumb things - because I have had virtually no responses. Or maybe it is worth looking into, but could be a big can of worms. EIther way, there is something nagging at me that won't allow me to let this go until I know a bit more about it.<br>
Here's the stupid idea: run one +S 1 VM per core instead of a single +S N SMP VM.

You'd probably have to partition the load round-robin across the individual VMs, possibly using some front-end load-balancing hardware. This is why I keep harping on it: some time ago I put the system I am working on under heavy load to test the maximum possible throughput. There was no appreciable disk I/O. The kicker is that I did not see an even distribution of load across the 4 cores of my box. In fact, it looked as if one or maybe two cores were being used at 100% while the rest were idle. When I re-ran the test on a whim, using only one non-SMP (+S 1) node, I actually got better performance.
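To make the idea concrete, here is a rough sketch of what I have in mind. The node names and the worker:handle/1 call are made-up placeholders, not anything from a real system. You'd start one single-scheduler node per core, something like:

    erl -sname w1 +S 1 -detached
    erl -sname w2 +S 1 -detached
    erl -sname w3 +S 1 -detached
    erl -sname w4 +S 1 -detached

and then hand work out to them in a ring from a front-end process:

    %% Rough sketch only: round-robin dispatch to a fixed list of nodes.
    -module(rr_dispatch).
    -export([start/1, cast/1]).

    %% Nodes is e.g. ['w1@mothership', 'w2@mothership',
    %%                'w3@mothership', 'w4@mothership'].
    start(Nodes) ->
        register(?MODULE, spawn(fun() -> loop(Nodes) end)),
        ok.

    %% Hand a request to the next node in the ring.
    cast(Req) ->
        ?MODULE ! {work, Req},
        ok.

    loop([Node | Rest]) ->
        receive
            {work, Req} ->
                %% worker:handle/1 stands in for whatever does the real work
                spawn(Node, worker, handle, [Req]),
                loop(Rest ++ [Node])
        end.

In practice the balancing would more likely live in front-end hardware as I said, but the point is the same: each core gets its own emulator and its own run queue.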
This seemed counter-intuitive and against the "Erlang SMP scales linearly for CPU-intensive loads" idea. I have not done a lot of investigation into this because I have other fish to fry right now, but the folks over at LShift (RabbitMQ) - assuming I did not misunderstand them - wrote that they had seen similar behavior when running clustered Rabbit nodes (i.e. better performance from N single-CPU nodes than from N +S N nodes). However, they, like me, are not ready to come out and state this bluntly as a fact, because (I believe) they feel not enough investigation has been done to make a conclusive case.
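For what it's worth, when I get back to this I plan to at least sanity-check the loaded SMP node with the standard introspection calls before drawing any conclusions, along these lines:

    %% On the loaded +S N node:
    erlang:system_info(smp_support).  %% should be true
    erlang:system_info(schedulers).   %% should match the +S value / core count
    erlang:statistics(run_queue).     %% processes ready to run but not yet scheduled

If the run queue stays near zero while most of the cores sit idle, that would point at the load being serialized through one or two processes (a single gen_server, port or socket, say) rather than at the schedulers failing to spread runnable work - which would be my fault, not SMP's.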
It looks, though, like you are in a very good position to see whether this makes any difference, and perhaps to bring to light a hitherto-unknown flaw (or at least hitch) in the SMP implementation, which will benefit everyone if there is something there and it gets resolved.
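If you do try it, I'd think even a dumb CPU-bound loop, timed once on the single +S N node and once spread over N +S 1 nodes, would show whether there is a real difference. Something like this (the naive fib/1 is just there to burn CPU):

    -module(smp_probe).
    -export([run/2, fib/1]).

    %% Spawn NumWorkers CPU-bound workers, wait for all of them,
    %% and return the elapsed time in microseconds.
    run(NumWorkers, Arg) ->
        Parent = self(),
        Start = now(),
        Pids = [spawn(fun() -> fib(Arg), Parent ! {done, self()} end)
                || _ <- lists:seq(1, NumWorkers)],
        [receive {done, Pid} -> ok end || Pid <- Pids],
        timer:now_diff(now(), Start).

    %% Deliberately naive Fibonacci, purely to keep a scheduler busy.
    fib(N) when N < 2 -> N;
    fib(N) -> fib(N - 1) + fib(N - 2).

On the SMP node you'd call something like smp_probe:run(8, 30) directly; for the one-node-per-core case you could fire an rpc:async_call(Node, smp_probe, run, [1, 30]) at each node and rpc:yield/1 the results. If SMP really does scale linearly for CPU-bound work, the two should come out roughly even.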
What do you think? Is the newbie smoking something, or is there maybe something to it?

Regards,
Edwin Fine

On Sat, Sep 13, 2008 at 4:12 PM, Joel Reymont <joelr1@gmail.com> wrote:
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Foot in mouth. I forgot to supply the node name.<br>
<br>
'erl -sname 2 -remsh 1@mothership' works fine.<br>
<div class="Ih2E3d"><br>
On Sep 13, 2008, at 9:09 PM, Joel Reymont wrote:<br>
<br>
> I thought I'd try to connect to the node, find that test process and<br>
> kill it.<br>
><br>
> erl -remsh 1@mothership<br>
> Erlang (BEAM) emulator version 5.6.3 [source] [64-bit] [smp:8]<br>
> [async-threads:0] [kernel-poll:false]<br>
><br>
> *** ERROR: Shell process terminated! (^G to start new job) ***<br>
<br>
</div><div><div></div><div class="Wj3C7c">--<br>
<a href="http://wagerlabs.com" target="_blank">wagerlabs.com</a><br>