<div dir="ltr"><div>Another important point to add:</div><div><br></div><div>Erlang doesn't make maximal usage of the CPU cores it is given. The system usually values low latency operation over throughput, so it has to forgo some raw throughput in order to achieve this goal. To be precise, there has to be a check in each loop in the program to make sure preemption can happen of a process which has run for too long on a CPU core. This check is a function call, because loops are implemented as tail-calling functions.</div><div><br></div><div>In turn, you cannot maximize the CPU doing productive work as these checks will cost something.</div><div><br></div><div>The tradeoff is also made in e.g., Go, and they have recently added an option to preempt loops as well. However, people are somewhat reluctant of that option since it hurts their throughput.</div><div><br></div><div>These tradeoff turned out to be valuable in the original setting of the Erlang language, but it also works for modern distributed systems.<br></div></div><br><div class="gmail_quote"><div dir="ltr">On Wed, Jun 13, 2018 at 7:39 AM Vance Shipley <<a href="mailto:vances@motivity.ca">vances@motivity.ca</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto"><div>Joe,<div dir="auto"><br></div><div dir="auto">The answer is multiple schedulers.</div><div dir="auto"><br></div><div dir="auto">In the days before the SMP enabled Erlang emulator (VM) we would need to run multiple nodes (VM) to take advantage of multiple cores. Now the default mode for the VM is to run as many schedulers as there are cores available. 
This is also highly configurable.</div><div dir="auto"><br></div><div dir="auto">Although Erlang's distribution allows processes to communicate transparently across nodes, there is a great performance advantage to intra-node messaging, as messages can be passed by reference, avoiding serialization into the external term format.</div><div dir="auto"><br></div><div dir="auto">It's interesting though how everything goes in cycles. Five years ago I was keeping the core counts very high, and the node counts very low, to get great performance. Now with cloud native we run a node in a container and communicate with other (microservice) nodes using either distribution or external protocols (e.g. REST), so the overheads are again much higher. But the old way was vertical scaling and this is horizontal scaling: we pay a tax, but we go from dozens of cores to hundreds or thousands.</div></div></div><div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr">On Wed, Jun 13, 2018, 08:46 joe mcguckin <<a href="mailto:joe@via.net" target="_blank">joe@via.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
If Erlang is one large Unix process with hundreds or thousands of its own processes internally, how does Erlang make maximum use of a multi-core CPU?<br>
<br>
Wouldn’t Erlang be scheduled and run on a single core?<br>
</blockquote></div></div></div>
_______________________________________________<br>
erlang-questions mailing list<br>
<a href="mailto:erlang-questions@erlang.org" target="_blank">erlang-questions@erlang.org</a><br>
<a href="http://erlang.org/mailman/listinfo/erlang-questions" rel="noreferrer" target="_blank">http://erlang.org/mailman/listinfo/erlang-questions</a><br>
</blockquote></div>
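To make the preemption point at the top concrete, here is a toy sketch in Go (since Go came up in the thread) of budget-based preemptive scheduling. This is not how the BEAM is actually implemented; the budget value and all names are made up for illustration. The idea is only this: each iteration of a process's loop pays one unit of its "reduction" budget via a check at the call site, and a process whose budget is exhausted is preempted and requeued, bounding how long any one process can hog a core.

```go
package main

import "fmt"

// budget is the number of loop iterations ("reductions") a process may
// run before it is preempted. The value 5 is arbitrary, chosen small so
// the preemption is visible in the output.
const budget = 5

// proc is a toy process: a name and a remaining amount of loop work.
type proc struct {
	name string
	n    int // remaining loop iterations of "work"
}

// step performs one loop iteration. The budget decrement and check here
// stand in for the check Erlang compiles into every (tail) call.
func (p *proc) step(reds *int) bool {
	*reds--
	p.n--
	// Keep running only while both budget and work remain.
	return *reds > 0 && p.n > 0
}

func main() {
	// A simple run queue of two processes with different workloads.
	queue := []*proc{{"a", 7}, {"b", 3}}
	for len(queue) > 0 {
		p := queue[0]
		queue = queue[1:]
		reds := budget // each time slice grants a fresh budget
		for p.step(&reds) {
		}
		if p.n > 0 {
			// Preempted mid-loop: requeue at the back of the run queue.
			fmt.Printf("%s preempted with %d iterations left\n", p.name, p.n)
			queue = append(queue, p)
		} else {
			fmt.Printf("%s finished\n", p.name)
		}
	}
}
```

Running it shows process "a" being preempted after its budget so that "b" gets a turn, which is the latency-over-throughput choice described above: the per-iteration check costs cycles, but no process can monopolize the scheduler.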