[erlang-questions] trouble with erlang or erlang is a ghetto

Ulf Wiger ulf.wiger@REDACTED
Thu Jul 28 12:33:39 CEST 2011


On 28 Jul 2011, at 04:23, Jon Watte wrote:

> A rack in each city? Is the Erlang kernel getting more latency tolerant?

I assume you are referring to the occasional "node not responding" issues? As far as I know, the kernel doesn't have issues with latency.

The problems with Distributed Erlang are related to a heavy-handed backpressure solution, where processes trying to send to the dist_port are simply suspended if the output queue exceeds a given threshold. When the queue falls under the threshold, all suspended processes are resumed. Since the algorithm doesn't differentiate between processes, this fate can befall the net ticker as well.

This *has* been improved a bit in the latest release, and I believe more improvements are forthcoming. Simply increasing the thresholds (currently not configurable) should mostly eliminate the problem.
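
For what it's worth, here is a minimal sketch of how one can observe the problem from inside a node, using erlang:system_monitor/2 with the busy_dist_port option (module and function names are just examples):

-module(dist_watch).
-export([start/0]).

%% Ask the VM to notify us whenever a process is suspended
%% while trying to send to a busy distribution port.
start() ->
    Watcher = spawn(fun loop/0),
    erlang:system_monitor(Watcher, [busy_dist_port]),
    Watcher.

loop() ->
    receive
        {monitor, SuspendedPid, busy_dist_port, Port} ->
            error_logger:info_msg(
              "~p suspended on busy dist port ~p~n",
              [SuspendedPid, Port]),
            loop()
    end.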

> So, the *current* best-value hardware looks something like:
> 32 GB of RAM
> 24 hardware threads (in 2-way NUMA, btw -- does BEAM pay attention to memory affinity?)
> 240 GB SSD, 1 or 2 (RAID-1 for redundancy)
> Probably 10 Gbps networking

BEAM is starting to take NUMA into account, for example by allowing you to control how schedulers are bound to cores. See e.g. 

http://www.erlang.org/doc/man/erl.html (search for NUMA)
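
A minimal sketch (the +sbt value is just one of the available bind types; the system_info items are from the erlang man page):

  %% start the emulator with schedulers bound to cores:
  %%   erl +sbt db        (db = "default bind")

  %% then inspect what the VM detected and decided:
  1> erlang:system_info(cpu_topology).
  2> erlang:system_info(scheduler_bind_type).
  3> erlang:system_info(scheduler_bindings).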

I believe that the current activities are mainly laying the groundwork for some more powerful optimizations, e.g. delayed deallocation, but R14B actually included quite a few improvements already, esp. in regard to locking.

http://www.erlang.org/download/otp_src_R14B.readme

Still, microbenchmarks have indicated that memory allocation (not least GC meta-data) locking issues mainly start affecting performance somewhere beyond 40 cores. The question is how much this really affects applications on today's "usual" hardware.

One problem is that it's hard to do detailed profiling on complex real-world applications. The issues limiting scalability might well be wholly unrelated to core VM aspects such as GC, scheduling and message passing. In the first SMP experiments with Ericsson's Telephony Gateway Controller, the big bottleneck was the big lock protecting the ports and linked-in drivers.
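
One thing that can help is the lock counting tool (lcnt) in the tools application, provided you can run an emulator built with lock counting enabled (e.g. configured with --enable-lock-counter). A rough sketch:

  lcnt:start(),
  %% ... run the load you want to study ...
  lcnt:collect(),      %% fetch lock statistics from the runtime
  lcnt:conflicts(),    %% show the locks with the most collisions
  lcnt:stop().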


> Next year's best-value hardware will probably look something like:
> 64 GB of RAM
> 40 hardware threads (still with 2-way NUMA)
> 240 GB SSD, 1 or 2 (RAID-1 for redundancy) (it will be cheaper than this year, but still the "sweet spot" unless you're building RAID 6 volumes or something)
> Definitely 10 Gbps networking


Yes, but one thing I learned while at Ericsson was that NEBS-compliant ATCA processor boards don't exactly stay on the leading edge of processor capacity. The top-end blade servers today seem to host up to two dual- or quad-core processors. This is not to say that everyone has to evolve at the same pace, but the main funding sources for Erlang/OTP tend to follow this path.

Now, Joe has publicly mentioned running an application on a 24-core architecture, for which the optimum setup at the time seemed to be 4 Erlang nodes - one for each physical CPU. The problems arise when the application isn't embarrassingly parallel, but requires processes to actually interact with each other, sometimes in fairly complex ways. Also, these many-core architectures are still finding their way in terms of memory access design, and each vendor has different ideas. The combination of bottlenecks in the VM, limitations of the hardware architecture, and complex interaction patterns can easily result in emergent behaviour, which can be quite dramatic.
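
As a sketch of what such a setup might look like on Linux (the core ranges are pure assumptions about the machine's topology, not Joe's actual configuration):

  # one node per physical CPU, pinned with the taskset utility
  taskset -c 0-5   erl -sname n1 -detached
  taskset -c 6-11  erl -sname n2 -detached
  taskset -c 12-17 erl -sname n3 -detached
  taskset -c 18-23 erl -sname n4 -detached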

My own take on this is that it's something we will have to learn to live with, and I started developing Jobs, a load control framework (http://github.com/esl/jobs), to allow for "traffic regulation" of Erlang-based systems, similar to how one achieves quality of service on TCP-based networks (another messaging system).
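
To give a flavour of it (a minimal sketch; the queue name, rate and the do_handle/1 helper are made up for the example):

setup() ->
    %% a queue admitting at most 100 jobs per second
    jobs:add_queue(incoming_calls, [{standard_rate, 100}]).

handle(Req) ->
    %% callers are queued until the regulator admits them
    jobs:run(incoming_calls, fun() -> do_handle(Req) end).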

The key, in my experience, is not usually to go as fast as possible, but to deliver reliable and predictable performance that is good enough for the problem at hand. As many have pointed out through the years, Erlang was never about delivering maximum performance.

BR,
Ulf W

Ulf Wiger, CTO, Erlang Solutions, Ltd.
http://erlang-solutions.com


