[erlang-questions] Data locality and the Erlang Runtime

Vincent Siliakus zambal@REDACTED
Thu Dec 12 13:15:17 CET 2013


Thanks for the detailed and informative answer. In my day to day
programming I rarely encounter cases where thinking about memory layout
would make much sense. However, the referenced article triggered my
curiosity to what level BEAM's implementation could take advantage of this,
although I already suspected it would be hard, given Erlang's semantics.

Apart from binaries/bitstrings, the only thing I could think of that might
create a tightly packed data structure is a tuple of small integers,
assuming I am right that small integers are unboxed values and that tuples,
as opposed to lists, store their elements directly. But even if that is the
case, you still have to unpack those integers from the tuple to do anything
meaningful with them, which probably cancels out any advantage of having
them packed together in a tuple.

BTW, the referenced article did in fact mention that the described
techniques only work in a single-threaded environment, but it was sneakily
tucked away in a sidebar of the article ;) :

"There’s a key assumption here, though: one thread. If you are accessing
nearby data on multiple threads, it’s faster to have it on *different*
cache lines. If two threads try to use data on the same cache line, both
cores have to do some costly synchronization of their caches."


On Wed, Dec 11, 2013 at 4:03 PM, Jesper Louis Andersen <
jesper.louis.andersen@REDACTED> wrote:

>
> On Wed, Dec 11, 2013 at 11:34 AM, Vincent Siliakus <zambal@REDACTED>wrote:
>
>> Last night I was reading the following article:
>> http://gameprogrammingpatterns.com/data-locality.html. It gives a nice
>> overview about the importance of data locality for performance critical
>> code these days. So I started wondering how much of this applies to my
>> favourite runtime (I say runtime because I use Elixir more often than
>> Erlang lately).
>>
>
> It is hard to say exactly how a given piece of code will execute on a
> system. The general consensus in recent years has been that optimizing for
> data locality and limiting DRAM accesses speeds things up. This is mostly
> due to how caches work. Clock cycles today are almost "free". The Erlang
> BEAM VM is quite pointer-heavy in the sense that we usually don't pack data
> that tightly and prefer a tree-like term structure. The advantage of that
> approach is that it gives a better and easier path to handling persistence
> and immutability, which are cornerstones of writing large functional
> programs.
>
> Furthermore, you have relatively little control over the concrete memory
> layout of data in BEAM, which makes it hard to pack data tightly and to
> apply the kinds of tricks described in the article. Erlang's "structural
> zoo" is not that rich, and because the language is dynamic, you have
> limited representational control. If you want to exploit memory layout,
> there are better languages out there.
>
> However, the article is written for single-core imperative code. One
> advantage you have on the BEAM is that small process heaps tend to stay in
> cache. This makes their garbage collection and other operations faster and
> improves locality for the heap. The article also completely neglects the
> fact that modern systems have multiple cores. In such a system,
> immutability and copying can help drive up parallelism, which in turn
> means you can get more cores to do productive work. It is not as clear-cut
> as one might believe. Multiple cores change everything, especially with
> respect to data mutation, which becomes more expensive.
>
>
>