[erlang-questions] Experience with much memory

Max Lapshin max.lapshin@REDACTED
Sun Sep 29 05:12:49 CEST 2019


Your friends are oversimplifying things, of course.

When you speak about 200GB or 3TB of RAM, you need to discuss things a
bit deeper than "fast or not fast".  Here is the beginning of a long list
of issues to consider:


1) Is your software NUMA-aware?  Many people see strange performance
degradation when they upgrade from one CPU to two and double the system
memory: NUMA-unaware software starts migrating data between processors,
paying a serious penalty.

Erlang is NUMA-aware out of the box, but you can spoil things if you allow
too much data migration.
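
For example, you can ask the VM to bind schedulers to hardware threads, so
a scheduler (and the process heaps it allocates) tends to stay on one NUMA
node.  A minimal sketch; which binding policy to pick depends on your
workload:

    # Start the node with schedulers bound using the default bind type:
    erl +sbt db

    % From the Erlang shell, inspect what the VM detected and applied:
    1> erlang:system_info(cpu_topology).
    2> erlang:system_info(scheduler_bind_type).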



2) How big is your per-process memory?  If you allocate 200GB of small
integers into tuples, all of them in a single Erlang process, I suppose you
will not even manage to do it in a reasonable amount of time =)
If you do things correctly and all your 200GB live in ETS tables and
binaries, spread across different processes, then it will be ok.
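
To make that concrete, here is a minimal sketch (the module and table names
are made up for illustration).  Large binaries (over 64 bytes) are
reference-counted and live off-heap, so storing them in a public ETS table
lets many processes read them without copying the payload and without
bloating any single process heap:

    -module(big_store).
    -export([init/0, put_chunk/2, get_chunk/1]).

    %% One public table owns the data instead of one process heap.
    init() ->
        ets:new(chunks, [named_table, public, set,
                         {read_concurrency, true}]).

    %% Large binaries are stored by reference; the bytes themselves
    %% are not copied into the table.
    put_chunk(Key, Bin) when is_binary(Bin) ->
        ets:insert(chunks, {Key, Bin}).

    get_chunk(Key) ->
        case ets:lookup(chunks, Key) of
            [{_, Bin}] -> Bin;
            [] -> not_found
        end.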



3) How fast are memory allocation and deallocation?  If you just read a
200GB file into memory and read data from it, then it is really easy.
If you do complicated allocations/deallocations with complex links between
objects, then your memory allocator must be fast enough
to handle them across 200GB of RAM. Erlang is fast enough for this.
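
As an illustration of the easy case (a sketch; the file name is a
placeholder): one big read gives you a single reference-counted binary, and
slicing it produces sub-binaries that point into the same buffer, so no
further allocation or copying happens:

    %% One allocation for the whole file.
    {ok, Bin} = file:read_file("data.bin"),

    %% binary:part/3 returns a sub-binary that references the same
    %% underlying buffer instead of copying the bytes.
    First = binary:part(Bin, 0, 1024).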

A program written in C++ would have lots of hidden memory bugs that are
hard to find, because valgrinding 200GB is a funny task.


There are other questions to ask about whether handling 200GB is possible;
these three are just the ones I could remember.