Efficiency of big return functions?
Oliver Korpilla
oliver.korpilla@REDACTED
Thu Aug 27 11:51:28 CEST 2020
Hello,
I have some data that's between 100K and 1M in size, depending on whether I use
the whole data set or just a part of it. Access is read-only.
So far we have kept a copy in each process for latency reasons, and
performance has been quite good. There can be quite a lot of processes
(1,000s), but we have some big machines to run them on...
So far we haven't considered ETS or Mnesia, because all these processes
would have to go through a single bottleneck in rather short order (they
are truly parallel and independent of each other, and we have lots of
cores to schedule them on) - or am I wrong? How well does that scale?
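For context, the ETS alternative I have in mind would look roughly like the
sketch below (the table name, the key shape, and the Objects list are invented
for illustration; I'm assuming {read_concurrency, true} is the relevant knob):

    %% Owner process: create and fill the table once (protected => only the
    %% owner writes, all other processes may read).
    init_table(Objects) ->                      %% Objects :: [{Key, Map}]
        ets:new(shared_data, [named_table, set, protected,
                              {read_concurrency, true}]),
        true = ets:insert(shared_data, Objects).

    %% Workers: read directly from the table, no message passing involved.
    lookup(Key) ->
        case ets:lookup(shared_data, Key) of
            [{_Key, Map}] -> Map;
            []            -> undefined
        end.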
That said, we have had good experiences with moving some static configuration
information into code. Performance is really good, but that data was
roughly in the shape of a keyed map or some smaller lists.
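To be concrete, by "moving it into code" I mean something like this sketch
(module name and keys are made up; the real data is of course larger):

    -module(static_config).
    -export([limits/0]).

    %% The whole return value is a compile-time literal.
    limits() ->
        #{max_sessions   => 512,
          retry_delay_ms => 250,
          regions        => [eu, us, apac]}.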
The data we're looking at now is big lists (potentially 1,000s of entries)
of medium-sized maps, or maybe a map serving as an index into these other maps.
My question is - how do I efficiently return a big static value (a list
of maps whose construction does not depend on any parameters) from a
function? Does the BEAM optimize this? Or is the value constructed each time
the function is called? And is there anything I can do to improve it?
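In other words, the kind of function I mean is roughly this (contents heavily
abbreviated, names invented):

    -module(big_table).
    -export([entries/0]).

    %% No arguments, nothing computed from inputs - just a big literal
    %% list of maps written out in full in the source.
    entries() ->
        [#{id => 1, name => <<"alpha">>, weight => 0.25},
         #{id => 2, name => <<"beta">>,  weight => 0.50},
         #{id => 3, name => <<"gamma">>, weight => 0.75}
         %% ...thousands more entries like these...
        ].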
Thank you!
Oliver