[erlang-questions] Scheduling/messaging overhead

Antonio SJ Musumeci anarchocptlist@REDACTED
Mon Mar 26 04:47:14 CEST 2012

I've got the following gen_server.

-module(test).
-behaviour(gen_server).
-record(state, {}).

start() ->
    gen_server:start(?MODULE, [], []).

launch() ->
    [test:start() || _ <- lists:seq(1,10000)].

init([]) ->
    {ok, #state{}, timer:seconds(random:uniform(10))}.

handle_info(timeout, State) ->
    %% re-arm another random timeout of up to 10 seconds
    {noreply, State, timer:seconds(random:uniform(10))}.

I'm just starting beam and issuing test:launch(). With erl -smp
disable I'm getting 8-9% CPU utilization according to top on a
2.67 GHz Core i7 laptop. With erl -smp auto it spikes as high as 25%
for a while and then gradually settles into the low teens.

After changing it to a single process with a timeout of 10
milliseconds I'm getting around 8% both with and without SMP. Is
waking 100 times a second really that costly?
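Concretely, the single-process variant was roughly this (same
gen_server as above, just a fixed 10 ms timeout re-armed on every
wakeup):

```erlang
%% Rough single-process variant: one server waking every 10 ms
%% instead of 10k servers with randomized multi-second timeouts.
init([]) ->
    {ok, #state{}, 10}.              %% timeout in milliseconds

handle_info(timeout, State) ->
    {noreply, State, 10}.            %% re-arm the 10 ms timeout
```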

I'm not planning on doing the above, but in a platform I'm working on
the intent is to have possibly tens of thousands of processes
representing individual elements in external systems. I was trying to
see what the load would be if each process updated ETS once every few
seconds, and with a few tens of thousands of processes using the
timeout and then doing a few inserts it nearly pegged all the CPUs.
By contrast, a quick test of one process timing out every 100
milliseconds creates a beam load of 1%, and adding dozens of ETS
updates makes no notable impact: still 1% with handle_info making 50
individual ets:insert calls.
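The 50-insert handler was something like this (the table name
`entities` and the key/value shapes are placeholders, not the real
ones):

```erlang
%% Sketch of a timeout handler doing a burst of ETS writes.
%% Table name and key/value shapes here are placeholders.
handle_info(timeout, State) ->
    [ets:insert(entities, {{self(), N}, os:timestamp()})
     || N <- lists:seq(1, 50)],
    {noreply, State, 100}.           %% re-arm the 100 ms timeout
```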

Should I not be using timeouts like that to simulate random writes?
If the updates were instead triggered by messages from ports, would
the load be about the same? Or would it make more sense not to create
one process per logical entity, and instead let the true originator
of the data write it to ETS, or use a proxy for a class of objects?
The system was designed to be one process per entity, so as to be
pervasively concurrent and give the appearance of a 1-to-1 mapping,
but in practice most processes may rely largely on external triggers
to do anything and could therefore be consolidated.
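For the proxy idea, a minimal sketch of one possibility: a gen_server
per class of objects that accumulates casts and flushes them to ETS
in a single batched insert. All names here, the 1-second flush
interval, and the table setup are assumptions, not what the platform
actually does:

```erlang
%% Minimal write-proxy sketch: callers cast updates; the proxy
%% flushes them to ETS in one batched insert per interval.
-module(ets_proxy).
-behaviour(gen_server).
-export([start_link/1, write/3]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link(Table) ->
    gen_server:start_link(?MODULE, Table, []).

write(Proxy, Key, Value) ->
    gen_server:cast(Proxy, {write, Key, Value}).

init(Table) ->
    %% public named table so readers can do lookups directly
    ets:new(Table, [named_table, public, set]),
    erlang:send_after(1000, self(), flush),
    {ok, {Table, []}}.

handle_call(_Req, _From, State) ->
    {reply, ok, State}.

handle_cast({write, Key, Value}, {Table, Pending}) ->
    {noreply, {Table, [{Key, Value} | Pending]}}.

handle_info(flush, {Table, Pending}) ->
    ets:insert(Table, Pending),      %% one insert for the whole batch
    erlang:send_after(1000, self(), flush),
    {noreply, {Table, []}}.
```

A process representing an entity would then call
ets_proxy:write(Proxy, Key, Value) instead of touching ETS itself, so
thousands of writers share one wakeup per second rather than each
keeping its own timer.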
