Performance of selective receive

Pascal Brisset pascal.brisset@REDACTED
Sun Nov 13 19:18:00 CET 2005


Sean Hinde writes:
 > Alternative proposal:
 > 
 > Never, but never use async messages into a blocking server from the  
 > outside world. The way to avoid this is to make the socket process  
 > (or whatever it is) make a synchronous call into the gen_server  
 > process.

Agreed, it's all about propagating flow-control end-to-end.
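
In code, the idea is roughly this (a sketch only: the module and
function names are made up, and the server is assumed to answer ok).
The socket owner reads one packet at a time in passive mode and
blocks in gen_server:call/2, so it can never read faster than the
server can keep up, and TCP flow control pushes back on the sender:

    %% Sketch: passive socket, synchronous hand-off to the server.
    socket_loop(Socket, Server) ->
        case gen_tcp:recv(Socket, 0) of
            {ok, Packet} ->
                %% Blocks until the server has dealt with the packet.
                ok = gen_server:call(Server, {packet, Packet}),
                socket_loop(Socket, Server);
            {error, closed} ->
                ok
        end.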

Note that if the "socket process" uses {active,true}, it might
itself be susceptible to the "snowball effect" - unless we are
talking about a well-behaved network protocol with end-to-end
flow control, in which case there is no need to worry at all.
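
If the protocol offers no such flow control, the usual compromise is
{active, once}: the socket delivers exactly one message and then goes
passive until it is re-armed, so a fast sender is absorbed by the TCP
window rather than by the Erlang mailbox.  A sketch, again with
made-up names:

    loop(Socket, Server) ->
        %% Re-arm for one message only; the socket is passive again
        %% until the next setopts call.
        ok = inet:setopts(Socket, [{active, once}]),
        receive
            {tcp, Socket, Data} ->
                ok = gen_server:call(Server, {data, Data}),
                loop(Socket, Server);
            {tcp_closed, Socket} ->
                ok
        end.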

Also, consider a scenario with not one client (or socket process),
but 1000.  Even if each client calls the server synchronously, the
server can still have as many as 1000 requests in its message queue.
That's enough to trigger a snowball effect.
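
Every selective receive the server performs while its mailbox is that
deep (for instance, a gen_server:call it makes to some other process)
has to scan past those queued messages, which is where the cost
explodes.  The queue depth is easy to check (ServerPid being the pid
of the server):

    %% How many messages are waiting in the server's mailbox?
    {message_queue_len, N} =
        erlang:process_info(ServerPid, message_queue_len).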


 > You then have to make the gen_server itself not block. The  
 > neatest way is to spawn a new process to make the blocking call to  
 > the backend, and use the gen_server:reply/2 mechanism to reply later.  
 > You could also use a pool of backend processes and reject if you hit  
 > "congestion".

Sometimes you just can't process requests asynchronously.
In our case, the server was a session manager which *had to*
read and update a local mnesia table of active sessions
before it could process the next message.
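
In outline (simplified, with made-up table and field names), every
request went through something like:

    -record(session, {user, started}).

    handle_call({login, User}, _From, State) ->
        %% The session table must be up to date before the next
        %% request is taken, so the work cannot be handed off to a
        %% worker process.
        F = fun() ->
                case mnesia:read(session, User) of
                    [] -> mnesia:write(#session{user = User,
                                                started = erlang:now()});
                    _  -> mnesia:abort(already_active)
                end
            end,
        Reply = case mnesia:transaction(F) of
                    {atomic, ok}      -> ok;
                    {aborted, Reason} -> {error, Reason}
                end,
        {reply, Reply, State}.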

As Ulf highlighted, there are blocking calls hidden everywhere.
There's nothing wrong with that, as long as your system is
dimensioned correctly.  The problem is that a 10 % CPU load can
suddenly turn into 100 % if a few thousand messages pile up in
the wrong message queue.

-- Pascal