Performance of selective receive

Shawn Pearce spearce@REDACTED
Sun Nov 13 19:33:28 CET 2005


todd <todd@REDACTED> wrote:
> Having unconstrained senders turns out to be a gun that can fire
> at anytime.

This is always true of any system.  Send in work at a rate faster
than the server can process and you are sure to overload the server.

I remember working with a Java system which averaged about 2 seconds
of CPU computation (on a specific SPARC processor) to process a
single request from a client.

A customer wanted to deliver about 100 requests/second.  For some
strange reason the developers of this Java application thought we
could do this with 10 CPUs in a single Solaris box.  Hah!

Since the protocol was HTTP, what we found was the clients would
connect, issue the request, timeout in 5 seconds, disconnect and
immediately retry.  Meanwhile the HTTP server and Java servlet
container were queuing requests and processing them in the order they
were received.  Almost immediately the system entered a state in
which every client was timing out and automatically retrying,
which only compounded the problem.  At least until the OS ran out
of both physical memory and swap and started killing processes at
random to reclaim memory.  At that point the JVMs would be dead,
the web server would be dead, and clients started getting connection
refused messages.

*sigh*

The test case which started this thread is quite interesting.

I wouldn't have expected to lose 90% of a CPU to Erlang's mailbox
scanning during a selective receive.  It makes some sense after
reading the explanations posted in this thread, but it is still
surprising.
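To make the cost concrete, here is a minimal sketch (module and
names are mine, not from the test case in this thread) of why a
selective receive gets expensive: a receive with a pattern must walk
the mailbox from the front until a message matches, so a process
whose mailbox has filled with unmatched messages pays for the whole
backlog on every such receive.

```erlang
%% Sketch: do a selective receive for {tag, N} while unrelated
%% messages sit in front of it in the mailbox.  The receive below
%% scans past every {noise, _} message before it finds {tag, 42},
%% so its cost grows with the mailbox length, and the noise
%% messages are left queued for the next receive to scan again.
-module(sel_recv).
-export([run/1]).

run(Noise) ->
    Self = self(),
    %% Fill our own mailbox with messages the receive won't match.
    [Self ! {noise, I} || I <- lists:seq(1, Noise)],
    Self ! {tag, 42},
    receive
        {tag, N} -> N
    end.
```

Timing `sel_recv:run/1` with `timer:tc/3` for a small versus a large
`Noise` should show the scan time growing roughly linearly with the
number of non-matching messages, which is consistent with the 90%
CPU figure reported earlier in the thread.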

-- 
Shawn.


