TCP stack throughput

Joe Armstrong (AL/EAB) joe.armstrong@REDACTED
Wed Jul 6 09:29:45 CEST 2005



Matthias Lang wrote:

> Mickael Remond writes:
>  > Joel Reymont wrote:
>  > > So this means 500 simultaneous connection requests on
>  > > Solaris and about [...]
>
> Your results could be interpreted as showing one _or more_ of the
> following:
> 
>    a) That your test client is the bottleneck
> 
>    b) That your test server is the bottleneck
> 
>    c) That the OS/tcp stack isn't really designed for the sort of
>       use you're testing.
> 
> My gut feeling says (c), mainly because different people are reporting
> fairly different behaviour from one OS to another. I've written a
> quick and dirty C server which does something similar enough to the
> erlang server to work with your client. You could experiment with it
> to try and eliminate hypothesis (b).
> 

     I think you might run into problems distinguishing between a) and b) here:
if you run the client and server on the *same* machine, then both the client
and server start failing at the same time once the machine is loaded.

     When I tested yaws against apache I ran the tests on a 16-node cluster: yaws
(or apache) on one node, and clients on each of the other 15 nodes. If the client
and server stacks have roughly equivalent breaking characteristics, then at the
point where the server breaks, each client should be running at only about 1/15th
of its own maximum capacity - so the rate measurements taken on the clients
should still be valid.
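
     The client side of such a test need not be complicated. Here is a minimal
sketch of the sort of rate-measuring client I mean - host, port and connection
count are placeholders, and a serious test would open connections concurrently
rather than back-to-back:

    -module(conn_rate).
    -export([run/3]).

    %% Open N connections back-to-back, time it, and report the rate.
    %% Run one of these on each client node and compare the rates.
    run(Host, Port, N) ->
        {Micros, Socks} =
            timer:tc(fun() -> [connect(Host, Port) || _ <- lists:seq(1, N)] end),
        [gen_tcp:close(S) || S <- Socks],
        Ms = max(Micros div 1000, 1),
        io:format("~p connections in ~p ms (~p conn/s)~n",
                  [N, Ms, N * 1000 div Ms]).

    connect(Host, Port) ->
        {ok, S} = gen_tcp:connect(Host, Port, [binary, {active, false}]),
        S.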

     As regards c), I have recently reviewed a paper on an implementation of TCP/IP
in Erlang (the paper has been accepted for the next SIGPLAN Erlang workshop). The
results were interesting - they showed that:

	1) A pure Erlang stack was much more "even" than conventional stacks.
	Conventional stacks seemed to favour a few connections and starve out the
	rest, so a few connections got very good bandwidth while the rest got very
	poor bandwidth. The Erlang stack was much fairer (one way to quantify this
	is sketched below).

	2) The Erlang stack ran at about 25% of the efficiency of the native stack.
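
     One way to make "fairer" concrete - this is my illustration, I don't know
which metric the paper itself used - is Jain's fairness index over the measured
per-connection throughputs:

    -module(fairness).
    -export([jain/1]).

    %% Jain's fairness index: (sum X)^2 / (N * sum X^2).
    %% 1.0 means perfectly even shares; values near 1/N mean one
    %% connection hogs nearly everything. All-zero input is undefined.
    jain(Throughputs = [_|_]) ->
        N     = length(Throughputs),
        Sum   = lists:sum(Throughputs),
        SumSq = lists:sum([X * X || X <- Throughputs]),
        (Sum * Sum) / (N * SumSq).

For four equal shares [100,100,100,100] the index is 1.0; for [370,10,10,10]
it drops to about 0.29 - the "a few connections get very good bandwidth, the
rest very poor" pattern.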

   The Erlang stack could also synchronize the state of each connection with a
second machine, so that the second machine could take over a live TCP connection.
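
   I do not know the details of how the paper implements this, but in Erlang the
natural shape is a per-connection process pushing state snapshots to a standby
process on another node. A sketch, with all names and the snapshot contents
invented for illustration:

    -module(conn_mirror).
    -export([standby/1, replicate/2]).

    %% Runs on the backup node: remember the latest snapshot from the
    %% primary, take over if the primary dies.
    standby(Primary) ->
        Ref = erlang:monitor(process, Primary),
        standby_loop(Ref, undefined).

    standby_loop(Ref, Snapshot) ->
        receive
            {snapshot, New} ->
                standby_loop(Ref, New);
            {'DOWN', Ref, process, _Pid, _Reason} ->
                take_over(Snapshot)
        end.

    %% Called by the primary after every TCP state transition.
    replicate(Standby, Snapshot) ->
        Standby ! {snapshot, Snapshot},
        ok.

    take_over(undefined) ->
        exit(no_state_to_resume);
    take_over(Snapshot) ->
        %% A real implementation would rebuild sequence numbers, window
        %% state etc. from Snapshot and start answering for the connection.
        io:format("taking over connection: ~p~n", [Snapshot]).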

   All of this seems to be saying that if you want to handle very large numbers
of very short-lived connections, use a pure Erlang stack - but if you want to
handle a small number of long-lived, high-bandwidth connections, use a
conventional stack.

   If this is true it would not surprise me, since this behaviour mirrors how
processes are handled in Erlang and in the OS: Erlang favours lots of small
light-weight processes, the OS a few heavy-weight processes.
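
   The analogy in code is the usual Erlang process-per-connection server, where
every socket gets its own cheap process (a minimal echo server for illustration;
the port number is arbitrary):

    -module(per_conn).
    -export([start/1]).

    start(Port) ->
        {ok, LSock} = gen_tcp:listen(Port, [binary, {active, false},
                                            {reuseaddr, true}]),
        accept_loop(LSock).

    accept_loop(LSock) ->
        {ok, Sock} = gen_tcp:accept(LSock),
        %% One light-weight process per connection - the bias towards
        %% lots of small processes mentioned above.
        Pid = spawn(fun() -> serve(Sock) end),
        ok = gen_tcp:controlling_process(Sock, Pid),
        accept_loop(LSock).

    serve(Sock) ->
        case gen_tcp:recv(Sock, 0) of
            {ok, Data}      -> gen_tcp:send(Sock, Data), serve(Sock);
            {error, closed} -> ok
        end.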

   /Joe

> The program works for me on linux 2.6.x. 
> 
> Matthias