controlling_process and TCP servers

Jay Nelson <>
Sun Mar 2 00:26:10 CET 2003


Sean Hinde wrote:

 > > This model is widely used but for a very nice example take a
 > > look at joe's
 > > recent web_server tutorial. http://www.sics.se/~joe

Chris Pressey wrote:

 > OK, I added connection capping to my version, the results can be seen at
 >  http://kallisti.mine.nu/projects/ce-2003.0228/src/ce_socket.erl
 > With it, the complexity *does* increase - I'm beginning to see why Joe's
 > looks the way it does.


I was just completing a tutorial for a TCP proxy server using
OTP gen_server when I asked the gen_tcp:controlling_process
question.  Now I see I'll have to do a little more reading and
thinking before I post it!  The subtleties are similar to analyzing
the opening moves of a chess game.  It turns out I implemented
exactly the same thing they did without knowing it.  I guess this
is a popular idiom that should be available to everyone.

 From what I understand so far, the issues involve: spawn race
conditions, message ordering, which process receives the socket
messages, and how many connections are active.

Joe's approach:
1) Spawn a cold start and exit when it returns.
2) Create a listen socket with {active, false} in the new process,
spawn an Accepter and then enter the server loop [cold start thread].
3) Meanwhile the Accepter sends a message to the server loop
when it gets a connect, sets {active, true} and transfers
control to the handler routine; the server spawns a new
Accepter if it hasn't reached the maximum count.
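A minimal sketch of that pattern (the names -- cold_start, accepter,
handler -- are mine, not Joe's actual code, and the max-count check
is elided):

```erlang
%% Joe's pattern: the Accepter that called accept/1 *is* the
%% controlling process, so it can simply become the handler.
start(Port) ->
    spawn(fun() -> cold_start(Port) end).

cold_start(Port) ->
    {ok, Listen} = gen_tcp:listen(Port, [binary, {packet, 0},
                                         {active, false},
                                         {reuseaddr, true}]),
    Server = self(),
    spawn_link(fun() -> accepter(Listen, Server) end),
    server_loop(Listen, []).

server_loop(Listen, Children) ->
    receive
        {connected, Pid} ->
            %% spawn a fresh Accepter (a real version would check
            %% the maximum connection count here)
            Server = self(),
            spawn_link(fun() -> accepter(Listen, Server) end),
            server_loop(Listen, [Pid | Children]);
        {children, From} ->
            From ! {children, Children},
            server_loop(Listen, Children)
    end.

accepter(Listen, Server) ->
    {ok, Socket} = gen_tcp:accept(Listen),
    Server ! {connected, self()},
    %% safe to go active: this process already owns the socket
    inet:setopts(Socket, [{active, true}]),
    handler(Socket).

handler(Socket) ->
    receive
        {tcp, Socket, Data} ->
            gen_tcp:send(Socket, Data),   %% e.g. echo
            handler(Socket);
        {tcp_closed, Socket} ->
            ok
    end.
```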

Joe avoids spawn race conditions by maintaining {active,
false} until the handler is ready to run, so the mailbox of
messages doesn't get confused between processes and
he doesn't need to call controlling_process.
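For contrast, when the socket *does* have to change owners, the
usual safe idiom (a sketch, not anyone's posted code) is to stay
passive until the transfer is done, so early data can't land in the
acceptor's mailbox:

```erlang
%% Safe ownership transfer: no {tcp, ...} messages can be
%% delivered until the new owner turns the socket active.
accept_and_hand_off(Listen) ->
    {ok, Socket} = gen_tcp:accept(Listen),   %% Listen opened {active, false}
    Pid = spawn(fun() -> handler(Socket) end),
    ok = gen_tcp:controlling_process(Socket, Pid),
    Pid ! socket_is_yours,                   %% now safe to go active
    accept_and_hand_off(Listen).

handler(Socket) ->
    receive socket_is_yours -> ok end,
    inet:setopts(Socket, [{active, true}]),
    loop(Socket).

loop(Socket) ->
    receive
        {tcp, Socket, Data} ->
            gen_tcp:send(Socket, Data),
            loop(Socket);
        {tcp_closed, Socket} ->
            ok
    end.
```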

All of the messages received by the server come from the
Accepter process, except for the query for the Children
list.  Each new connection is a separate process with separate
messages, so no ordering issues come up.

The server needs to stay up to maintain the children list, so
EXITs must be caught and handled properly.

Chris has a similar approach, the main differences being the
over-generation of an Accepter and the use of
gen_tcp:controlling_process.


I am taking the approach of using OTP and gen_server.
The main difference is that I can be less worried about
the server process going down because I will use a supervisor
to restart it when it happens, and rely on error logs to
tell me what is going wrong over time.  It does mean,
however, that I need to avoid keeping state in any process
I will allow to go down, and prevent those that need state
from going down.  The other difference is that where they
used erlang messages to notify when connections arrive,
the messages are hidden in my approach because the
gen:call methods implement the messaging.

I will have to add the children counting and reporting so
that it is comparable to Joe's.  I wasn't planning on worrying
about it in the beginning.

My approach is:
1) Spawn a linked gen_server that opens a Listen socket,
and then spawns an Accept.
2) When Accept receives a connection, it spawns a linked
Relay process that transfers the socket to the ProxyModule.
3) The Accept loops waiting for the next connection.
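In rough outline it looks like this (module and function names are
placeholders, not my final code; proxy_module stands in for
whatever ProxyModule the Relay picks):

```erlang
-module(tcp_proxy).
-behaviour(gen_server).
-export([start_link/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

start_link(Port) ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, Port, []).

init(Port) ->
    process_flag(trap_exit, true),
    {ok, Listen} = gen_tcp:listen(Port, [binary, {active, false},
                                         {reuseaddr, true}]),
    Accept = spawn_link(fun() -> accept_loop(Listen) end),
    {ok, {Listen, Accept}}.

accept_loop(Listen) ->
    {ok, Socket} = gen_tcp:accept(Listen),
    %% one linked Relay per connection; depending on the OTP
    %% version, a gen_tcp:controlling_process/2 transfer may
    %% also be needed before the Relay can recv
    spawn_link(fun() -> relay(Socket) end),
    accept_loop(Listen).

relay(Socket) ->
    %% the Relay hands the passive socket to a ProxyModule,
    %% which pulls data with gen_tcp:recv/2,3
    proxy_module:handle(Socket).

handle_call(_Request, _From, State) -> {reply, ok, State}.
handle_cast(_Msg, State) -> {noreply, State}.
handle_info({'EXIT', _Pid, _Reason}, State) -> {noreply, State};
handle_info(_Info, State) -> {noreply, State}.
terminate(_Reason, {Listen, _Accept}) -> gen_tcp:close(Listen).
code_change(_OldVsn, State, _Extra) -> {ok, State}.
```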

I used {active, false} so I wouldn't have to worry about
passing the controlling_process (although my next task
is to read the source code for gen_tcp:controlling_process),
and so that the details of parsing the stream can occur in
ProxyModule.  In reality I am going to give Relay enough
smarts to be a simple router to one of several ProxyModules,
but not enough knowledge to need to inspect the packets.
Each ProxyModule can be what Joe calls a middleman
process dealing with the Socket and translating to / from
erlang messages.
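Such a middleman might look like this sketch (message shapes like
{send, Data} are my own invention, chosen for illustration):

```erlang
%% A middleman in Joe's sense: owns the socket, turns TCP data
%% into erlang messages for a peer process, and vice versa.
middleman(Socket, Peer) ->
    inet:setopts(Socket, [{active, once}]),   %% flow control: one packet at a time
    receive
        {tcp, Socket, Data} ->
            Peer ! {tcp_data, self(), Data},
            middleman(Socket, Peer);
        {tcp_closed, Socket} ->
            Peer ! {tcp_closed, self()};
        {send, Data} ->
            gen_tcp:send(Socket, Data),
            middleman(Socket, Peer)
    end.
```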

One thing I am worried about is that I am not spawning a
new Accept process on every connection.  I am spawning
a new child, but looping on the same Accept process the
whole time.  This allows my Accept state to maintain
statistics without resorting to message passing, but the
main gen_server doesn't have access to the statistics. The
worry is that somehow starting a new process for each Accept
might avoid process corruption or garbage issues over
the course of a month of non-stop running.  It is a single
function running a tight loop, so I feel justified that I can
work out any problems.
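One way to keep the single long-lived Accept process and still let
the gen_server query the statistics (a sketch; relay/1 is a
placeholder) is to accept with a timeout and poll the mailbox
between attempts:

```erlang
%% Statistics ride along as an argument of the tail-recursive
%% loop; the 1-second accept timeout gives us a chance to
%% answer queries without extra processes.
accept_loop(Listen, Count) ->
    case gen_tcp:accept(Listen, 1000) of
        {ok, Socket} ->
            spawn_link(fun() -> relay(Socket) end),
            accept_loop(Listen, Count + 1);
        {error, timeout} ->
            receive
                {get_stats, From} ->
                    From ! {stats, Count}
            after 0 ->
                ok
            end,
            accept_loop(Listen, Count)
    end.
```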

The big issues I need to address are managing a large
number of connections without overburdening, allowing a
distributed network of servers to handle the computational
load, and maintaining both stateless HTTP and stateful
game connections while attempting to keep a single server
front (http://myserver:80/) rather than bouncing people
around to different servers.

I need to rework the text, add the max connection stuff,
and do some more testing before I post it.  Unfortunately,
home chores are taking priority.  I expect to have something
up this weekend, though.  I hope that someone will find it
useful; since it is so similar to everyone else's code it won't
offer much insight, but it will give an example of solving the
same problem using gen_server.

jay



More information about the erlang-questions mailing list