gen_tcp under gen_server design

Andre Nathan <>
Fri Jul 30 16:58:17 CEST 2010


I'm building my first OTP application. It's a network server that deals
with a binary protocol. My question is about the design I came up with
to fit a gen_tcp server into the gen_server behaviour.

The idea is to have a server process (a gen_server) which listens on a
given port and spawns an acceptor process. This process is created with
proc_lib:spawn_link and the listening socket is passed to it with
gen_tcp:controlling_process; it doesn't implement any OTP behaviour.
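For concreteness, the listener part might look like the sketch below.
Module names, the state shape, and the acceptor entry point are my own
assumptions, not taken from the actual code:

```erlang
%% Sketch of the listening gen_server described above (names illustrative).
-module(listener).
-behaviour(gen_server).
-export([start_link/1, init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

start_link(Port) ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, Port, []).

init(Port) ->
    {ok, LSock} = gen_tcp:listen(Port, [binary, {active, false},
                                        {reuseaddr, true}]),
    %% The acceptor is a plain process (no OTP behaviour), linked to us.
    Acceptor = proc_lib:spawn_link(fun() -> acceptor:loop(LSock) end),
    ok = gen_tcp:controlling_process(LSock, Acceptor),
    {ok, #{lsock => LSock, acceptor => Acceptor}}.

handle_call(_Req, _From, State)  -> {reply, ok, State}.
handle_cast(_Msg, State)         -> {noreply, State}.
handle_info(_Info, State)        -> {noreply, State}.
terminate(_Reason, _State)       -> ok.
code_change(_OldVsn, State, _X)  -> {ok, State}.
```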

The acceptor process calls gen_tcp:accept, and when the call returns, a
new process responsible for handling the data is spawned. The client
socket is given to this process via gen_tcp:controlling_process, and the
acceptor calls gen_tcp:accept again waiting for new connections.
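The accept loop could then be a plain tail-recursive function along
these lines (packager:start/1 and packager:activate/1 are hypothetical
helpers; the explicit activate step assumes the new owner only sets the
socket active after ownership has been transferred, avoiding a race):

```erlang
%% Illustrative acceptor loop: accept, hand the client socket to a
%% fresh packager process, then go back to accepting.
-module(acceptor).
-export([loop/1]).

loop(LSock) ->
    case gen_tcp:accept(LSock) of
        {ok, Sock} ->
            {ok, Pid} = packager:start(Sock),          %% hypothetical
            ok = gen_tcp:controlling_process(Sock, Pid),
            ok = packager:activate(Pid),               %% now set {active, once}
            loop(LSock);
        {error, closed} ->
            ok                                         %% listener went away
    end.
```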

I call the process created by the acceptor the "packager" process. It's
a simple gen_fsm that reads binary data from the socket and, according
to the protocol, packages it into binaries of the appropriate size,
parses them, and generates the appropriate events on the worker process
that implements the protocol FSM. The packager receives TCP data in its
handle_info callback, which sets the socket back to {active, once} each
time it is called and then calls the function corresponding to the
current gen_fsm state, generating a "data" event.

Is this a reasonable design for this kind of server? Is there anything I
could do to improve its reliability and/or efficiency? Would it be
better to, for example, have multiple acceptor processes and let them do
the binary processing that the packager processes do in my case?
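A minimal sketch of that alternative, assuming several plain processes
may block in gen_tcp:accept/1 on the same listening socket and then
handle their accepted connection inline (handle_connection/1 is a
hypothetical stand-in for the packaging/parsing work):

```erlang
%% Spawn N acceptors that all wait on the same listen socket; each
%% accepted socket is owned by the acceptor that received it.
start_acceptors(LSock, N) ->
    [proc_lib:spawn_link(fun() -> accept_loop(LSock) end)
     || _ <- lists:seq(1, N)].

accept_loop(LSock) ->
    {ok, Sock} = gen_tcp:accept(LSock),
    handle_connection(Sock),    %% hypothetical: package/parse inline
    accept_loop(LSock).
```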

Thanks in advance,
