[erlang-questions] integrating inter-process protocol checkers

Vlad Dumitrescu vladdu55@REDACTED
Tue May 20 08:51:39 CEST 2014


Hi Rich,

On Mon, May 19, 2014 at 2:57 PM, Rich Neswold <rich.neswold@REDACTED> wrote:

> On Sat, May 17, 2014 at 10:33 AM, Vlad Dumitrescu <vladdu55@REDACTED>
> wrote:
> > I was thinking today (obvious from the number of mails to the list :-) )
> > and was considering what a great idea protocol checkers are, and wondered
> > why they aren't used and popularized more.
>
> Protocol checkers are wonderful.
>
> > One of the possible reasons is that in order to be able to use them
> > everywhere, one has to use a specific architecture: with middleman
> > processes in front of every server process. This is somewhat clunky,
> > introducing elements that are not related to the application domain.
> > Besides, it makes it difficult to add them to existing applications.
>
> A colleague and I have developed a protocol compiler which takes a
> protocol specification file and generates source code to
> marshal/unmarshal the messages. It's very similar to Google's protobuf
> idea and we seriously considered using theirs, but there were some
> requirements at Fermilab which it didn't meet, so we rolled our own.
> We support C++, Java, Objective-C and Erlang (which are all used in
> our Control System) and we also target Python and OCaml (which are
> experimental).
>
> We don't use middleware: each generator uses the target language's
> method of choice to prep an outgoing message. So, for example, the C++
> code will write to a stream; the Erlang code builds a binary (which
> can then be sent to a socket); etc.
>
> The generated code for each target language validates the messages and
> will only succeed if every required field is present and is of the
> correct type and in range. Before this tool was created, handling
> protocols was tedious and error-prone. Now adding features is a breeze
> and we regularly add or refactor protocol messages.
>
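
For the Erlang readers, a rough, hand-written sketch of the kind of
marshalling and validation code such a generator might emit could look
like the following. The module, message and field names are all invented
here for illustration; the real generated code is certainly different.

-module(msg_sketch).
-export([encode/1, decode/1]).

%% Hypothetical message: {set_temp, DeviceId, Celsius}.
-define(MSG_SET_TEMP, 1).

%% Encoding succeeds only if every field is present, of the right type
%% and within range.
encode({set_temp, DeviceId, Celsius})
  when is_integer(DeviceId), DeviceId >= 0, DeviceId < 16#10000,
       is_integer(Celsius), Celsius >= -50, Celsius =< 150 ->
    {ok, <<?MSG_SET_TEMP:8, DeviceId:16, Celsius:16/signed>>};
encode(_Other) ->
    {error, invalid_message}.

%% The receiving side re-checks the same constraints before handing the
%% decoded term to the application.
decode(<<?MSG_SET_TEMP:8, DeviceId:16, Celsius:16/signed>>)
  when Celsius >= -50, Celsius =< 150 ->
    {ok, {set_temp, DeviceId, Celsius}};
decode(_Bin) ->
    {error, invalid_message}.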

Generating the marshalling code from a protocol specification is very
useful too, but I think it falls short in some areas.

A marshaller is always needed, and if it is generated from a higher-level
description it is less prone to errors. However, it is local to its
endpoint of the connection and knows only about the messages it handles. In
order to make sense of what is happening in a system, one needs to gather
information from all these stubs and put it together, which is difficult to
do 'live'.

A proxy process knows about all messages going back and forth, as well as
their timing and order. It can even hold state and issue warnings or errors
right away, based on the whole conversation. A server process could have a
single proxy in front of it, handling all connections from clients, or
there could be one proxy per client connection, making it an implementation
of channels. The latter is probably more difficult to implement
transparently (if it is possible at all).
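
To illustrate the first variant, here is a minimal sketch of such a
checking proxy (again, all module, message and state names are made up):
it stands in front of the real server, forwards every message, and keeps
just enough state to flag a violation of a simple request/reply protocol
as soon as it happens.

-module(proxy_sketch).
-export([start/1]).

%% Spawn a proxy that clients talk to instead of the real server.
start(ServerPid) ->
    spawn(fun() -> loop(ServerPid, idle) end).

%% The state encodes the allowed conversation: a request must be answered
%% by a reply before the next request is accepted.
loop(ServerPid, idle) ->
    receive
        {request, ClientPid, _Payload} = Msg ->
            ServerPid ! Msg,
            loop(ServerPid, {waiting, ClientPid});
        Other ->
            error_logger:warning_msg("protocol violation while idle: ~p~n",
                                     [Other]),
            loop(ServerPid, idle)
    end;
loop(ServerPid, {waiting, ClientPid} = State) ->
    receive
        {reply, _Result} = Msg ->
            ClientPid ! Msg,
            loop(ServerPid, idle);
        Other ->
            error_logger:warning_msg("protocol violation in state ~p: ~p~n",
                                     [State, Other]),
            loop(ServerPid, State)
    end.

A per-client-connection variant would presumably just spawn one such
process per accepted connection.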

regards,
Vlad