[erlang-questions] Kill process if message mailbox reaches a certain size (was discarding signals)

Banibrata Dutta <>
Thu Jun 16 06:06:58 CEST 2011


On Thu, Jun 16, 2011 at 8:55 AM, Bob Ippolito <> wrote:

> On Wed, Jun 15, 2011 at 8:11 PM, József Bérces
> <> wrote:
> > Thanks for all the thoughts and suggestions. If I got it right, there
> were two main branches:
> >
> > 1. Avoid the congestion situation
> > 2. Detect and kill/restart the problematic process(es)
> >
> > The problem with these approaches that the Erlang applications are not
> just playing with themselves but receive input from other nodes. Those nodes
> can be very numerous and uncontrollable.
> >
> > As an example, just let's take the mobile network where the traffic is
> generated by millions of subscribers using mobile devices from many vendors.
> In this case we (1) cannot control the volume of the traffic and (2) cannot
> make sure that all the devices follow the protocol.
> > So there can be situations when we cannot avoid congestion simply because
> the source of the traffic is beyond our reach.
>
> The Erlang distribution protocol is only suitable for connecting a
> relatively small number of trusted nodes on a LAN.
>

I think I get the gist of it, but could someone quantify how "small" a
"relatively small number" is here? Fifty? A few hundred? A couple of
thousand?
What is the largest 'Erlang cloud' (i.e. hosts running Erlang processes
communicating across nodes in a cluster) that has been seen?

> If you were to expertly implement such an application you would have
> some Erlang nodes speaking to these mobile devices, but with another
> protocol (probably over TCP), and then you would have as much control
> as you need over the other details. For example, you can avoid
> congestion by rate limiting or refusing to accept new connections.
> When the Erlang nodes speak to each other (with or without Erlang
> distribution), you also control that protocol and can avoid congestion
> there as well.
>
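Bob's suggestion of shedding load at the edge by refusing new connections
can be sketched in plain Erlang. This is only an illustration, not code
from the thread: the module name, the `Max` limit, and the `handle/1`
stub are all made up, and a real server would use OTP supervision rather
than a bare accept loop.

```erlang
-module(limited_acceptor).
-export([start/2]).

%% Listen on Port; allow at most Max concurrent connection handlers.
start(Port, Max) ->
    {ok, L} = gen_tcp:listen(Port, [binary, {active, false},
                                    {reuseaddr, true}]),
    loop(L, Max, 0).

loop(L, Max, N) when N >= Max ->
    %% At capacity: block until a handler reports completion instead
    %% of accepting more work. Pending clients queue in the kernel
    %% backlog (and are eventually refused), not in our mailboxes.
    receive done -> loop(L, Max, N - 1) end;
loop(L, Max, N) ->
    %% Drain any completion notices that arrived in the meantime.
    receive
        done -> loop(L, Max, N - 1)
    after 0 ->
        case gen_tcp:accept(L, 1000) of
            {ok, S} ->
                Parent = self(),
                Pid = spawn(fun() -> handle(S), Parent ! done end),
                ok = gen_tcp:controlling_process(S, Pid),
                loop(L, Max, N + 1);
            {error, timeout} ->
                loop(L, Max, N)
        end
    end.

handle(S) ->
    %% Application protocol (the non-distribution one Bob mentions)
    %% would go here; close when done.
    gen_tcp:close(S).
```

The point of the sketch is that the accept loop itself is the throttle:
congestion control happens before any per-connection process exists.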

In the telecom world such a situation is pretty common. However, consider
that even to discard a message that starts a new transaction (due to
congestion), if I need to determine things like priority, transaction ID,
or application-level session ID, I'd have to be able to decode that much
of the message in the rate-limiter process, which, I think we are saying,
will be written in C/C++ and communicate over IP or another protocol.
Duplicating the decode logic would, I guess, be unavoidable in most such
cases. Or has someone figured out a better behavioral pattern?
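On the thread's subject line itself (killing a process whose mailbox grows
too large): since the VM has no built-in cap on mailbox size, a common
workaround is a watchdog that polls `erlang:process_info/2` for
`message_queue_len` and kills the process past a threshold. A minimal
sketch, with the module name, poll interval, and limit chosen purely for
illustration:

```erlang
-module(mailbox_watchdog).
-export([watch/2]).

%% Poll Pid's mailbox length once a second; kill it if the length
%% exceeds Limit. A supervisor is then expected to restart it.
watch(Pid, Limit) ->
    case erlang:process_info(Pid, message_queue_len) of
        {message_queue_len, N} when N > Limit ->
            exit(Pid, kill);
        {message_queue_len, _N} ->
            timer:sleep(1000),
            watch(Pid, Limit);
        undefined ->
            %% Process is already dead; nothing to do.
            ok
    end.
```

Note the caveat implicit in the whole thread: this only treats the
symptom on the receiving node; it does nothing to slow down the
uncontrollable senders, which is why the earlier replies push the
congestion control out to the protocol edge.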