[erlang-questions] How to handle a massive amount of UDP packets?

Chandru <>
Tue Apr 17 20:43:27 CEST 2012


John,

When using the {active, once} option, how much processing was the receiving
process doing with the packet before setting {active, once} again? I
personally wouldn't use {active, true} in a production environment because
of its potential to clog up the system.

In our case, the receiving process does just this.

handle_info({udp, Socket, _IP, _RemotePortNo, _Packet} = Req,
            #state{stats_instance = _StatsInstance,
                   port = _InPortNo,
                   overload = Overload,
                   type = Type} = State) ->
    %% Re-arm the socket immediately so the next datagram is delivered.
    inet:setopts(Socket, [{active, once}]),
    case check_overload(Type, Overload) of
        allow ->
            spawn(?MODULE, handle_udp_packet, [Req, State]),
            {noreply, State};
        {allow, O2} ->
            spawn(?MODULE, handle_udp_packet, [Req, State]),
            {noreply, State#state{overload = O2}};
        {deny, O2} ->
            %% Over the threshold: reject the packet instead of processing it.
            spawn(?MODULE, reject_udp_packet, [Req, State]),
            {noreply, State#state{overload = O2}}
    end;
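For anyone wondering what check_overload/2 might look like: the thread only shows it being called, not its body. One plausible shape is a token bucket refilled once per second; the record, field names, and the 20000/s limit below are illustrative assumptions, not our actual implementation.

```erlang
%% Illustrative sketch only: a per-second token bucket. The record name,
%% its fields, and the limit are assumptions; the real check_overload/2
%% from our server is not shown in this thread.
-record(overload, {limit = 20000,   % max packets accepted per second
                   tokens = 20000,  % tokens left in the current window
                   window = 0}).    % the second this window was opened

check_overload(_Type, #overload{limit = Limit,
                                tokens = Tokens,
                                window = Window} = O) ->
    Now = erlang:monotonic_time(second),
    if
        Now =/= Window ->
            %% A new one-second window has started: refill the bucket.
            {allow, O#overload{tokens = Limit - 1, window = Now}};
        Tokens > 0 ->
            {allow, O#overload{tokens = Tokens - 1}};
        true ->
            {deny, O}
    end.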

cheers
Chandru

On 17 April 2012 17:00, John-Paul Bader <> wrote:

> Quick Update:
>
>
> In our quest to find the potential bottlenecks we started benchmarking
> {active, false} vs {active, once} vs {active, true}.
>
> We used a rate-limited client and measured how many packets were actually
> handled on the server. Note that when we say 40000 packets/s, the client
> would send a burst of 40000 packets and then sleep for the remainder of
> that second.
>
> {active, once}  was the worst, losing up to 50% of packets
> {active, false} was much better, but still lost up to 25% of packets
> {active, true}  was the clear winner, with no packet loss at 100000
> packets per second. With two clients at 100k/s each it started losing
> packets, but the loss was still reasonable given the volume.
>
> Also, the VM stayed below 100% CPU and did not increase its memory
> consumption, staying stable below 23 MB.
>
> The server is using {active, true} and passes each message immediately to
> another process, which does the actual work concurrently.
>
> Also, we ran these benchmarks on the same machine and sent / received all
> packets via the loopback interface. The next benchmarks could be run from
> different machines and also with a more sophisticated client that doesn't
> send the packets in bursts to see if that changes anything.
>
> During the tests we played with the read_packets and recbuf settings, but
> they appeared to have little or no effect. We will continue to experiment
> with them.
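> For anyone unfamiliar with these options: read_packets and recbuf are
> passed to gen_udp:open/2 when the socket is created. A minimal sketch
> (the port number and the values are illustrative, not the benchmark's
> actual settings):

```erlang
%% Illustrative only: opening a UDP socket with tuned read_packets and
%% recbuf. Port 6969 and the values shown are example settings.
{ok, Socket} = gen_udp:open(6969,
                            [binary,
                             {active, true},
                             {read_packets, 100},   % datagrams read per poll
                             {recbuf, 1024 * 1024}  % kernel receive buffer
                            ]).
```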
>
> ~ John
>
> Chandru wrote:
>
>> Hi John,
>>
>> In the steady state, we have about 2000/sec, but in error situations,
>> we've had peaks of up to 20000 packets / sec.
>>
>> cheers
>> Chandru
>>
>> On 16 April 2012 10:31, John-Paul Bader <> wrote:
>>
>>    Hey Chandru,
>>
>>    how many packets per second did you have to deal with and how big
>>    are they? Just to have something to compare to.
>>
>>    ~ John
>>
>>    Chandru wrote:
>>
>>        Hi John,
>>
>>        Our RADIUS server which handles our data network is written in
>>        Erlang.
>>        We've experimented with various values of recbuf and read_packets
>>        options for the UDP socket. We also use {active, once}. The
>>        receiving
>>        process receives a packet and spawns a new process to handle it.
>>        That is
>>        all it does. The spawned process then executes the rest of the
>>        business
>>        logic.
>>
>>        That won't be your problem. The problem will be to make sure
>>        your system
>>        is stable while handling all those packets. We use overload
>>        control at
>>        the receiver. You have to pretty much look at the entire
>>        execution path
>>        for each packet and ensure there are no bottlenecks. At that kind
>> of
>>        load, every little bottleneck shows up sooner or later.
>>
>>        cheers
>>        Chandru
>>
>>        On 15 April 2012 19:08, John-Paul Bader <> wrote:
>>
>>            Dear list,
>>
>>
>>            I'm currently writing a bittorrent tracker in Erlang. While
>>        a naive implementation of the protocol is quite easy, there are
>>        some performance-related challenges where I could use some help.
>>
>>            In the first test run as a replacement for a very popular
>>        tracker,
>>            my erlang tracker got about 40k requests per second.
>>
>>            My initial approach was to initialize the socket in one
>>        process with {active, once}, handle the message in handle_info
>>        with minimal effort, and pass the data asynchronously to a
>>        freshly spawned worker process which responds to the client.
>>        After spawning the process I set the socket back to
>>        {active, once}.
>>
>>            Now when I switched the Erlang tracker live, the Erlang VM
>>        was topping out at 100% CPU load. My guess is that the process
>>        handling the UDP packets from the socket could not keep up.
>>        Since I'm still quite new to the world of Erlang, I'd like to
>>        know if there are best practices / patterns for handling this
>>        massive amount of packets.
>>
>>            For example, might using the socket in {active, once} be
>>        too slow? Also, the response to a client needs to come from the
>>        same port the request came in on. Is it a problem to use the
>>        same socket for that? Should I pre-spawn a couple of thousand
>>        workers and dispatch the data from the socket to them, rather
>>        than spawning one on each packet?
>>
>>            It would be really great if you could give some advice or
>>        point me in the right direction.
>>
>>            ~ John
>>            _______________________________________________
>>            erlang-questions mailing list
>>            erlang-questions@erlang.org
>>            http://erlang.org/mailman/listinfo/erlang-questions
>>
>>
>>
>>
>>