[erlang-questions] How to handle a massive amount of UDP packets?

Kenneth Lundin <>
Wed Apr 18 18:34:20 CEST 2012

{active, once} is the recommended way to implement a server for both UDP and
TCP. The use of {active, once} is superior to the other alternatives from a
programming standpoint: it is clean and robust. I don't recommend the
use of {active, true} or {active, false} just for performance reasons, because
that will cause other problems instead.
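As a sketch of that pattern (the module name, port handling, and echo-style reply are illustrative, not from this thread), a minimal {active, once} UDP server looks like this:

```erlang
%% Minimal sketch of a UDP server using {active, once}.
%% Module name and the reply logic are illustrative assumptions.
-module(udp_once_sketch).
-behaviour(gen_server).
-export([start_link/1, init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link(Port) ->
    gen_server:start_link(?MODULE, Port, []).

init(Port) ->
    %% Deliver exactly one datagram as a message, then pause the flow.
    {ok, Socket} = gen_udp:open(Port, [binary, {active, once}]),
    {ok, Socket}.

handle_info({udp, Socket, Ip, InPort, Packet}, Socket) ->
    %% Keep this callback cheap: hand the packet to a worker...
    spawn(fun() -> handle_packet(Socket, Ip, InPort, Packet) end),
    %% ...then re-arm the socket for the next datagram.
    ok = inet:setopts(Socket, [{active, once}]),
    {noreply, Socket}.

handle_call(_Req, _From, State) -> {reply, ok, State}.
handle_cast(_Msg, State) -> {noreply, State}.

handle_packet(Socket, Ip, InPort, Packet) ->
    %% Replies go out via the same socket, i.e. the same source port.
    gen_udp:send(Socket, Ip, InPort, Packet).
```

The key point is that the socket only delivers one message before it is re-armed with inet:setopts/2, which is what gives the flow control.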

We are currently working on improvements to {active, once} for TCP,
since it obviously seems to be slower than necessary. Most probably it is
the same with UDP, and we will look into that as well.

We don't think there needs to be any significant performance difference
between {active, once} and the other alternatives, and we hope to have a
solution confirming that soon (meaning R15B02 or 03).

Regards, Kenneth, Erlang/OTP, Ericsson
On 18 Apr 2012 06:10, "Kannan" <> wrote:

> Hi John,
> {active, true} is the winner in performance, but it lacks throttling at
> the OS level. If you can ensure that your server does not receive traffic
> beyond its benchmarked limits, it is the right choice. For this, you may
> want to use a rate-weighted load balancer in front of your application.
> This will also help you scale your system easily.
> Initializing a heavily loaded Mnesia, tight loops, selective receive on
> long message queues, and lots of parallel heavy processing are also
> common culprits of 100% CPU.
> I would write my own process, instead of using gen_server, for
> performance-critical applications.
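> A bare receive loop along those lines might look like this (a sketch;
> the echo-style hand-off is an illustrative assumption):

```erlang
%% Sketch: a plain process instead of a gen_server, avoiding the
%% behaviour's per-message callback overhead. Names are illustrative.
-module(raw_udp_sketch).
-export([start/1]).

start(Port) ->
    {ok, Socket} = gen_udp:open(Port, [binary, {active, true}]),
    loop(Socket).

loop(Socket) ->
    receive
        {udp, Socket, Ip, InPort, Packet} ->
            %% Hand off immediately so the loop drains the mailbox fast.
            spawn(fun() -> gen_udp:send(Socket, Ip, InPort, Packet) end),
            loop(Socket);
        stop ->
            gen_udp:close(Socket)
    end.
```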
> Kind Regards,
> Kannan.
> On Tue, Apr 17, 2012 at 9:30 PM, John-Paul Bader <> wrote:
>> Quick Update:
>> In our quest to find the potential bottlenecks we started benchmarking
>> {active, false} vs {active, once} vs {active, true}.
>> We used a rate limited client and measured how many packets were actually
>> handled on the server. Note that when we say 40000 packets/s the client
>> would send a burst of 40000 packets and then sleep the remaining time of
>> that very second.
>> {active, once}  was the worst, losing up to 50% of packets
>> {active, false} was much better, but still lost up to 25% of packets
>> {active, true}  was the clear winner, with no packet loss at 100,000
>> packets per second. When using two clients at 100k/s it started losing
>> packets, but the loss was still reasonable given the volume of packets.
>> Also, the VM stayed below 100% CPU and did not increase its memory
>> consumption, staying stable below 23 MB.
>> The server uses {active, true} and passes each message instantly to
>> another process, which does the actual work.
>> Also, we ran these benchmarks on the same machine and sent / received
>> all packets via the loopback interface. The next benchmarks could be run
>> from different machines, and also with a more sophisticated client that
>> doesn't send the packets in bursts, to see if that changes anything.
>> During the tests we played with the read_packets and recbuf settings,
>> but they appeared to have little or no effect; we will continue to
>> experiment with them.
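>> For reference, both options are set when the socket is opened; the port
>> and the values below are only illustrative starting points to tune from:

```erlang
%% Sketch: read_packets caps how many datagrams the emulator reads from
%% the socket per poll; recbuf sizes the kernel receive buffer. The port
%% number and values are illustrative examples, not recommendations.
{ok, Socket} = gen_udp:open(6969, [binary,
                                   {active, true},
                                   {read_packets, 100},
                                   {recbuf, 1 bsl 20}]).
```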
>> ~ John
>> Chandru wrote:
>>> Hi John,
>>> In the steady state, we have about 2000/sec, but in error situations,
>>> we've had peaks of up to 20000 packets / sec.
>>> cheers
>>> Chandru
>>> On 16 April 2012 10:31, John-Paul Bader <> wrote:
>>>    Hey Chandru,
>>>    how many packets per second did you have to deal with and how big
>>>    are they? Just to have something to compare to.
>>>    ~ John
>>>    Chandru wrote:
>>>        Hi John,
>>>        Our RADIUS server which handles our data network is written in
>>>        Erlang. We've experimented with various values of the recbuf and
>>>        read_packets options for the UDP socket. We also use
>>>        {active, once}. The receiving process receives a packet and
>>>        spawns a new process to handle it. That is all it does. The
>>>        spawned process then executes the rest of the business logic.
>>>        That won't be your problem. The problem will be to make sure
>>>        your system is stable while handling all those packets. We use
>>>        overload control at the receiver. You have to pretty much look
>>>        at the entire execution path for each packet and ensure there
>>>        are no bottlenecks. At that kind of load, every little
>>>        bottleneck shows up sooner or later.
>>>        cheers
>>>        Chandru
>>>        On 15 April 2012 19:08, John-Paul Bader <> wrote:
>>>            Dear list,
>>>            I'm currently writing a bittorrent tracker in Erlang. While
>>>            a naive implementation of the protocol is quite easy, there
>>>            are some performance-related challenges where I could use
>>>            some help.
>>>            In the first test run, as a replacement for a very popular
>>>            tracker, my Erlang tracker got about 40k requests per second.
>>>            My initial approach was to initialize the socket in one
>>>            process with {active, once}, handle the message in
>>>            handle_info with minimal effort, and pass the data
>>>            asynchronously to freshly spawned worker processes which
>>>            respond to the clients. After spawning the process I'm
>>>            setting the socket back to {active, once}.
>>>            Now when I switched the Erlang tracker live, the Erlang VM
>>>            was topping out at 100% CPU load. My guess is that the
>>>            process handling the UDP packets from the socket could not
>>>            keep up. Since I'm still quite new to the world of Erlang,
>>>            I'd like to know if there are some best practices / patterns
>>>            for handling this massive amount of packets. For example,
>>>            using the socket in {active, once} might be too slow?
>>>            Also, the response to the clients needs to come from the
>>>            same port as the request came in on. Is it a problem to use
>>>            the same socket for that? Should I pre-spawn a couple of
>>>            thousand workers and dispatch the data from the socket to
>>>            them, rather than spawning them on each packet?
>>>            It would be really great if you could give some advice or
>>>            point me in the right direction.
>>>            ~ John
>>>            _______________________________________________
>>>            erlang-questions mailing list
>>>            http://erlang.org/mailman/listinfo/erlang-questions
