[erlang-questions] How to handle a massive amount of UDP packets?

Ulf Wiger ulf@REDACTED
Sun Apr 15 22:29:58 CEST 2012


It is not uncommon for Erlang-based apps to have better response times than C/C++-based apps. Many techniques for achieving high throughput do bad things to latency.

The dynamically spawned processes in the Diameter application don't cost much (unless you have them do heavy application-specific stuff), and they shouldn't load only one core. Most of the heavy lifting (encode/decode, retransmit logic etc.) is in the diameter_service module, and is (I think) done in the dedicated process handling a given port.

BR,
Ulf W

Ulf Wiger, Feuerlabs, Inc.
http://www.feuerlabs.com

On 15 Apr 2012, at 21:49, Kabir <nang2_2000@REDACTED> wrote:

> I am eager to know the answer to this question, too. Recently, I have been trying out Erlang's own Diameter stack, which works in a similar fashion, though it is mainly TCP-based. It spawns a new process to handle each request. At 3.6K requests per second, the Erlang VM is saturating one core, while our in-house Diameter stack (C/C++-based) barely uses 5-10%. However, the round-trip time is much shorter with the Erlang Diameter stack, which confuses me further.
> 
> I remember RabbitMQ uses a process worker pool instead of spawning new processes at will. 
> 
> Cheers
> ----
> Kabir
> 
> 
> --- On Mon, 4/16/12, John-Paul Bader <hukl@REDACTED> wrote:
> 
>> From: John-Paul Bader <hukl@REDACTED>
>> Subject: [erlang-questions] How to handle a massive amount of UDP packets?
>> To: erlang-questions@REDACTED
>> Date: Monday, April 16, 2012, 3:08 AM
>> Dear list,
>> 
>> 
>> I'm currently writing a BitTorrent tracker in Erlang. While
>> a naive implementation of the protocol is quite easy, there
>> are some performance-related challenges where I could use
>> some help.
>> 
>> In the first test run as a replacement for a very popular
>> tracker, my Erlang tracker got about 40k requests per
>> second.
>> 
>> My initial approach was to initialize the socket in one
>> process with {active, once}, handle the message in
>> handle_info with minimal effort, and pass the data
>> asynchronously to a freshly spawned worker process which
>> responds to the client. After spawning the worker I set
>> the socket back to {active, once}.
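>> 
>> Boiled down to a plain loop, that setup looks roughly like this
>> (the real code is a gen_server doing the same thing in handle_info;
>> the port number and build_response/1 are just placeholders here):
>> 
>>     init() ->
>>         {ok, Socket} = gen_udp:open(6969, [binary, {active, once}]),
>>         loop(Socket).
>> 
>>     loop(Socket) ->
>>         receive
>>             {udp, Socket, IP, Port, Packet} ->
>>                 %% do as little as possible here, hand off to a worker
>>                 spawn(fun() -> handle_packet(Socket, IP, Port, Packet) end),
>>                 inet:setopts(Socket, [{active, once}]),
>>                 loop(Socket)
>>         end.
>> 
>>     handle_packet(Socket, IP, Port, Packet) ->
>>         Reply = build_response(Packet),   %% the actual tracker logic
>>         gen_udp:send(Socket, IP, Port, Reply).
>> 
>>     build_response(_Packet) ->
>>         <<"stub">>.                       %% placeholder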
>> 
>> Now when I switched the Erlang tracker live, the Erlang VM
>> was topping out at 100% CPU load. My guess is that the process
>> handling the UDP packets from the socket could not keep up.
>> Since I'm still quite new to the world of Erlang, I'd like to
>> know whether there are best practices / patterns for handling
>> this massive amount of packets.
>> 
>> For example, might using the socket in {active, once} be too
>> slow? Also, the response to a client needs to come from
>> the same port the request came in on. Is it a problem
>> to use the same socket for that? Should I pre-spawn a couple
>> of thousand workers and dispatch the data from the socket to
>> them, rather than spawning a new one for each packet?
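>> 
>> To make that last question concrete, what I have in mind is an
>> untested sketch like the following (pool size and phash2-based
>> dispatch are just a first guess; build_response/1 is the same
>> placeholder as above):
>> 
>>     start_pool(Socket, N) ->
>>         list_to_tuple([spawn_link(fun() -> worker(Socket) end)
>>                        || _ <- lists:seq(1, N)]).
>> 
>>     %% called by the socket-owning process for every packet
>>     dispatch(Workers, IP, Port, Packet) ->
>>         I = erlang:phash2({IP, Port}, tuple_size(Workers)) + 1,
>>         element(I, Workers) ! {packet, IP, Port, Packet}.
>> 
>>     worker(Socket) ->
>>         receive
>>             {packet, IP, Port, Packet} ->
>>                 gen_udp:send(Socket, IP, Port, build_response(Packet))
>>         end,
>>         worker(Socket).
>> 
>> All workers here send on the single shared socket, which is why I'm
>> asking whether replying from the same socket in many processes is ok.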
>> 
>> It would be really great if you could give some advice or
>> point me in the right direction.
>> 
>> ~ John


