[erlang-questions] Heavy duty UDP server performance

Ameretat Reith <>
Mon Feb 8 15:05:03 CET 2016


I simplified the scenario and made a stress tester for this use case: handle
each UDP socket in a gen_server and send a UDP packet every millisecond [1].
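For reference, the per-socket sender can be sketched roughly like this (a minimal sketch; the module name and state layout are my own illustration, not necessarily what the repository uses — only the gen_server and gen_udp calls are standard OTP API):

```erlang
%% Minimal sketch: one gen_server owns one UDP socket and sends a
%% 1400-byte packet every millisecond. Names (udp_sender, the map
%% state) are illustrative, not taken from the udpstress repository.
-module(udp_sender).
-behaviour(gen_server).

-export([start_link/3]).
-export([init/1, handle_info/2, handle_call/3, handle_cast/2]).

start_link(LocalPort, PeerAddr, PeerPort) ->
    gen_server:start_link(?MODULE, {LocalPort, PeerAddr, PeerPort}, []).

init({LocalPort, PeerAddr, PeerPort}) ->
    {ok, Sock} = gen_udp:open(LocalPort, [binary, {active, true}]),
    erlang:send_after(1, self(), tick),
    {ok, #{sock => Sock, peer => {PeerAddr, PeerPort}}}.

handle_info(tick, #{sock := Sock, peer := {Addr, Port}} = State) ->
    ok = gen_udp:send(Sock, Addr, Port, <<0:1400/unit:8>>),
    erlang:send_after(1, self(), tick),
    {noreply, State};
handle_info({udp, _Sock, _Addr, _Port, _Packet}, State) ->
    %% a real tester would count received bytes here to measure bandwidth
    {noreply, State}.

handle_call(_Req, _From, State) -> {reply, ok, State}.
handle_cast(_Msg, State) -> {noreply, State}.
```

With 1800 such servers, each ticking every millisecond, most of the load ends up in the scheduler and the port driver rather than in the callback bodies themselves.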

It won't reach more than 280 Mbit/s on my Core 2 Duo system, even without
sending anything to the wire; at that point the CPU is the bottleneck.  I put
a perf report in the `out` directory of the repository [2], and it shows that
a lot of time is still spent in process_main.

On our production servers with Xeon E3-1230 CPUs and low latency (0.20 ms
between servers), I can fill a 1 Gbit link: send 1400-byte packets every
20 ms from 1800 ports to 1800 ports, and measure bandwidth by received
packets.  I can transfer at 1 Gbit/s, but at that point CPU usage is above
50%.  When I overload the system, /proc/net/udp shows no packet drops, but
response time degrades considerably.  I think Erlang pulls packets from the
kernel very quickly and the buffers get overrun on the Erlang side, but I'm
not sure how to measure that.  I tried disabling and enabling kernel poll;
perf reports the same amount of time spent, just in different functions.
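One thing worth checking is the emulator-side socket buffering: a gen_udp socket has both a kernel receive buffer ({recbuf, ...}) and a driver-side user-space buffer ({buffer, ...}), and {read_packets, N} controls how many packets the driver reads per poll.  A rough sketch of inspecting and raising them — the concrete sizes here are illustrative guesses, not tuned recommendations:

```erlang
%% Sketch: inspect and enlarge the buffers on an existing gen_udp
%% socket Sock. The sizes below are illustrative, not recommendations.
tune(Sock) ->
    {ok, Before} = inet:getopts(Sock, [recbuf, buffer, read_packets]),
    io:format("before: ~p~n", [Before]),
    ok = inet:setopts(Sock, [{recbuf, 1024 * 1024},  %% kernel SO_RCVBUF
                             {buffer, 1024 * 1024},  %% driver-side buffer
                             {read_packets, 100}]),  %% packets per poll
    {ok, After} = inet:getopts(Sock, [recbuf, buffer, read_packets]),
    io:format("after: ~p~n", [After]).
```

If the driver-side buffer is the one overflowing, /proc/net/udp would not show it, which would be consistent with what you observe.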

1: https://github.com/reith/udpstress
2:
https://raw.githubusercontent.com/reith/udpstress/master/out/b67acb689f-perf.svg

On Fri, Feb 5, 2016 at 12:17 PM, Max Lapshin <> wrote:

> Well, I'm not going to argue about it, but I know that it is a serious
> blocker for us: when flussonic is consuming 50% of all cores only on
> capturing (unpacking mpegts is another pain), while code in C takes only
> 10-15% for this task, customers complain.
>
>
>