[erlang-questions] Heavy duty UDP server performance

Chandru chandrashekhar.mullaparthi@REDACTED
Tue Feb 9 00:58:45 CET 2016


Hi,

I rewrote your client slightly and got better throughput than you are
getting. Tests were run on a 2.8 GHz Intel Core i7 running OS X.

https://github.com/cmullaparthi/udpstress
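
As in the original tester, the server handles each UDP socket in its own
gen_server and sends a small ack for every packet it receives (which is why
sent_pkts tracks recv_pkts in the log below). For anyone who doesn't want to
open the repo, here is a minimal sketch of that shape; the module name, ack
payload and socket options are illustrative, not lifted from the linked code:

-module(udp_port_server).
-behaviour(gen_server).

-export([start_link/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link(Port) ->
    gen_server:start_link(?MODULE, Port, []).

init(Port) ->
    %% binary packets, flow control via {active, N} so a flood cannot
    %% grow the owner's message queue without bound
    {ok, Socket} = gen_udp:open(Port, [binary, {active, 100}]),
    {ok, #{socket => Socket, recv_pkts => 0, recv_size => 0}}.

handle_call(_Request, _From, State) ->
    {reply, ok, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.

handle_info({udp, Socket, FromIP, FromPort, Packet},
            #{recv_pkts := N, recv_size := Sz} = State) ->
    %% send a tiny ack back so the client side has something to count
    ok = gen_udp:send(Socket, FromIP, FromPort, <<"ack">>),
    {noreply, State#{recv_pkts := N + 1, recv_size := Sz + byte_size(Packet)}};
handle_info({udp_passive, Socket}, State) ->
    %% the {active, N} budget is used up; re-arm the socket
    ok = inet:setopts(Socket, [{active, 100}]),
    {noreply, State};
handle_info(_Other, State) ->
    {noreply, State}.

A supervisor simply starts one of these per port, which is where the
12000-13000 listeners below come from.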

23:52:18.712 [notice] listening on udp 12000

...

23:52:18.780 [notice] listening on udp 12993
23:52:18.781 [notice] listening on udp 12994
23:52:18.781 [notice] listening on udp 12995
23:52:18.781 [notice] listening on udp 12996
23:52:18.781 [notice] listening on udp 12997
23:52:18.781 [notice] listening on udp 12998
23:52:18.781 [notice] listening on udp 12999
23:52:18.781 [notice] listening on udp 13000
23:52:28.718 [notice] 1454975548: recv_pkts: 0 recv_size: 0 sent_pkts: 0 sent_size: 0 recv: 0.000000 Mbit/s send: 0.000000 Mbit/s
23:52:38.724 [notice] 1454975558: recv_pkts: 0 recv_size: 0 sent_pkts: 0 sent_size: 0 recv: 0.000000 Mbit/s send: 0.000000 Mbit/s
23:52:48.725 [notice] 1454975568: recv_pkts: 0 recv_size: 0 sent_pkts: 0 sent_size: 0 recv: 0.000000 Mbit/s send: 0.000000 Mbit/s
23:52:58.728 [notice] 1454975578: recv_pkts: 679648 recv_size: 951507200 sent_pkts: 679648 sent_size: 3398240 recv: 761.205760 Mbit/s send: 2.718592 Mbit/s
23:53:08.729 [notice] 1454975588: recv_pkts: 652524 recv_size: 913533600 sent_pkts: 652524 sent_size: 3262620 recv: 730.826880 Mbit/s send: 2.610096 Mbit/s
23:53:18.730 [notice] 1454975598: recv_pkts: 638936 recv_size: 894510400 sent_pkts: 638936 sent_size: 3194680 recv: 715.608320 Mbit/s send: 2.555744 Mbit/s
23:53:28.733 [notice] 1454975608: recv_pkts: 618893 recv_size: 866450200 sent_pkts: 618893 sent_size: 3094465 recv: 693.160160 Mbit/s send: 2.475572 Mbit/s
23:53:38.735 [notice] 1454975618: recv_pkts: 620698 recv_size: 868977200 sent_pkts: 620698 sent_size: 3103490 recv: 695.181760 Mbit/s send: 2.482792 Mbit/s
23:53:48.736 [notice] 1454975628: recv_pkts: 610931 recv_size: 855303400 sent_pkts: 610931 sent_size: 3054655 recv: 684.242720 Mbit/s send: 2.443724 Mbit/s
23:53:58.738 [notice] 1454975638: recv_pkts: 623615 recv_size: 873061000 sent_pkts: 623615 sent_size: 3118075 recv: 698.448800 Mbit/s send: 2.494460 Mbit/s
23:54:08.739 [notice] 1454975648: recv_pkts: 629565 recv_size: 881391000 sent_pkts: 629565 sent_size: 3147825 recv: 705.112800 Mbit/s send: 2.518260 Mbit/s
23:54:18.740 [notice] 1454975658: recv_pkts: 624504 recv_size: 874305600 sent_pkts: 624504 sent_size: 3122520 recv: 699.444480 Mbit/s send: 2.498016 Mbit/s
23:54:28.741 [notice] 1454975668: recv_pkts: 625500 recv_size: 875700000 sent_pkts: 625500 sent_size: 3127500 recv: 700.560000 Mbit/s send: 2.502000 Mbit/s
23:54:38.742 [notice] 1454975678: recv_pkts: 615165 recv_size: 861231000 sent_pkts: 615165 sent_size: 3075825 recv: 688.984800 Mbit/s send: 2.460660 Mbit/s
23:54:48.743 [notice] 1454975688: recv_pkts: 620643 recv_size: 868900200 sent_pkts: 620643 sent_size: 3103215 recv: 695.120160 Mbit/s send: 2.482572 Mbit/s
23:54:58.744 [notice] 1454975698: recv_pkts: 623126 recv_size: 872376400 sent_pkts: 623126 sent_size: 3115630 recv: 697.901120 Mbit/s send: 2.492504 Mbit/s
23:55:08.746 [notice] 1454975708: recv_pkts: 630593 recv_size: 882830200 sent_pkts: 630593 sent_size: 3152965 recv: 706.264160 Mbit/s send: 2.522372 Mbit/s
23:55:18.747 [notice] 1454975718: recv_pkts: 623336 recv_size: 872670400 sent_pkts: 623336 sent_size: 3116680 recv: 698.136320 Mbit/s send: 2.493344 Mbit/s
23:55:28.749 [notice] 1454975728: recv_pkts: 611828 recv_size: 856559200 sent_pkts: 611828 sent_size: 3059140 recv: 685.247360 Mbit/s send: 2.447312 Mbit/s
23:55:38.750 [notice] 1454975738: recv_pkts: 626984 recv_size: 877777600 sent_pkts: 626984 sent_size: 3134920 recv: 702.222080 Mbit/s send: 2.507936 Mbit/s
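
For reference, the recv figure is just recv_size over the 10-second reporting
interval: 951507200 bytes * 8 / 10 / 1e6 = 761.205760 Mbit/s. Similarly,
951507200 / 679648 = 1400 bytes per packet, which matches the 1400-byte
packets you describe below; the tiny 5-byte acks are why the send figures are
so small.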


On 8 February 2016 at 14:05, Ameretat Reith <ameretat.reith@REDACTED>
wrote:

> I simplified the scenario and made a stress tester for this use case: handle
> each UDP socket in a gen_server and send a UDP packet every millisecond [1].
>
> It won't reach more than 280 Mbit/s on my Core 2 Duo system, even without
> sending anything to the wire; at that point the CPU is the bottleneck. I put
> a perf report in the `out` directory of the repository [2], and it shows
> that the time spent in process_main is still high.
>
> On our production servers with Xeon E3-1230 CPUs and low latency (0.20 ms
> between servers), I can fill a 1 Gbit/s link: send 1400-byte packets every
> 20 ms from 1800 ports to 1800 ports, and measure bandwidth by the packets
> received. I can transfer at 1 Gbit/s, but at that point CPU usage is above
> 50%. When I overload the system I see no packet drops in /proc/net/udp, but
> response time drops considerably. I think Erlang pulls packets from the
> kernel very quickly and the buffers are being overrun on the Erlang side,
> though I am not sure how to measure that. I disabled and enabled kernel
> poll; perf reports the same amount of time spent, just in different
> functions.
>
> 1: https://github.com/reith/udpstress
> 2:
> https://raw.githubusercontent.com/reith/udpstress/master/out/b67acb689f-perf.svg
>
> On Fri, Feb 5, 2016 at 12:17 PM, Max Lapshin <max.lapshin@REDACTED>
> wrote:
>
>> Well, I'm not going to argue about it, but I know that it is a serious
>> blocker for us: when flussonic consumes 50% of all cores just on capture
>> (unpacking MPEG-TS is another pain) while equivalent C code takes only
>> 10-15% for this task, customers complain.
>
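
On the Erlang-side buffer overruns you suspect: since /proc/net/udp shows no
drops, the backlog is more likely the message queue of the process that owns
the socket. A quick way to check that, plus the two socket options I would
try first ({read_packets, N} and {recbuf, Size}), is sketched below; the
module is hypothetical and the values are illustrative, not tuned:

-module(udp_debug).
-export([check_backlog/2]).

%% OwnerPid is the gen_server that owns Socket.
check_backlog(OwnerPid, Socket) ->
    %% a steadily growing queue means the owner cannot keep up with the
    %% rate at which the VM is pulling datagrams off the socket
    {message_queue_len, QLen} = erlang:process_info(OwnerPid, message_queue_len),
    %% larger kernel receive buffer; read more datagrams per poll wake-up
    ok = inet:setopts(Socket, [{recbuf, 1024 * 1024}, {read_packets, 100}]),
    {ok, Stats} = inet:getstat(Socket, [recv_cnt, recv_oct]),
    {QLen, Stats}.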