<div dir="ltr">Hi,<div><br></div><div>I rewrote your client slightly and got better throughput than what you are getting. Tests were run on a 2.8 GHz Intel Core i7 running OS X.</div><div><br></div><div><a href="https://github.com/cmullaparthi/udpstress">https://github.com/cmullaparthi/udpstress</a><br></div><div><br></div><div><p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:52:18.712 [notice] listening on udp 12000</p><p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">...</p><p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:52:18.780 [notice] listening on udp 12993</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:52:18.781 [notice] listening on udp 12994</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:52:18.781 [notice] listening on udp 12995</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:52:18.781 [notice] listening on udp 12996</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:52:18.781 [notice] listening on udp 12997</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:52:18.781 [notice] listening on udp 12998</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:52:18.781 [notice] listening on udp 12999</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:52:18.781 [notice] listening on udp 13000</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:52:28.718 [notice] 1454975548: recv_pkts: 0 recv_size: 0 sent_pkts: 0 sent_size: 0 recv: 0.000000 Mbit/s send: 0.000000 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:52:38.724 [notice] 1454975558: recv_pkts: 0 recv_size: 0 sent_pkts: 0 sent_size: 0 recv: 0.000000 Mbit/s send: 0.000000 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:52:48.725 [notice] 1454975568: recv_pkts: 0 recv_size: 0 sent_pkts: 0 sent_size: 0 recv: 0.000000 Mbit/s send: 0.000000 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:52:58.728 [notice] 1454975578: recv_pkts: 679648 recv_size: 951507200 sent_pkts: 679648 sent_size: 3398240 recv: 761.205760 Mbit/s send: 2.718592 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:53:08.729 [notice] 1454975588: recv_pkts: 652524 recv_size: 913533600 sent_pkts: 652524 sent_size: 3262620 recv: 730.826880 Mbit/s send: 2.610096 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:53:18.730 [notice] 1454975598: recv_pkts: 638936 recv_size: 894510400 sent_pkts: 638936 sent_size: 3194680 recv: 715.608320 Mbit/s send: 2.555744 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:53:28.733 [notice] 1454975608: recv_pkts: 618893 recv_size: 866450200 sent_pkts: 618893 sent_size: 3094465 recv: 693.160160 Mbit/s send: 2.475572 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:53:38.735 [notice] 1454975618: recv_pkts: 620698 recv_size: 868977200 sent_pkts: 620698 sent_size: 3103490 recv: 695.181760 Mbit/s send: 2.482792 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:53:48.736 [notice] 1454975628: recv_pkts: 610931 recv_size: 855303400 sent_pkts: 610931 sent_size: 3054655 recv: 684.242720 Mbit/s send: 2.443724 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:53:58.738 [notice] 1454975638: recv_pkts: 623615 recv_size: 873061000 sent_pkts: 623615 sent_size: 3118075 recv: 698.448800 Mbit/s send: 2.494460 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:54:08.739 [notice] 1454975648: recv_pkts: 629565 recv_size: 881391000 sent_pkts: 629565 sent_size: 3147825 recv: 705.112800 Mbit/s send: 2.518260 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:54:18.740 [notice] 1454975658: recv_pkts: 624504 recv_size: 874305600 sent_pkts: 624504 sent_size: 3122520 recv: 699.444480 Mbit/s send: 2.498016 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:54:28.741 [notice] 1454975668: recv_pkts: 625500 recv_size: 875700000 sent_pkts: 625500 sent_size: 3127500 recv: 700.560000 Mbit/s send: 2.502000 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:54:38.742 [notice] 1454975678: recv_pkts: 615165 recv_size: 861231000 sent_pkts: 615165 sent_size: 3075825 recv: 688.984800 Mbit/s send: 2.460660 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:54:48.743 [notice] 1454975688: recv_pkts: 620643 recv_size: 868900200 sent_pkts: 620643 sent_size: 3103215 recv: 695.120160 Mbit/s send: 2.482572 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:54:58.744 [notice] 1454975698: recv_pkts: 623126 recv_size: 872376400 sent_pkts: 623126 sent_size: 3115630 recv: 697.901120 Mbit/s send: 2.492504 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:55:08.746 [notice] 1454975708: recv_pkts: 630593 recv_size: 882830200 sent_pkts: 630593 sent_size: 3152965 recv: 706.264160 Mbit/s send: 2.522372 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:55:18.747 [notice] 1454975718: recv_pkts: 623336 recv_size: 872670400 sent_pkts: 623336 sent_size: 3116680 recv: 698.136320 Mbit/s send: 2.493344 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:55:28.749 [notice] 1454975728: recv_pkts: 611828 recv_size: 856559200 sent_pkts: 611828 sent_size: 3059140 recv: 685.247360 Mbit/s send: 2.447312 Mbit/s</p>
<p style="margin:0px;font-size:14px;line-height:normal;font-family:Menlo">23:55:38.750 [notice] 1454975738: recv_pkts: 626984 recv_size: 877777600 sent_pkts: 626984 sent_size: 3134920 recv: 702.222080 Mbit/s send: 2.507936 Mbit/s</p></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 8 February 2016 at 14:05, Ameretat Reith <span dir="ltr"><<a href="mailto:ameretat.reith@gmail.com" target="_blank">ameretat.reith@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>I simplified scenario and made a stress tester for this use case: Handle each <br>UDP socket in a gen_server and send a UDP packet in every miliseconds [1].<br><br>It won't reach more that 280Mbit/s on my Core 2 duo system without sending<br>anything to wire. At this point CPU will be a bottleneck here. I sent perf<br>report in `out` directory of repository [2] and it shows still time spent in<br>process_main is high.<br><br></div>On our production servers with Xeon E3-1230 CPUs and low latency (.20ms<br>between servers), I can fill 1Gbits link: send 1400 byte packets each 20ms<br>from 1800 ports to 1800 ports, and measure bandwidth by received packets.<br>I can transfer with 1Gbit/s speed but at this point CPU usage is above 50%.<br></div><div>By overloading system, I can see no packet drop from /proc/net/udp but<br>response time drops considerably. I think Erlang get packets from kernel very<br>soon and buffers getting overrun in Erlang side, not sure how to measure them<br>then. I disabled and enabled kernel poll, perf report same time spent but on<br>different functions.<br></div><div><div><div><br>1: <a href="https://github.com/reith/udpstress" target="_blank">https://github.com/reith/udpstress</a><br>2: <a href="https://raw.githubusercontent.com/reith/udpstress/master/out/b67acb689f-perf.svg" target="_blank">https://raw.githubusercontent.com/reith/udpstress/master/out/b67acb689f-perf.svg</a></div></div></div></div><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On Fri, Feb 5, 2016 at 12:17 PM, Max Lapshin <span dir="ltr"><<a href="mailto:max.lapshin@gmail.com" target="_blank">max.lapshin@gmail.com</a>></span> wrote:<br></span><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><div dir="ltr"><div class="gmail_extra">Well, I'm not going to argue about it, but I know that it is a serious blocker for us: when flussonic is consuming 50% of all cores only on capturing (unpacking mpegts is another pain) when code in C takes only 10-15% for this task, customers are complaining.</div><div class="gmail_extra"><br></div><div class="gmail_extra"><br></div></div>