[erlang-questions] Heavy duty UDP server performance

Ameretat Reith ameretat.reith@REDACTED
Tue Feb 9 16:10:08 CET 2016


On Tue, Feb 9, 2016 at 3:28 AM, Chandru <chandrashekhar.mullaparthi@REDACTED> wrote:

    Hi,

    I rewrote your client slightly and got better throughput than what you
    are getting. Tests were run on a 2.8 GHz Intel Core i7 running OS X.

    https://github.com/cmullaparthi/udpstress


Thanks.  I tested your method and applied the same approach on the server
side (avoiding gen_server).  Here is what I'm getting:

$ VMARGS_PATH=$PWD/server.args ./bin/udpstress foreground -extra plain_server 1400 1600

06:04:31.122 [notice] 1455023071: recv_pkts: 91000 recv_size: 127400000
sent_pkts: 91000 sent_size: 455000 recv: 101.920000 Mbit/s send: 0.364000
Mbit/s
06:04:41.123 [notice] 1455023081: recv_pkts: 642473 recv_size: 899462200
sent_pkts: 642473 sent_size: 3212365 recv: 719.569760 Mbit/s send: 2.569892
Mbit/s
06:04:51.124 [notice] 1455023091: recv_pkts: 659013 recv_size: 922618200
sent_pkts: 659013 sent_size: 3295065 recv: 738.094560 Mbit/s send: 2.636052
Mbit/s
06:05:01.126 [notice] 1455023101: recv_pkts: 656831 recv_size: 919563400
sent_pkts: 656831 sent_size: 3284155 recv: 735.650720 Mbit/s send: 2.627324
Mbit/s
06:05:11.126 [notice] 1455023111: recv_pkts: 646297 recv_size: 904815800
sent_pkts: 646297 sent_size: 3231485 recv: 723.852640 Mbit/s send: 2.585188
Mbit/s
06:05:21.127 [notice] 1455023121: recv_pkts: 638607 recv_size: 894049800
sent_pkts: 638607 sent_size: 3193035 recv: 715.239840 Mbit/s send: 2.554428
Mbit/s
06:05:31.128 [notice] 1455023131: recv_pkts: 641356 recv_size: 897898400
sent_pkts: 641356 sent_size: 3206780 recv: 718.318720 Mbit/s send: 2.565424
Mbit/s
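
For reference, the plain server is roughly this shape (a minimal sketch
with my own names, not the actual plain_server module from [1]): a single
process owns the socket in active mode and answers every packet with a
small ack, no gen_server involved.

-module(plain_server_sketch).
-export([start/1]).

start(Port) ->
    spawn(fun() ->
                  {ok, Socket} = gen_udp:open(Port, [binary, {active, true}]),
                  loop(Socket)
          end).

%% Datagrams arrive as plain messages; ack each one and loop.
loop(Socket) ->
    receive
        {udp, Socket, IP, InPort, _Packet} ->
            ok = gen_udp:send(Socket, IP, InPort, <<"ack">>),
            loop(Socket)
    end.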

$ VMARGS_PATH=$PWD/server.args ./bin/udpstress foreground -extra genserver_server 1400 1600

06:06:41.262 [notice] 1455023201: recv_pkts: 238786 recv_size: 334300400
sent_pkts: 238786 sent_size: 1193930 recv: 267.440320 Mbit/s send: 0.955144
Mbit/s
06:06:51.262 [notice] 1455023211: recv_pkts: 646220 recv_size: 904708000
sent_pkts: 646220 sent_size: 3231100 recv: 723.766400 Mbit/s send: 2.584880
Mbit/s
06:07:01.263 [notice] 1455023221: recv_pkts: 647552 recv_size: 906572800
sent_pkts: 647552 sent_size: 3237760 recv: 725.258240 Mbit/s send: 2.590208
Mbit/s
06:07:11.264 [notice] 1455023231: recv_pkts: 642863 recv_size: 900008200
sent_pkts: 642863 sent_size: 3214315 recv: 720.006560 Mbit/s send: 2.571452
Mbit/s
06:07:21.265 [notice] 1455023241: recv_pkts: 644790 recv_size: 902706000
sent_pkts: 644790 sent_size: 3223950 recv: 722.164800 Mbit/s send: 2.579160
Mbit/s
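
The gen_server variant is roughly this (again my sketch, not the actual
genserver_server module from [1]): the socket is opened in init/1 and
each datagram arrives as a handle_info message.

-module(genserver_server_sketch).
-behaviour(gen_server).
-export([start_link/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link(Port) ->
    gen_server:start_link(?MODULE, Port, []).

init(Port) ->
    {ok, Socket} = gen_udp:open(Port, [binary, {active, true}]),
    {ok, Socket}.

%% Every datagram costs one handle_info dispatch on top of the plain
%% receive; that dispatch is the overhead being compared here.
handle_info({udp, Socket, IP, InPort, _Packet}, Socket) ->
    ok = gen_udp:send(Socket, IP, InPort, <<"ack">>),
    {noreply, Socket}.

handle_call(_Request, _From, State) -> {reply, ok, State}.
handle_cast(_Msg, State) -> {noreply, State}.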


Both servers seem to keep up with the request rate, so next I try to flood
the server: a request every 20 milliseconds, watching the receive rate on
the server:

$ VMARGS_PATH=$PWD/client.args ./bin/udpstress foreground -extra plain_client server_addr -i 20 1400 3100

$ VMARGS_PATH=$PWD/server.args ./bin/udpstress foreground -extra plain_server 1400 3100

07:49:01.804 [notice] 1455029341: recv_pkts: 850948 recv_size: 1191327200
sent_pkts: 850948 sent_size: 4254740 recv: 953.061760 Mbit/s send: 3.403792
Mbit/s
07:49:11.805 [notice] 1455029351: recv_pkts: 851744 recv_size: 1192441600
sent_pkts: 851744 sent_size: 4258720 recv: 953.953280 Mbit/s send: 3.406976
Mbit/s

While using genserver_client, fewer packets get through; server log:

07:53:41.832 [notice] 1455029621: recv_pkts: 810797 recv_size: 1135115800
sent_pkts: 810797 sent_size: 4053985 recv: 908.092640 Mbit/s send: 3.243188
Mbit/s
07:53:51.833 [notice] 1455029631: recv_pkts: 810403 recv_size: 1134564200
sent_pkts: 810403 sent_size: 4052015 recv: 907.651360 Mbit/s send: 3.241612
Mbit/s
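
The pacing in the plain client is just a timer loop, roughly like this
(a sketch under my own names; the real client in [1] also keeps the
packet and byte counters that show up in the logs):

-module(plain_client_sketch).
-export([start/4]).

start(Host, Port, IntervalMs, PktSize) ->
    {ok, Socket} = gen_udp:open(0, [binary, {active, true}]),
    Payload = binary:copy(<<0>>, PktSize),
    self() ! tick,
    loop(Socket, Host, Port, IntervalMs, Payload).

loop(Socket, Host, Port, IntervalMs, Payload) ->
    receive
        tick ->
            %% Send one packet, then schedule the next tick.
            ok = gen_udp:send(Socket, Host, Port, Payload),
            erlang:send_after(IntervalMs, self(), tick),
            loop(Socket, Host, Port, IntervalMs, Payload);
        {udp, Socket, _IP, _InPort, _Ack} ->
            %% Ack from the server, ignored in this sketch.
            loop(Socket, Host, Port, IntervalMs, Payload)
    end.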


The logs above come from two quad-core Xeon E3 servers with about 0.20 ms
latency between them, running the code I collected in [1].  I think the
problem is not whether gen_server is used or not; it's mostly that just
receiving UDP packets at 1 Gbit/s takes that much CPU in Erlang.
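
If anyone wants to experiment with the per-packet receive cost, these
inet options are worth trying; this is a sketch of knobs I know about,
not what the code in [1] currently sets:

{ok, Socket} = gen_udp:open(Port,
    [binary,
     {active, 100},         %% flow control: deliver 100, then go passive
     {read_packets, 100},   %% drain up to 100 datagrams per port wakeup
     {recbuf, 1024 * 1024}  %% larger kernel buffer to absorb bursts
    ]),
%% With {active, N} the socket sends {udp_passive, Socket} once the
%% count is used up; re-arm it from the receive loop:
%%     {udp_passive, Socket} ->
%%         ok = inet:setopts(Socket, [{active, 100}]),
%%         loop(Socket)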