<div dir="ltr">Hello Dmitry,<div><br></div><div>On my localbox I had a simple lan configuration with a single gateway/switch to internet, so I'm not sure what else there is to mention about the lan config. However; the machine configurations are as below:</div><div>1. Server-1 [4 core 4 GB]</div><div>2. Server-2 <span style="line-height:1.5">[4 core 4 GB]</span></div><div>3. Relay [Works as a load Balancer] <span style="line-height:1.5">[4 core 4 GB]</span></div><div>4. Client 1 <span style="line-height:1.5">[4 core 4 GB]</span></div><div>5. Client 2 <span style="line-height:1.5">[4 core 4 GB]</span></div><div><span style="line-height:1.5"><br></span></div><div><span style="line-height:1.5">Each machine runs a single node as mentioned. There is a single tcp socket connection between peers where </span><span style="line-height:1.5">alll Clients connect to Relay and Relay in turn connects with the 2 servers and acts a proxy while also balancing the request between the two servers using round robin algorithm.</span><span class="inbox-inbox-Apple-converted-space" style="line-height:1.5"> This connection is used to send all the request using the gen_server:call (blocking) method. Following is the erl vm params I'm using for all of the nodes:</span></div><div><span class="inbox-inbox-Apple-converted-space" style="line-height:1.5"><br></span></div><div><span class="inbox-inbox-Apple-converted-space">erl +A 1024 +P 134217727 +Q 134217727 -env ERL_MAX_PORTS 134217727 +K true <br></span></div><div><span class="inbox-inbox-Apple-converted-space"><br></span></div><div><span class="inbox-inbox-Apple-converted-space">I'm able to get around 26K req/sec on relay which divides the requests between two servers around 13K req/sec. Each and every node was utilizing near about <b>99%</b> of all the cores available on the system. This I was able to get on my local network.</span></div><div><span class="inbox-inbox-Apple-converted-space"><br></span></div><div><span class="inbox-inbox-Apple-converted-space">As for AWS, I had the following setup:</span></div><div><span class="inbox-inbox-Apple-converted-space"><br></span></div><div><span class="inbox-inbox-Apple-converted-space">1. Server - 1 [4 core 16GB]</span></div><div><span class="inbox-inbox-Apple-converted-space">2. Server - 2 [4 core 16GB]<br></span></div><div><span class="inbox-inbox-Apple-converted-space">3. Relay [16 core 30GB]<br></span></div><div><span class="inbox-inbox-Apple-converted-space">4. Client [16 core 30GB]<br></span></div><div><br></div><div>I had launched four of them as part of single placement group to remove bandwidth restriction. When I did run the nodes, the processors were <b>not</b> being utilized more than <b>60% </b>and yet the relay was getting around 12K req/sec which it was dividing between servers with each one getting around 6K req/sec. Erlang VM args were the same as mentioned above. I was using internal private IPs on AWS to connect the peers with one another to avoid bandwidth limitation that comes with public IP usage. Each of the aws instance has EBS volume attached to it. The only disk access I'm doing is to write log at the per second rate, besides accessing the ETS table for getting counter instances from each instance. </div><div><br></div><div>I hope I've covered all there is to the configurations and also that I was clear enough. 
Thanks again,
Regards,
Arshad

On Tue, Aug 30, 2016 at 12:34 AM Dmitry Kolesnikov <dmkolesnikov@gmail.com> wrote:

> Hello,
>
> You've experienced the difference due to the network configuration. The CPU factor is important, but I would spend more time studying the network config.
>
> You have not described to us your local network config: its latency, the operating system used, the networking kernel options, and the Erlang flags. Similarly, your AWS config is not known to us either.
>
> Best Regards,
> Dmitry
>
> > On Aug 29, 2016, at 6:55 PM, Arshad Ansari <arshadansari27@gmail.com> wrote:
> >
> > Hello there,
> >
> > Today, as part of an evaluation of an Erlang diameter client, relay, and server, I was benchmarking requests per second. I performed this same test on my local network and was able to get 30K requests per second between client and server using a single connection. However, on AWS I got only about 11-12K requests per second. I used c3.4xlarge instances with 16 cores and 30 GB of RAM for both client and server, which is four times the cores and twice the RAM I had when testing locally. These are compute-optimized instances, chosen to make sure I could push the cores to the maximum, but I wasn't even using 60% of the cores on the server and 80% of the cores on the client. I also used the same placement group to get 10 Gbps of network bandwidth. I can't think of any reason why there would be such a downgrade. Has anyone experienced something of this sort on AWS while benchmarking Erlang?
> >
> > Thanks in advance for the help.
> >
> > Regards,
> > Arshad