<div dir="rtl"><div dir="ltr">Hi all,<br></div><div dir="ltr"><br></div><div dir="ltr">First, thank you all for your inputs.</div><div dir="ltr"><br></div><div dir="ltr">I'll try to address all the inputs here: </div><div dir="ltr">
<br></div><div dir="ltr">Regarding the network and the interface:</div><div dir="ltr">I've tested the network configuration and the interface between the machines using the iperf tool:<br></div><div dir="ltr">I reached about 8 Gbit/s of bandwidth. I've checked my net.core and net.ipv4.tcp_X configuration and it seems OK.</div>
<div dir="ltr">I also tried reducing the window size and running parallel connections - in the worst-case scenario I reached about 2.5 Gbit/s of throughput between the machines.</div><div dir="ltr">As a reminder, I'm talking about a ~120 Mbit/s bottleneck.</div>
<div dir="ltr"><br></div><div dir="ltr">Regarding the number of UDP sockets:</div><div dir="ltr">The requests to start receiving data arrive dynamically, and each data flow may be received on several UDP ports. </div><div dir="ltr">My approach was to create processes dynamically, where each process may open several gen_udp sockets to listen on. </div>
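<div dir="ltr">Roughly, the shape of it is the following (a minimal sketch only - the module name, socket options and loop are illustrative, not my actual code):</div>

```erlang
%% Sketch: one receiver process opening several UDP sockets.
%% Module name, options and loop structure are illustrative.
-module(flow_rx).
-export([start/1]).

%% Start a receiver for one data flow, listening on a list of ports.
start(Ports) ->
    spawn(fun() ->
        Socks = [begin
                     {ok, S} = gen_udp:open(P, [binary,
                                                {active, once},
                                                {recbuf, 1024 * 1024}]),
                     S
                 end || P <- Ports],
        loop(Socks, 0)
    end).

%% Receive a datagram, re-arm the socket, and count the packet.
loop(Socks, N) ->
    receive
        {udp, Sock, _Ip, _Port, _Packet} ->
            inet:setopts(Sock, [{active, once}]),
            loop(Socks, N + 1)
    end.
```
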
<div dir="ltr">I'll be happy to hear other ideas.</div><div dir="ltr">As a reminder: when I move the receiving side to a different machine, the throughput between the nodes doubles (which is still low, but it shows that the UDP sockets are not the bottleneck - </div>
<div dir="ltr">at least not the original ~120 Mbit/s bottleneck).</div><div dir="ltr"><br></div><div dir="ltr">Regarding the counter:</div><div dir="ltr">This is only a helper for debugging this issue. I mentioned the counter to clarify that it is the only activity done in the receiving process (dst node). </div>
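<div dir="ltr">To be concrete, the counter amounts to something like this (a minimal sketch - module name and message shapes are illustrative, not my actual code):</div>

```erlang
%% Sketch: the receiving gen_server does nothing but bump a counter.
%% Module name and message shapes are illustrative.
-module(rx_counter).
-behaviour(gen_server).
-export([init/1, handle_info/2, handle_call/3, handle_cast/2]).

-record(state, {count = 0 :: non_neg_integer()}).

init([]) -> {ok, #state{}}.

%% Each incoming UDP message only increments the counter in the state.
handle_info({udp, _Sock, _Ip, _Port, _Packet}, S = #state{count = C}) ->
    {noreply, S#state{count = C + 1}}.

%% Read the current count (for debugging).
handle_call(count, _From, S = #state{count = C}) -> {reply, C, S}.
handle_cast(_Msg, S) -> {noreply, S}.
```
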
<div class="gmail_extra"><div dir="ltr">As I understand it, storing this counter in the gen_server's internal state record is the cheapest way to implement it.</div><div dir="ltr"><br></div><div dir="ltr">Please advise,</div><div dir="ltr">
Thanks again</div><div dir="ltr"><br><div class="gmail_quote">2013/12/10 Jesper Louis Andersen <span dir="ltr"><<a href="mailto:jesper.louis.andersen@erlang-solutions.com" target="_blank">jesper.louis.andersen@erlang-solutions.com</a>></span><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="gmail_extra"><div><br><div class="gmail_quote">On Tue, Dec 10, 2013 at 4:09 PM, Eli Cohen <span dir="ltr"><<a href="mailto:eli.cohen357@gmail.com" target="_blank">eli.cohen357@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Is there any flags/parameters that may affect this area?</blockquote></div><br></div>First thing: Verify that you can actually get the bandwidth you assume between the two nodes in a raw transmit without any Erlang in between.</div>
<div class="gmail_extra"><br></div><div class="gmail_extra">Secondly, look at the TCP connection between the two machines. On a 10Gig interface, it is often the case you need to tune the kernel and the network card a bit before you can push data flawlessly.</div>
<div class="gmail_extra"><br></div><div class="gmail_extra"><br></div></blockquote></div></div></div></div>