Fixed. I found this forum message from back in 2004.<br><br><a href="http://erlang.org/pipermail/erlang-questions/2004-July/012808.html">http://erlang.org/pipermail/erlang-questions/2004-July/012808.html</a><br><br>Editing odbcserver.c and disabling Nagle's algorithm on the socket (which was adding approx. 40 ms per round trip on our Red Hat EL 6 box) solved the problem. I wonder why this was never added to odbcserver.c before.<br>
<br>In my tcpdump I still see a packet and an ACK between the initial query message going out and my result set coming back to Erlang, which I can't account for, but overall performance has improved greatly.<br>
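<br>For reference, disabling Nagle's algorithm is done by setting the TCP_NODELAY socket option. A minimal sketch of that kind of change (the helper name disable_nagle is mine, not from the actual patch to odbcserver.c):<br><br>
```c
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Disable Nagle's algorithm on a connected TCP socket so small
 * writes (like the ODBC driver's length-prefixed messages) are
 * sent immediately instead of being coalesced.
 * Returns 0 on success, -1 on error (errno set by setsockopt). */
static int disable_nagle(int sockfd)
{
    int flag = 1; /* non-zero turns TCP_NODELAY on */
    return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                      &flag, sizeof(flag));
}
```
<br>The call would go right after the socket is accepted/connected, before any request/response traffic.<br>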
<br>Andy.<br><br>On Friday, 23 March 2012, Andy Richards <<a href="mailto:andy.richards.iit@googlemail.com">andy.richards.iit@googlemail.com</a>> wrote:<br>> Hi all,<br>><br>> I'm experiencing performance issues with Erlang ODBC when running on Linux. I have a simple test application which sends 100 queries from my test gen_server via Erlang ODBC to SQL Server. When running this test on Windows it completes in about 200 ms; however, the exact same test on Linux takes about 4 seconds!<br>
><br>> I added trace logging to the ODBC port driver odbcserver.c and can see that it also takes approx 200 ms to execute all the queries and send the results back to Erlang; however, the functions receive_msg and receive_msg_part add approx 3.6 seconds receiving messages from the socket.<br>
><br>> I'm running OTP R15B, which I compiled myself, on our Red Hat EL 6 server. Has anyone experienced socket performance issues with ODBC under Linux?<br>><br>> Many thanks,<br>><br>> Andy.