[erlang-questions] Linux ODBC socket receive_msg woes
Andy Richards
andy.richards.iit@REDACTED
Fri Mar 23 18:16:59 CET 2012
Fixed. I found this forum message from back in 2004.
http://erlang.org/pipermail/erlang-questions/2004-July/012808.html
Editing odbcserver.c and disabling Nagle's algorithm on the socket (which was
adding a delay of approx. 40 ms on Red Hat EL 6) solved the problem. I wonder
why this was never added to odbcserver.c in the past?
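For anyone hitting the same thing, the change is along these lines. This is
just a sketch of the idea rather than the exact patch I applied, and the
descriptor name sock is a placeholder for whatever odbcserver.c actually uses:

    /* Disable Nagle's algorithm on the connected socket so small reply
       messages are sent immediately instead of being buffered. */
    #include <netinet/in.h>     /* IPPROTO_TCP */
    #include <netinet/tcp.h>    /* TCP_NODELAY */
    #include <sys/socket.h>     /* setsockopt */
    #include <stdio.h>          /* perror */

    static void disable_nagle(int sock)
    {
        int flag = 1;
        if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                       (void *)&flag, sizeof(flag)) != 0) {
            perror("setsockopt(TCP_NODELAY)");
        }
    }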
I still see a packet and an ACK in my tcpdump in between the initial query
message being sent and my result set being sent back to Erlang, and I have no
idea what that is for. Overall performance has improved greatly, though.
Andy.
On Friday, 23 March 2012, Andy Richards <andy.richards.iit@REDACTED>
wrote:
> Hi all,
>
> I'm experiencing performance issues with Erlang ODBC when running on Linux. I have a simple test application which sends 100 queries from my test gen_server via Erlang ODBC to SQL Server. When running this test on Windows it completes in about 200 milliseconds, but the exact same test on Linux takes about 4 seconds!
>
> I added trace logging to the odbc port program odbcserver.c and can see that it also takes approx. 200 milliseconds to execute all the queries and send the results back to Erlang, yet the functions receive_msg and receive_msg_part add approx. 3.6 seconds receiving messages from the socket.
>
> I'm running OTP R15B, which I've compiled myself on our Red Hat EL 6 server. Has anyone experienced socket performance issues with ODBC under Linux?
>
> Many thanks,
>
> Andy.