<div dir="ltr"><div><div><div><div>What OS are you running, and has the TCP stack been tuned/optimized with sysctls?<br><br></div>Does the problem happen all the time, or only sometimes?<br><br></div>In my quick test before posting, gen_tcp:send() returned {error, closed} soon after my client disconnected, without any recvs on the socket. I was sending single bytes at a time with a 1-second delay, so I'm sure no buffers were overflowing to trigger the disconnect. It behaved the same across about 5 attempts, so I'm really curious why we see different behavior.<br>
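For reference, my quick test loop looked roughly like this (reconstructed from memory, so the names are illustrative rather than the exact code):<br>
<pre>
%% Send one byte per second on a passive-mode socket and report the
%% first non-ok return after the client disconnects.
send_loop(Sock) ->
    case gen_tcp:send(Sock, "x") of
        ok ->
            timer:sleep(1000),
            send_loop(Sock);
        {error, Reason} ->
            io:format("send failed: ~p~n", [Reason])
    end.
</pre>
<br>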
<br></div>Tangential, but possibly interesting, I had to comment out 2 of the socket options in your list. I got a badarg from inet_tcp:listen with the 'broadcast' and 'dontroute' options.<br><br></div>-G<br>
<div><div><div><div><div><div><br></div></div></div></div></div></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Apr 8, 2014 at 5:49 PM, Fred Hebert <span dir="ltr"><<a href="mailto:mononcqc@ferd.ca" target="_blank">mononcqc@ferd.ca</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">The difference here is that we have a peer that actively closes the<br>
connection and the packets are readable and detected as far as tcpdumps<br>
go. Things like FINs or RSTs. They're making it to our host, they're<br>
just entirely ignored unless we're willing to read undefined amounts of<br>
data from the buffer for the stack to apparently get access to them.<br>
99.99% of the time that's going to be 0 bytes, because HTTP clients<br>
rarely pipeline requests.<br>
<br>
I'm just extremely annoyed at having to bolt on a buffer that can hold a<br>
possibly unlimited amount of data read from the socket, just so the<br>
stack can tell us whether a FIN or RST arrived recently, but I guess<br>
that's how we'll have to do it.<br>
<br>
Oh well.<br>
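In passive mode, that workaround looks something like this (a sketch,<br>
not our actual code; the caller has to hold on to whatever Buf<br>
accumulates):<br>
<pre>
%% Drain any pending input with a zero-timeout recv before sending, so
%% a FIN/RST the stack has already seen surfaces as {error, closed}.
%% Anything the peer did send has to be buffered by the caller.
send_checked(Sock, Data, Buf) ->
    case gen_tcp:recv(Sock, 0, 0) of
        {error, timeout} ->
            %% nothing pending and no close detected; safe to send
            case gen_tcp:send(Sock, Data) of
                ok -> {ok, Buf};
                {error, _} = Err -> Err
            end;
        {ok, Bin} ->
            %% unexpected client data: append it and retry the send
            send_checked(Sock, Data, [Buf, Bin]);
        {error, closed} ->
            {error, closed}
    end.
</pre>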
<div class="HOEnZb"><div class="h5"><br>
On 04/08, Garret Smith wrote:<br>
> Basically, this has everything to do with TCP sockets (and the OS<br>
> implementation) and very little to do with Erlang. I do get {error,<br>
> closed} from gen_tcp:send when the client disconnects. I'm running<br>
> R16B03-1 on FreeBSD 10.<br>
><br>
> Here are some decent descriptions of the problem:<br>
><br>
> <a href="http://www.linuxquestions.org/questions/programming-9/how-could-server-detect-closed-client-socket-using-tcp-and-c-824615/" target="_blank">http://www.linuxquestions.org/questions/programming-9/how-could-server-detect-closed-client-socket-using-tcp-and-c-824615/</a><br>
><br>
> The second answer on the page above has some good links too, pulled out for<br>
> easy reference:<br>
><br>
> <a href="http://stackoverflow.com/questions/722240/instantly-detect-client-disconnection-from-server-socket" target="_blank">http://stackoverflow.com/questions/722240/instantly-detect-client-disconnection-from-server-socket</a><br>
> <a href="http://www.softlab.ntua.gr/facilities/documentation/unix/unix-socket-faq/unix-socket-faq-2.html#ss2.8" target="_blank">http://www.softlab.ntua.gr/facilities/documentation/unix/unix-socket-faq/unix-socket-faq-2.html#ss2.8</a><br>
><br>
> -Garret<br>
><br>
><br>
> On Tue, Apr 8, 2014 at 1:40 PM, Fred Hebert <<a href="mailto:mononcqc@ferd.ca">mononcqc@ferd.ca</a>> wrote:<br>
><br>
> > Hi there,<br>
> ><br>
> > Happy fun case we're hitting on nodes using Erlang right now. We have an<br>
> > HTTP proxy that does direct data streaming from a server to a client.<br>
> > This is done through a series of recvs from the server and sends to<br>
> > the client. The TCP sockets involved are all in passive mode.<br>
> ><br>
> > The problem with this approach is that the stream is purely<br>
> > unidirectional until the server ends it or the client quits, and we'd<br>
> > like to detect both of those events.<br>
> ><br>
> > The server quitting or being done is easy enough, but the client<br>
> > quitting cannot be detected, apparently.<br>
> ><br>
> > It turns out that gen_tcp:send/2 always returns 'ok' even if the<br>
> > connection has been closed by the peer before. Erlang/OTP just won't<br>
> > acknowledge the fact unless someone tries to read from the socket,<br>
> > either through gen_tcp:recv or by using inet:setopts(Port, [{active,<br>
> > once}]), at which point {error, closed} starts being returned by<br>
> > gen_tcp:send/2. This happens even if `{exit_on_close, true}` is<br>
> > specified as an option.<br>
> ><br>
> > The question I have here is why this behavior is different for send<br>
> > than for recv. It seems that `gen_tcp:send` will happily wait for hours<br>
> > pretending to send data (no matter what timeout values are used), even<br>
> > once the connection has been closed by the other peer.<br>
> ><br>
> > Is there any way for me to detect that a connection has been closed<br>
> > without having to poll `recv` for each packet I try to send (and<br>
> > then possibly needing to buffer all that data, which I'd prefer to avoid) or<br>
> > changing the entire app's workflow to active mode?<br>
> ><br>
> > In case it helps, here are the socket options we use:<br>
> ><br>
> > [{active,false},<br>
> > {broadcast,false},<br>
> > {buffer,1460},<br>
> > {delay_send,false},<br>
> > {dontroute,false},<br>
> > {exit_on_close,true},<br>
> > {header,0},<br>
> > {high_watermark,8192},<br>
> > {keepalive,false},<br>
> > {linger,{false,0}},<br>
> > {low_watermark,4096},<br>
> > {mode,binary},<br>
> > {nodelay,true},<br>
> > {packet,0},<br>
> > {packet_size,0},<br>
> > {priority,0},<br>
> > {recbuf,87380},<br>
> > {reuseaddr,true},<br>
> > {send_timeout,infinity},<br>
> > {sndbuf,65536}]<br>
> ><br>
> > Regards,<br>
> > Fred.<br>
> > _______________________________________________<br>
> > erlang-questions mailing list<br>
> > <a href="mailto:erlang-questions@erlang.org">erlang-questions@erlang.org</a><br>
> > <a href="http://erlang.org/mailman/listinfo/erlang-questions" target="_blank">http://erlang.org/mailman/listinfo/erlang-questions</a><br>
> ><br>
</div></div></blockquote></div><br></div>