Frequent crashes in inets http client (R12B-5)

Chris Newcombe <>
Fri Jun 5 20:34:50 CEST 2009


My first (weak) hypothesis was wrong -- cancelling the request timer in the
redirect and retry clauses did not fix the issue.

I tried the obvious mindless suppression of the symptom, and that appears to
work.  See the hack below.  (This is not a production-worthy patch.)

Of course this does nothing to address the root cause, or other symptoms of
the bug.  In a short test with 100 sessions (pipeline depth of 1), I didn't
see any obvious resource leaks, but absence of evidence is not evidence of
absence and all that.
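To make the failure mode concrete: the crash happens because a {timeout, RequestId} message arrives while #state.request is undefined, and the general timeout clause still matches.  Here is a minimal, self-contained sketch of just that mechanism -- the module name and record definitions are hypothetical stand-ins trimmed to the relevant fields, not the actual inets code:

```erlang
-module(badrecord_demo).
-export([run/0]).

%% Hypothetical stand-ins for the inets records, trimmed to the
%% fields involved in the crash.
-record(request, {id, from}).
-record(state, {request}).

%% Mirrors the second timeout clause: the pattern #state{request = Request}
%% matches even when the field is 'undefined', so the crash only happens
%% later, at the field access below.
from_of_current_request(#state{request = Request}) ->
    Request#request.from.

run() ->
    try
        from_of_current_request(#state{request = undefined})
    catch
        %% R12B-era releases report the expected record name, i.e.
        %% {badrecord,request}; newer OTP releases report the offending
        %% term instead, so match loosely here.
        error:{badrecord, _} ->
            caught_badrecord
    end.
```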

It would be great to get an official fix for this bug, and the other issues
identified by Oscar.

Finally, this hack also applies Mats Cronqvist's suggested patch to avoid
SASL error reports every time that an HTTP server closes a connection.  (As
I mention in a comment below, I don't think a SASL report should be
generated if a TCP connection happens to be broken and causes a tcp_error,
but I didn't make that change as I have no easy way to test it.)
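For context on why the hack below swaps the stop reason for 'shutdown': a supervised process that exits with reason 'normal' or 'shutdown' is treated as stopping deliberately, so neither gen_server nor SASL emits a crash report, whereas any other reason (such as session_remotly_closed) produces one.  A minimal illustration of the two cases -- the module name is hypothetical, and this is not the inets code:

```erlang
-module(quiet_stop).
-behaviour(gen_server).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

init([]) -> {ok, no_request}.

handle_call(_Req, _From, State) -> {reply, ok, State}.
handle_cast(_Msg, State) -> {noreply, State}.

%% 'shutdown' (like 'normal') is an orderly exit: no SASL crash report.
handle_info({tcp_closed, _}, State) ->
    {stop, shutdown, State};
%% Any other reason is an abnormal exit and is logged by SASL.
handle_info({tcp_error, _, _} = Reason, State) ->
    {stop, Reason, State}.

terminate(_Reason, _State) -> ok.
code_change(_OldVsn, State, _Extra) -> {ok, State}.
```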

Chris

This is against R12B-5

==== erlang/lib/inets/src/http_client/httpc_handler.erl ====
@@ -361,9 +361,15 @@

 %%% Error cases
 handle_info({tcp_closed, _}, State) ->
-    {stop, session_remotly_closed, State};
+    %% cnewcom: was: {stop, session_remotly_closed, State};
+    {stop, shutdown, State};     % cnewcom: per http://groups.google.com/group/erlang-programming/browse_thread/thread/4c497978c75ed6a9/18d9a242df81ba3a?lnk=gst&q=badrecord#18d9a242df81ba3a
+
 handle_info({ssl_closed, _}, State) ->
-    {stop, session_remotly_closed, State};
+    %% cnewcom: was: {stop, session_remotly_closed, State};
+    {stop, shutdown, State};
+
+%% cnewcom: TODO: do we really want a SASL error report if a connection is broken?
+%%
 handle_info({tcp_error, _, _} = Reason, State) ->
     {stop, Reason, State};
 handle_info({ssl_error, _, _} = Reason, State) ->
@@ -379,6 +385,14 @@
     {stop, normal,
      State#state{canceled = [RequestId | State#state.canceled],
                 request = Request#request{from = answer_sent}}};
+
+%% cnewcom : try to work around request timeout arriving when State#state.request == undefined
+%% cnewcom : this appears to successfully suppress the symptom (and a stress test doesn't show any obvious resource leak)
+%% cnewcom : but there may be other symptoms
+%%
+handle_info({timeout, RequestId}, State = #state{request = undefined}) ->
+    {noreply, State#state{canceled = [RequestId | State#state.canceled]}};
+
 handle_info({timeout, RequestId}, State = #state{request = Request}) ->
     httpc_response:send(Request#request.from,

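On the root cause: if the redirect/retry clauses quoted in my earlier mail (below) really do drop the request without cancelling its timer, a proper fix would need not only erlang:cancel_timer/1 but also a mailbox flush, because cancel_timer does not remove a timeout message that has already been delivered.  A generic, self-contained sketch of that cancel-and-flush pattern (module and function names are my own, not inets internals):

```erlang
-module(timer_flush_demo).
-export([run/0]).

%% erlang:cancel_timer/1 does not remove a timeout message that has
%% already landed in the mailbox, so a safe cancel must also flush it.
cancel_and_flush(TimerRef, Msg) ->
    erlang:cancel_timer(TimerRef),
    receive Msg -> ok after 0 -> ok end.

run() ->
    TimerRef = erlang:send_after(1, self(), {timeout, req_1}),
    timer:sleep(20),   %% let the timeout message be delivered first
    cancel_and_flush(TimerRef, {timeout, req_1}),
    receive
        {timeout, req_1} -> stale_timeout_leaked
    after 0 ->
        no_stale_timeout
    end.
```

Without the flush, the stale {timeout, RequestId} would still be sitting in the mailbox, which is exactly the message that trips the general timeout clause.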


On Fri, Jun 5, 2009 at 7:43 AM, Chris Newcombe <> wrote:

> Is there a patch for the following issue?
>
> It was reported a while ago:
> http://groups.google.com/group/erlang-programming/browse_thread/thread/4c497978c75ed6a9/18d9a242df81ba3a?lnk=gst&q=badrecord#18d9a242df81ba3a
> (but I didn't see any replies)
>
> Here's a bit more detail:
>
> httpc_handler is crashing with
>
>        {badrecord,request}
>
> (BTW it would be great if badrecord errors also contained the incorrect
> term, not just the name of the expected record type.)
>
> It's crashing in
>
>        httpc_handler,handle_info,2
>
> The last message received by the gen_server is
>
>       {timeout,#Ref<0.0.0.9038>}
>
> The gen_server #state is
>
> {state,undefined,{tcp_session,{{"my-test-url",8080},<0.709.0>},false,http,#Port<0.1351>,1},undefined,undefined,undefined,undefined,{[],[]},pipeline,[#Ref<0.0.0.5834>],nolimit,nolimit,{options,{undefined,[]},20000,1,100,disabled,enabled,false},{timers,[],#Ref<0.0.0.19293>}}
>
> I think the relevant element is the first one (request),
> i.e. request == undefined.
>
> Given the message, it seems almost certain that the crash is in the second
> timeout clause of handle_info (marked below with ***).
>
> This clause will fire even if request == undefined, but will try to use
> Request#request.from, which crashes with {badrecord,request}.
>
>        %%% Timeouts
>
>        %% Internaly, to a request handling process, a request time out is
>        %% seen as a canceld request.
>        handle_info({timeout, RequestId}, State =
>                    #state{request = Request = #request{id = RequestId}}) ->
>            httpc_response:send(Request#request.from,
>                          httpc_response:error(Request,timeout)),
>            {stop, normal,
>             State#state{canceled = [RequestId | State#state.canceled],
>                         request = Request#request{from = answer_sent}}};
>
> ***    handle_info({timeout, RequestId}, State = #state{request = Request}) ->
>            httpc_response:send(Request#request.from,
>                               httpc_response:error(Request,timeout)),
>            {noreply, State#state{canceled = [RequestId | State#state.canceled]}};
>
>        handle_info(timeout_pipeline, State = #state{request = undefined}) ->
>            {stop, normal, State};
>
> It looks like State#state.request is being set to undefined without
> cancelling an in-progress request timer.
>
> I've only glanced at the code, but both of the following clauses appear to
> do that.  (But it could easily be something else.)
>
>         %% On a redirect or retry the current request becomes
>         %% obsolete and the manager will create a new request
>         %% with the same id as the current.
>         {redirect, NewRequest, Data}->
>             ok = httpc_manager:redirect_request(NewRequest, ProfileName),
>             handle_pipeline(State#state{request = undefined}, Data);
>         {retry, TimeNewRequest, Data}->
>             ok = httpc_manager:retry_request(TimeNewRequest, ProfileName),
>             handle_pipeline(State#state{request = undefined}, Data);
>
> thanks,
>
>
>
> Chris
>
>
>


More information about the erlang-questions mailing list