[erlang-questions] gen_server call and reply (Matthias Lang)

Ingela Anderton Andin ingela@REDACTED
Tue Sep 18 08:27:49 CEST 2007


erlang-questions-request@REDACTED wrote:
> Why use 'noreply'? I can only really think of one reason---you have a
> relatively long-running request but don't want to block the gen_server
> while it runs. Maybe there's also some case where you can avoid a
> deadlock, but I haven't thought about that.
>
>   
I use noreply quite often. One reason is that I do not want my server to
do blocking receives, as that would spoil soft upgrade for that process.
I almost always use {active, once} on a socket to collect an arbitrary
number of bytes in my server,
and then return the response to the client with gen_server:reply/2
once I have received the needed amount. (Could also be gen_fsm:reply/2.)
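A minimal sketch of that {active, once} pattern, assuming a fragment of a
gen_server; the module, record fields, message shapes and byte count are
illustrative assumptions, not from the post:

```erlang
%% Sketch only: init/1 and the other callbacks are omitted.
-record(state, {socket, from, buffer = <<>>, wanted}).

%% Do not answer the call here: stash From in the state and return
%% noreply, so the server stays free to handle other messages.
handle_call({fetch, Wanted}, From, #state{socket = Sock} = State) ->
    ok = inet:setopts(Sock, [{active, once}]),
    {noreply, State#state{from = From, wanted = Wanted, buffer = <<>>}}.

%% Each TCP message delivers some bytes; re-arm {active, once} until
%% enough have arrived, then answer the original caller explicitly.
handle_info({tcp, _Sock, Data}, #state{buffer = Buf, wanted = Wanted,
                                       from = From} = State) ->
    NewBuf = <<Buf/binary, Data/binary>>,
    case byte_size(NewBuf) >= Wanted of
        true ->
            gen_server:reply(From, {ok, NewBuf}),
            {noreply, State#state{from = undefined, buffer = <<>>}};
        false ->
            ok = inet:setopts(State#state.socket, [{active, once}]),
            {noreply, State#state{buffer = NewBuf}}
    end.
```

The point is that no blocking receive ever appears in the server loop, so
code-change messages can still be processed while the bytes trickle in.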

> In the case of the long-running request, what I do instead is
>
>   {reply, {pending, Ref}, State}
>
> where Ref is a reference which is then included in a later message
> sent the old-fashioned way. This amounts to more or less the same as
> 'noreply', though it's one message more expensive and has the benefit
> of making it easier to reason about timeouts, at least to my mind.
>   
I do not agree. That changes the semantics for the client. I think a
client that does a call should hang until the reply is delivered (or it
times out); that is the semantics of call, otherwise you might as well
use cast. Of course, for a long-running request you do not want the
server to hang for the duration of the call, and that is one of the
reasons why gen_server:reply/2 exists. If it is an independent
calculation, the best option is to spawn a new process that does the
calculation and then calls gen_server:reply/2 from the new process.
Otherwise you save "From" in the state and send the reply some time
later, when you have got enough information from elsewhere to send it.
Timeouts can always be handled with the help of erlang:send_after.
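Both variants described above can be sketched as gen_server fragments;
the request names, the 5000 ms timeout and the expensive_calculation/1
helper are hypothetical:

```erlang
%% Sketch only: init/1 and the other callbacks are omitted.
-record(state, {from, timer}).

%% Variant 1: an independent calculation runs in a spawned process,
%% which answers the original caller itself via gen_server:reply/2.
handle_call({compute, Input}, From, State) ->
    spawn_link(fun() ->
                   Result = expensive_calculation(Input), % hypothetical
                   gen_server:reply(From, {ok, Result})
               end),
    {noreply, State};

%% Variant 2: save From in the state and reply later; a send_after
%% timer guards against the reply never becoming available.
handle_call(get_data, From, State) ->
    TRef = erlang:send_after(5000, self(), call_timeout),
    {noreply, State#state{from = From, timer = TRef}}.

%% If the data arrives in time, cancel the timer and answer the caller;
%% if the timer fires first, answer with an error instead.
handle_info({data_ready, Data}, #state{from = From, timer = TRef} = State) ->
    erlang:cancel_timer(TRef),
    gen_server:reply(From, {ok, Data}),
    {noreply, State#state{from = undefined, timer = undefined}};
handle_info(call_timeout, #state{from = From} = State) ->
    gen_server:reply(From, {error, timeout}),
    {noreply, State#state{from = undefined, timer = undefined}}.
```

In both variants the client's gen_server:call/2,3 blocks as usual, so the
call semantics Ingela describes are preserved; only the server avoids
blocking.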

Regards Ingela - OTP team

More information about the erlang-questions mailing list