[erlang-questions] Reinserting a message into a gen_server's mailbox
Hynek Vychodil
hynek@REDACTED
Wed Aug 11 08:46:03 CEST 2010
On Wed, Aug 11, 2010 at 4:33 AM, Ryan Zezeski <rzezeski@REDACTED> wrote:
> On Thu, Jul 29, 2010 at 11:15 AM, Juan Jose Comellas <juanjo@REDACTED> wrote:
>
>> I have a situation where for one specific gen_server:call/2 invocation I
>> have to perform an action that may take a few seconds (e.g. reading from a
>> DB; reading from external files). I don't want to block the gen_server
>> process while I do this, so I want to spawn a helper process that takes
>> care
>> of reading the data I need. The problem is that I can receive several
>> gen_server:call/2 invocations that depend on this data being present at the
>> same time and I don't want to have them fail because the data is not yet
>> ready. I've thought of two ways of solving this problem:
>>
>>
>>
> I'm currently writing an application that needs similar behavior, i.e. high
> request throughput on a single gen_server, but the requests may take a long
> time to service. I approached the problem by creating a gen_server named
> req_delegate (under a supervisor w/ one_for_one restart policy) whose sole
> purpose in life is to spawn req_hdlr processes and kick them off. This way
> the only time spent in the gen_server is the time needed to create a
> new process and send a msg to it. The req_hdlr is a worker underneath a
> supervisor (underneath the main supervisor) with a simple_one_for_one
> restart policy. When a new one is created it is given a reference to the
> process that made the request and it uses this to msg back the result. When
> the req_hdlr is done it simply dies.
>
> My req_delegate has a function that looks like this:
>
> handle_cast({new_request, Req}, State) ->
>     %% Start new req_hdlr to process request
>     {ok, Pid} = supervisor:start_child(req_hdlr_sup, [Req]),
>     %% Tell it to process the request
>     req_hdlr:process(Pid),
>     {noreply, State}.
>
> The caller then waits for a response from the newly spawned req_hdlr worker.
> I wrote a library module to make this easier.
>
> This has worked great for me so far. However, the next thing I want to
> address (as I think you hinted to as well) is how to only have one req_hdlr
> do the work when there are multiple requests for the same data. In my case,
> I'm using this app as the engine behind a web application. I want to be
> able to take 10 concurrent requests for the same data, and perform the work
> only once. I haven't solved this yet, but I have some ideas. One idea is
> to normalize every request to an ID that represents what it's asking for.
> If the req_delegate notices there is already a request for a particular ID
> then it can have the req_hdlr piggy back off the request that is already in
> progress.
>
> Anyways, I hope I'm making sense. I'm still _very_ new to Erlang and trying
> to learn.
>
> -Ryan
>
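For reference, the pattern Ryan describes above might look roughly like the
sketch below. The two modules, the {From, What} request shape, and the
message names are guesses based on his description, not his actual code.

%% Hypothetical supervisor that owns the short-lived request handlers.
-module(req_hdlr_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, req_hdlr_sup}, ?MODULE, []).

init([]) ->
    %% simple_one_for_one: children are started on demand with
    %% supervisor:start_child(req_hdlr_sup, [Req]); temporary, so a
    %% finished (or crashed) handler is not restarted.
    {ok, {{simple_one_for_one, 0, 1},
          [{req_hdlr, {req_hdlr, start_link, []},
            temporary, brutal_kill, worker, [req_hdlr]}]}}.

%% Hypothetical worker: one process per request. It waits for the go-ahead
%% from req_hdlr:process/1, does the slow work, replies to the original
%% caller and then simply dies.
-module(req_hdlr).
-export([start_link/1, process/1]).

start_link(Req) ->
    {ok, proc_lib:spawn_link(fun() -> run(Req) end)}.

process(Pid) ->
    Pid ! process,
    ok.

run({From, What}) ->
    receive process -> ok end,
    Result = do_work(What),               %% the slow part: DB read, file read, ...
    From ! {req_result, self(), Result}.  %% reply straight to the caller

do_work(_What) ->
    not_implemented.                      %% placeholder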
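The piggyback idea for duplicate requests could then live inside
req_delegate as a mapping from a normalized request Id to the callers
waiting on it. This is only one possible shape, again with invented names,
and it assumes the handler is changed to report its result back to
req_delegate instead of straight to the caller:

-record(state, {in_flight}).   %% dict(): normalized request Id -> [CallerPid]
%% (init/1, not shown, would set in_flight = dict:new())

handle_cast({new_request, Id, From}, #state{in_flight = InFlight} = State) ->
    case dict:find(Id, InFlight) of
        {ok, _Waiters} ->
            %% Work for this Id is already in flight: remember the extra
            %% caller, do not start a second req_hdlr.
            {noreply, State#state{in_flight = dict:append(Id, From, InFlight)}};
        error ->
            {ok, Pid} = supervisor:start_child(req_hdlr_sup, [{self(), Id}]),
            req_hdlr:process(Pid),
            {noreply, State#state{in_flight = dict:store(Id, [From], InFlight)}}
    end.

%% The handler is assumed to send {req_result, Id, Result} back to
%% req_delegate as a plain message, so it arrives here; the delegate then
%% fans the result out to every caller that asked for that Id.
handle_info({req_result, Id, Result}, #state{in_flight = InFlight} = State) ->
    Waiters = case dict:find(Id, InFlight) of
                  {ok, Ws} -> Ws;
                  error    -> []
              end,
    [W ! {req_result, Id, Result} || W <- Waiters],
    {noreply, State#state{in_flight = dict:erase(Id, InFlight)}}.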
One question comes to my mind: why is there "high request throughput on a
*single* gen_server" in the first place? Where do those requests come from? If I read
handle_cast({new_request, Req}, State) ->
    %% Start new req_hdlr to process request
    {ok, Pid} = supervisor:start_child(req_hdlr_sup, [Req]),
    %% Tell it to process the request
    req_hdlr:process(Pid),
    {noreply, State}.
I can't see any reason why such a request should arrive at this single
gen_server at all. There is a well-known anti-pattern in Linux kernel
development called the "midlayer mistake", and the common advice is: if
there is no reason for a midlayer, use a library, Luke. Translated to
Erlang: if there is no reason for a process, use a module, Luke. I know
the temptation to add an abstraction layer and separate things is strong,
and a server is the first thing that comes to mind in Erlang, but a
stateless module often serves just as well. Very similar advice appears in
the subsection "A Case Study on Concurrency-Oriented Programming" of
Chapter 4: Concurrent Programming in Cesarini and Thompson's famous book
Erlang Programming (page 110). Where does the "high request throughput"
come from? If it comes from activity that is already concurrent (web
server request handlers, I guess), then keep the concurrency there. I have
made plenty of the same mistakes myself and will make more, but I hope to
wrap my head around it and make them less and less often.
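
To make the "use a module, Luke" point concrete: assuming each incoming web
request already runs in its own handler process, the middleman can be
dropped and the slow work done in a plain library module called from that
process. The module and function names below are made up:

%% Plain library module: no extra process, no midlayer. Each (already
%% concurrent) web request handler process calls it directly, so slow
%% requests still run in parallel with each other.
-module(req_lib).
-export([fetch/1]).

fetch(What) ->
    do_slow_work(What).

do_slow_work(_What) ->
    %% placeholder for the real DB read / file read
    not_implemented.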
--
--Hynek (Pichi) Vychodil
Analyze your data in minutes. Share your insights instantly. Thrill
your boss. Be a data hero!
Try GoodData now for free: www.gooddata.com