gen_server locked for some time

Guilherme Andrade g@REDACTED
Sat Nov 30 00:23:42 CET 2019


(I now see you actually mentioned using separate processes, but my
suggestion should still apply, depending on constraints.)

On Fri, 29 Nov 2019 at 23:21, Guilherme Andrade <g@REDACTED> wrote:

> Hello Roberto,
>
> If copying the data to a second process is not (too) costly, you can do
> just that: have a second process responsible for writing the data - the
> I/O-bound component - while the original one acts as a coordinator,
> directing the second one asynchronously but always aware of what it is
> doing.
> The coordinator thus remains available to handle incoming calls (unless
> seriously overloaded) and can, if need be, reject incoming requests
> preemptively for back pressure - either immediately or after a timeout
> measured on its own side (rather than on the caller's) - by replying
> through `gen_server:reply/2` rather than through `{reply, Reply, State}`.
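>
> For illustration, here is a minimal sketch of that arrangement, with
> hypothetical module and function names, showing the immediate-rejection
> variant (`do_bulk_write/1` stands in for the real I/O):
>
>     -module(write_coordinator).
>     -behaviour(gen_server).
>     -export([start_link/0, bulk_write/1]).
>     -export([init/1, handle_call/3, handle_cast/2, handle_info/2]).
>
>     start_link() ->
>         gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).
>
>     bulk_write(Data) ->
>         gen_server:call(?MODULE, {bulk_write, Data}, 30000).
>
>     init([]) ->
>         Writer = spawn_link(fun writer_loop/0),
>         {ok, #{writer => Writer, busy => false}}.
>
>     %% Reject immediately while the writer is busy: back pressure.
>     handle_call({bulk_write, _Data}, _From, #{busy := true} = State) ->
>         {reply, {error, busy}, State};
>     %% Otherwise hand the work off and defer the reply: the caller
>     %% waits, but the coordinator stays free to serve other calls.
>     handle_call({bulk_write, Data}, From, #{writer := Writer} = State) ->
>         Writer ! {bulk_write, Data, self()},
>         {noreply, State#{busy := true, reply_to => From}}.
>
>     handle_cast(_Msg, State) ->
>         {noreply, State}.
>
>     %% The writer is done: answer the caller we parked earlier.
>     handle_info({bulk_done, Writer},
>                 #{writer := Writer, reply_to := From} = State) ->
>         gen_server:reply(From, ok),
>         {noreply, State#{busy := false}}.
>
>     writer_loop() ->
>         receive
>             {bulk_write, Data, Coordinator} ->
>                 do_bulk_write(Data),               %% the I/O-bound part
>                 Coordinator ! {bulk_done, self()},
>                 writer_loop()
>         end.
>
>     do_bulk_write(_Data) ->
>         timer:sleep(10000). %% stand-in for the real bulk write
>
> The deferred reply via `gen_server:reply/2` is what keeps the
> coordinator responsive; the timeout variant would start a timer here
> instead of rejecting right away.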
>
> Alternatively, you can turn the problem around and use some sort of
> broker to match write requests with the writer process - this has the
> advantage of making it trivial to scale to multiple writer processes,
> unless there are hard serialization constraints.
> With this approach, in the simplest setup, a single broker process
> never blocks: it forwards each write request to a writer process that
> has explicitly declared itself available (which writers only do between
> writes), and it can manage timeouts as well. The `sbroker` library[1],
> although no longer maintained, is a true wonder for implementing this
> sort of pattern; a sketch of the idea follows below.
>
> [1]: https://github.com/fishcakez/sbroker
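>
> For illustration, here is a hand-rolled sketch of that matching - not
> sbroker's actual API - with timeouts omitted for brevity; writers check
> in with the broker between writes, and `do_write/1` stands in for the
> real I/O:
>
>     %% Start with broker_loop(queue:new(), queue:new()).
>     broker_loop(IdleWriters, PendingWrites) ->
>         receive
>             {available, Writer} ->
>                 case queue:out(PendingWrites) of
>                     {{value, Write}, Rest} ->
>                         Writer ! {write, Write},
>                         broker_loop(IdleWriters, Rest);
>                     {empty, _} ->
>                         broker_loop(queue:in(Writer, IdleWriters),
>                                     PendingWrites)
>                 end;
>             {write, Write} ->
>                 case queue:out(IdleWriters) of
>                     {{value, Writer}, Rest} ->
>                         Writer ! {write, Write},
>                         broker_loop(Rest, PendingWrites);
>                     {empty, _} ->
>                         broker_loop(IdleWriters,
>                                     queue:in(Write, PendingWrites))
>                 end
>         end.
>
>     writer_loop(Broker) ->
>         Broker ! {available, self()},  %% declare availability between writes
>         receive
>             {write, Data} ->
>                 do_write(Data),        %% the I/O-bound part
>                 writer_loop(Broker)
>         end.
>
>     do_write(_Data) ->
>         ok. %% stand-in for the real write
>
> Note the broker itself never does any I/O, so it never blocks; scaling
> out is just spawning more `writer_loop/1` instances.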
>
> On Fri, 29 Nov 2019 at 22:47, Roberto Ostinelli <ostinelli@REDACTED>
> wrote:
>
>> All,
>> I have a gen_server that periodically becomes busy, sometimes for over
>> 10 seconds, while writing bulk incoming data. This gen_server also
>> receives smaller individual data updates.
>>
>> I could offload the bulk writing routine to separate processes, but the
>> smaller individual data updates would then be processed before the bulk
>> processing is over, generating an incorrect scenario where smaller,
>> more recent data gets overwritten by the bulk processing.
>>
>> I'm trying to see how to solve the fact that all the gen_server calls
>> made during the bulk update would time out.
>>
>> Any ideas of best practices?
>>
>> Thank you,
>> r.
>>
>
>
> --
> Guilherme
>


-- 
Guilherme