<div dir="ltr"><div>Hello Roberto,</div><div><br></div><div>If copying the data to a second process is not (too) costly, you can do just that: have a second process responsible for writing the data - the I/O-bound component - while the original one acts as a coordinator, directing the writer asynchronously but always aware of what it is doing.<br></div><div>This way the coordinator remains available to handle incoming calls (unless it is seriously overloaded) and, if need be, can reject incoming requests preemptively for back pressure - either immediately or after a timeout measured on its own side (rather than on the caller's) - by replying through `gen_server:reply/2` instead of through `{reply, Reply, State}`.<br><br></div><div>Alternatively, you can turn the problem around and use some sort of broker to match write requests with the writer process - this has the advantage of making it trivial to scale to multiple writer processes, unless there are hard serialization constraints.<br></div><div>With this approach, in the simplest setup, a single broker process never blocks: it forwards each write request to a writer process that has explicitly declared itself available (which a writer only does between writes), and it can also manage timeouts. 
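In case it helps, here is a minimal sketch of that broker pattern in plain Erlang - all module and message names are made up for illustration, and the real timeout and overload handling is omitted (that is exactly what a proper broker library adds):

```erlang
-module(write_broker).
-export([start_link/0, write/2, writer_loop/1]).

%% Hypothetical minimal broker: matches pending write requests with
%% writers that have explicitly declared themselves available.
start_link() ->
    Pid = spawn_link(fun() -> loop(queue:new(), queue:new()) end),
    {ok, Pid}.

%% Client side: hand the data to the broker and wait for the result.
write(Broker, Data) ->
    Broker ! {write, self(), Data},
    receive
        {done, Result} -> Result
    end.

%% The broker itself never blocks on I/O: it only queues and matches.
loop(Requests, Writers) ->
    receive
        {write, From, Data} ->
            dispatch(queue:in({From, Data}, Requests), Writers);
        {available, Writer} ->
            dispatch(Requests, queue:in(Writer, Writers))
    end.

%% Match one pending request with one available writer, if both exist.
dispatch(Requests, Writers) ->
    case {queue:out(Requests), queue:out(Writers)} of
        {{{value, {From, Data}}, Requests1}, {{value, Writer}, Writers1}} ->
            Writer ! {do_write, From, Data},
            loop(Requests1, Writers1);
        _ ->
            loop(Requests, Writers)
    end.

%% A writer declares itself available only between writes, so the
%% broker can never forward work to a busy writer.
writer_loop(Broker) ->
    Broker ! {available, self()},
    receive
        {do_write, From, Data} ->
            From ! {done, do_io(Data)},
            writer_loop(Broker)
    end.

%% Stand-in for the actual (slow, I/O-bound) bulk write.
do_io(_Data) -> ok.
```

Scaling out is then just spawning more `writer_loop/1` processes against the same broker, since each writer re-registers itself only once its current write has finished.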
The `sbroker` library[1], although no longer maintained, is a true wonder for implementing this sort of pattern.<br><br>[1]: <a href="https://github.com/fishcakez/sbroker">https://github.com/fishcakez/sbroker</a></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, 29 Nov 2019 at 22:47, Roberto Ostinelli <<a href="mailto:ostinelli@gmail.com">ostinelli@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div style="padding:20px 0px 0px;font-size:0.875rem;font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif"><span style="font-family:Arial,Helvetica,sans-serif;font-size:small">All,</span><br></div><div style="font-family:Roboto,RobotoDraft,Helvetica,Arial,sans-serif;font-size:medium"><div id="gmail-m_-3295704752582870313gmail-:1cr" style="font-size:0.875rem;direction:ltr;margin:8px 0px 0px;padding:0px"><div id="gmail-m_-3295704752582870313gmail-:1cs" style="overflow:hidden;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:small;line-height:1.5;font-family:Arial,Helvetica,sans-serif"><div dir="ltr"><div>I have a gen_server that in periodic intervals becomes busy, eventually over 10 seconds, while writing bulk incoming data. This gen_server also receives smaller individual data updates.</div><div><br></div><div>I could offload the bulk writing routine to separate processes but the smaller individual data updates would then be processed before the bulk processing is over, hence generating an incorrect scenario where smaller more recent data gets overwritten by the bulk processing.</div><div><br></div><div>I'm trying to see how to solve the fact that all the gen_server calls during the bulk update would timeout.</div><div><br></div><div>Any ideas of best practices?</div><div><br></div><div>Thank you,</div><div>r.</div></div></div></div></div></div>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr">Guilherme<br></div></div></div></div></div></div>