[erlang-questions] Architecture: How to achieve concurrency when gen_server:handle_call waits?

Luke random.outcomes@REDACTED
Sun Feb 7 17:33:08 CET 2016


Hi All,

Firstly, thank you for your answers. Some of my colleagues have criticized
Erlang for not having a large developer community behind it, but I would
counter that a smaller community can be even more useful when experts are
contactable directly :) The more I use Erlang, the more I absolutely love
it!

I believe I have worked out a solution to the problem I was having, and
it's all starting to 'click'. I am a very visual thinker, so I have created
diagrams (please excuse my crappy paint skills) outlining my first
solution, which looks incredibly poor to me now. I have also sketched out
what I believe is the correct implementation, but I am very appreciative of
any and all feedback.

This is the architecture which I am trying to implement - I included some
text to describe it but you probably get the idea:

http://postimg.org/image/yz9gnpzgd/



This was the first way I thought of implementing it, and how I thought
concurrency worked in Erlang; I feel a bit stupid looking at it now. I
should have drawn it more neatly (especially step 2: follow the light blue
line to the process just down and to the right, not the step 6 process,
which is spawned a long way 'down') - my apologies - just follow the
numbers/colours and relate them back to the architecture above.
"Message ! Broker" is of course an Erlang message pass. I was using
spawn_link to delegate work (the "interns"), the black circles are the
gen_servers I was creating to offer the functionality of each block, and I
was spinning off new workers to deal with the obvious bottleneck of one
server trying to handle all the messages.

http://postimg.org/image/bnpzzcx39/



This is the new implementation I have devised; as I said, I welcome all
feedback on it as well:

http://s21.postimg.org/byeovx07b/good_concurrency.png


Not only does this look a lot simpler, but each request is now contained
within the main process (which I will still most likely implement as a
gen_server), and each new request is handled simply by creating a new
process. On the error-handling side of the design: if, say, the process in
Block 1 crashes, then the process in the main block will eventually catch
the exception caused by the call timing out without a reply, and can
perhaps try again - or its supervisor could restart it, I guess. What do
you think? Also, different behaviours can be captured simply by writing
different implementations of gen_server and calling the correct one in the
first spawn_link/3. For example, using the insurance example above, I would
create a 'change customer address' server after the message is interpreted
in the YAWS block, but 'submit claim' would be a different module that also
implements the gen_server behaviour.
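A rough sketch of the per-request flow described above (all module and
function names here are hypothetical, not from the diagrams): the main
process spawns one short-lived process per request, and that process calls
a block's gen_server with a timeout, retrying once if the call times out or
the server has died.

```erlang
%% Hypothetical sketch of the per-request flow: one process per
%% request, calling a block's gen_server with a timeout and one retry.
-module(request_runner).
-export([handle_request/2, run/4]).

%% Spawn a short-lived process for this request; the main process
%% stays free to accept further requests immediately.
handle_request(BlockServer, Request) ->
    spawn_link(fun() -> run(BlockServer, Request, 5000, 1) end).

run(Server, Request, Timeout, Retries) ->
    try gen_server:call(Server, Request, Timeout) of
        Reply ->
            {ok, Reply}
    catch
        %% gen_server:call exits on timeout or if the server is down;
        %% retry once before giving up, as described above.
        exit:_Reason when Retries > 0 ->
            run(Server, Request, Timeout, Retries - 1);
        exit:Reason ->
            {error, Reason}
    end.
```

Whether to retry here or let a supervisor restart the block server is
exactly the design question raised above; this sketch only shows the
catch-and-retry variant.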

The only part this diagram doesn't show is how the YAWS block handles
requests, but I guess the answer is to spawn a new process in that block,
similar to blocks 1 and 2, and that process will eventually receive a
message back from the process in the main block. This description probably
isn't very clear - let me know if it needs clarification.

Thanks again :)



On Mon, Feb 8, 2016 at 2:14 AM, Jesper Louis Andersen <
jesper.louis.andersen@REDACTED> wrote:

>
> On Sun, Feb 7, 2016 at 2:20 PM, Luke <random.outcomes@REDACTED> wrote:
>
>> 1 - Why isn't this done automatically behind the scenes in gen_server?
>> When would you ever not want to free up your gen_server to handle more
>> requests?
>>
>
> From the perspective of increasing concurrency (and thus parallelism),
> this model is alluring. But the strength of processing one message at a
> time, in order, is that message handling linearizes: you are sure that
> handling one message will either succeed--you reach the next state
> invariant and are ready for the next message--or fail. Since the
> gen_server is the sole owner of the data it processes, there is no room
> for data races. Many parts of a larger system benefit a lot from this
> simple model, and it scales up to a point.
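A minimal sketch of that point (module and function names are illustrative):
the server below owns its counter, and because calls are dequeued and
handled one at a time, the increment can never race, no matter how many
clients call concurrently.

```erlang
%% Minimal gen_server illustrating serialized message handling:
%% the process is the sole owner of the counter state.
-module(counter_server).
-behaviour(gen_server).
-export([start_link/0, bump/1, value/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link(?MODULE, 0, []).

bump(Pid) ->
    gen_server:call(Pid, bump).

value(Pid) ->
    gen_server:call(Pid, value).

init(N) -> {ok, N}.

%% Each call runs to completion before the next one is dequeued, so
%% the transition N -> N + 1 is atomic per message: no data races.
handle_call(bump, _From, N) -> {reply, ok, N + 1};
handle_call(value, _From, N) -> {reply, N, N}.

handle_cast(_Msg, N) -> {noreply, N}.
```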
>
>
>> 2 - Is it best practice to spawn each new process under the gen_server, a
>> supervisor somewhere, or not all?
>>
>
> This... depends. Usually the question is "how do you want your newly
> spawned processes and the process tree to behave when something goes
> wrong?". Running under the gen_server usually means that one failure
> means failure of all. Running in a simple_one_for_one supervisor as a
> sibling to the gen_server lets you handle individual errors more
> gracefully (in this case, also set monitors on the workers).
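A sketch of that sibling-supervisor layout, under the assumption of
temporary (non-restarted) workers; the module name, the worker logic, and
the decision not to restart are all illustrative choices, not the only
possibility.

```erlang
%% Hypothetical sketch: workers run under a simple_one_for_one
%% supervisor as siblings of the gen_server, and each caller monitors
%% the workers it starts, so a crash arrives as a 'DOWN' message.
-module(worker_sup).
-behaviour(supervisor).
-export([start_link/0, start_worker/1]).
-export([init/1, worker_start_link/1, worker_loop/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% Start a worker and monitor it; the caller gets
%% {'DOWN', MRef, process, Pid, Reason} instead of crashing with it.
start_worker(Arg) ->
    {ok, Pid} = supervisor:start_child(?MODULE, [Arg]),
    MRef = erlang:monitor(process, Pid),
    {Pid, MRef}.

init([]) ->
    %% 'temporary' children are never restarted by the supervisor;
    %% the monitoring caller decides how to react to a 'DOWN'.
    {ok, {{simple_one_for_one, 0, 1},
          [{worker, {?MODULE, worker_start_link, []},
            temporary, 5000, worker, [?MODULE]}]}}.

%% Stand-in for a real worker module (hypothetical).
worker_start_link(Arg) ->
    {ok, spawn_link(?MODULE, worker_loop, [Arg])}.

worker_loop(Arg) ->
    receive
        {work, From} -> From ! {done, Arg};
        crash        -> exit(boom)
    end.
```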
>
>
>> 3 - If your gen_server is being flooded with messages, would one viable
>> solution to achieve concurrency be creating say 10 of the same gen_server
>> under a supervisor, and having any processes passing messages to this "job"
>> randomly pick one, effectively using probability to reduce traffic to each
>> by 1/10 - is there a library/methodology for doing this kind of thing
>> already?
>>
>
> Yes. There are multiple solutions, and they have different failure
> semantics, as well as different semantics when loaded with more requests
> than they can handle, or with jobs of varying latency. Can the library
> batch requests that are equivalent? There is usually no good
> one-size-fits-all here, and these are the main questions you should ask
> of any such library.
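The core of the random-spread idea from question 3 can be sketched in a
few lines; the module and the pool_worker_N naming scheme are invented for
illustration. Real pool libraries (poolboy, for example) layer
checkout/checkin, queueing, and overload handling on top of this, which is
exactly where the failure-semantics questions above come in.

```erlang
%% Illustrative sketch of question 3: N identical gen_servers
%% registered under predictable names, callers pick one at random,
%% so each server sees roughly 1/N of the traffic on average.
-module(rand_pool).
-export([names/1, call/3]).

%% The N registered names a supervisor would start the servers under
%% (naming scheme is hypothetical): pool_worker_1 .. pool_worker_N.
names(N) ->
    [list_to_atom("pool_worker_" ++ integer_to_list(I))
     || I <- lists:seq(1, N)].

%% Pick one of the N servers uniformly at random and call it.
call(N, Request, Timeout) ->
    Name = lists:nth(rand:uniform(N), names(N)),
    gen_server:call(Name, Request, Timeout).
```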
>
> 4 - This seems like it would be a common piece of code, is this bundled
>> into OTP somewhere? Is this situation what I'm supposed to use gen_event
>> for? Or if I'm completely wrong, what is the actual way programs like yaws
>> achieve high concurrency, as reading the source code has not revealed the
>> answer to me.
>>
>
> If you read the above remarks, you understand why it is hard to come up
> with a good design solution everyone would be happy with. You get the tools
> to build such a solution, but no solution.
>
> gen_event is actually two kinds of modules: an event manager, which is
> an endpoint for publishing events, and event handlers, which run as part
> of the event manager's context, subscribing to events as they come in
> and doing work on them. The model is very suitable for loosely coupling
> events from one application into another, as you can dynamically install
> and remove such handlers. One application contains the event manager,
> and the other application installs a handler module so it gets notified
> when such events occur.
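A minimal sketch of that split (handler module and its behaviour are
illustrative): the manager is just a process you publish to, and the
handler below runs inside the manager's context, counting events and
forwarding them to whoever installed it.

```erlang
%% Minimal gen_event handler sketch: installed into an event manager,
%% it runs in the manager's context for every published event.
-module(log_handler).
-behaviour(gen_event).
-export([init/1, handle_event/2, handle_call/2]).

%% The installing process passes itself as Owner; we also keep a count.
init(Owner) -> {ok, {Owner, 0}}.

%% Called inside the event manager for each event published to it;
%% here we forward the event to the owner and bump the counter.
handle_event(Event, {Owner, N}) ->
    Owner ! {event, Event},
    {ok, {Owner, N + 1}}.

%% gen_event:call/3 lets callers query this handler's private state.
handle_call(count, {Owner, N}) ->
    {ok, N, {Owner, N}}.
```

Usage: start a manager with gen_event:start_link/0, install the handler
with gen_event:add_handler/3, and publish with gen_event:notify/2; the
handler can be removed again at runtime, which is the loose coupling
described above.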
>
> The built-in alarm_handler in Erlang is a good example of a gen_event
> system: alarms in the system can be set and cleared, and you can then
> subscribe to alarms in order to react to those. An application could
> subscribe to the alarm that you lost database connectivity and change its
> behavior accordingly. Once the alarm clears, you can resume normal
> operation. Another good alarm is "some process is using more than 5% of
> available memory". And so on.
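For concreteness, setting and clearing such an alarm through OTP's
alarm_handler (part of the sasl application) looks roughly like this; the
alarm id db_connection_lost and the wrapper module are made up for the
example.

```erlang
%% Sketch of using OTP's built-in alarm_handler (sasl application).
%% The alarm id and this wrapper module are illustrative.
-module(db_alarms).
-export([connectivity_lost/0, connectivity_restored/0]).

%% Set the alarm when database connectivity is lost; subscribed
%% handlers can react, e.g. by switching to a degraded mode.
connectivity_lost() ->
    alarm_handler:set_alarm({db_connection_lost, "primary database unreachable"}).

%% Clear the alarm once connectivity returns, so subscribers can
%% resume normal operation.
connectivity_restored() ->
    alarm_handler:clear_alarm(db_connection_lost).
```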
>
>
>
>
>
> --
> J.
>


More information about the erlang-questions mailing list