[erlang-questions] gproc_dist:multicall/3

Ulf Wiger ulf@REDACTED
Thu Nov 10 11:13:06 CET 2016


I recently increased the share of my own dogfood in my daily programming
diet, which among other things might mean that I'll become a bit more
responsive to support requests - let's hope.

I thought I'd mention my latest PR to gproc:
https://github.com/uwiger/gproc/pull/126

From the edoc:
@spec multicall(Module::atom(), Func::atom(), Args::list()) ->
{[Result], [{node(), Error}]}

@doc Perform a multicall RPC on all live gproc nodes

This function works like {@link rpc:multicall/3}, except the calls are
routed via the gproc leader and its connected nodes - the same route as
for the data replication. This means that a multicall following a global
registration is guaranteed to follow the update on each gproc node.

The return value will be of the form {GoodResults, BadNodes}, where
BadNodes is a list of {Node, Error} for each node where the call
fails.
@end

This is not something I personally need right now, but meditating over the
test suite, it occurred to me that it might be useful. Given that gproc
replicates asynchronously, there is a distinct risk of race conditions if
you register a global entry and then want to perform an operation on remote
nodes, based on the newly registered information (given that updates go
through the leader and lookups are served locally).
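To illustrate the race (a sketch only; `my_mod`, `use_prop` and the property key `my_prop` are made-up names):

```erlang
%% Sketch of the race-prone pattern: gproc:reg/2 returns as soon as
%% the registration is accepted, but replication to the other gproc
%% nodes is asynchronous.
race_prone() ->
    true = gproc:reg({p, g, my_prop}, some_value),
    %% rpc:multicall/4 takes the direct dist path, so it may reach a
    %% remote node before the replicated registration does - the
    %% remote lookup can then miss the entry.
    {Replies, BadNodes} = rpc:multicall(nodes(), my_mod, use_prop, [my_prop]),
    {Replies, BadNodes}.
```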

The gproc_dist:multicall(M, F, A) function *should* (as far as I can tell)
ensure that the multicall will always be executed after the successful
update of a preceding entry. Note that this assumes that the registration
and multicall originate from the same process.
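The intended usage would then look something like this (same hypothetical names as above; note that the registration and the multicall are made by the same process):

```erlang
%% Sketch of the safe pattern: the multicall is routed via the gproc
%% leader, so it is serialized after the replication of the preceding
%% registration on each gproc node.
safe_pattern() ->
    true = gproc:reg({p, g, my_prop}, some_value),
    %% Returns {GoodResults, [{Node, Error}]} per the edoc above.
    {Replies, BadNodes} = gproc_dist:multicall(my_mod, use_prop, [my_prop]),
    {Replies, BadNodes}.
```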

Given that gproc_dist:multicall/3 is routed via the gproc leader, it will
practically always be slower than an rpc:multicall/4, but for the intended
use case, this is of course intentional (since being too fast means that
the rpc:multicall/4 might race past the preceding registration).

Feedback is welcome, esp. while the PR is waiting to be merged.

BR,
Ulf W