[erlang-questions] Discussion and proposal regarding rpc scalability

José Valim jose.valim@REDACTED
Fri Feb 12 09:40:31 CET 2016


Thanks Michael and Benoit. There are some interesting ideas in gen_rpc.

In particular, they have one gen_server per node. This is quite smart.
Instead of naming the process "rex", they name the process node(). This
way, if I have nodes named A, B and C, each node will have processes named
A, B and C registered on it. RPC between nodes goes directly to those
processes. This could potentially solve the bottlenecks mentioned above
without requiring developers to start their own RPC.
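As a rough sketch of how I read that scheme (module and function names here are mine, not gen_rpc's actual code):

```erlang
%% Sketch only: each node registers its RPC server under its own node
%% name, so a call from node A to node B goes straight to the process
%% registered as A on B, with no shared "rex" bottleneck.
-module(per_node_rpc).
-behaviour(gen_server).
-export([start_link/0, call/4]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    %% Register this server under the local node's name.
    gen_server:start_link({local, node()}, ?MODULE, [], []).

call(Node, M, F, A) ->
    %% Address the process on Node that is named after *our* node.
    gen_server:call({node(), Node}, {call, M, F, A}).

init([]) ->
    {ok, #{}}.

handle_call({call, M, F, A}, _From, State) ->
    {reply, (catch apply(M, F, A)), State}.

handle_cast(_Msg, State) ->
    {noreply, State}.
```

Since every calling node talks to its own dedicated process on the remote side, load is spread across as many servers as there are peers.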

Btw, this leads to how we could introduce consistent hashing if desired. On
the first request to a node, there won't be an entry in the local ETS
table, so we can contact the node and ask how many local rpc processes it
has and what their PIDs are. These are relatively simple solutions which
could also be backwards compatible.
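A minimal sketch of that lookup-then-hash flow, assuming a hypothetical my_rpc:workers/0 exported on the remote node (not a real API):

```erlang
%% Sketch only: on first contact, ask the remote node for its local rpc
%% worker pids and cache them in ETS; afterwards, pick a worker by
%% hashing the request so load spreads deterministically.
-module(rpc_hash).
-export([ensure_table/0, call/4]).

ensure_table() ->
    case ets:info(?MODULE) of
        undefined -> ets:new(?MODULE, [named_table, public, set]);
        _ -> ?MODULE
    end.

call(Node, M, F, A) ->
    Workers =
        case ets:lookup(?MODULE, Node) of
            [{Node, Ws}] ->
                Ws;
            [] ->
                %% First request to this node: fetch and cache its
                %% worker pids (my_rpc:workers/0 is hypothetical).
                Ws = rpc:call(Node, my_rpc, workers, []),
                ets:insert(?MODULE, {Node, Ws}),
                Ws
        end,
    %% phash2/2 returns 0..N-1; lists:nth/2 is 1-based.
    Pid = lists:nth(1 + erlang:phash2({M, F, A}, length(Workers)), Workers),
    gen_server:call(Pid, {call, M, F, A}).
```

Because the worker list is fetched lazily and cached locally, the extra round trip happens only once per remote node.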

At this point, we have a couple of different solutions, each with different
complexity. I personally prefer the consistent hashing one. I am definitely
willing to work on any of the solutions above and I could draft a more
detailed proposal. I just need some guidance from the OTP team about
whether, and which of, these solutions they would be inclined to accept.


*José Valim*
www.plataformatec.com.br
Skype: jv.ptec
Founder and Director of R&D

On Fri, Feb 12, 2016 at 7:26 AM, Benoit Chesneau <bchesneau@REDACTED>
wrote:

> Hrm, wouldn't the method in gen_rpc
> https://github.com/priestjim/gen_rpc
>
> partially fit the bill?
>
> - benoit
>
> On Fri, 12 Feb 2016 at 02:15, José Valim <jose.valim@REDACTED>
> wrote:
>
>> Thanks everyone for the feedback so far.
>>
>> Douglas, I thought about such a solution. The problem is that the caller
>> does not know how many rex instances there are per node. Therefore you
>> would first need to message a single process that does the consistent
>> hashing on the node, which is an additional step (extra copying) and may
>> still be a bottleneck. How would you solve this particular problem?
>>
>> Chandru, thanks for validating the concerns raised. I agree your proposal
>> has many improvements. IIRC the RELEASE project, from which the paper
>> came, explored similar ideas as well. I am afraid, however, that such a
>> solution can't be a replacement. A lot of the usage of rex in OTP and
>> other applications is to request information from a particular node, and
>> therefore the load balancing and node grouping wouldn't be desirable.
>>
>> Sean, the idea is to leverage all of the rpc functionality without
>> reimplementing it: the group leader handling, the call/async_call API,
>> and so forth. So while we could tell folks to use a gen_server (or to use
>> spawn), we would be neglecting what rpc provides. In other words, I would
>> use it for the same reason I use rpc; I'd just have it in my own app's tree.
>>
>>
>>
>>
>> *José Valim*
>> www.plataformatec.com.br
>> Skype: jv.ptec
>> Founder and Director of R&D
>>
>> On Thu, Feb 11, 2016 at 10:58 PM, Sean Cribbs <seancribbs@REDACTED>
>> wrote:
>>
>>> José,
>>>
>>> It's interesting to me that your example of making registered RPC
>>> receivers isn't that much different from having registered processes with
>>> specific messages (call/cast/etc) they handle. What do you see as the
>>> use-case of allowing generic RPC in that scenario?
>>>
>>> Cheers,
>>>
>>> Sean
>>>
>>> On Thu, Feb 11, 2016 at 3:04 PM, José Valim <
>>> jose.valim@REDACTED> wrote:
>>>
>>>> Hello everyone,
>>>>
>>>> I was reading the publication "Investigating the Scalability Limits of
>>>> Distributed Erlang
>>>> <http://www.dcs.gla.ac.uk/~amirg/publications/DE-Bench.pdf>" and one
>>>> of the conclusions is:
>>>>
>>>> *> We observed that distributed Erlang scales linearly up to 150 nodes
>>>> when no global command is made. Our results reveal that the latency of rpc
>>>> calls rises as cluster size grows. This shows that spawn scales much better
>>>> than rpc and using spawn instead of rpc in the sake of scalability is
>>>> advised. *
>>>>
>>>> The reason why is highlighted in a previous section:
>>>>
>>>> *> To find out why rpc’s latency increases as the cluster size grows,
>>>> we need to know more about rpc. (...) There is a generic server process
>>>> (gen server) on each Erlang node which is named rex. This process is
>>>> responsible for receiving and handling all rpc requests that come to an
>>>> Erlang node. After handling the request, generated results will be returned
>>>> to the source node. In addition to user applications, rpc is also used by
>>>> many built-in OTP modules, and so it can be overloaded as a shared service.*
>>>>
>>>> In other words, the more applications we have relying on rpc, the more
>>>> likely rpc will become a bottleneck and increase latency. I believe we have
>>>> three options here:
>>>>
>>>> 1. Promote spawn over rpc, as the paper conclusion did (i.e. mention
>>>> spawn in the rpc docs and so on)
>>>> 2. Leave things as is
>>>> 3. Allow "more scalable" usage of rpc by supporting application
>>>> specific rpc instances
>>>>
>>>> In particular, my proposal for 3 is to allow developers to spawn their
>>>> own rpc processes. In other words, we can expose:
>>>>
>>>> rpc:start_link(my_app_rpc) %% start your own rpc
>>>>
>>>> rpc:call({my_app_rpc, nodename}, foo, bar, [1, 2, 3]) %% invoke your own rpc at the given node
>>>>
>>>>
>>>> This is a very simple solution that moves the bottleneck away from
>>>> rpc's rex process, since developers can place their own rpc processes in
>>>> their application's tree. The code changes required to support this feature
>>>> are also minimal and are almost all at the API level, i.e. support a tuple
>>>> where today a node is expected, or allow the name as an argument, mimicking
>>>> the same API provided by the gen_server that rpc relies on. We won't change
>>>> implementation details. Finally, I believe it will provide a more
>>>> predictable usage of rpc.
>>>>
>>>> Feedback is appreciated!
>>>>
>>>> *José Valim*
>>>> www.plataformatec.com.br
>>>> Skype: jv.ptec
>>>> Founder and Director of R&D
>>>>
>>>> _______________________________________________
>>>> erlang-questions mailing list
>>>> erlang-questions@REDACTED
>>>> http://erlang.org/mailman/listinfo/erlang-questions
>>>>
>>>>
>>>
>>
>