<div dir="ltr">Thanks everyone for the feedback so far.<div><br></div><div>Douglas, I thought about such solution. The problem is that the caller does not know how many rex instances there are per node. Therefore you would need to first message a single process that does the consistent hashing on the node, which is an additional step (extra copying) and may still be a bottleneck. How would you solve this particular problem?</div><div><br></div><div>Chandru, thanks for validating the concerns raised. I agree your proposal has many improvements. IIRC the release project from where the paper came from explored similar ideas as well. I am afraid however such solution can't be a replacement. A lot of the usage of rex in OTP and other applications is to request information from a particular node and therefore the load balancing and node grouping wouldn't be desired.</div><div><br></div><div>Sean, the idea is to leverage all of the rpc functionality without reimplementing it. Like the group leader handling, the call/async_call API and so forth. So while we could tell folks to use a gen_server (or to use spawn), we would be neglecting what rpc provides. In other words, I would use it for the same reason I use rpc, I'd just have it in my own app's tree.</div><div><br></div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr"><div><div><br></div><div><br></div><div><span style="font-size:13px"><div><span style="font-family:arial,sans-serif;font-size:13px;border-collapse:collapse"><b>José Valim</b></span></div><div><span style="font-family:arial,sans-serif;font-size:13px;border-collapse:collapse"><div><span style="font-family:verdana,sans-serif;font-size:x-small"><a href="http://www.plataformatec.com.br/" style="color:rgb(42,93,176)" target="_blank">www.plataformatec.com.br</a></span></div><div><span style="font-family:verdana,sans-serif;font-size:x-small">Skype: jv.ptec</span></div><div><span style="font-family:verdana,sans-serif;font-size:x-small">Founder and Director of R&D</span></div></span></div></span></div></div></div></div></div>
<br><div class="gmail_quote">On Thu, Feb 11, 2016 at 10:58 PM, Sean Cribbs <span dir="ltr"><<a href="mailto:seancribbs@gmail.com" target="_blank">seancribbs@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>José,</div><div><br></div>It's interesting to me that your example of making registered RPC receivers isn't that much different from having registered processes with specific messages (call/cast/etc) they handle. What do you see as the use-case of allowing generic RPC in that scenario?<div><br></div><div>Cheers,</div><div><br></div><div>Sean</div></div><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="h5">On Thu, Feb 11, 2016 at 3:04 PM, José Valim <span dir="ltr"><<a href="mailto:jose.valim@plataformatec.com.br" target="_blank">jose.valim@plataformatec.com.br</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="h5"><div dir="ltr">Hello everyone,<div><br></div><div>I was reading the publication "<a href="http://www.dcs.gla.ac.uk/~amirg/publications/DE-Bench.pdf" target="_blank">Investigating the Scalability Limits of Distributed Erlang</a>" and one of the conclusions is:</div><div><br></div><div><i>> We observed that distributed Erlang scales linearly up to 150 nodes when no global command is made. Our results reveal that the latency of rpc calls rises as cluster size grows. This shows that spawn scales much better than rpc and using spawn instead of rpc in the sake of scalability is advised. </i><br clear="all"><div><div><div dir="ltr"><div><div><br></div><div>The reason why is highlighted in a previous section:</div><div><br></div><div><i>> To find out why rpc’s latency increases as the cluster size grows, we need to know more about rpc. (...) There is a generic server process (gen server) on each Erlang node which is named rex. This process is responsible for receiving and handling all rpc requests that come to an Erlang node. After handling the request, generated results will be returned to the source node. In addition to user applications, rpc is also used by many built-in OTP modules, and so it can be overloaded as a shared service.</i><br></div><div><br></div><div>In other words, the more applications we have relying on rpc, the more likely rpc will become a bottleneck and increase latency. I believe we have three options here:</div><div><br></div><div>1. Promote spawn over rpc, as the paper conclusion did (i.e. mention spawn in the rpc docs and so on)</div><div>2. Leave things as is</div><div>3. Allow "more scalable" usage of rpc by supporting application specific rpc instances</div><div><br></div><div>In particular, my proposal for 3 is to allow developers to spawn their own rpc processes. 
In other words, we can expose:</div><div><br></div></div></div></div></div></div><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div><div><div><div dir="ltr"><div><div>rpc:start_link(my_app_rpc) %% start your own rpc</div></div></div></div></div></div></blockquote><blockquote style="margin:0 0 0 40px;border:none;padding:0px"><div><div><div><div dir="ltr"><div>rpc:call({my_app_rpc, nodename}, foo, bar, [1, 2, 3]) %% invoke your own rpc at the given node</div></div></div></div></div></blockquote><div><div><div><div dir="ltr"><div><div><br></div><div>This is a very simple solution that moves the bottleneck away from rpc's rex process since developers can place their own rpc processes in their application's tree. The code changes required to support this feature are also minimal and are almost all at the API level, i.e. support a tuple were today a node is expected or allow the name as argument, mimicking the same API provided by gen_server that rpc relies on. We won't change implementation details. Finally, I believe it will provide a more predictable usage of rpc.</div><div><br></div><div>Feedback is appreciated!</div><span><font color="#888888"><div><br></div><div><span style="font-size:13px"><div><span style="font-family:arial,sans-serif;font-size:13px;border-collapse:collapse"><b>José Valim</b></span></div><div><span style="font-family:arial,sans-serif;font-size:13px;border-collapse:collapse"><div><span style="font-family:verdana,sans-serif;font-size:x-small"><a href="http://www.plataformatec.com.br/" style="color:rgb(42,93,176)" target="_blank">www.plataformatec.com.br</a></span></div><div><span style="font-family:verdana,sans-serif;font-size:x-small">Skype: jv.ptec</span></div><div><span style="font-family:verdana,sans-serif;font-size:x-small">Founder and Director of R&D</span></div></span></div></span></div></font></span></div></div></div></div>
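For reference, side by side with today's API, usage of the proposal quoted
above would look roughly like this (untested sketch; Node and my_app_rpc
are placeholder names, and the tuple-addressed calls are of course the
proposed addition, mirroring gen_server:call({Name, Node}, Request)):

    Node = 'other@host',

    %% Today: these requests are served by the shared rex process on Node.
    Res1 = rpc:call(Node, lists, sum, [[1, 2, 3]]),
    Key1 = rpc:async_call(Node, lists, sum, [[1, 2, 3]]),
    Res2 = rpc:yield(Key1),

    %% Proposed: same functions, but addressed by {Name, Node}, so the
    %% requests are served by the my_app_rpc instance instead of rex.
    Res3 = rpc:call({my_app_rpc, Node}, lists, sum, [[1, 2, 3]]),
    Key2 = rpc:async_call({my_app_rpc, Node}, lists, sum, [[1, 2, 3]]),
    Res4 = rpc:yield(Key2).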
<br></div></div><span class="">_______________________________________________<br>
erlang-questions mailing list<br>
<a href="mailto:erlang-questions@erlang.org" target="_blank">erlang-questions@erlang.org</a><br>
<a href="http://erlang.org/mailman/listinfo/erlang-questions" rel="noreferrer" target="_blank">http://erlang.org/mailman/listinfo/erlang-questions</a><br>
<br></span></blockquote></div><br></div>