[erlang-questions] Distributed Process Registry

Michael Truog mjtruog@REDACTED
Mon Feb 9 09:46:11 CET 2015


On 02/08/2015 11:16 PM, Ulf Wiger wrote:
>> On 09 Feb 2015, at 00:51, Michael Truog <mjtruog@REDACTED> wrote:
>>
>> If resolving the separate chunks of data that exist after a netsplit requires user source code, to pick which data is "correct", that cannot be consistent and is an arbitrary process (ad-hoc, based on your use-case).  I don't believe that is being partition tolerant, but is instead ignoring the problem of partition tolerance and telling the user: "you should really figure this out".
> But it’s a pathological problem! (see e.g. the Byzantine Generals dilemma). There is no generic way to resolve a netsplit that will lead to a *functionally consistent* system in every case. That is, data consistency in itself is not the end goal, but rather that conflicts are resolved in a way that the system functions as well as possible afterwards.
I agree that using a master/slave relationship to store data creates a difficult problem.  Many solutions have been discussed and you have implemented an approach in gproc.  I also agree that you are focused on being able to resolve conflicts manually, because that is the only option.  However, the CAP theorem does have a C for consistency.  If a system sacrifices data consistency (possibly with a loss of data) because it cannot handle partition tolerance otherwise, that should affect how the system is viewed in terms of the CAP theorem.  I don't think it is realistic to say gproc handles the A out of CAP, since that doesn't really reflect its focus (for unique names, its original focus).  It would be better to say gproc is CA, since it is not handling partition tolerance itself; the user is handling it, with their own source code to resolve conflicts (i.e., an approach that keeps the user's system functioning in an agreeable way, provided the user doesn't make a mistake and such a resolution is possible).
>
> But saying that a library is inconsistent and arbitrary just because it requires user-level logic is not very helpful. By this reasoning, lists:foldl/3 is arbitrary, since it doesn’t understand how to fold a list in a way that’s ‘right’ for the user every time. If you accept that user intervention is _required_, at least some of the time (see e.g. lists:sort/1 vs lists:sort/2), you need to provide users with the hooks needed to do the job.
The comparison to the lists module is invalid, since this relates to distributed computing and the difference between a master/slave system (gproc, specifically unique names) and a masterless system (cpg).  The use of gproc unique names requires the user to resolve state conflicts manually after a netsplit, and that makes the process arbitrary, since it is undefined within gproc.  The user can do anything, which includes writing incorrect source code or choosing to ignore the problem.  That potential for problems affects the reliability expected when a netsplit occurs.  Resolving netsplits without user source code, as cpg does, avoids the possibility of reliability suffering after a netsplit, because the resolution is common, testable source code rather than undefined user source code.
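
To make the difference concrete, here is a rough sketch (the cpg calls are written from memory, so the exact function names and return values may differ from the current release):

    %% gproc: a globally unique name, registered by the owning process
    %% ({n, g, ...} requires gproc running in distributed mode); after a
    %% netsplit, both partitions can hold a registration for the same
    %% name, and resolving that is left to user source code:
    true = gproc:reg({n, g, "worker"}),
    Pid = gproc:where({n, g, "worker"}),

    %% cpg: group membership, where duplicate members are not a conflict;
    %% when the netsplit heals, memberships from both partitions are
    %% merged automatically, without any user resolution code:
    ok = cpg:join("worker"),
    Members = cpg:get_members("worker").
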
>
>
>>> I agree that it can be a problem in a given system that different components automatically try to resolve an inconsistency using potentially different strategies. For this reason, I’ve long argued that one should have one master arbiter; the other systems need to be able to adapt. Otherwise, the different conflict resolution decisions can actually _cause_ inconsistencies from a system perspective.
>> This stance appears to be contradicted by usage of gproc properties. You can have automatic conflict resolution that does not cause inconsistencies, i.e., it does not need to be a manual process that requires a master arbiter.
> What I was referring to here was that different parts of a system can end up ‘locally consistent’, but in ways that are not consistent at the system level. For example, if you have a master/slave system where several components independently elect a master, the system will be inconsistent if they end up electing different nodes, _and you expected them to end up with the same master_. Note that whatever is considered wrong is completely a local issue: it might be undesirable if they actually _did_ elect the same master. Either way, if their respective decisions should not be regarded as independent, it is arguably better to have an arbiter decide and tell whoever needs to know.
>
> In a very early version of the AXD 301 cluster controller, I used global to elect a leader instance. When the nodes reconnected after netsplit, the cluster controller would immediately start trying to heal the system, but so did global. At that time, global had only one conflict resolution method: it would randomly pick one of the processes and kill it! It was annoying to say the least, for the cluster controller to try to heal the system while global was gunning for it! In this case, we decided that it would be better for global to simply unregister the name - the cluster controller would then automatically elect a new master.
>
> The work on the AXD 301 cluster controller resulted in several additions to OTP and decisions on how to handle this sort of thing. For one, 'net_kernel dist_auto_connect once' was introduced to allow the application logic to decide _when_ to reconnect two nodes. Other additions were configurable deconflict methods in global, and the decision to let the user decide how to handle inconsistencies in mnesia. Basically, we needed to be able to _impose_ our view of conflict resolution on OTP, and we were pretty sure that the AXD 301 way of resolving netsplits was not a universal method. But it does not make sense to call it ‘arbitrary’ (even though a perspective, and an interpretation of the word, can be chosen, to make it a valid claim). The system was architected to handle netsplits in a consistent and robust way.
The system does handle netsplits in a deterministic way, until the user needs to decide which data to discard.  That decision can be seen as arbitrary when looking at the system as a whole, since it is undefined within the system.  Users can make the decision deterministic for their use-case, but that doesn't avoid the potential for data loss or the latency of the decision process (in their custom source code).
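
For reference, the OTP hooks mentioned above look roughly like this; the resolve fun is only a hypothetical placeholder for whatever policy a user might write (the predefined ones are global:random_exit_name/3, global:random_notify_name/3 and global:notify_all_name/3):

    %% let the application logic decide when to reconnect after a netsplit
    %% (erl command line / vm.args):
    %%   -kernel dist_auto_connect once

    %% global accepts a user-supplied resolve fun per name registration;
    %% it runs when the partitions reconnect and both sides have
    %% registered the name, and its policy is user-defined source code:
    Resolve = fun(Name, Pid1, Pid2) ->
                  %% hypothetical policy: keep neither registration, so
                  %% the application can re-elect a master itself
                  error_logger:info_msg("name conflict for ~p: ~p vs ~p~n",
                                        [Name, Pid1, Pid2]),
                  none
              end,
    yes = global:register_name(cluster_controller, self(), Resolve).
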
>
> Yes, there are solutions that automatically avoid inconsistencies. One of the design decisions in gproc was to only allow processes to register their own names/properties. This is rather a matter of _conflict avoidance_. Registering a property doesn’t impose any restriction on other processes, like unique name registration does.
I understand that registering a local property in gproc does not need to be unique, so it avoids the failures that unique names have within gproc.  However, it doesn't appear that local properties are accessible through gproc in ways that make it comparable with cpg for process groups (or comparable to pg2).  I just don't see how local properties, as gproc provides them, are more useful than basic ets usage or locally registered names for Erlang processes.  So, I can agree that local properties can avoid conflicts in gproc, since they are local, but so what?  Doesn't using gproc only for local properties just impose unnecessary complexity?
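
For comparison, a few lines of pg2 (which ships with OTP and is also distributed) cover similar ground to a local gproc property, which is why I question the added complexity; just a sketch:

    %% gproc local property: the calling process attaches a non-unique key
    %% (local scope, so only processes on this node are visible):
    true = gproc:reg({p, l, my_group}),
    Pids1 = gproc:lookup_pids({p, l, my_group}),

    %% pg2 from OTP, doing the same job for a process group:
    ok = pg2:create(my_group),
    ok = pg2:join(my_group, self()),
    Pids2 = pg2:get_members(my_group).
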
>
> BR,
> Ulf W
>
> Ulf Wiger, Co-founder & Developer Advocate, Feuerlabs Inc.
> http://feuerlabs.com
>
>
>
>




