[erlang-questions] gproc_dist finicky about latecomers

Oliver Korpilla Oliver.Korpilla@REDACTED
Tue Jun 6 18:20:49 CEST 2017

Hello, Ulf.

First of all: Thanks for gproc! It is very central to what I'm doing, so I'm only documenting a current limitation, I'm _not_ complaining. :)

Looking forward to future improvements. :)


Sent: Tuesday, 6 June 2017 at 18:16
From: "Ulf Wiger" <ulf@REDACTED>
To: "Oliver Korpilla" <Oliver.Korpilla@REDACTED>
Cc: erlang-questions <erlang-questions@REDACTED>
Subject: Re: [erlang-questions] gproc_dist finicky about latecomers

Just as an update, I'm (very slowly) finishing a rewrite of gproc to add some extension capability. After that, I thought I'd take a look at making the locks_leader the default. However, there is a reported issue with locks_leader that I'd have to take a look at first (https://github.com/uwiger/locks/issues/30). I apologize for having paid so little attention to this lately.
Ulf W
2017-06-03 7:28 GMT+01:00 Oliver Korpilla <Oliver.Korpilla@REDACTED>:

Hello.

I use gproc/gproc_dist with gen_leader_revival. I have the gproc_dist application environment variable set to all, and I do use global names.
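For reference, the setting I mean can live in sys.config. A minimal sketch, assuming the gproc_dist key accepts the atom all as documented in gproc's README (it may alternatively take an explicit node list):

```erlang
%% sys.config (sketch) -- tells gproc to start gproc_dist
%% and treat all connected nodes as part of the cluster.
[
 {gproc, [
   {gproc_dist, all}
 ]}
].
```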

It works fine if and only if I connect the nodes first and only afterwards start gproc on additional nodes joining the cluster.
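In other words, the startup order that works for me looks roughly like this (the node names are placeholders, not my real topology):

```erlang
%% Sketch of the working order: connect the mesh first, then start gproc,
%% so gproc_dist's leader election sees the full cluster from the start.
connect_then_start() ->
    %% 1. Connect to the other nodes (example names).
    pong = net_adm:ping('node_a@host'),
    pong = net_adm:ping('node_b@host'),
    %% 2. Only now start gproc and its dependencies.
    {ok, _Started} = application:ensure_all_started(gproc).
```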

If I start gproc before connecting the nodes, every node insists on being the leader (I queried each node through the gproc API) and they stick with that opinion. Global aggregated counters in turn do not work, which breaks my application's simple load balancing.
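For what it's worth, this is roughly how I checked each node's view of the leader. I assume gproc_dist:get_leader/0 here, which my gproc version exports; in the broken case every node reports itself:

```erlang
%% Ask this node and every connected node who it believes the leader is.
%% Returns a list of {Node, LeaderNode} pairs.
leaders() ->
    [{N, rpc:call(N, gproc_dist, get_leader, [])}
     || N <- [node() | nodes()]].
```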

I ran into this problem twice:

* When originally writing my application startup.
* When redoing the startup and forgetting why I started gproc at a specific time.

I hoped to document this somehow.

Do people observe the same with locks_leader?

