[erlang-questions] ETS tables and pubsub

Ulf Wiger <>
Wed Nov 13 20:28:53 CET 2013

Do you need the global mode for your example?

If so, how are you replicating the information now?

Otherwise, gproc will default to NOT enabling the global mode.
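For what it’s worth, the local/global distinction shows up directly in the registration keys. A minimal sketch (module and names are mine, for illustration; see the gproc README for the authoritative API):

```erlang
%% Sketch of gproc name scopes -- illustrative only.
-module(scope_demo).
-export([register_local/0]).

register_local() ->
    %% Local scope ('l'): works out of the box, no distribution needed.
    true = gproc:reg({n, l, my_publisher}),
    %% The name resolves to the registering process.
    Pid = gproc:where({n, l, my_publisher}),
    Pid = self().

%% Global scope ('g') uses the same key shape -- {n, g, my_publisher} --
%% but requires gproc's distributed mode (the 'gproc_dist' application
%% environment) to be enabled; by default it is not.
```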

A branch of gproc, https://github.com/uwiger/gproc/tree/uw-locks_leader, uses
my ‘locks’-based leader election implementation. I won’t claim that it’s more
reliable, but it does seem better suited to dynamically changing clusters.

There is also a branch, https://github.com/uwiger/gproc/tree/split-brain, which 
resolves inconsistencies using gen_leader.

Barring insistent requests to the contrary, or disappointing feedback, I will 
personally focus my efforts on testing and improving the former.

Ulf W

On 13 Nov 2013, at 18:50, Christopher Meiklejohn <> wrote:

> On Wednesday, November 13, 2013 at 12:19 AM, Barco You wrote:
>> Why don't you use gproc?
> Last time I checked, gproc isn’t super reliable under failure conditions in global distribution mode.
> Garret and Ulf have discussed it on erlang-questions [1], section 10 of Ulf’s Erlang Workshop paper [2] covers quite a bit about it, and I’ve also written about it [3].
> [1] http://erlang.org/pipermail/erlang-questions/2012-July/067749.html
> [2] http://svn.ulf.wiger.net/gproc/doc/erlang07-wiger.pdf
> [3] http://christophermeiklejohn.com/erlang/2013/06/05/erlang-gproc-failure-semantics.html
> - Chris
>> On Wed, Nov 13, 2013 at 2:38 AM, akonsu <> wrote:
>>> I have a pubsub system which has a single publisher process that maintains all its subscriber processes' Pid's in an ETS table.
>>> The publisher monitors its subscribers, and when the publisher receives a 'DOWN' message, it removes the subscriber Pid from the table.  
>>> The table entries are tuples of the form {MonitorRef, SubscriberPid}, and the MonitorRef is used as the key.
>>> Now I would like to make sure that if the publisher dies, and gets restarted by its supervisor, the subscriber table is preserved. So I created a process that creates the ETS table, sets the table's heir to self(), and then gives away the table to the publisher.  
>>> The problem is that I do not know how to handle transfer of the table from the heir to the publisher:
>>> when the publisher receives the table, the table contains a list of tuples {MonitorRef, SubscriberPid}, but the MonitorRefs were created by the previous publisher instance, so when the new publisher receives the 'ETS-TRANSFER' message it needs to monitor all these SubscriberPids again.  
>>> What is the best way to do it? Loop over all ETS entries, attach a monitor to each, and re-insert the new MonitorRefs into the table? This might be slow, no? Maybe my architecture can be improved? Any advice?  
>>> thanks
>>> Konstantin
>>> _______________________________________________
>>> erlang-questions mailing list
>>> http://erlang.org/mailman/listinfo/erlang-questions
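As for the re-monitoring question above: looping over the table on transfer is the straightforward approach, and for realistic subscriber counts it is unlikely to be a bottleneck. A minimal sketch, assuming a gen_server publisher with a #state{tab = ...} record (names are illustrative, not from any library):

```erlang
%% Sketch: re-monitoring subscribers after inheriting the ETS table.
%% The heir hands the table back with an 'ETS-TRANSFER' message; since the
%% old MonitorRefs died with the previous publisher, we monitor each
%% subscriber afresh and re-key the entries under the new refs.
handle_info({'ETS-TRANSFER', Tab, _FromPid, _GiftData}, State) ->
    Pids = [Pid || {_OldRef, Pid} <- ets:tab2list(Tab)],
    ets:delete_all_objects(Tab),
    lists:foreach(
      fun(Pid) ->
              Ref = erlang:monitor(process, Pid),
              ets:insert(Tab, {Ref, Pid})
      end, Pids),
    {noreply, State#state{tab = Tab}};
handle_info({'DOWN', Ref, process, _Pid, _Reason}, State) ->
    %% MonitorRef is the key, so cleanup on subscriber death is a
    %% single delete.
    ets:delete(State#state.tab, Ref),
    {noreply, State}.
```

One possible refinement: key the table by SubscriberPid instead (with the ref in the value). Then the transfer handler only has to create new monitors and update values, not rebuild the key set, and 'DOWN' handling can look up by the Pid carried in the message.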

Ulf Wiger, Co-founder & Developer Advocate, Feuerlabs Inc.
