[erlang-questions] ETS tables and pubsub
Garret Smith
garret.smith@REDACTED
Wed Nov 13 20:13:08 CET 2013
I know Ulf has been doing some work on gproc to make it handle netsplits
better. I keep meaning to try it out...
http://erlang.org/pipermail/erlang-questions/2013-June/074345.html
For my part, I'm using a combination of gen_leader and local-only gproc. My
application has partitioned graphs of data flow and process interaction, so I
can use gen_leader to manage moving entire graphs between nodes, while local
gproc lets the processes within a graph find each other.
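The local-only gproc usage described above can be sketched roughly as follows. This is an illustrative sketch, not code from the application: the name terms ({GraphId, Role}) and function names are assumptions; only gproc:reg/1 and gproc:where/1 are real gproc API calls.

```erlang
%% Sketch: each process in a graph registers a local (l) unique name (n)
%% built from its graph id and role, and peers look each other up.
-module(graph_registry).
-export([register_in_graph/2, find_in_graph/2]).

register_in_graph(GraphId, Role) ->
    %% Registers the calling process under a local unique name.
    %% Crashes (badarg) if the name is already taken.
    true = gproc:reg({n, l, {GraphId, Role}}).

find_in_graph(GraphId, Role) ->
    %% Returns the registered pid, or 'undefined' if nobody holds the name.
    gproc:where({n, l, {GraphId, Role}}).
```

Because the names are local (the `l` scope), there is no cross-node state to reconcile after a netsplit; gen_leader only has to decide which node owns a given graph.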
A generic process registry that handles netsplit and nodes entering and
leaving the cluster is a Very Hard Problem(TM). You'd still have to write
some bits yourself, like how to merge registries after a netsplit. That's
why a lot of application-specific solutions (like mine) exist to exploit
the inherent properties of the problem.
-Garret Smith
On Wed, Nov 13, 2013 at 9:50 AM, Christopher Meiklejohn <
cmeiklejohn@REDACTED> wrote:
> On Wednesday, November 13, 2013 at 12:19 AM, Barco You wrote:
> > Why don't you use gproc?
>
> Last time I checked, gproc isn’t super reliable under failure conditions
> in global distribution mode.
>
> Garret and Ulf have discussed it on erlang-questions [1], section 10 of
> Ulf’s Erlang Workshop paper [2] covers quite a bit about it, and I’ve also
> written about it [3].
>
> [1] http://erlang.org/pipermail/erlang-questions/2012-July/067749.html
> [2] http://svn.ulf.wiger.net/gproc/doc/erlang07-wiger.pdf
> [3]
> http://christophermeiklejohn.com/erlang/2013/06/05/erlang-gproc-failure-semantics.html
>
> - Chris
>
> > On Wed, Nov 13, 2013 at 2:38 AM, akonsu <akonsu@REDACTED> wrote:
> > > I have a pubsub system which has a single publisher process that
> > > maintains all its subscriber processes' Pids in an ETS table.
> > >
> > > The publisher monitors its subscribers, and when the publisher receives
> > > a 'DOWN' message, it removes the subscriber's Pid from the table.
> > >
> > > The table entries are tuples of the form {MonitorRef, SubscriberPid},
> and the MonitorRef is used as the key.
> > >
> > > Now I would like to make sure that if the publisher dies and is
> > > restarted by its supervisor, the subscriber table is preserved. So I
> > > created a process that creates the ETS table, sets the table's heir to
> > > self(), and then gives the table away to the publisher.
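The heir arrangement described here can be sketched with plain ETS calls. This is a minimal sketch under assumptions: the table name `subscribers` and the gift/heir data atoms are illustrative, but `{heir, Pid, HeirData}` and ets:give_away/3 are the real mechanism.

```erlang
%% Sketch: a long-lived keeper process creates the table, names itself
%% as heir, and hands ownership to the publisher. If the publisher
%% dies, ETS sends the keeper {'ETS-TRANSFER', Table, DeadPid, heir_data}
%% instead of destroying the table.
init_table(PublisherPid) ->
    Table = ets:new(subscribers, [set, protected,
                                  {heir, self(), heir_data}]),
    true = ets:give_away(Table, PublisherPid, transfer_data),
    Table.
```

On give_away, the new owner (the publisher) receives an {'ETS-TRANSFER', Table, FromPid, transfer_data} message, which is its cue to take over the table.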
> > >
> > > The problem is that I do not know how to handle transfer of the table
> from the heir to the publisher:
> > >
> > > When the publisher receives the table, it contains tuples of the form
> > > {MonitorRef, SubscriberPid}, but the MonitorRefs were created by the
> > > previous publisher instance, so when the new publisher receives the
> > > 'ETS-TRANSFER' message it needs to monitor all these SubscriberPids again.
> > >
> > > What is the best way to do it? Loop over all ETS entries, attach a
> > > monitor to each, and re-insert the entries under the new MonitorRefs?
> > > This might be slow, no? Maybe my architecture can be improved? Any advice?
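The re-monitoring step being asked about could look like the sketch below. This is a hypothetical gen_server fragment, not the poster's code: the function name remonitor_subscribers/1 is invented, and the one real subtlety is snapshotting the entries with ets:tab2list/1 first, so that re-inserting under new keys does not interfere with the traversal.

```erlang
%% Sketch: on taking over the table, replace every stale MonitorRef
%% (minted by the dead publisher) with a fresh monitor from this process.
handle_info({'ETS-TRANSFER', Table, _FromPid, _GiftData}, State) ->
    remonitor_subscribers(Table),
    {noreply, State}.

remonitor_subscribers(Table) ->
    %% Snapshot first: mutating the table while folding over it
    %% could revisit the freshly inserted keys.
    OldEntries = ets:tab2list(Table),
    lists:foreach(
      fun({OldRef, SubscriberPid}) ->
              NewRef = erlang:monitor(process, SubscriberPid),
              ets:delete(Table, OldRef),
              ets:insert(Table, {NewRef, SubscriberPid})
      end,
      OldEntries).
```

This is a single O(N) pass over the subscribers, done once per publisher restart, so for most table sizes it is unlikely to be the bottleneck; monitoring a dead Pid is also safe, since erlang:monitor/2 simply delivers an immediate 'DOWN'. An alternative that avoids the re-keying entirely is to key the table on SubscriberPid and store the MonitorRef as a value.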
> > >
> > > thanks
> > > Konstantin
> > >
> > >
> > > _______________________________________________
> > > erlang-questions mailing list
> > > erlang-questions@REDACTED
> > > http://erlang.org/mailman/listinfo/erlang-questions