<html><head><meta http-equiv="Content-Type" content="text/html; charset=windows-1252"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><div>Hi,</div><div><br></div><div>Right now I have this implementation:</div><div><br></div><div>Each node has a client pool.</div><div>Each client pool manages an ets table in which all clients are inserted, removed, and searched.</div><div><br></div><div>If user A sends a message to user B, the system first checks the local ets table; if B is not found there, it asks every other node whether user B is in its ets table. If user B is connected to another node, that node returns the process pid.</div><div><br></div><div>Advantage of this implementation: no synchronization is required when a node goes down and comes back up…</div><div>Disadvantage of this implementation: I suspect the number of messages per second is much higher than the number of connections/disconnections per second, so paying a cluster-wide lookup on every message is the expensive path.</div><div><br></div><br><div><div>On 31 May 2013 at 00:36, Chris Hicks <<a href="mailto:khandrish@gmail.com">khandrish@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><div dir="ltr">Please keep in mind that this is the result of a total of about 5 seconds of thinking.<div><br></div><div>You could have a coordinator on each node that is responsible for communicating with the coordinators on all of the other connected nodes. Your ETS entries would need to be expanded to also keep track of the node each user is connected on. The coordinators track all other nodes joining and leaving the cluster and purge the ETS table of any entries that belong to a recently downed node. As long as you don't have hard real-time requirements (and if you do, you're using the wrong tool anyway), you can come up with plenty of ways to batch updates between coordinators so they don't get overloaded.</div>
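A minimal sketch of that coordinator idea, assuming entries become {Id, Pid, Node} and each node runs a gen_server that subscribes to node up/down events and purges stale rows (the module, table, and function names here are hypothetical, not from either mail):

```erlang
%% Hypothetical sketch of the per-node coordinator described above.
%% Rows are {UserId, Pid, Node}; on nodedown we purge that node's rows.
-module(clientpool_coord).
-behaviour(gen_server).
-export([start_link/0, register/2, whereis_user/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

register(Id, Pid) ->
    gen_server:call(?MODULE, {register, Id, Pid}).

whereis_user(Id) ->
    case ets:lookup(clients, Id) of
        [{Id, Pid, _Node}] -> {ok, Pid};
        [] -> not_found
    end.

init([]) ->
    ets:new(clients, [named_table, set, protected]),
    case node() of
        nonode@nohost -> ok;  %% not distributed; nothing to monitor
        _ -> ok = net_kernel:monitor_nodes(true)  %% {nodeup,N}/{nodedown,N}
    end,
    {ok, #{}}.

handle_call({register, Id, Pid}, _From, State) ->
    true = ets:insert(clients, {Id, Pid, node(Pid)}),
    {reply, ok, State}.

handle_cast(_Msg, State) -> {noreply, State}.

handle_info({nodedown, Node}, State) ->
    %% Drop every entry that belonged to the downed node.
    ets:match_delete(clients, {'_', '_', Node}),
    {noreply, State};
handle_info({nodeup, _Node}, State) ->
    %% The remote coordinator would re-announce its clients; omitted here.
    {noreply, State};
handle_info(_Info, State) ->
    {noreply, State}.
```

Batching the re-announcements on nodeup (rather than one message per client) would be one way to keep coordinators from getting overloaded, as Chris suggests.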
<div><br></div><div style="">Without a lot more detail on the exact metrics your system needs to handle, it's all really just a guessing game in the end.</div>
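For reference, the "check the local ets table, then ask every other node" lookup that Morgan describes at the top of the thread could be sketched like this, assuming each node's table holds {Id, Pid} rows (the module and table names are illustrative, not from the original mail):

```erlang
%% Hypothetical sketch of the existing lookup: check the local ets
%% table first, then ask every other connected node in parallel.
-module(clientpool_lookup).
-export([whereis_user/1, local_lookup/1]).

%% Called on each node (locally or via rpc) to check its own table.
local_lookup(Id) ->
    case ets:lookup(clients, Id) of
        [{Id, Pid}] -> {ok, Pid};
        [] -> not_found
    end.

whereis_user(Id) ->
    case local_lookup(Id) of
        {ok, Pid} ->
            {ok, Pid};
        not_found ->
            %% Ask all other nodes at once; take the first {ok, Pid}.
            {Replies, _BadNodes} =
                rpc:multicall(nodes(), ?MODULE, local_lookup, [Id], 5000),
            case [P || {ok, P} <- Replies] of
                [Pid | _] -> {ok, Pid};
                [] -> not_found
            end
    end.
```

This is the design whose cost scales with messages per second rather than connections per second, which is the trade-off Morgan points out.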
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, May 30, 2013 at 8:38 AM, Morgan Segalis <span dir="ltr"><<a href="mailto:msegalis@gmail.com" target="_blank">msegalis@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi everyone,<br>
<br>
I'm currently looking for better ways to maintain a clustered connected-client list.<br>
<br>
On every server I have an ets table in which I insert, delete, and search the process of each client connected to the server.<br>
<br>
Each row in the ets table is for now a simple {"id", <pid.of.process>}<br>
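The per-node table described above could be created and used like this (table name, options, and the example key are just for illustration):

```erlang
%% Illustrative only: a per-node table of {Id, Pid} rows.
Tab = ets:new(clients, [named_table, set, public]),
true = ets:insert(Tab, {<<"id">>, self()}),  %% client connects
[{_, Pid}] = ets:lookup(Tab, <<"id">>),      %% route a message to Pid
true = ets:delete(Tab, <<"id">>).            %% client disconnects
```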
<br>
I have tried the gproc module from Ulf Wiger, but when a node goes down, everything goes wrong… (especially if it is the elected leader).<br>
<br>
If a node goes down, the other nodes should consider every client connected to that node as disconnected (even if it is just a connection error between nodes).<br>
If it comes back online and reappears in the nodes() list, the other nodes should consider the clients on that node online again.<br>
<br>
What would be, in your opinion, the best way to do that?<br>
<br>
It is a messaging system, so it has to handle massive message passing between processes.<br>
<br>
Thank you for your help.<br>
<br>
Morgan.<br>
_______________________________________________<br>
erlang-questions mailing list<br>
<a href="mailto:erlang-questions@erlang.org">erlang-questions@erlang.org</a><br>
<a href="http://erlang.org/mailman/listinfo/erlang-questions" target="_blank">http://erlang.org/mailman/listinfo/erlang-questions</a><br>
</blockquote></div><br></div>
</blockquote></div><br></body></html>