<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    Hi Morgan,<br>
    <br>
    Have you taken a look at Basho's Riak Core (it's open source)? They
    have nicely solved the mapping from a consistent hash ring to vnodes,
    which allows a cluster to change size dynamically while remaining
    stable.<br>
    <br>
    <a class="moz-txt-link-freetext" href="http://basho.com/where-to-start-with-riak-core/">http://basho.com/where-to-start-with-riak-core/</a><br>
    <br>
    We are looking at using it to implement distribution in our
    solution.  I haven't dived into it yet, so I can't speak to what is
    involved in adopting their approach.<br>
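    <br>
    To make the idea concrete, here is a rough sketch of the ring idea in
    plain Erlang (not riak_core's actual API; the module name, the number
    of points per node, and the ring size below are assumptions). Each
    node claims a handful of points on a fixed hash ring, and an id belongs
    to the first point at or after its hash, so adding or removing a node
    only remaps a fraction of the ids instead of reshuffling everything:<br>
    <pre>
%% chash_sketch.erl - minimal consistent-hashing sketch (not riak_core's API).
-module(chash_sketch).
-export([node_for/2]).

-define(RING, (1 bsl 32)).      %% size of the hash space
-define(POINTS_PER_NODE, 16).   %% virtual points per node (arbitrary choice)

%% Map an Id to a node: hash the Id onto the ring, walk clockwise to the
%% first virtual point, and return the node that claimed that point.
node_for(Id, Nodes) ->
    Ring  = ring(Nodes),                     %% [{Point, Node}] sorted by Point
    Point = erlang:phash2(Id, ?RING),
    case [Owner || {P, Owner} <- Ring, P >= Point] of
        [Owner | _] -> Owner;
        []          -> element(2, hd(Ring))  %% wrap around the ring
    end.

%% Every node claims ?POINTS_PER_NODE points on the ring. Adding or removing
%% a node only moves the ids that hash between its points and their
%% predecessors; everything else keeps its owner.
ring(Nodes) ->
    lists:keysort(1,
        [{erlang:phash2({Node, I}, ?RING), Node}
         || Node <- Nodes, I <- lists:seq(1, ?POINTS_PER_NODE)]).
</pre>
    Every node that computes node_for/2 over the same node list gets the
    same answer for the same id, which is the stable mapping you were
    asking about below.<br>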
    <br>
    Cheers,<br>
    Bryan<br>
    <br>
    <div class="moz-cite-prefix">On 5/31/13 3:52 AM, Morgan Segalis
      wrote:<br>
    </div>
    <blockquote
      cite="mid:78DED4DA-C92D-4580-B3EA-185B9A06AD88@gmail.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html;
        charset=ISO-8859-1">
      Hi Dmitry, 
      <div><br>
      </div>
        <div>I have thought about consistent hashing.
        <div><br>
        </div>
        <div>The only issue is that consistent hashing will only work if
          we have a fixed number of clusters. If we dynamically add
          another cluster, the hash won't give me the same cluster…</div>
        <div>I might be wrong…</div>
        <div><br>
        </div>
        <div>Actually, right now I have a gateway which chooses the
          cluster with the fewest connected clients and redirects the
          client to that one. It works like a load balancer, among the
          other things the gateway does.</div>
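        <div><br>
        </div>
        <div>For illustration only, the selection described above boils
          down to something like this hedged sketch, assuming each node
          exposes its connection count through a hypothetical
          clientpool:count/0:</div>
        <pre>
%% Sketch: pick the node with the fewest connected clients.
%% clientpool:count/0 is an assumed helper returning the local client count.
pick_node(Nodes) ->
    Counts = [{rpc:call(Node, clientpool, count, []), Node} || Node <- Nodes],
    {_Count, Node} = lists:min(Counts),      %% smallest count wins
    Node.
</pre>
        <div>An unreachable node answers {badrpc, _}, and tuples sort
          after integers in Erlang's term order, so a reachable node is
          preferred whenever one exists.</div>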
        <div><br>
        </div>
        <div>Best regards,</div>
        <div>Morgan.</div>
        <div><br>
        </div>
        <div><br>
          <div>
            <div>On May 31, 2013, at 12:30 PM, Dmitry Kolesnikov <<a
                moz-do-not-send="true"
                href="mailto:dmkolesnikov@gmail.com">dmkolesnikov@gmail.com</a>>
              wrote:</div>
            <br class="Apple-interchange-newline">
            <blockquote type="cite">
              <meta http-equiv="Content-Type" content="text/html;
                charset=ISO-8859-1">
              <div style="word-wrap: break-word; -webkit-nbsp-mode:
                space; -webkit-line-break: after-white-space; ">Hello,
                <div><br>
                </div>
                <div>Your current implementation will start to suffer
                  performance-wise, due to the large number of messages
                  needed to discover a process's location.</div>
                <div>You have to define a formal rule about the "id" and
                  its relation to the node where the process lives.
                  Essentially, I am talking about consistent hashing.</div>
                <div><br>
                </div>
                <div>To be honest, I don't see what is wrong with ETS
                  and gproc. I am using a similar approach for my cluster
                  management. I use a P2P methodology where the local
                  tables are synced periodically and updates to the local
                  table are replicated to the cluster members. Each node
                  is able to observe the status of the cluster members.
                  Once a node is disconnected, its entries in the table
                  are cleaned up. However, I am using that approach for
                  "internal" processes; "client connections" are not
                  distributed globally.</div>
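                <div><br>
                </div>
                <div>Roughly, that approach has the following shape (a
                  minimal sketch only, with assumed module, table, and
                  function names, not actual code from this thread):</div>
                <pre>
%% registry_sync.erl - sketch of the P2P approach: local registrations are
%% pushed to peers, the local slice is periodically re-synced, and entries
%% belonging to a disconnected node are purged.
-module(registry_sync).
-behaviour(gen_server).
-export([start_link/0, register_client/2, whereis_id/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

-define(TAB, client_registry).
-define(SYNC_MS, 30000).                       %% periodic re-sync interval

start_link() -> gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

register_client(Id, Pid) -> gen_server:call(?MODULE, {register, Id, Pid}).

whereis_id(Id) ->
    case ets:lookup(?TAB, Id) of
        [{Id, Pid, _Node}] -> {ok, Pid};
        []                 -> undefined
    end.

init([]) ->
    ets:new(?TAB, [named_table, set, protected]),
    ok = net_kernel:monitor_nodes(true),       %% get {nodeup,_}/{nodedown,_}
    erlang:send_after(?SYNC_MS, self(), sync),
    {ok, no_state}.

handle_call({register, Id, Pid}, _From, State) ->
    Entry = {Id, Pid, node()},
    ets:insert(?TAB, Entry),
    gen_server:abcast(nodes(), ?MODULE, {merge, [Entry]}),  %% replicate update
    {reply, ok, State}.

handle_cast({merge, Entries}, State) ->
    ets:insert(?TAB, Entries),                 %% accept entries from peers
    {noreply, State}.

handle_info({nodedown, Node}, State) ->        %% peer gone: drop its clients
    ets:match_delete(?TAB, {'_', '_', Node}),
    {noreply, State};
handle_info({nodeup, Node}, State) ->          %% peer (re)joined: send our slice
    Local = ets:match_object(?TAB, {'_', '_', node()}),
    gen_server:abcast([Node], ?MODULE, {merge, Local}),
    {noreply, State};
handle_info(sync, State) ->                    %% periodic full re-sync
    Local = ets:match_object(?TAB, {'_', '_', node()}),
    gen_server:abcast(nodes(), ?MODULE, {merge, Local}),
    erlang:send_after(?SYNC_MS, self(), sync),
    {noreply, State}.
</pre>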
                <div><br>
                </div>
                <div>Best Regards, </div>
                <div>Dmitry</div>
                <div><br>
                  <div>
                    <div>On May 31, 2013, at 1:12 PM, Morgan Segalis
                      <<a moz-do-not-send="true"
                        href="mailto:msegalis@gmail.com">msegalis@gmail.com</a>>
                      wrote:</div>
                    <br class="Apple-interchange-newline">
                    <blockquote type="cite">
                      <meta http-equiv="Content-Type"
                        content="text/html; charset=ISO-8859-1">
                      <div style="word-wrap: break-word;
                        -webkit-nbsp-mode: space; -webkit-line-break:
                        after-white-space; ">
                        <div>Hi,</div>
                        <div><br>
                        </div>
                        <div>Actually, right now I have this
                          implementation:</div>
                        <div><br>
                        </div>
                        <div>Each node has a clientpool</div>
                        <div>Each client pool manages an ETS table in
                          which all clients are inserted / removed /
                          looked up.</div>
                        <div><br>
                        </div>
                        <div>If a user A sends a message to user B, it
                          first checks the local ETS table; if B is not
                          found there, it asks all the other nodes
                          whether they have user B in their ETS table.
                          If user B is connected to another node, that
                          node returns the process pid.</div>
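                        <div><br>
                        </div>
                        <div>In code, that lookup boils down to something
                          like the following sketch (the module name, the
                          clients table, and the function names are
                          assumptions, and the real code may broadcast to
                          every node at once rather than asking them one
                          by one):</div>
                        <pre>
%% lookup_sketch.erl - sketch of the lookup described above: check the local
%% ETS table first, then ask the other nodes one by one. Assumes a named ETS
%% table 'clients' with {Id, Pid} rows, created elsewhere.
-module(lookup_sketch).
-export([find_client/1, local_lookup/1]).

find_client(Id) ->
    case ets:lookup(clients, Id) of
        [{Id, Pid}] -> {ok, Pid};
        []          -> ask_other_nodes(Id, nodes())
    end.

ask_other_nodes(_Id, []) -> offline;
ask_other_nodes(Id, [Node | Rest]) ->
    case rpc:call(Node, ?MODULE, local_lookup, [Id]) of
        {ok, Pid} when is_pid(Pid) -> {ok, Pid};
        _                          -> ask_other_nodes(Id, Rest)
    end.

%% Runs on the remote node and answers from its own local table.
local_lookup(Id) ->
    case ets:lookup(clients, Id) of
        [{Id, Pid}] -> {ok, Pid};
        []          -> not_found
    end.
</pre>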
                        <div><br>
                        </div>
                        <div>Advantage of this implementation: no
                          synchronization is required if one node goes
                          down and comes back up again…</div>
                        <div>Disadvantage of this implementation: I guess
                          the number of messages per second is much
                          higher than the number of connections /
                          disconnections per second.</div>
                        <div><br>
                        </div>
                        <br>
                        <div>
                          <div>On May 31, 2013, at 12:36 AM, Chris Hicks <<a
                              moz-do-not-send="true"
                              href="mailto:khandrish@gmail.com">khandrish@gmail.com</a>>
                            wrote:</div>
                          <br class="Apple-interchange-newline">
                          <blockquote type="cite">
                            <div dir="ltr">Please keep in mind that this
                              is the result of a total of about 5
                              seconds of thinking.
                              <div><br>
                              </div>
                              <div>You could have a coordinator on each
                                node which is responsible for
                                communicating with the coordinators on
                                all of the other connected nodes. Your
                                ETS entries would need to be expanded to
                                also keep track of the node that the
                                user is connected on. The coordinators
                                track the other nodes joining and
                                leaving the cluster and purge the ETS
                                table of any entries that belong to a
                                recently downed node. As long as you
                                don't have hard real-time requirements
                                (and if you do, you're using the wrong
                                tool anyway), you can come up with a
                                bunch of ways to group updates between
                                coordinators so they don't get
                                overloaded.</div>
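                              <div><br>
                              </div>
                              <div>A rough sketch of the batching idea,
                                shown as a fragment of such a coordinator
                                gen_server rather than a full module (the
                                buffer field, the merge message, the
                                clients table with {Id, Pid, Node} rows,
                                and the one-second flush interval are all
                                assumptions):</div>
                              <pre>
%% Buffer local updates and flush them to the peer coordinators at most once
%% per interval, instead of sending one message per connect/disconnect.
handle_cast({local_update, Entry}, State = #{buffer := Buf}) ->
    {noreply, State#{buffer := [Entry | Buf]}};
handle_cast({merge, Entries}, State) ->
    ets:insert(clients, Entries),              %% rows are {Id, Pid, Node}
    {noreply, State}.

handle_info(flush, State = #{buffer := []}) -> %% nothing buffered this round
    erlang:send_after(1000, self(), flush),
    {noreply, State};
handle_info(flush, State = #{buffer := Buf}) ->
    gen_server:abcast(nodes(), ?MODULE, {merge, Buf}),
    erlang:send_after(1000, self(), flush),
    {noreply, State#{buffer := []}}.
</pre>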
                              <div><br>
                              </div>
                              <div style="">Without a lot more details
                                on the exact sort of metrics your system
                                needs to be able to handle it's all
                                really just a guessing game, in the end.</div>
                            </div>
                            <div class="gmail_extra"><br>
                              <br>
                              <div class="gmail_quote">On Thu, May 30,
                                2013 at 8:38 AM, Morgan Segalis <span
                                  dir="ltr"><<a
                                    moz-do-not-send="true"
                                    href="mailto:msegalis@gmail.com"
                                    target="_blank">msegalis@gmail.com</a>></span>
                                wrote:<br>
                                <blockquote class="gmail_quote"
                                  style="margin: 0px 0px 0px 0.8ex;
                                  border-left-width: 1px;
                                  border-left-color: rgb(204, 204, 204);
                                  border-left-style: solid;
                                  padding-left: 1ex; position: static;
                                  z-index: auto; ">Hi everyone,<br>
                                  <br>
                                  I'm currently looking for a better way
                                  to maintain a clustered list of
                                  connected clients.<br>
                                  <br>
                                  On every server I have an ETS table in
                                  which I insert / delete / look up the
                                  process of each client connecting to
                                  the server.<br>
                                  <br>
                                  Each row in the ETS table is, for now,
                                  a simple {"id", &lt;pid.of.process&gt;}<br>
                                  <br>
                                  I have tried the gproc module from Ulf
                                  Wiger, but when a cluster goes down,
                                  everything goes wrong… (especially if
                                  it is the elected leader).<br>
                                  <br>
                                  If a cluster goes down, the other
                                  clusters should consider every client
                                  connected to that cluster as no longer
                                  connected (even if it is just a simple
                                  connection error between clusters).<br>
                                  If it comes back online, back on the
                                  nodes() list, the other clusters should
                                  consider the clients on that cluster
                                  online again.<br>
                                  <br>
                                  What would be, in your opinion, the
                                  best way to do that?<br>
                                  <br>
                                  It is a messaging system, so it has to
                                  handle massive message passing through
                                  processes.<br>
                                  <br>
                                  Thank you for your help.<br>
                                  <br>
                                  Morgan.<br>
                                </blockquote>
                              </div>
                              <br>
                            </div>
                          </blockquote>
                        </div>
                        <br>
                      </div>
                    </blockquote>
                  </div>
                  <br>
                </div>
              </div>
            </blockquote>
          </div>
          <br>
        </div>
      </div>
      <br>
      <br>
      <pre wrap="">_______________________________________________
erlang-questions mailing list
<a class="moz-txt-link-abbreviated" href="mailto:erlang-questions@erlang.org">erlang-questions@erlang.org</a>
<a class="moz-txt-link-freetext" href="http://erlang.org/mailman/listinfo/erlang-questions">http://erlang.org/mailman/listinfo/erlang-questions</a>
</pre>
    </blockquote>
    <br>
    <div class="moz-signature">-- <br>
      <p style="font-size:12px">
        Bryan Hughes<br>
        <b>Go Factory</b><br>
        <a class="moz-txt-link-freetext" href="http://www.go-factory.net">http://www.go-factory.net</a><br>
        <br>
        <i>"Internet Class, Enterprise Grade"</i><br>
      </p>
      <p><br>
      </p>
    </div>
  </body>
</html>