[erlang-questions] gen_server clustering strategy

Evans, Matthew mevans@REDACTED
Tue Feb 1 20:10:05 CET 2011

Without knowing much more, another model would be to use your own ets/mnesia table to map the workers.

I am assuming the creation of a worker is relatively infrequent (compared to the times you need to find a worker).

When you create a worker use gen_server:start(?MODULE, [], []).

This will return {ok,Pid}. You can save the Pid in an mnesia table along with whatever reference you need (it doesn't need to be an atom). All nodes in the mesh will get the pid and the reference. Starting it as shown above (with gen_server:start/3, i.e. without a registered name) means you don't need to use the global service or ensure the names are unique atoms for each server. You could even start them via some application that spreads the gen_servers over different nodes.

When a request comes in, it only needs to do a lookup to find the correct pid.
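A minimal sketch of that idea, assuming a replicated mnesia table. The table name, record fields, and the worker_server callback module are illustrative, not from the original post, and mnesia is assumed to already be started on all nodes:

```erlang
%% Sketch only: table/record names and worker_server are assumptions.
-module(worker_registry).
-export([init_table/0, start_worker/1, lookup/1]).

-record(worker, {user_id, pid}).

%% Create an in-memory table replicated across the connected nodes.
init_table() ->
    mnesia:create_table(worker,
        [{ram_copies, [node() | nodes()]},
         {attributes, record_info(fields, worker)}]).

%% Start an anonymous gen_server (no registered name, no atoms needed)
%% and record its pid against the user id.
start_worker(UserId) ->
    {ok, Pid} = gen_server:start(worker_server, [], []),
    ok = mnesia:dirty_write(#worker{user_id = UserId, pid = Pid}),
    {ok, Pid}.

%% On each incoming request, a single lookup finds the pid,
%% wherever in the mesh the process actually lives.
lookup(UserId) ->
    case mnesia:dirty_read(worker, UserId) of
        [#worker{pid = Pid}] -> {ok, Pid};
        [] -> not_found
    end.
```

Dirty operations are used here for speed on the hot lookup path; transactions would be the safer choice if workers can be created concurrently for the same id.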

I can't recall how bad the process-crash messaging is. But what you could do is monitor each gen_server locally, and monitor the nodes globally. When a node fails, all other nodes can clean up the entries for processes that were registered on that node.
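The node-level cleanup could look roughly like this, assuming the registry table sketched above (a sketch with illustrative names, using net_kernel:monitor_nodes/1 to subscribe to nodeup/nodedown events):

```erlang
%% Sketch only: assumes a mnesia table of {worker, UserId, Pid} records.
-module(node_watcher).
-export([start/0]).

start() ->
    spawn(fun() ->
        ok = net_kernel:monitor_nodes(true),
        loop()
    end).

loop() ->
    receive
        {nodedown, Node} ->
            %% Select every user id whose worker pid lived on the
            %% dead node, then drop those registry entries.
            Dead = mnesia:dirty_select(worker,
                [{{worker, '$1', '$2'},
                  [{'=:=', {node, '$2'}, Node}],
                  ['$1']}]),
            [mnesia:dirty_delete(worker, Id) || Id <- Dead],
            loop();
        {nodeup, _Node} ->
            loop()
    end.
```

One such watcher per node is enough; deleting the same key on several nodes is harmless since the delete is idempotent.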


From: erlang-questions@REDACTED [erlang-questions@REDACTED] On Behalf Of Knut Nesheim [knut.nesheim@REDACTED]
Sent: Tuesday, February 01, 2011 12:25 PM
To: erlang-questions@REDACTED
Subject: [erlang-questions] gen_server clustering strategy

Hello list,

We are interested in understanding how the Erlang distribution
mechanism behaves in a fully connected cluster. Our concern is that as
we add nodes, the overhead will become a problem.

Our application is a bunch of gen_servers containing state, which we
need to run on multiple nodes due to memory usage. A request will come
to our application and it needs to be handled by the correct
gen_server. Every request includes a user id which we can use to map
to the process.

We have two specific use cases in mind.

1) Proxy to workers

In this scenario we have a proxy (or multiple proxies) accepting
requests at the edge of our cluster. The proxy will ask the correct
server for a response. Using the 'global' module, gproc, or something
similar, the proxy node will keep track of the mapping of user ids to
processes and nodes. The proxy will call the node using the Erlang
distribution protocol.
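With the stock 'global' module, the proxy side of this scenario might look like the sketch below. The {user, UserId} key format and the worker_server module are illustrative assumptions:

```erlang
%% Sketch only: key format and worker module are assumptions.
-module(proxy).
-export([register_worker/2, handle_request/2]).

%% On the worker's node: register the pid under a cluster-wide name.
%% 'global' accepts any term as a name, so no atom is needed here.
register_worker(UserId, Pid) ->
    yes = global:register_name({user, UserId}, Pid),
    ok.

%% On the proxy: resolve the name and call the process. The call goes
%% directly to the owning node over the distribution protocol.
handle_request(UserId, Request) ->
    case global:whereis_name({user, UserId}) of
        undefined -> {error, no_worker};
        Pid       -> gen_server:call(Pid, Request)
    end.
```

Note that gen_server:call/2 is location-transparent, so the same code serves the mesh scenario: if the pid happens to be local, no distribution traffic is generated.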

2) Mesh

In this scenario a request can be handled by any node in the cluster.
If the request cannot be handled by the local node, the correct node
is called on instead. Every node in the cluster needs to keep track of
which id belongs to which process.

The numbers:
 * We must be able to run 100,000 processes, each may peak at 300kb of memory
 * We expect 5000 requests coming in to our application per second at
peak, generating the same number of messages
 * Out of those 5000 messages, 1000 have a payload that may peak at 300kb
 * All the other messages are well under 1kb

Our concerns:

 * In the mesh, will the traffic between the nodes become a problem?
Let's say we have 4 nodes; if the requests are evenly divided between
the nodes, the probability of hitting the correct node is 25%. With 100
nodes this is 1%. As we add nodes, there will be more chatter. Might
this chatter "drown out" the important messages, like pid down, etc.?

 * Will it be more efficient to use the proxy to message every node?
Will the message then always go to the node directly, or may it go
through some other node in the cluster?

 * For the mesh, we need to keep the 'global' or gproc state
replicated. With 'global' we will run into the default limit of atoms.
If we were to increase this limit, what kind of memory usage could we
expect? Are there any other nasty side-effects that we should be aware of?

 * In the "Erlang/OTP in Action" book it is stated that you may have a
"couple of dozen, but probably not hundreds of nodes." Our
understanding is that this is because of traffic like "this pid just
died", "this node is no longer available", etc.

 * If we were to eliminate messaging between nodes, how many nodes
could we actually run in a fully connected cluster? Will a
high-latency network like EC2 affect this number, or just the latency
of messages? What else may affect the size?

