[erlang-questions] Help creating distributed server cache
Fri Dec 7 23:55:17 CET 2012
I've never heard of this tech before. Thanks for the heads up, looks
quite interesting :)
On 12/7/2012 13:34, Arthur Ingram wrote:
> Take a look at the following
> On Thursday, December 6, 2012 3:43:22 PM UTC-6, David Fox wrote:
> I'm currently developing a gaming server which stores player data
> that can be accessed from any of our games via a REST API.
> So far I've thought of two ways to structure and cache player data:
> 1. When a client requests data on a player, spawn 1 player process.
> This process handles: all subsequent requests from clients for this
> player, retrieving the player data from the DB when created, and
> periodically updating the DB with any new data from clients. If the
> player is not requested by another client within... say 30 minutes,
> the player process will terminate.
> 2. Just keep previously requested data in a distributed LRU cache
> (e.g., memcached, redis, mnesia)
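For option 1, the 30-minute idle termination falls out naturally from
gen_server's timeout mechanism. A rough, untested sketch (module name,
record shape and the db_load/db_store helpers are all made up here):

```erlang
%% Hypothetical per-player process (option 1). Terminates after
%% 30 minutes without receiving any request.
-module(player_proc).
-behaviour(gen_server).
-export([start_link/1, get_data/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

-define(IDLE_TIMEOUT, 30 * 60 * 1000).  % 30 minutes in ms

start_link(PlayerId) ->
    gen_server:start_link(?MODULE, PlayerId, []).

get_data(Pid) ->
    gen_server:call(Pid, get_data).

init(PlayerId) ->
    %% db_load/1 stands in for whatever DB access you use.
    Data = db_load(PlayerId),
    {ok, {PlayerId, Data}, ?IDLE_TIMEOUT}.

handle_call(get_data, _From, {_Id, Data} = State) ->
    %% Every request resets the idle timeout.
    {reply, Data, State, ?IDLE_TIMEOUT};
handle_call(_Req, _From, State) ->
    {reply, ok, State, ?IDLE_TIMEOUT}.

handle_cast(_Msg, State) ->
    {noreply, State, ?IDLE_TIMEOUT}.

%% No message arrived within the timeout: write back and stop.
handle_info(timeout, {PlayerId, Data} = State) ->
    db_store(PlayerId, Data),
    {stop, normal, State}.

terminate(_Reason, _State) -> ok.
code_change(_Old, State, _Extra) -> {ok, State}.

db_load(_PlayerId) -> #{}.          % placeholder
db_store(_PlayerId, _Data) -> ok.   % placeholder
```

Returning `?IDLE_TIMEOUT` from every callback is what keeps the clock
ticking: any message resets it, and only true inactivity triggers the
`timeout` info message.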
> Out of the two, I prefer #1 since it would allow me to separate the
> functionality of different "data types" (e.g., player data, game data).
> There are just 2 problems with doing it this way that I'd like your
> thoughts and help with:
> I. I would have to implement some sort of "LRU process cache" so I
> can terminate processes to free memory for new ones.
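If the idle timeout alone isn't enough to bound memory, one way to
approach problem I is an ETS table of last-access times plus a periodic
sweep. This is only a sketch under the assumption that each player
process registers itself in the table and handles a `stop` cast; all
names are hypothetical:

```erlang
%% Hypothetical LRU sweep: evict the least recently used player
%% processes once the table grows past a limit. Assumes each
%% request calls touch/2, so rows look like {PlayerId, Pid, LastUsed}.
-module(player_lru).
-export([touch/2, maybe_evict/1]).

-define(TAB, player_cache).

touch(PlayerId, Pid) ->
    ets:insert(?TAB, {PlayerId, Pid, erlang:monotonic_time()}).

maybe_evict(MaxProcs) ->
    case ets:info(?TAB, size) of
        N when is_integer(N), N > MaxProcs ->
            %% Sort rows by LastUsed (3rd element) and stop the oldest.
            Entries = lists:keysort(3, ets:tab2list(?TAB)),
            {Oldest, _Rest} = lists:split(N - MaxProcs, Entries),
            [begin
                 %% Assumes the player gen_server stops on this cast.
                 gen_server:cast(Pid, stop),
                 ets:delete(?TAB, Id)
             end || {Id, Pid, _T} <- Oldest],
            ok;
        _ ->
            ok
    end.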
> II. If a load balancer connects a client to node #1, but the process
> for the requested player is on node #2, how can the player process on
> node #2 send the data to the socket opened for the client on node #1?
> Is it possible to somehow send a socket across nodes? The reason I
> ask is that I'd like to prevent sending big messages across nodes.
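On problem II: an Erlang port (and therefore a socket) is tied to the
node that opened it, so it cannot be handed to another node;
gen_tcp:controlling_process/2 only transfers ownership between
processes on the same node. The usual pattern is to leave the socket
with a connection process on node #1 and have the remote player process
message it, since pids are location transparent even though ports are
not. A minimal sketch (module and message shapes are made up):

```erlang
%% Hypothetical connection handler on node #1. The socket stays
%% here; the (possibly remote) player process only exchanges
%% messages with this process.
-module(conn_handler).
-export([loop/2]).

loop(Socket, PlayerPid) ->
    receive
        {tcp, Socket, Request} ->
            %% Forward the request to the player process, which may
            %% live on node #2.
            PlayerPid ! {request, self(), Request},
            loop(Socket, PlayerPid);
        {reply, Payload} ->
            %% Reply came back across the cluster; write it to the
            %% locally owned socket.
            ok = gen_tcp:send(Socket, Payload),
            loop(Socket, PlayerPid);
        {tcp_closed, Socket} ->
            ok
    end.
```

Note this does mean the reply payload itself crosses the node boundary.
If you want to avoid shipping big payloads between nodes entirely, the
alternative is to redirect the client so it connects to the node where
the player process lives, rather than moving the socket.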
> Thanks for the help!