[erlang-questions] Help creating distributed server cache
Fri Dec 7 19:49:59 CET 2012
There is no hard requirement for a RESTful API, but since this API will
be used in a wide variety of places (e.g., web/html5 games, mobile,
flash, etc) and not just internally, we decided having a RESTful API
would be a good idea and make using the API in development quicker/easier.
On 12/6/2012 18:42, Steve Davis wrote:
> From what you've said, I would guess that the correct answer is:
> 2) memcached protocol
> Your solution (1) starts with "When a client requests data on a
> player, spawn 1 player process." If that had been "when a client
> requests data on themselves from another game" then it could have been
> in the running...
> A memcached implementation will sort out LRU without you having to
> reinvent (stabilize, test) a wheel.
> Not sure why there's a REST requirement. If this MUST be HTTP then I
> see it, otherwise what does it do for you?
> My 2c,
> On Thursday, December 6, 2012 3:43:22 PM UTC-6, David Fox wrote:
> I'm currently developing a gaming server which stores player data
> that can be accessed from any of our games via a REST API.
> So far I've thought of two ways to structure and cache player data:
> 1. When a client requests data on a player, spawn one player process.
> This process handles all subsequent requests from clients for this
> player: retrieving the player data from the DB when created, and
> periodically updating the DB with any new data from clients. If the
> player is not requested by another client within, say, 30 minutes,
> the player process will terminate.
> 2. Just keep previously requested data in a distributed LRU cache
> (e.g., memcached, redis, mnesia).
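Option 1 maps naturally onto one gen_server per player with an idle timeout. A minimal sketch, assuming a hypothetical module name and placeholder DB calls (nothing here is from the original post):

```erlang
%% Hypothetical sketch of option 1: one gen_server per player that
%% loads its state from the DB on start and terminates after 30
%% minutes without a request. db_load/1 and db_store/2 are stubs.
-module(player_srv).
-behaviour(gen_server).
-export([start_link/1, get_data/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

-define(IDLE_TIMEOUT, 30 * 60 * 1000).    %% 30 minutes in ms

start_link(PlayerId) ->
    gen_server:start_link(?MODULE, PlayerId, []).

get_data(Pid) ->
    gen_server:call(Pid, get_data).

init(PlayerId) ->
    Data = db_load(PlayerId),             %% placeholder DB read
    {ok, {PlayerId, Data}, ?IDLE_TIMEOUT}.

handle_call(get_data, _From, {_Id, Data} = State) ->
    %% Returning a timeout in the reply tuple restarts the idle
    %% timer on every request.
    {reply, {ok, Data}, State, ?IDLE_TIMEOUT}.

handle_cast(_Msg, State) ->
    {noreply, State, ?IDLE_TIMEOUT}.

handle_info(timeout, {Id, Data} = State) ->
    %% No request arrived within the window: flush and stop.
    db_store(Id, Data),
    {stop, normal, State};
handle_info(_Info, State) ->
    {noreply, State, ?IDLE_TIMEOUT}.

terminate(_Reason, _State) -> ok.
code_change(_OldVsn, State, _Extra) -> {ok, State}.

db_load(_PlayerId) -> #{}.                %% stub
db_store(_PlayerId, _Data) -> ok.         %% stub
```

Note that the gen_server timeout also answers part of problem I below: idle processes reap themselves, so a separate LRU layer is only needed if memory pressure forces eviction sooner than the timeout.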
> Out of the two, I prefer #1 since it would allow me to separate the
> functionality of different "data types" (e.g., player data, game data).
> There are just 2 problems with doing it this way that I'd like your
> thoughts and help with:
> I. I would have to implement some sort of "LRU process cache" so I
> can terminate processes to free memory for new ones.
> II. If a load balancer connects a client to node #1, but the process
> for the requested player is on node #2, how can the player process on
> node #2 send the data to the socket opened for the client on node #1?
> Is it possible to somehow send a socket across nodes? The reason I
> ask is that I'd like to avoid sending big messages across nodes.
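On II: a gen_tcp socket is a port, and ports are bound to the node that opened them, so the socket itself cannot be sent to node #2. The usual pattern is to keep the connection handler on the node that accepted the socket and fetch the data with a cross-node call; only the reply travels over the distribution link. A rough sketch, assuming the player process is registered via the global module (module and registry names are illustrative):

```erlang
%% Hypothetical sketch: the connection handler stays on the node that
%% accepted the socket and asks the (possibly remote) player process
%% for the data. The reply message crosses the node boundary; the
%% socket never does.
-module(conn_handler).
-export([serve/2]).

serve(Socket, PlayerId) ->
    %% Locate the player process; it may live on another node.
    %% global is a stand-in for whatever registry you use (gproc, etc).
    Pid = global:whereis_name({player, PlayerId}),
    %% gen_server:call works transparently across nodes.
    {ok, Data} = gen_server:call(Pid, get_data),
    gen_tcp:send(Socket, encode(Data)).

encode(Data) -> term_to_binary(Data).     %% placeholder serialization
```

If the replies really are large, the alternative is to have the load balancer route the client to the node that already owns the player process, so the data never crosses the link at all.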
> Thanks for the help!