[erlang-questions] Help creating distributed server cache
David Fox
david@REDACTED
Thu Dec 6 22:43:22 CET 2012
I'm currently developing a gaming server which stores player information
that can be accessed from any of our games via a REST API.
So far I've thought of two ways to structure and cache player data:
1. When a client requests data on a player, spawn one player process for
that player. This process handles all subsequent requests from clients
for that player, retrieves the player data from the DB when it is
spawned, and periodically writes any new data from clients back to the
DB. If the player is not requested by another client within, say, 30
minutes, the player process terminates.
2. Just keep previously requested data in a distributed LRU cache (e.g.,
memcached, Redis, Mnesia).
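Approach #1 maps naturally onto a gen_server with an idle timeout: returning a timeout value from each callback restarts the clock, and a `timeout` message means no request arrived in that window. A minimal sketch, assuming hypothetical `db:load/1` and `db:save/2` helpers for the database layer:

```erlang
%% Sketch of option #1: one gen_server per player, stopping after
%% 30 minutes without a request. player_srv, db:load/1 and db:save/2
%% are placeholder names, not an existing API.
-module(player_srv).
-behaviour(gen_server).
-export([start_link/1, get/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2]).

-define(IDLE_TIMEOUT, 30 * 60 * 1000).  % 30 minutes, in milliseconds

start_link(PlayerId) ->
    gen_server:start_link(?MODULE, PlayerId, []).

get(Pid) ->
    gen_server:call(Pid, get).

init(PlayerId) ->
    Data = db:load(PlayerId),            % fetch player data once, at spawn
    {ok, {PlayerId, Data}, ?IDLE_TIMEOUT}.

handle_call(get, _From, {_Id, Data} = State) ->
    %% Returning a timeout from every callback resets the idle timer.
    {reply, Data, State, ?IDLE_TIMEOUT}.

handle_cast(_Msg, State) ->
    {noreply, State, ?IDLE_TIMEOUT}.

handle_info(timeout, {Id, Data} = State) ->
    %% No request for 30 minutes: flush to the DB and stop normally.
    db:save(Id, Data),
    {stop, normal, State}.

terminate(_Reason, _State) ->
    ok.
```

With per-player timeouts like this, each process evicts itself, which already gives you most of what an explicit LRU would.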
Out of the two, I prefer #1, since it would let me separate the
functionality of different "data types" (e.g., player data, game data).
There are just two problems with doing it this way that I'd like your
thoughts and help with:
I. I would have to implement some sort of "LRU process cache" so I could
terminate the least-recently-used processes to free memory for new ones.
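If an explicit LRU is still needed (e.g., to cap the process count under memory pressure), one common pattern is to keep the bookkeeping in ETS: a set table from player id to pid plus last-used time, and an ordered_set keyed on that time so the oldest entry is always first. A minimal sketch, with hypothetical names:

```erlang
%% Minimal LRU bookkeeping sketch using two ETS tables: by_id maps
%% PlayerId -> {Pid, LastUsed}; by_time (an ordered_set) maps
%% {LastUsed, PlayerId} -> Pid, so ets:first/1 yields the LRU entry.
-module(lru_registry).
-export([new/0, touch/3, evict_oldest/1]).

new() ->
    {ets:new(by_id, [set, public]),
     ets:new(by_time, [ordered_set, public])}.

%% Record that PlayerId (served by Pid) was just used.
touch({ById, ByTime}, PlayerId, Pid) ->
    Now = erlang:monotonic_time(),
    case ets:lookup(ById, PlayerId) of
        [{_, _OldPid, Old}] -> ets:delete(ByTime, {Old, PlayerId});
        [] -> ok
    end,
    ets:insert(ById, {PlayerId, Pid, Now}),
    ets:insert(ByTime, {{Now, PlayerId}, Pid}).

%% Terminate the least-recently-used player process.
evict_oldest({ById, ByTime}) ->
    case ets:first(ByTime) of
        '$end_of_table' ->
            empty;
        {_, PlayerId} = Key ->
            [{_, Pid, _}] = ets:lookup(ById, PlayerId),
            ets:delete(ByTime, Key),
            ets:delete(ById, PlayerId),
            exit(Pid, shutdown),   % or a graceful stop message
            {evicted, PlayerId}
    end.
```

Including the player id in the by_time key avoids collisions when two touches land on the same monotonic timestamp.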
II. If a load balancer connects a client to node #1, but the process for
the requested player is on node #2, how can the player process on node
#2 send the data to the socket opened for the client on node #1? Is it
possible to somehow send a socket across nodes? I ask because I'd like
to avoid sending big messages between nodes.
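For what it's worth, sockets are ports, and ports cannot be handed to a process on another node (`gen_tcp:controlling_process/2` only works within one node). The usual pattern is the reverse: keep the socket on node #1 and have the local connection handler call the remote player process for just the data it needs, since `gen_server:call/2` is location-transparent. A sketch, where `player_srv` and `encode/1` are hypothetical:

```erlang
%% Node #1 owns the socket; the player process may live on node #2.
%% gen_server:call/2 works the same either way, so only the (small)
%% reply crosses the node boundary, never the socket itself.
handle_request(Socket, PlayerPid) ->
    Reply = gen_server:call(PlayerPid, get),   % possibly a remote call
    ok = gen_tcp:send(Socket, encode(Reply)).  % encode/1 is a placeholder
```

This keeps inter-node traffic down to the player data itself; if even that is too large, the alternative is to redirect the client (at the HTTP level) to the node that owns the process.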
Thanks for the help!