[erlang-questions] Help creating distributed server cache
Garrett Smith
g@REDACTED
Fri Dec 7 00:26:54 CET 2012
On Thu, Dec 6, 2012 at 5:01 PM, David Fox <david@REDACTED> wrote:
> Hi Garret, thanks for the response.
>
> I have not yet finished implementing the API; I'm still in the design phase
> figuring out how everything should be hooked up.
Right, though I think you can start to build actual pieces (i.e.
compiling/running code) based on whatever is most obvious to you, even
if it's just super stupidly simple. It will give you a basis for
iteration, which will further help your understanding. You may
find yourself spending almost no time designing :)
> Totally agree on not limiting yourself to options this early on, I'm just
> asking for some help and opinions on some problems I saw in potential
> implementations :)
>
> David Fox
> m: 630 930 9219
> Chicago
Ah, I'm also in Chicago. Do you know about the Chicago Erlang User Group:
http://www.meetup.com/ErlangChicago/
We haven't met the last few months, but I'd like to do an informal
meetup before year's end!
> On 12/6/2012 16:34, Garrett Smith wrote:
>>
>> Hi David,
>>
>> On Thu, Dec 6, 2012 at 9:43 PM, David Fox <david@REDACTED> wrote:
>>>
>>> I'm currently developing a gaming server which stores player information
>>> that can be accessed from any of our games via a REST API.
>>>
>>> So far I've thought of two ways to structure and cache player data:
>>>
>>> 1. When a client requests data on a player, spawn 1 player process. This
>>> process handles: all subsequent requests from clients for this player,
>>> retrieving the player data from the DB when created and periodically
>>> updating the DB with any new data from clients. If the player is not
>>> requested by another client within... say 30 minutes, the player process
>>> will terminate.
>>>
>>> 2. Just keep previously requested data in a distributed LRU cache (e.g.,
>>> memcached, redis, mnesia)
>>>
>>> Out of the two, I prefer #1 since it would allow me to separate the
>>> functionality of different "data types" (e.g., player data, game data).
>>>
>>> There are just 2 problems with doing it this way that I'd like your
>>> thoughts and help with:
>>> I. I would have to implement some sort of "LRU process cache" so I
>>> could terminate processes to free memory for new ones.
>>> II. If a load balancer connects a client to node #1, but the process
>>> for the requested player is on node #2, how can the player process on
>>> node #2 send the data to the socket opened for the client on node #1?
>>> Is it possible to somehow send a socket across nodes? The reason I
>>> ask is that I'd like to prevent sending big messages across nodes.
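
For what it's worth, here's a bare-bones sketch of what that kind of
per-player process could look like as a gen_server. player_db:load/1
and player_db:save/2 are made-up stand-ins for your DB layer, and the
gen_server timeout gives you the 30 minute idle shutdown:

-module(player_proc).
-behaviour(gen_server).

-export([start_link/1, fetch/1, update/2]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

-define(IDLE_TIMEOUT, 30 * 60 * 1000).  %% stop after 30 minutes idle

start_link(PlayerId) ->
    gen_server:start_link(?MODULE, PlayerId, []).

fetch(Pid) ->
    gen_server:call(Pid, fetch).

update(Pid, Data) ->
    gen_server:call(Pid, {update, Data}).

init(PlayerId) ->
    %% made-up DB call: load the player when the process starts
    Data = player_db:load(PlayerId),
    {ok, {PlayerId, Data}, ?IDLE_TIMEOUT}.

handle_call(fetch, _From, {_Id, Data} = State) ->
    {reply, {ok, Data}, State, ?IDLE_TIMEOUT};
handle_call({update, NewData}, _From, {Id, _Old}) ->
    {reply, ok, {Id, NewData}, ?IDLE_TIMEOUT}.

handle_cast(_Msg, State) ->
    {noreply, State, ?IDLE_TIMEOUT}.

%% gen_server delivers 'timeout' if no message arrives within
%% ?IDLE_TIMEOUT -- flush to the DB and stop
handle_info(timeout, {Id, Data} = State) ->
    ok = player_db:save(Id, Data),
    {stop, normal, State};
handle_info(_Info, State) ->
    {noreply, State, ?IDLE_TIMEOUT}.

terminate(_Reason, _State) ->
    ok.

code_change(_OldVsn, State, _Extra) ->
    {ok, State}.

You'd still need something (a simple registry process, gproc, etc.) to
map player ids to pids and to start a process on first request, but
that piece can stay dead simple to begin with.
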
>>
>> It's tough to answer high-level "approach" style questions,
>> especially without some hands-on work (tinkering) to help you
>> understand the problem.
>>
>> Limiting yourself to either/or options at this stage might also be
>> premature.
>>
>> Do you have a first pass at the public API for this service?
>>
>> If you have an idea of the functions that could define the interface,
>> you can ask, for each unimplemented function:
>>
>> - Can I make this side-effect free -- i.e. calling the function
>> doesn't change state or otherwise tamper with the universe?
>>
>> - Does the function read from or write to long running state?
>>
>> Side-effect free functions are easy, which is why you should try to
>> solve problems using them exclusively whenever possible.
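
To expand on what I meant by side-effect free: a made-up ranking
calculation, for instance, only looks at its arguments and changes
nothing outside itself:

%% side-effect free: the result depends only on the arguments
player_rank(Score, AllScores) ->
    length([S || S <- AllScores, S > Score]) + 1.

Functions like that are trivial to test and never care which node they
run on.
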
>>
>> For long running state, you can use a simple gen_server to implement
>> state initialization and mutation. If you have questions about what I
>> mean here, you'll need to bone up on gen_server, or alternatively look
>> at e2 services (see http://e2project.org) as they're simpler to write.
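
To give a rough idea of what I mean by state initialization and
mutation, here's a sketch of a single gen_server owning some long
running state. The module and function names are made up, and the
state is just a dict of player data:

-module(player_store).
-behaviour(gen_server).

-export([start_link/0, lookup/1, store/2]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

%% public API: callers never touch the state directly
lookup(Id) ->
    gen_server:call(?MODULE, {lookup, Id}).

store(Id, Data) ->
    gen_server:call(?MODULE, {store, Id, Data}).

%% state initialization
init([]) ->
    {ok, dict:new()}.

%% state mutation happens here, one message at a time
handle_call({lookup, Id}, _From, Dict) ->
    {reply, dict:find(Id, Dict), Dict};
handle_call({store, Id, Data}, _From, Dict) ->
    {reply, ok, dict:store(Id, Data, Dict)}.

handle_cast(_Msg, State) -> {noreply, State}.
handle_info(_Info, State) -> {noreply, State}.
terminate(_Reason, _State) -> ok.
code_change(_OldVsn, State, _Extra) -> {ok, State}.

Whether that later turns into a process per player, one server per
node, or mnesia is a detail you can swap out behind the same public
functions.
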
>>
>> Once you have something very basic working, see if you're done! It
>> might just work for you as is, at least for the short term. If it
>> doesn't work, address the specific problem. E.g. if your problem is
>> "I lose my state when the VM crashes," you'll need to implement
>> persistence in some fashion.
>>
>> Questions at that level are much easier to answer :)
>>
>> Garrett
>
>