[erlang-questions] kicking my servers all day long...
Sun May 20 09:41:45 CEST 2007
Why do you want to spawn a new server each time?
How many instances can there be?
It takes only a few microseconds to spawn a process,
so if the update frequency is once per 50 ms (20/sec), the
cost of spawning a process is insignificant.
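To get a feel for the numbers, you can time process creation yourself in the shell. A rough sketch (timer:tc/1 with a fun requires a reasonably recent OTP; on older releases use timer:tc/3 with module, function, and args):

```erlang
%% Rough sketch: time the spawning of 100000 trivial processes.
%% On typical hardware this comes out at a few microseconds per
%% spawn, far below a 50 ms update interval.
N = 100000,
{Micros, ok} = timer:tc(fun() ->
                            [spawn(fun() -> ok end) || _ <- lists:seq(1, N)],
                            ok
                        end),
io:format("~.2f us per spawn~n", [Micros / N]).
```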
It is also possible to leave one process running for each
instance. This used to be quite inefficient from a memory
usage perspective, but nowadays there's the nifty little
BIF erlang:hibernate(M, F, A), which compresses the process
while it waits for the next message. This is mainly useful if
you expect relatively long idle periods for the process.
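A minimal sketch of that pattern (module, function, and message names here are made up for illustration, and evolve/1 merely stands in for the real finite-difference update):

```erlang
-module(cell).
-export([start/1, loop/1]).

%% Placeholder for the real finite-difference update (assumption).
evolve(State) -> State + 1.

start(State) ->
    spawn(?MODULE, loop, [State]).

loop(State) ->
    receive
        {step, From} ->
            NewState = evolve(State),
            From ! {done, self(), NewState},
            %% Shrink the process to a minimal memory footprint until
            %% the next message arrives; the call stack is discarded,
            %% and loop/1 is re-entered from scratch on wake-up.
            erlang:hibernate(?MODULE, loop, [NewState]);
        stop ->
            ok
    end.
```

Note that erlang:hibernate/3 never returns: the process is resumed by calling M:F(A...) when the next message arrives, which is why the continuation is passed as an MFA rather than a fun.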
2007/5/20, Jason Dusek <jsnx@REDACTED>:
> I'm writing a distributed, finite differences heat diffusion
> simulation in Erlang. My idea about how to do it goes like this:
> a) break the big grid into many little grids
> b) assign the grids to individual servers, and connect each
> server to those servers with adjacent grids.
> c) for each time step, have the servers evolve their state and
> then spawn a new server with the updated data
> I have sample code which models (c) as updating an int every 50
> milliseconds -- it's posted on pastie:
> Although (c) is conceptually simple, I'm concerned it may be a source
> of evil performance problems -- I have to spawn bazillions of servers,
> over and over and over again! Is there another way to do it? What are
> some other approaches to distributed, finite differences computing in Erlang?
> I tried posting this on comp.lang.functional and they steered me
> toward shared memory concurrency and mutable state!