[erlang-questions] generic replication behaviour
Tue Jun 1 20:54:49 CEST 2010
Yes, I'm well aware of the CAP theorem and that a generic replication scheme
will not fit all purposes, but we could give it a try.
Since replication per se is not a goal we need to decide what the aim is.
Assume the goal is to increase the throughput of a server so it can
handle more requests per second. We could possibly achieve this by
replicating the server. If the ratio of read to write operations is high
enough and we can handle read operations locally (meaning that you might
not see your own writes immediately) we have a good chance of doing this.
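A minimal sketch of that read-local scheme, assuming a single leader that orders writes and broadcasts updates to each replica's local ETS table (all module and function names here are hypothetical, not an existing API):

```erlang
%% Hypothetical sketch: reads are served from a local ETS replica,
%% while writes are forwarded to one leader process that broadcasts
%% the update to every replica in the same order.
-module(replica_sketch).
-export([read/2, write/3, apply_update/3]).

%% Read locally; note that we may not see our own latest write yet.
read(Tab, Key) ->
    case ets:lookup(Tab, Key) of
        [{Key, Value}] -> {ok, Value};
        []             -> not_found
    end.

%% Writes go through the leader so all replicas apply them
%% in a single agreed order (this is where Consistency comes from).
write(Leader, Key, Value) ->
    gen_server:call(Leader, {write, Key, Value}).

%% Called on every replica when the leader broadcasts an update.
apply_update(Tab, Key, Value) ->
    true = ets:insert(Tab, {Key, Value}),
    ok.
```

The point of the sketch is only that reads never touch the network, which is what buys the extra throughput when the read/write ratio is high.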
In this case we would go for Consistency. The replicated service should of
course be Available, but not necessarily more so than a non-replicated
service. Should we handle Partitions? Well, we should not lose Consistency,
and if we have a partition we play it safe and in the worst case stop.
So who would use a replication scheme that would stop if there is a
partition? Someone who knows that a partition will not occur? What
happens if we run all replicas in the same Erlang node on a multicore
host? Partitions will not happen and we will be able to implement a
Consistent and also more Available service.
If we move to a multi Erlang node setup (possibly distributed) then there
might be partitions (and node crashes that the system will not be able to
tell apart from a partition), so we will have to sacrifice either A or C.
Since Consistency is what we aim for, we will simply sacrifice A and in
some cases stop responding. How often will this happen? Not more often
than the non-replicated service would crash (methinks).
The system that I'm playing around with now is quite simple and works
similarly to gen_leader, but it is more focused on only providing consistent
replication. The question is of course whether I've already made too many
decisions and therefore land in the "Won't work for my app" slot.
The Dynomite, Scalaris etc. approach is to move the state of a server to a
separate replicated store. The server itself is then stateless and uses
the replicated store so that multiple servers can run in parallel. This is
of course a very good solution but sometimes it might be overkill or
simply problematic to extract the state from an existing server
implementation. If one already has a gen_server implemented, an alternative
approach would then be to simply (well, we'll see) turn it into a
replicated server.
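To illustrate that alternative, here is a rough sketch of wrapping an unchanged gen_server callback module behind a replication layer. No such behaviour exists in OTP; the module name and the `{replicated, Request}` convention are purely illustrative:

```erlang
%% Hypothetical "gen_replicated" sketch: the user's existing
%% gen_server callback module runs unchanged on each replica,
%% behind a thin layer that would order requests before applying
%% them. None of these names exist in OTP.
-module(gen_replicated_sketch).
-behaviour(gen_server).
-export([start_link/2, call/2]).
-export([init/1, handle_call/3, handle_cast/2]).

%% Start one replica running the user's callback module unchanged.
start_link(CallbackMod, Args) ->
    gen_server:start_link(?MODULE, {CallbackMod, Args}, []).

%% Client API: every request goes through the replication layer.
call(Replica, Request) ->
    gen_server:call(Replica, {replicated, Request}).

init({CallbackMod, Args}) ->
    {ok, UserState} = CallbackMod:init(Args),
    {ok, {CallbackMod, UserState}}.

%% In a real implementation the request would first be agreed on
%% (leader order or quorum) before being applied; here we only show
%% that the user's handle_call/3 runs untouched on each replica.
handle_call({replicated, Request}, From, {Mod, UserState}) ->
    {reply, Reply, NewUserState} = Mod:handle_call(Request, From, UserState),
    {reply, Reply, {Mod, NewUserState}}.

handle_cast(_Msg, State) ->
    {noreply, State}.
```

The attraction is that the existing callback module needs no changes; the open question is whether the replication layer can stay that transparent once the ordering protocol is added.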
On Mon, 31 May 2010 22:10:59 +0200, Scott Lystig Fritchie wrote:
> Replying to a thread from last week....
> Johan Montelius <johanmon@REDACTED> wrote:
> jm> has anyone played around with a generic replication behaviour in the
> jm> style of gen_server?
> No, sorry. But before doing such a thing, I think you need to put some
> thought into what "replication" means vis a vis Brewer's CAP theorem.
> Or perhaps more usefully(*), the spectrum of consistency
> vs. availability.
> Then (perhaps?) deciding on an implementation that's based on a state
> machine replication technique or a quorum-based technique. The choice
> of technique may help drive what kind of metadata you're going to
> require for each thingie stored. Are the thingies key-value pairs, or
> something else? Do you require monotonically-increasing timestamps or
> vector clocks or something else? Does the behavior always resolve
> consistency ambiguity or push those decisions to the client, and how does
> the client app inform the behavior of its choice?
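[As an aside on the vector-clock option mentioned above, the metadata involved could be sketched like this; the representation as {Node, Counter} pairs and the function names are only illustrative:]

```erlang
%% Illustrative sketch of vector clocks as per-replica counters,
%% represented as a list of {Node, Counter} pairs.
-module(vclock_sketch).
-export([increment/2, descends/2]).

%% Bump this node's counter when it makes a local update.
increment(Node, Clock) ->
    Count = proplists:get_value(Node, Clock, 0),
    lists:keystore(Node, 1, Clock, {Node, Count + 1}).

%% ClockA descends ClockB if every counter in B is =< the
%% corresponding counter in A. If neither clock descends the
%% other, the updates are concurrent, and that is exactly the
%% consistency ambiguity that either the behaviour or the
%% client app has to resolve.
descends(ClockA, ClockB) ->
    lists:all(fun({Node, CountB}) ->
                  proplists:get_value(Node, ClockA, 0) >= CountB
              end, ClockB).
```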
> If you're dealing with disk-based persistence, then any Erlang process
> that's doing disk I/O can block at very inconvenient times for very
> inconvenient lengths of time. Syncronous replication across a network
> (even a fast LAN) can result in similiar inconveniences, at least in
> terms of Murphy's Law. The sum of these inconveniences can easily tip
> an implementation into the "Won't work for my app" category.
> Just things off the top of the cuff of my head. :-) I'll make a blind
> guess and say that this is why key-value stores such as Dynomite,
> Scalaris, Riak, Ringo, Scalien, and others are already "out there"(**)
> and useful: they choose a path through the maze of choices above and
> then do it well.
> (*) Reading Brewer's writings about CAP and then the Gilbert & Lynch
> proof, the formal definition of "P" is a tricky thing.
> (**) Sorry, Hibari hasn't been released yet, contrary to what my Erlang
> Factory 2010 San Francisco talk had predicted. Mid-July 2010 is my best
> guess right now.
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/