[erlang-questions] [ANN] ActorDB a distributed SQL database (Sergej Jurecko)

Sergej Jurecko sergej.jurecko@REDACTED
Fri Jan 24 18:35:18 CET 2014


Answers inline.

On Jan 24, 2014, at 5:33 PM, Fred Hebert wrote:

> I'm curious about a few things:
> 
> - You mention using ACID for transactions, but later mention "The master
>  does periodic rebroadcasts of state. Eventually it will correct
>  itself. But it had bad data for X seconds"
> 
>  This points towards an eventually consistent solution, not a fully
>  consistent one.
> 

Global state is separate from transactions against actors. Actors themselves are consistent; only a change of configuration initiated by the user is eventually consistent.

> - There is, during your leader election (picking local max) and many
>  times around the text: "A successful 2 phase commit means a majority
>  of nodes."
> 
>  This sounds like a majority-based consensus, but the 2PC algorithm
>  usually waits for *all* participants to have agreed. Unless you're
>  adding majority-reads as a constraint, it sounds like you're going to
>  be breaking ACID in the first place, and that you're not actually
>  using 2PC, but a quorum-based consensus algorithm.
> 

Yes, you are right. We will rephrase.

> - It's unclear how 'majority' is determined. Is it a majority of all the
>  nodes *expected* in the cluster, or a majority of the nodes
>  *currently* in the cluster? How does this deal with netsplits?

A majority of the nodes expected in the cluster. So a three-node cluster tolerates 1 missing node. Nodes can go missing for a period of time or shut down entirely. Their missed writes will trigger a restore operation for the actors whose writes they did not execute.
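
To make the arithmetic concrete, a minimal Erlang sketch (illustrative module, not ActorDB internals):

    %% quorum.erl - illustration only, not ActorDB code.
    -module(quorum).
    -export([majority/1, tolerates/1]).

    %% Smallest node count that is a majority of the nodes
    %% *expected* in the cluster (not just those currently up).
    majority(ClusterSize) -> ClusterSize div 2 + 1.

    %% How many nodes may be down while writes still succeed.
    tolerates(ClusterSize) -> ClusterSize - majority(ClusterSize).

So quorum:tolerates(3) gives 1 and quorum:tolerates(5) gives 2.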

> 
> - No mention of timeout. Do you have a threshold under which a master
>  gets de-elected for taking too long to respond? Is there an assumption
>  here about timeouts vs. failures and how to tell them apart? Under
>  such a scenario, how do two nodes who think of each other as masters
>  detect the case and resolve it?
> 

For the actors themselves there are no timeouts. If a node receives a nodedown message, all slave actors whose master appears to be gone are told to close. A subsequent read/write request from a client then forces a master election if there is no master.
For global state, the system likewise reacts to nodedown messages. This is admittedly a part of the system that needs more testing.
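
Roughly, in Erlang terms (a sketch with made-up names, not our actual code):

    %% nodedown_watch.erl - illustration only.
    -module(nodedown_watch).
    -export([start/0]).

    %% Subscribe to node up/down events; on nodedown, close every
    %% local slave actor whose master lived on the vanished node.
    %% The next client read/write re-opens the actor and forces a
    %% master election.
    start() ->
        ok = net_kernel:monitor_nodes(true),
        loop().

    loop() ->
        receive
            {nodedown, Node} ->
                [close_slave(A) || A <- slaves_with_master_on(Node)],
                loop();
            {nodeup, _Node} ->
                loop()
        end.

    %% Stubs standing in for ActorDB internals.
    slaves_with_master_on(_Node) -> [].
    close_slave(_Actor) -> ok.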

> - How do these mechanism keep working following a netsplit during a
>  multi-shard transaction?

A transaction is coordinated from some server. If that server can reach all participating actors, and those actors each have a majority in their clusters, the transaction will succeed. If any of those actors lacks a majority, the transaction will fail. If the transaction manager reached the point of committing the transaction but was no longer able to contact the actors to tell them to commit, the actors themselves will eventually call back to check whether it was committed or not. The entire procedure is described in chapter 2.2.3 of the documentation.
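
Sketched in Erlang, with made-up names and assuming the transaction manager exposes a status lookup (see chapter 2.2.3 for the real procedure):

    %% tx_resolve.erl - illustration only.
    -module(tx_resolve).
    -export([resolve/2]).

    %% An actor that prepared a transaction but never heard the
    %% commit decision polls the transaction manager until it
    %% learns the outcome, then applies or discards staged writes.
    resolve(TxId, TxManagerNode) ->
        case rpc:call(TxManagerNode, tx_manager, status, [TxId]) of
            committed         -> apply_staged(TxId);
            aborted           -> discard_staged(TxId);
            {badrpc, _Reason} -> timer:sleep(1000),
                                 resolve(TxId, TxManagerNode)
        end.

    %% Stubs standing in for the actor's local storage layer.
    apply_staged(_TxId)   -> ok.
    discard_staged(_TxId) -> ok.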

> 
> - When you redirect requests, are you doing the redirection as a proxy,
>  or asking to retry directly? In the former case, what happens if the
>  proxy node dies, but not the master actually doing the request? Or
>  vice-versa, what if the client dies, but not the proxy node?
> 

As a proxy. If the proxy or the client dies, the transaction will either have been committed or not, just as with a dropped connection in Postgres.

> - What happens to requests being sent during a shard migration that
>  hasn't yet completed?
> 

Shard migration is done actor by actor. If a request hits an actor after its migration, the request is redirected to the new cluster. If it hits during the copy, the outcome depends on the phase of the copy. All pending writes are committed before the last packet is sent. Copying an actor locks it only once the entire db has been sent. After sending the last packet, the actor waits for confirmation that the copy was successful; if confirmation arrives, all requests queued during the lock are answered with a redirect.
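
Compressed into an Erlang sketch (function names are illustrative, not from the codebase):

    %% actor_copy.erl - illustration only, not ActorDB code.
    -module(actor_copy).
    -export([copy/2]).

    %% The actor stays writable while the bulk of the db streams
    %% out; it is locked only around the final packet, and queued
    %% requests get a redirect once the destination confirms.
    copy(Actor, Dest) ->
        stream_bulk(Actor, Dest),
        commit_pending_writes(Actor),
        lock(Actor),
        send_last_packet(Actor, Dest),
        ok = wait_for_confirmation(Dest),
        redirect_queued(Actor, Dest).

    %% Stubs standing in for ActorDB internals.
    stream_bulk(_A, _D)       -> ok.
    commit_pending_writes(_A) -> ok.
    lock(_A)                  -> ok.
    send_last_packet(_A, _D)  -> ok.
    wait_for_confirmation(_D) -> ok.
    redirect_queued(_A, _D)   -> ok.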


> There's probably more to ask, but yeah. Distributed systems are fun and
> hard!
> 
> Regards,
> Fred.
> 
> On 01/24, Sergej Jurecko wrote:
>> hello,
>> 
>> We've put up a documentation page with more info. 
>> 
>> http://www.actordb.com/docs.html
>> 
>> Hopefully it answers more questions than it raises. If not fire away.
>> 
>> 
>> Sergej
>> 
>> On Jan 22, 2014, at 10:19 PM, Valery Meleshkin wrote:
>> 
>>> Hi Sergej,
>>> 
>>> Which algorithms were used to build it? What does its architecture look like? What does the testing process look like?
>>> Specifically I’m interested in the details of replication, inter-actor transaction coordination, replica placement and membership service (e.g. raft/paxos/2pc/2pc over paxos ensembles/…).
>>> 
>>> -- 
>>> Sincerely,
>>> (Mr.) Valery Meleshkin
>>> 
