<div dir="ltr">While we are here, let's add cached to the comparison: <div><a href="https://gitlab.com/barrel-db/lab/cached">https://gitlab.com/barrel-db/lab/cached</a></div><div><br></div><div>Only the experiment is public for now. It offers different strategies to store K/V pairs in memory and distribute them. Distribution is pluggable and by default relies on Erlang distribution.</div><div><br></div><div>Quick sample: <a href="https://gitlab.com/barrel-db/lab/cached/-/blob/master/test/cached_SUITE.erl#L47">https://gitlab.com/barrel-db/lab/cached/-/blob/master/test/cached_SUITE.erl#L47</a> <br><div><br></div><div>Benoît</div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Dec 22, 2021 at 1:51 PM Attila Rajmund Nohl <<a href="mailto:attila.r.nohl@gmail.com">attila.r.nohl@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">Roberto Ostinelli <<a href="mailto:ostinelli@gmail.com" target="_blank">ostinelli@gmail.com</a>> wrote on Tue, Dec 21, 2021 at 14:57:<br>
><br>
> Let’s write a database! Well not really, but I think it’s a little sad that there doesn’t seem to be a simple in-memory distributed KV database in Erlang. Many times all I need is a consistent distributed ETS table.<br>
><br>
> The two main ones I normally consider are:<br>
><br>
> Riak, which is great: it handles loads of data and is based on DHTs. This means that when there are cluster changes, data needs to be redistributed, and the process must be properly managed, with handoffs and so on. It is really great, but it’s eventually consistent, and on many occasions it may be overkill when all I’m looking for is a simple in-memory ACI (not D) KV solution that can have 100% of its data replicated on every node.<br>
> mnesia, which could be it, but unfortunately requires special attention when initializing tables and making them distributed (which is tricky), handles netsplits very badly, needs hacks to resolve conflicts, and does not really support dynamic clusters (additions can be kind of OK, but, for instance, you can’t remove nodes unless you stop the app).<br>
> …other solutions? In general, people end up using FoundationDB or Redis (which has master-slave replication), i.e. external to the BEAM. Pity, no?<br>
<br>
Have you seen this: <a href="https://gitlab.com/leapsight/plum_db" rel="noreferrer" target="_blank">https://gitlab.com/leapsight/plum_db</a> ? It's only<br>
eventually consistent, but if you want distribution and availability<br>
even in case of network partitioning, you won't get consistency...<br>
</blockquote></div>
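To make the "consistent distributed ETS table" idea in the thread concrete, here is a minimal toy sketch in Erlang that replicates every write to all connected nodes over Erlang distribution (via `erpc:multicall/4`, OTP 23+). The module and function names are hypothetical, and it deliberately has no conflict resolution, no handoff, and no netsplit handling, which is exactly the hard part the thread is discussing.

```erlang
%% Toy sketch, not production code: a named ETS table whose writes are
%% replicated to every connected node, so 100% of the data lives on
%% each node. Assumes every node in the cluster runs this same module.
-module(dist_ets_sketch).
-export([init/0, store/2, fetch/1, do_insert/2]).

%% Create the local copy of the table on this node.
init() ->
    ets:new(?MODULE, [named_table, public, set]),
    ok.

%% Write locally and on all other connected nodes. There is no
%% conflict resolution and no handling of partitioned nodes here.
store(Key, Value) ->
    erpc:multicall([node() | nodes()], ?MODULE, do_insert, [Key, Value]),
    ok.

%% Executed on each node by erpc:multicall/4.
do_insert(Key, Value) ->
    true = ets:insert(?MODULE, {Key, Value}),
    ok.

%% Reads are always local, which is the appeal of full replication.
fetch(Key) ->
    case ets:lookup(?MODULE, Key) of
        [{_Key, Value}] -> {ok, Value};
        [] -> not_found
    end.
```

Reads never leave the node, which is the attraction of full replication; the cost is that writes fan out to the whole cluster and any netsplit silently diverges the copies, so a real solution (like the libraries mentioned above) needs membership tracking and conflict handling on top.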