[erlang-questions] Using Key/Value store with shared storage

Max Lapshin <>
Mon Dec 10 17:59:26 CET 2012


You speak about many servers but one SAN.
What for? Real throughput is limited to about 3 Gbps. It means that you will
be limited to writing no more than about 10,000 values of 30 KB per second.
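A quick back-of-the-envelope check of that limit (a sketch only; the 3 Gbps link speed and 30 KB value size are the figures quoted above, not measurements):

```python
# How many 30 KB values per second fit through a ~3 Gbps link?
LINK_GBPS = 3          # assumed SAN throughput, from the post
VALUE_KB = 30          # assumed value size, from the post

bytes_per_sec = LINK_GBPS * 1_000_000_000 / 8   # 375,000,000 B/s
value_bytes = VALUE_KB * 1_000                  # 30,000 B per value
writes_per_sec = bytes_per_sec / value_bytes

print(int(writes_per_sec))  # ~12,500 writes/s, the same order as the 10,000 figure
```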

There will be no cheap way to scale past this limit if you use shared
storage, but if you shard and throw away your RAID, you can scale.
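Sharding here can be as simple as a stable hash from key to node, so every writer and reader computes the same mapping without any shared storage. A minimal sketch (node names are made up for illustration):

```python
import zlib

NODES = ["node1", "node2", "node3"]  # hypothetical server names

def shard_for(key: bytes) -> str:
    """Map a key deterministically to one node via CRC32 mod N."""
    return NODES[zlib.crc32(key) % len(NODES)]

# All clients agree on a key's home node without coordination.
print(shard_for(b"user:42"))
```

Note that plain modulo hashing reshuffles most keys when the node count changes; consistent hashing (the approach Riak's ring uses) avoids that at the cost of extra machinery.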


On Monday, December 10, 2012, Sean D wrote:

> Hi all,
>
> We are currently running an application for a customer that stores a large
> number of key/value pairs.  Performance is important for us as we need to
> maintain a write rate of at least 10,000 keys/second on one server.  After
> evaluating various key/value stores, we found Bitcask worked extremely well
> for us and we went with this.
>
> The solution currently has multiple servers working independently of each
> other, and we would like to make the solution more resilient by using shared
> storage, i.e. if one of the servers goes down, the others can pick up the
> workload and add to/read from the same store.
>
> I am aware that Riak seems to be the standard solution for a resilient
> key/value store in the Erlang world. However, from my initial
> investigations, it seems to work by replicating data between Riak nodes,
> and this is something I want to avoid, as the data we are storing will run
> to hundreds of gigabytes and I would prefer that the shared storage is used
> rather than data needing to be duplicated.  I am also concerned that the
> overhead of Riak may prove a bottleneck, however this isn't something that
> I have tested.
>
> If anyone here has used a key/value store with a SAN or similar in this
> way, I'd be very keen to hear your experiences.
>
> Many thanks in advance,
> Sean
> _______________________________________________
> erlang-questions mailing list
> http://erlang.org/mailman/listinfo/erlang-questions
>

