> Now this is interesting to me. High maintenance overheads would be
> off-putting. Was your SAN configuration particularly complex? What
> sort of issues were you having?

Once you actually get your fibre/InfiniBand/10G/whatever adapters, you'll be surprised at how much lower your throughput is compared to what the specs say it should be.
Once you (and your vendor) are done tuning, reconfiguring, etc., you'll almost certainly end up sharding across different zones, oh, and other sorts of fun stuff.
The end result is complexity - complex hardware, complex maintenance/monitoring issues, and complex intellectual grappling w/ the various edge cases that you'll be dealing with.
SANs are remarkably good for many things, but for the 'firehose'-like throughput that you want/need, they're quite probably not feasible...

cheers
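PS - on the sharding point: if you do end up going the "shard it yourself" route Max suggests further down the thread, the routing layer itself can be tiny. Something along these lines - an untested sketch, with a made-up module name and a one-Bitcask-directory-per-shard layout assumed:

    %% Untested sketch: route each key to one of N independent Bitcask stores
    %% by hashing the key. The shard count is fixed up front; changing it
    %% later means rehashing and moving data.
    -module(kv_shard).
    -export([open/1, put/3, get/2]).

    %% Dirs: one Bitcask directory per shard, e.g. ["/data/shard1", "/data/shard2"].
    open(Dirs) ->
        [bitcask:open(Dir, [read_write]) || Dir <- Dirs].

    put(Refs, Key, Value) when is_binary(Key), is_binary(Value) ->
        bitcask:put(shard(Refs, Key), Key, Value).

    get(Refs, Key) when is_binary(Key) ->
        bitcask:get(shard(Refs, Key), Key).

    %% Static hash routing; spreads load evenly, but it is not consistent hashing.
    shard(Refs, Key) ->
        lists:nth(1 + erlang:phash2(Key, length(Refs)), Refs).

The cheap part is the routing; the expensive part is everything around it (rebalancing, failover, monitoring), which is where the complexity I mentioned above comes back in.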
<div style="color: rgb(0, 0, 0); font-family: Helvetica; font-size: medium; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: -webkit-auto; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><div><div style="margin: 0in 0in 0.0001pt; "><font class="Apple-style-span" color="#1f497d" face="Calibri, sans-serif"><span class="Apple-style-span" style="font-size: 15px; "><b><i><div style="margin: 0px; font-style: normal; font-weight: normal; font-family: Calibri; "><a href="http://www.gravatar.com/avatar/204a87f81a0d9764c1f3364f53e8facf.png"><b><i>Mahesh Paolini-Subramanya</i></b></a></div></i></b></span></font></div><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: 'Times New Roman', serif; "><span style="font-size: 11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); ">That Tall Bald Indian Guy...</span></div></div><div style="margin: 0in 0in 0.0001pt; font-size: 12pt; font-family: 'Times New Roman', serif; "><span style="font-size: 11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); "><div style="margin: 0px; font-family: Calibri; color: rgb(1, 108, 226); "><span style="text-decoration: underline; "><a href="https://plus.google.com/u/0/108074935470209044442/posts">Google+</a></span><span style="color: rgb(31, 73, 125); "> | <a href="http://dieswaytoofast.blogspot.com/"><span style="color: rgb(1, 108, 226); ">Blog</span></a></span><span style="text-decoration: underline; "> </span><span style="color: rgb(31, 73, 125); "> | <span style="color: rgb(1, 108, 226); "><a href="https://twitter.com/dieswaytoofast">Twitter</a></span></span><span style="color: rgb(31, 73, 125); "> | </span><a href="http://www.linkedin.com/in/dieswaytoofast">LinkedIn</a></div></span></div></div>
</div>
On Dec 10, 2012, at 12:54 PM, Sean D <seand-erlang@seand.me.uk> wrote:

> Thanks for the comments. I have included a few points inline.
>
> Cheers,
> Sean
>
> On Mon, Dec 10, 2012 at 10:21:32AM -0800, Mahesh Paolini-Subramanya wrote:
>
>> This seems flawed to me. Do you want resiliency, or shared storage?
>>
>> Ooooh. Troll! :-)
>>
>> Seriously though - as Max Lapshin points out, w/ the exception of doing
>> some very interesting (read complex, and potentially destabilizing)
>> architecting w/ infiniband, multiple links and cascade setups, you are
>> going to be seriously bottle-necking on your shared storage.
>
> I'm wondering if the term "shared storage" has caused some confusion. When
> I was talking about shared storage, I was talking about using a high-end
> SAN.
>
> The reason for this is that I believe they are designed to deal with
> this type of scenario. This is obviously dependent on the quality of the
> SAN though. Budgetary constraints are likely to determine whether or not
> this will be worth my while.
>
> Does anyone have any views on how much overhead there is in keeping nodes
> in sync in a Riak-type solution? Does this affect I/O performance, or does
> it simply require extra processing power?
>
>> And, to Garrett's point, resiliency and shared storage are (kinda)
>> orthogonal. An appropriately spilled can of coke can wreak havoc on
>> your shared storage, and BigCouch/Riak/Voldemort/... can be remarkably
>> resilient.
>
> Again, SANs are designed to be highly resilient. I would hope that any
> spilling of a can of coke would need to be highly malicious in order to
> bring down a SAN.
>
> I am not doubting that these technologies are resilient. I would just
> rather avoid duplicating data.
>
>> A few years back, we migrated out of the "eggs in one basket" SAN
>> approach to BigCouch. Our SAN setup was increasingly starting to look
>> like something that would give even Rube Goldberg nightmares, and the
>> sheer amount of hackery associated with this was starting to keep me up
>> at nights.
>>
>> Anyhow, just my two bits...
>
> Now this is interesting to me. High maintenance overheads would be
> off-putting. Was your SAN configuration particularly complex? What
> sort of issues were you having?
>
>> [1]Mahesh Paolini-Subramanya
>> That Tall Bald Indian Guy...
>> [2]Google+ | [3]Blog | [4]Twitter | [5]LinkedIn
>>
>> On Dec 10, 2012, at 9:18 AM, Garrett Smith <[6]g@rre.tt> wrote:
>>
>> Aye, as well, this is curious:
>>
>>   we would like to make the solution more resilient by using shared
>>   storage
>>
>> On Mon, Dec 10, 2012 at 10:59 AM, Max Lapshin <[7]max.lapshin@gmail.com> wrote:
>>
>> You speak about many servers but one SAN. What for? Real throughput is
>> limited to about 3 Gbps. It means that you will be limited to writing no
>> more than 10,000 values of 30 KB per second. There will be no cheap way
>> to scale past this limit if you use shared storage, but if you shard and
>> throw away your RAID, you can scale.
>> Maybe yo
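(An aside on Max's numbers above - they roughly check out. Taking his assumed 30 KB value size as a stand-in for your real sizes:)

    %% Back-of-the-envelope only, in the Erlang shell:
    1> BytesPerSec = 10000 * 30 * 1024.   %% 10,000 writes/s of ~30 KB values
    307200000
    2> BytesPerSec * 8 / 1.0e9.           %% ~2.5 Gbit/s of raw writes
    2.4576

That is before any replication, reads or rebuild traffic, and it already sits uncomfortably close to the ~3 Gbps of usable throughput he mentions for a single link.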
>>
>> On Monday, December 10, 2012, Sean D wrote:
>>
>> Hi all,
>>
>> We are currently running an application for a customer that stores a
>> large number of key/value pairs. Performance is important for us as we
>> need to maintain a write rate of at least 10,000 keys/second on one
>> server. After evaluating various key/value stores, we found Bitcask
>> worked extremely well for us and we went with this.
>>
>> The solution currently has multiple servers working independently of
>> each other, and we would like to make the solution more resilient by
>> using shared storage. I.e. if one of the servers goes down, the others
>> can pick up the work load and add to/read from the same store.
>>
>> I am aware that Riak seems to be the standard solution for a resilient
>> key-value store in the Erlang world. However, from my initial
>> investigations, this seems to work by duplicating the data between Riak
>> nodes, and this is something I want to avoid as the number of keys we
>> are storing will be in the range of 100s of GB, and I would prefer that
>> the shared storage is used rather than data needing to be duplicated.
>> I am also concerned that the overhead of Riak may prove a bottle-neck;
>> however, this isn't something that I have tested.
>>
>> If anyone here has used a key/value store with a SAN or similar in this
>> way, I'd be very keen to hear your experiences.
>>
>> Many thanks in advance,
>> Sean
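(One aside on the duplication concern: in Riak the amount of duplication is the per-bucket n_val - 3 copies by default - so the raw disk cost is roughly n_val times your data size, and the write-side sync cost is mostly governed by the w quorum. From memory of the riak-erlang-client API, and very much an untested sketch with made-up host, port and bucket names:)

    %% Untested sketch; assumes the riak-erlang-client (riakc) and a local node.
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    %% Keep 2 copies of each value instead of the default 3.
    ok = riakc_pb_socket:set_bucket(Pid, <<"kv">>, [{n_val, 2}]),
    Obj = riakc_obj:new(<<"kv">>, <<"some-key">>, <<"some-value">>),
    %% w = 1: acknowledge once a single replica has accepted the write.
    ok = riakc_pb_socket:put(Pid, Obj, [{w, 1}]).

Lower n_val and w buy back disk and latency, at the cost of exactly the resilience you're after - which is the same trade-off you're weighing against the SAN.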
<a href="mailto:erlang-questions@erlang.org">mailto:erlang-questions@erlang.org</a><br> 10. <a href="mailto:erlang-questions@erlang.org">mailto:erlang-questions@erlang.org</a><br></blockquote><br><blockquote type="cite">_______________________________________________<br>erlang-questions mailing list<br><a href="mailto:erlang-questions@erlang.org">erlang-questions@erlang.org</a><br>http://erlang.org/mailman/listinfo/erlang-questions<br></blockquote><br></blockquote></div><br></body></html>