> On Jun 18, 2016, at 3:54 AM, John Smith <4crzen62cwqszy68g7al@gmail.com> wrote:
>
> For one of my systems in the financial area, I need a disk-backed log that I could use as a backend for an Event Sourcing/CQRS store. I have recently read a bit about Kafka [1] and it seems like a good fit but, unfortunately, it runs on the JVM (it is written in Scala, to be exact) and depends heavily on ZooKeeper [2] for distribution, while I would prefer something similar for the Erlang ecosystem. Ideally, I would like to have something that is:
>
>  * small,
>  * durable (checksummed, with a clear recovery procedure),
>  * pure Erlang/Elixir (maybe with some native code, but tightly integrated),
>  * (almost) not distributed - the data fits on a single node (at least for now; with replication for durability, though).
>
> Before jumping right into an implementation, I have some questions:
>
>  1. Is there anything already available that fulfils the above requirements?
>  2. Kafka takes a different approach to persistence - instead of filling in-process buffers and then transferring the data to disk, it writes straight to the filesystem, which in practice means the OS page cache [3]. Can I achieve the same thing in Erlang, or does it buffer writes in some other way?
>  3. ...also, Kafka has log compaction [4], which can work not only in the time dimension but also per key - I need this, as I have to persist the last state for every key seen (user, transfer, etc.). Like Redis, Kafka uses UNIX copy-on-write semantics (a process fork) to avoid needless memory usage for log fragments (segments, in Kafka nomenclature) that have not changed. Can I mimic similar behaviour in Erlang? And if not, how can I deal with biggish (say, a couple of GB) logs that need to be compacted?
>
> In other words, I would like to create something like a *Minimum Viable Log* (in Kafka style), only in Erlang/Elixir. I would be grateful for any kind of design/implementation hints.
>
> [1] http://kafka.apache.org/
> [2] https://zookeeper.apache.org/
> [3] http://kafka.apache.org/documentation.html#persistence
> [4] https://cwiki.apache.org/confluence/display/KAFKA/Log+Compaction
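
On question 2: yes, you can get the same behaviour. If you open the file in raw mode (bypassing the Erlang file server process) and don't pass the {delayed_write, Size, Delay} option, each file:write/2 is a syscall and the bytes land straight in the OS page cache, which is exactly the Kafka pattern; you call file:datasync/1 or file:sync/1 only when you need the data on the device. A minimal sketch (the module and function names here are mine, for illustration only):

    -module(pagecache_log).
    -export([open/1, append/2, append_durable/2]).

    open(Path) ->
        %% 'raw' bypasses the Erlang file server; with no delayed_write
        %% option, every write goes directly to the OS page cache.
        file:open(Path, [append, raw, binary]).

    append(Fd, Bin) ->
        %% Fast path: page cache only, no fsync.
        file:write(Fd, Bin).

    append_durable(Fd, Bin) ->
        %% Slow path: force the page cache out to the device.
        ok = file:write(Fd, Bin),
        file:datasync(Fd).

On question 3: the BEAM doesn't expose fork(), so Redis-style copy-on-write snapshots are out. But you don't really need them if rolled-over segments are immutable: a separate compactor process can stream old segments into a new file while the head segment keeps taking writes, and nothing sealed ever changes underneath it. The per-key part is then just "keep the last record seen for each key", for example (assuming entries are {Key, Value} pairs; segment I/O is left out):

    %% Later entries win: the fold keeps the last value per key.
    compact(Entries) ->
        Last = lists:foldl(fun({Key, Value}, Acc) ->
                                   maps:put(Key, Value, Acc)
                           end, #{}, Entries),
        maps:to_list(Last).

For logs of a couple of GB you would fold over each segment in chunks rather than reading it whole; only the map of last values per key has to fit in memory.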
As for question 1, I've been using this for several years: https://github.com/jflatow/erlkit/blob/master/src/log.erl

It's not checksummed, but the design is meant to be crash-proof. The log is a process that owns a directory. The files are soft-capped at a chunk size, and the log rolls over to a new file when it hits that cap. The id of each log entry is {Path, Offs}, relative to the log directory. Two checkpoints are kept at the top of each log file; when the log is opened, it takes the greater of the two and checks whether it points at a valid entry. If not, it falls back to the other checkpoint and truncates the file there. When you write, you can choose each time whether or not to wait for the checkpoints to hit the disk.

This is more of a primitive building block for the type of system you are talking about. I use it to build those other features (like compaction) in an ad-hoc way. Sorry for the lack of detail, but it's a tiny module that might be a good starting point.

jared
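
P.S. The open-time recovery above boils down to something like this rough sketch (not the actual log.erl code; the fixed-width checkpoint layout and valid_entry/2 are made up for illustration):

    %% Fd opened with [read, write, raw, binary]; assume two 64-bit
    %% checkpoint offsets live at the top of the file.
    recover(Fd) ->
        {ok, <<A:64, B:64>>} = file:pread(Fd, 0, 16),
        {High, Low} = {max(A, B), min(A, B)},
        case valid_entry(Fd, High) of
            true ->
                High;
            false ->
                %% Fall back to the older checkpoint and cut off the
                %% partial tail, so the log ends at a known-good entry.
                {ok, _} = file:position(Fd, Low),
                ok = file:truncate(Fd),
                Low
        end.

    valid_entry(Fd, Offs) ->
        %% Placeholder: a real check would try to parse the entry at
        %% Offs; here we only verify the offset is inside the file.
        {ok, Eof} = file:position(Fd, eof),
        Offs < Eof.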