<!DOCTYPE html><html><head>
<style type="text/css">body { font-family:'Times New Roman'; font-size:13px}</style>
</head>
<body>On Sun, 19 Jun 2016 07:34:19 +0200, Mark Bucciarelli &lt;mkbucc@gmail.com&gt; wrote:<br><br>
<blockquote style="margin: 0 0 0.80ex; border-left: #0000FF 2px solid; padding-left: 1ex"><div dir="ltr"><div>Can you shard your event log by aggregate type and thus avoid the deletion/compaction issue altogether? I've read some people suggest sharding by aggregate ID if the aggregate type shard is not small enough [1].</div></div></blockquote>
<div><br></div>
<div>Probably not, or at least I can't think of such a sharding scheme right now. Apart from that, log compaction is still useful in the case of failure: it is faster to start from an aggregated snapshot than to replay all previous events.</div>
<div><br></div>
<blockquote style="margin: 0 0 0.80ex; border-left: #0000FF 2px solid; padding-left: 1ex"><div dir="ltr"><div><br></div><div>If you haven't already, you may want to write a simple benchmark that appends gobs of data to a file. I found a 2013 thread [2] on Erlang Questions with a similar sequential-append use case where the OP was not happy with Erlang's speed. But I can't follow his math: writing 5 gigabits in 104 seconds seems like a lot more than 504 Hz. I also found people complaining that get_line was slow. I guess parallel reads would be possible inside an aggregate boundary ...</div></div></blockquote>
<div><br></div>
<div>I do not have any numbers as of now; I am still exploring this whole "universal log" concept, but I find it appealing.</div>
<div>I think I will start from a single, simple append-only file with data in the Kafka format, and then I can share some numbers.</div>
<div><br></div>
<blockquote style="margin: 0 0 0.80ex; border-left: #0000FF 2px solid; padding-left: 1ex"><div dir="ltr"><div><br></div><div>CQRS is a topic I am very interested in; I hope you post again!</div><div><br></div><div>[1] <a href="http://cqrs.nu/Faq/event-sourcing">http://cqrs.nu/Faq/event-sourcing</a></div><div>[2] <a href="http://erlang.org/pipermail/erlang-questions/2013-June/074190.html">http://erlang.org/pipermail/erlang-questions/2013-June/074190.html</a><br></div><div><br></div><div>P.S. I've wondered why people don't treat a snapshot (or "compaction") as a command. One that emits a "special" event with the current state of that aggregate. Write this event to a fast durable key/value store (again, one per aggregate type) where the key is the aggregate id and the value is the aggregate state plus an offset into the main log where you should pick up reading from.</div></div></blockquote>
<div><br></div>
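<div>I like the idea in your P.S. To make sure I read it correctly, here is a minimal sketch of how I understand it, in Python with in-memory stand-ins for the real durable log and key/value store (all names here are made up): the snapshot command folds the aggregate's events into its current state, appends that state as a "special" event, and records (state, offset) under the aggregate id, so that recovery only has to replay the tail of the log.</div>
<div><br></div>
<pre>
# Hypothetical sketch of the "snapshot as a command" idea.
event_log = []   # append-only: (aggregate_id, event) pairs; the list index is the offset
snapshots = {}   # aggregate_id -> (state, offset to resume reading from)

def apply_event(state, event):
    # Hypothetical reducer: fold one event (a dict) into the aggregate state.
    new_state = dict(state)
    new_state.update(event)
    return new_state

def take_snapshot(aggregate_id):
    # The "snapshot" command: rebuild state, emit it as a special event,
    # and record (state, offset) keyed by aggregate id.
    state, offset = snapshots.get(aggregate_id, ({}, 0))
    for agg_id, event in event_log[offset:]:
        if agg_id == aggregate_id and event.get("type") != "snapshot":
            state = apply_event(state, event)
    event_log.append((aggregate_id, {"type": "snapshot", "state": state}))
    snapshots[aggregate_id] = (state, len(event_log))

def load_aggregate(aggregate_id):
    # Recovery: start from the snapshot, then replay only the tail of the log.
    state, offset = snapshots.get(aggregate_id, ({}, 0))
    for agg_id, event in event_log[offset:]:
        if agg_id == aggregate_id and event.get("type") != "snapshot":
            state = apply_event(state, event)
    return state
</pre>
<div><br></div>
<div>The offset stored next to the state is what lets load_aggregate skip everything the snapshot already covers.</div>
<div><br></div>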
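<div>And by "data in the Kafka format" I mean, concretely, framing every record in that single append-only file roughly the way a Kafka log segment does. The sketch below keeps only the offset and the payload length in the header and leaves out the CRC, magic byte, attributes, key and timestamp that the real format carries, so treat it as an approximation rather than the actual wire format (the file name is made up, too).</div>
<div><br></div>
<pre>
import struct

# Simplified, Kafka-like segment entry: 8-byte offset + 4-byte length + payload.
HEADER = struct.Struct(">qi")

def append_record(f, offset, payload):
    f.write(HEADER.pack(offset, len(payload)))
    f.write(payload)

def read_records(f):
    while True:
        header = f.read(HEADER.size)
        if len(header) != HEADER.size:
            return
        offset, length = HEADER.unpack(header)
        yield offset, f.read(length)

# Hypothetical usage: append a couple of events, then scan the segment back.
with open("events-00000000.log", "ab") as f:
    for i, event in enumerate([b'{"type": "opened"}', b'{"type": "closed"}']):
        append_record(f, i, event)

with open("events-00000000.log", "rb") as f:
    for offset, payload in read_records(f):
        print(offset, payload)
</pre>
<div><br></div>
<div>The fixed-size header is what keeps sequential scans cheap, and one segment file per shard (if I do end up sharding) would leave room for the kind of parallel reads you mention.</div>
<div><br></div>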
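<div>As for the benchmark you suggest, a first, deliberately naive version could be as small as this (the block size, record count, file name and fsync policy are arbitrary choices; real numbers will obviously depend on the disk and on how often the log is flushed):</div>
<div><br></div>
<pre>
import os
import time

BLOCK = b"x" * 4096        # 4 KiB per append
COUNT = 250000             # ~1 GiB in total

start = time.time()
with open("append-bench.log", "ab") as f:
    for _ in range(COUNT):
        f.write(BLOCK)
        # Uncomment to measure durable appends instead of page-cache writes:
        # f.flush(); os.fsync(f.fileno())
elapsed = time.time() - start

mib = len(BLOCK) * COUNT / (1024.0 * 1024.0)
print("{:.0f} MiB in {:.2f} s ({:.1f} MiB/s)".format(mib, elapsed, mib / elapsed))
</pre>
<div><br></div>
<div>Once the segment format above is wired in, I will rerun something like this with real events and post the numbers.</div>
<div><br></div>
</body></html>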