Hi,

we are encountering a strange scenario using mnesia fragmentation in our production system:
our cluster had around 20 tables spread over 8 mnesia nodes, each running on a single server, totalling 1024 fragments per table (128 fragments per node).

Now we have added 8 new machines to the cloud and started the rehashing process by adding another 128 fragments per table on each new node.
I started this process from a different host in the cluster (with plenty of free RAM) attached to the mnesia cluster, calling mnesia:change_table_frag(Table, {add_frag, [NewNode]}) for each table, in order to end up with 2048 fragments per table spread over 16 nodes.
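
For reference, the expansion was driven by a loop roughly like the sketch below (not the exact script; Tables stands for our ~20 base tables, NewNodes for the 8 new mnesia nodes, FragsPerNode = 128):

    %% Simplified sketch of the expansion loop. Tables, NewNodes and
    %% FragsPerNode are placeholders for our actual values; each iteration
    %% adds one fragment on the given node via mnesia:change_table_frag/2.
    add_frags(Tables, NewNodes, FragsPerNode) ->
        lists:foreach(
          fun(Tab) ->
                  [begin
                       {atomic, ok} =
                           mnesia:change_table_frag(Tab, {add_frag, [Node]})
                   end || Node <- NewNodes, _ <- lists:seq(1, FragsPerNode)]
          end,
          Tables).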

1. The fragment-adding process took a week to rehash all the table records while running on a single core of this "maintenance" node. I read in the mnesia docs and on this list that this kind of operation locks the table involved, but I was not able to parallelize across the different tables (parallel processes, each running add_frag on a different table) in order to take advantage of multiple cores. I have the feeling that add_frag "locks" the entire mnesia transaction manager. Any perspectives or advice on this would be greatly appreciated.
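
To be concrete, the parallel attempt looked roughly like this (a sketch reusing the add_frags loop above, one spawned process per table):

    %% One process per table, each running its own add_frag loop; in practice
    %% the work still appeared to be serialized on a single core.
    parallel_add(Tables, NewNodes, FragsPerNode) ->
        Parent = self(),
        Pids = [spawn_link(fun() ->
                                   add_frags([Tab], NewNodes, FragsPerNode),
                                   Parent ! {done, self()}
                           end)
                || Tab <- Tables],
        [receive {done, Pid} -> ok end || Pid <- Pids],
        ok.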

2. At the end of the fragment-creation and rehashing process I noticed some size imbalance between old and new fragments, so I ran a consistency scanner that simply takes each record in each fragment and checks that the mnesia_frag hashing module actually maps that record to that specific fragment. It turns out that the unbalanced fragments contain records that were moved to the new destination fragment during rehashing, but were not removed from the old source fragment! I thought mnesia:change_table_frag(Table, {add_frag, [NewNode]}) ran in an atomic transaction context; has anyone ever faced something like this?
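
The scanner is essentially the sketch below. It assumes the default mnesia_frag_hash module and that the hash module and state can be read from mnesia:table_info(Tab, frag_properties); if your setup stores them differently, those two lines need adjusting.

    %% Rough sketch of the consistency scanner: for every fragment, walk all
    %% keys (dirty, read-only) and collect those that the hash module maps to
    %% a different fragment number.
    scan(Tab) ->
        Props     = mnesia:table_info(Tab, frag_properties),
        HashMod   = proplists:get_value(hash_module, Props, mnesia_frag_hash),
        HashState = proplists:get_value(hash_state, Props),
        FragNames = mnesia:activity(transaction,
                                    fun() -> mnesia:table_info(Tab, frag_names) end,
                                    [], mnesia_frag),
        Numbered  = lists:zip(lists:seq(1, length(FragNames)), FragNames),
        [{Frag, check_frag(HashMod, HashState, N, Frag)} || {N, Frag} <- Numbered].

    check_frag(HashMod, HashState, N, Frag) ->
        walk(HashMod, HashState, N, Frag, mnesia:dirty_first(Frag), []).

    walk(_HashMod, _HashState, _N, _Frag, '$end_of_table', Acc) ->
        lists:reverse(Acc);
    walk(HashMod, HashState, N, Frag, Key, Acc) ->
        Acc1 = case HashMod:key_to_frag_number(HashState, Key) of
                   N     -> Acc;                       %% record is where it belongs
                   Other -> [{Key, N, Other} | Acc]    %% copy left in the wrong frag
               end,
        walk(HashMod, HashState, N, Frag, mnesia:dirty_next(Frag, Key), Acc1).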

Thank you

--
textonpc@gmail.com
atessaro@studenti.math.unipd.it