<div dir="ltr">I've looked at the source code of Mnesia, and mnesia_frag.erl contains this comment:<br> %% Tried to move record to fragment that not is locked<br> mnesia:abort({"add_frag: Fragment not locked", NewN})<br><br>But all the required locks are taken before this step. <br>Unfortunately I can't help you with this. Sorry. I hope someone more familiar with Mnesia's code will help you.</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Mon, 12 Jul 2021 at 15:29, Wojtek Surowka <<a href="mailto:wojteksurowka@me.com">wojteksurowka@me.com</a>>:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">> Following the documentation, are you trying to reproduce the example for <br>
> {hash_state, Term}? The line "mnesia:change_table_frag(prim_dict, {add_frag, [node()]})." can be found only there.<br>
> Or have you somehow adapted those examples to your own needs? If so, please provide a full list of your steps.<br>
<br>
I am not trying to reproduce that example. I created the table using {atomic, ok} = mnesia:create_table(tablename, [{attributes, record_info(fields, tablename)}, {disc_only_copies, [node()]}])<br>
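(For comparison, a fragmented table can also be created with its fragmentation declared up front via frag_properties. This is only a sketch based on the options in the Mnesia User's Guide; the table name, field names, and counts are placeholders, and record_info/2 only works inside a module where the record is defined.)

```erlang
%% Sketch: create a table that is fragmented from the start, rather
%% than adding fragments later with change_table_frag/2.
%% 'tablename', the fragment count, and the node pool are placeholders.
{atomic, ok} = mnesia:create_table(tablename,
    [{attributes, record_info(fields, tablename)},
     {frag_properties, [{n_fragments, 16},
                        {node_pool, [node()]},
                        {n_disc_only_copies, 1}]}]).
```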
<br>
I write to this table with<br>
ok = mnesia:activity(sync_transaction, fun () -> mnesia:write(RecordToWrite) end, [], mnesia_frag). This is called within mnesia:transaction because other tables are accessed as well. A write like this happens several times every minute, and nearly always the record replaces an already existing one with the same key.<br>
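(Put together, the write path described above looks roughly like this. A sketch only; the function name and the "other tables" part are placeholders, and it needs a running Mnesia node with the table created.)

```erlang
%% Sketch of the write path: an outer transaction that also touches
%% other tables, with the fragmented write routed through the
%% mnesia_frag access module.
write_record(RecordToWrite) ->
    mnesia:transaction(fun () ->
        %% ... reads/writes on other, non-fragmented tables ...
        ok = mnesia:activity(sync_transaction,
                             fun () -> mnesia:write(RecordToWrite) end,
                             [], mnesia_frag)
    end).
```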
<br>
I read from this table with<br>
mnesia:activity(sync_dirty, fun () -> mnesia:read(tablename, RecordSpec) end, [], mnesia_frag). Reading happens occasionally, when end-user activity triggers it.<br>
<br>
When I detect that the fragment tables are bigger than some threshold, I add a new fragment with mnesia:change_table_frag(tablename, {add_frag, [node()]}). I used [node()] as the second element of the add_frag tuple since the documentation says it is either a list of nodes or a frag_dist result. I use one node, so [node()] seemed appropriate.<br>
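(The documented alternative to passing a node list is to pass the result of the frag_dist table info item, which must be read through the mnesia_frag access module. A sketch, untested:)

```erlang
%% Sketch: add a fragment using the frag_dist form instead of a plain
%% node list. table_info/2 must be evaluated via mnesia_frag for the
%% frag_dist item to be available.
add_fragment(Tab) ->
    Dist = mnesia:activity(sync_dirty,
                           fun () -> mnesia:table_info(Tab, frag_dist) end,
                           [], mnesia_frag),
    mnesia:change_table_frag(Tab, {add_frag, Dist}).
```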
<br>
It all worked for some time, but at some point it stopped: to be precise, reads and writes still work correctly, but adding a new fragment fails every time. Right now the table has 16 fragments, 1.7M records in total, and the biggest fragment is now 1.6GB on disk.<br>
<br>
I considered copying out all the data, recreating the table with more fragments, and copying the data back, but I hope there is something I can do to avoid this procedure.<br>
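(The first step of that fallback, reading every record out across all fragments, could be sketched like this. The function name is a placeholder; mnesia:foldl/3 is run through mnesia_frag so it visits every fragment, and holding 1.7M records in memory at once may or may not be acceptable in practice.)

```erlang
%% Sketch: collect all records of a fragmented table into a list,
%% iterating over every fragment via the mnesia_frag access module.
backup_records(Tab) ->
    mnesia:activity(sync_dirty,
                    fun () ->
                        mnesia:foldl(fun (Rec, Acc) -> [Rec | Acc] end,
                                     [], Tab)
                    end,
                    [], mnesia_frag).
```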
<br>
(Sorry for posting this by mistake to you instead of the mailing list)<br>
Thanks,<br>
Wojtek<br>
<br>
</blockquote></div>