Fragment not locked
Stanislav Ledenev
s.ledenev@REDACTED
Mon Jul 12 16:01:56 CEST 2021
I've looked at the source code of Mnesia and it has this comment in
mnesia_frag.erl:
%% Tried to move record to fragment that not is locked
mnesia:abort({"add_frag: Fragment not locked", NewN})
But before this step all the required locks have already been taken.
Unfortunately I can't help you with this. Sorry. I hope someone more
familiar with Mnesia's code will help you.
Mon, 12 Jul 2021 at 15:29, Wojtek Surowka <wojteksurowka@REDACTED>:
> > Following the documentation, are you trying to reproduce the example for
> > {hash_state, Term}? The line "mnesia:change_table_frag(prim_dict,
> > {add_frag, [node()]})." can be found only there.
> > Or have you somehow changed those examples for your own needs? If so,
> > please provide a full list of your steps.
>
> I am not trying to reproduce that example. I created the table using
> {atomic, ok} = mnesia:create_table(tablename, [{attributes,
> record_info(fields, tablename)}, {disc_only_copies, [node()]}]).
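For reference, a sketch of how a table intended for use with mnesia_frag is typically created with the frag_properties option (this is not the poster's actual code; the table name, field list, and option values are assumptions for illustration):

```erlang
%% Sketch: create a fragmented disc_only_copies table.
%% "tablename" and its record fields are placeholders.
{atomic, ok} = mnesia:create_table(tablename,
    [{attributes, record_info(fields, tablename)},
     {disc_only_copies, [node()]},
     {frag_properties, [{node_pool, [node()]},
                        {n_fragments, 2}]}]).
```

Without frag_properties the table starts out unfragmented, and fragmentation is activated later via mnesia:change_table_frag/2.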
>
> I write to this table with
> ok = mnesia:activity(sync_transaction, fun () ->
> mnesia:write(RecordToWrite) end, [], mnesia_frag).
> This call is made within mnesia:transaction, because other tables are
> accessed as well. A write like this happens several times every minute,
> and nearly always the record replaces an already existing one with the
> same key.
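The nesting described above can be sketched roughly as follows (a minimal illustration only; the function and record names are placeholders, not the poster's code):

```erlang
%% Sketch: a mnesia_frag write nested inside an outer transaction
%% that also touches other tables.
store(RecordToWrite) ->
    mnesia:transaction(fun () ->
        %% ... reads/writes on other, non-fragmented tables ...
        ok = mnesia:activity(sync_transaction,
                             fun () -> mnesia:write(RecordToWrite) end,
                             [], mnesia_frag)
    end).
```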
>
> I read from this table with
> mnesia:activity(sync_dirty, fun () -> mnesia:read(tablename, RecordSpec)
> end, [], mnesia_frag). Reading happens occasionally, when end-user
> activity triggers it.
>
> When I detect that the fragment tables are bigger than some threshold, I
> add a new fragment with mnesia:change_table_frag(tablename, {add_frag,
> [node()]}). I used [node()] as the second element of the add_frag tuple
> since the documentation says it is either a list of nodes or a frag_dist
> result. I use one node, so [node()] seemed appropriate.
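As an aside, the frag_dist form mentioned in the documentation can be obtained from mnesia:table_info/2 when run under the mnesia_frag activity module. A hedged sketch (the function name and table variable are assumptions):

```erlang
%% Sketch: add a fragment using the table's frag_dist
%% instead of a plain node list.
add_fragment(Tab) ->
    Dist = mnesia:activity(sync_dirty,
                           fun () -> mnesia:table_info(Tab, frag_dist) end,
                           [], mnesia_frag),
    mnesia:change_table_frag(Tab, {add_frag, Dist}).
```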
>
> It all worked for some time, but at some point it stopped: to be
> precise, reads and writes still work correctly, but adding a new
> fragment fails every time. Right now the table has 16 fragments, 1.7M
> records in total, and the biggest fragment is 1.6GB on disk.
>
> I considered copying all the data, recreating the table with more
> fragments, and copying everything back, but I hope there is something I
> can do to avoid that procedure.
>
> (Sorry for posting this by mistake to you instead of the mailing list)
> Thanks,
> Wojtek
>
>