[erlang-questions] running out of memory trying to add_frag on a large mnesia table

Scott Lystig Fritchie fritchie@REDACTED
Fri Nov 7 06:44:41 CET 2008

Marc Sugiyama <marcsugiyama@REDACTED> wrote:

ms> I have a large, fragmented mnesia table.  I'm trying to add more
ms> fragments to it (see below).

Marc, it's been a while since I've tried it, but you may be in a case of
"Doctor, it hurts when....", unfortunately.

IIRC, Mnesia uses temporary ETS tables for items that are being moved
from one fragment to another.  I have dim (and quite negative) memories
not of memory exhaustion but of CPU exhaustion: the ETS tables were
'bag' or 'duplicate_bag' and thus had nasty O(N) insertion behavior
when putting more than 10-20K objects into them.

Back about two years ago (?), I posted a patch for Mnesia to use 'set'
(or 'ordered_set') ETS tables instead ... then disavowed it, after I
discovered bad behavior with the patch, too.  :-(

My "solution" was to create the table with approximately 30 fragments
from the start.  Then when adding a 31st fragment, the record count div
30 would stay below The ETS Major Pain Threshold of 10K or 20K or
whatever it was.
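For illustration, creating a table pre-split into 30 fragments would
look something like this (a sketch; the table name, attributes, and
node list are made up, only the frag_properties part is the point):

```erlang
%% Sketch (untested): create a Mnesia table with 30 fragments up
%% front so each fragment stays small from day one.
create() ->
    mnesia:create_table(my_tab,
        [{disc_copies, [node()]},
         {attributes, [key, val]},
         {frag_properties,
          [{n_fragments, 30},
           {node_pool, [node()]}]}]).
```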

I'm going to guess that you'll need to use the mnesia dump, filter, and
restore.  Using the same hash function that mnesia_frag uses by default,
your filter would spit out dump records with table names based on 70
fragments instead of your current 10.  Er, and also replace the internal
'pvobject' state (inside the 'schema' table?) that contains the current
fragmentation info with the new fragment number.  And perhaps a few
other bits.
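As a starting point for such a filter: mnesia_frag's documented naming
convention is that fragment 1 keeps the base table name and fragment N
is the base name with "_fragN" appended.  A helper along these lines
(a sketch; deciding *which* fragment a key lands in would also have to
mirror mnesia_frag's default hashing, which is not shown here) could
rewrite the table names in the dump records:

```erlang
%% Sketch: map a base table plus fragment number to the fragment's
%% table name, per mnesia_frag's documented convention.
frag_name(Tab, 1) ->
    Tab;   % fragment 1 is the base table itself
frag_name(Tab, N) when is_integer(N), N > 1 ->
    list_to_atom(atom_to_list(Tab) ++ "_frag" ++ integer_to_list(N)).
```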

See mnesia:traverse_backup(), and get your fingers dirty with a few
Mnesia internal details.  A simple transformer fun that pretty-prints
the backup stream can help show what bits you may need to transform
with a real fun.
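Such a pretty-printing pass might look roughly like this (untested
sketch; it leans on the read_only target module so no target backup is
written, and the dummy_target name is just a placeholder):

```erlang
%% Sketch: traverse a backup and print every item, passing each item
%% through unchanged.  The items you will see include schema entries
%% and per-table record tuples.
dump_backup(Source) ->
    Show = fun(Item, Count) ->
                   io:format("~p~n", [Item]),
                   {[Item], Count + 1}   % emit item as-is, count it
           end,
    mnesia:traverse_backup(Source, mnesia_backup,
                           dummy_target, read_only,
                           Show, 0).
```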
