[erlang-questions] running out of memory trying to add_frag on a large mnesia table
Chandru
chandrashekhar.mullaparthi@REDACTED
Sun Nov 9 08:06:13 CET 2008
2008/11/7 Scott Lystig Fritchie <fritchie@REDACTED>
> Marc Sugiyama <marcsugiyama@REDACTED> wrote:
>
> ms> I have a large, fragmented mnesia table. I'm trying to add more
> ms> fragments to it (see below).
>
> I'm going to guess that you'll need to use the mnesia dump, filter, and
> restore. Using the same hash function that mnesia_frag uses by default,
> your filter would spit out dump records with table names based on 70
> fragments instead of your current 10. Er, and also replace the internal
> 'pvobject' state (inside the 'schema' table?) that contains the current
> fragmentation info with the new fragment number. And perhaps a few
> other bits.
>
>
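For reference, a filter along those lines would have to recompute each record's fragment name the way mnesia_frag does. A very rough sketch of that mapping, with the caveat that `erlang:phash/2` is a simplification here (the default `mnesia_frag_hash` module uses linear hashing, so for fragment counts that are not powers of two the real mapping differs); the `Tab_fragN` naming follows mnesia's fragment table names as reported by `table_info(Tab, frag_names)`:

```erlang
%% Illustration only, not a drop-in dump filter: map a key to the
%% fragment table name it would land in, given NFrags fragments.
%% NOTE: erlang:phash/2 approximates mnesia_frag_hash, which really
%% uses linear hashing.
frag_name(Tab, Key, NFrags) ->
    case erlang:phash(Key, NFrags) of
        1 -> Tab;   % the first fragment keeps the base table name
        N -> list_to_atom(atom_to_list(Tab)
                          ++ "_frag" ++ integer_to_list(N))
    end.
```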
That is going to be very painful and error-prone. I've tried it before and
it is not pretty. I would do it this way:
1. Take an mnesia backup.
2. Delete the fragmented mnesia table from the schema.
3. Create it again with the number of fragments you want.
4. Traverse the backup and re-insert each record, letting mnesia spread them
   across the fragments.
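Step 4 can be done with mnesia:traverse_backup/6 using a read_only target, so the backup is only read, never rewritten. A sketch, assuming the backup file name and the re-created fragmented table already exist (names are hypothetical); writing through the mnesia_frag access module is what makes mnesia hash each key into the right fragment:

```erlang
%% Sketch: re-insert every record of Tab found in BackupFile into the
%% freshly re-created fragmented table.  Each write goes through the
%% mnesia_frag access module, which routes the record to the correct
%% fragment.  Returns {ok, Count} on success.
reload_from_backup(BackupFile, Tab) ->
    Fun = fun(Item, Count) when element(1, Item) =:= Tab ->
                  %% a record of our table: write it via mnesia_frag
                  ok = mnesia:activity(sync_transaction,
                                       fun() ->
                                           mnesia:write(Tab, Item, write)
                                       end,
                                       [], mnesia_frag),
                  {[Item], Count + 1};
             (Item, Count) ->
                  %% schema items and other tables: pass through untouched
                  {[Item], Count}
          end,
    %% read_only target module: traverse only, produce no new backup
    mnesia:traverse_backup(BackupFile, mnesia_backup,
                           dummy, read_only, Fun, 0).
```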
If the whole process of recreating the table takes too long, there are a
couple of ways to speed it up:

- When doing (3), create the table as a ram_copies table.
- After (4), change the table type to disc_copies using
  mnesia:change_table_copy_type/3.
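A sketch of that speed-up, with hypothetical table, attribute, and node names. Note that change_table_copy_type/3 works on one table at a time, so it has to be applied to every fragment; the fragment names can be fetched through mnesia_frag's table_info:

```erlang
%% Step 3, fast variant: create the table in RAM with the new
%% fragment count (70 here, per the thread).
create_fast(Nodes) ->
    mnesia:create_table(my_tab,
        [{ram_copies, Nodes},
         {attributes, [key, value]},
         {frag_properties, [{node_pool, Nodes},
                            {n_fragments, 70},
                            {n_ram_copies, 1}]}]).

%% After step 4: flip every fragment on this node to disc_copies.
to_disc(Tab) ->
    Frags = mnesia:activity(sync_dirty,
                            fun() ->
                                mnesia:table_info(Tab, frag_names)
                            end,
                            [], mnesia_frag),
    [{atomic, ok} = mnesia:change_table_copy_type(F, node(), disc_copies)
     || F <- Frags].
```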
Chandru