Loading Mnesia dbase
tty@REDACTED
Thu Nov 24 16:55:22 CET 2005
I decided to retest using fragmented tables (15 fragments) and I/O threads, and this time I did the inserts in stages. The first 9 million records went into ram_copies in under 7 minutes. Changing the table to disc_copies took 0.6 minutes. A Mnesia backup worked fine (under 13 minutes), but the restore failed (unable to allocate heap). I proceeded with a fallback instead, which worked well.
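For anyone retracing the steps above, a minimal sketch of the setup might look like this. The table name, attributes, and node pool are placeholders I've invented, not details from my actual run:

```erlang
%% Hypothetical sketch: create a table split into 15 fragments,
%% initially held as ram_copies for fast bulk insertion.
mnesia:create_table(records,
    [{ram_copies, [node()]},
     {frag_properties, [{n_fragments, 15},
                        {node_pool, [node()]}]},
     {attributes, [key, value]}]),

%% Later, convert the storage type to disc_copies. Note that each
%% fragment is its own table (records, records_frag2, ...), so the
%% conversion has to be applied per fragment.
mnesia:change_table_copy_type(records, node(), disc_copies).
```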
The next 3 million records went into disc_copies with warning messages:
Mnesia is overloaded: {dump_log, write_threshold}
(see discussion: http://www.erlang.org/ml-archive/erlang-questions/200007/msg00098.html)
It stabilized, however, and the records made it in.
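The warning means Mnesia's transaction log is being written faster than it can be dumped to the tables. One way to reduce the pressure during a bulk load is to raise the dump-log thresholds at startup; the values below are illustrative, not the ones I used:

```shell
# Raise the number of writes (and the time interval) that trigger a
# log dump, so bulk inserts trip the overload warning less often.
erl -mnesia dump_log_write_threshold 50000 \
    -mnesia dump_log_time_threshold 300000
```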
The final 2 million did not go in cleanly: a couple hundred records were missing because their insertion processes died. I redid the missing entries and now have all 14 million records in. The erl shell is at a crawl, using a good 1.8 GB of the 2 GB of RAM.
The two things I found helped were increasing the number of I/O threads (erl +A) and using fragmented tables. I suspect more RAM would improve things greatly.
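For reference, the +A flag sizes the emulator's async thread pool, which offloads blocking file I/O from the scheduler threads; the count below is just an example:

```shell
# 16 async I/O threads for the file driver (default is 0, i.e. all
# file operations block a scheduler thread).
erl +A 16
```

With fragmented tables you can also verify the load afterwards through the mnesia_frag access module, which sums across all fragments:

```erlang
%% Hedged sketch: count records across every fragment of a
%% hypothetical table named 'records'.
mnesia:activity(transaction,
                fun() -> mnesia:table_info(records, size) end,
                [], mnesia_frag).
```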
Thanks for the help.
t