Large DBs, mnesia_frag ????

chandru chandrashekhar.mullaparthi@REDACTED
Tue Mar 28 18:45:51 CEST 2006

I'll try.

On 28/03/06, Sanjaya Vitharana <sanjaya@REDACTED> wrote:
> Hi ... !!!
> What will be the best way to handle 3 million records (record size
> = 1K) in mnesia with 4GB RAM? Can anyone with such experience please help?

We have an mnesia database with 25 million records in 128 fragments split
across 2 Erlang nodes. The server has 8GB of RAM and the two nodes use about
4GB.

> Currently I'm testing with an HP server with 2GB RAM (there is plenty of
> hard disk space).
> I'm using the code below to create the table, but I'm getting problems
> when the table gets bigger (~350000 records).
> mnesia:create_table(profile_db,
>                     [{disc_copies, NodeList},
>                      {type, ordered_set},
>                      {index, [type, last_update_date,
>                               first_creation_date, fax_no]},
>                      {frag_properties, [{n_fragments, 30},
>                                         {n_disc_copies, 1}]},
>                      {attributes, record_info(fields, profile_db)}]),

Bear in mind that when you have a fragmented ordered_set table, each
fragment is an ordered set in its own right. The ordering does not apply
across fragments.
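To illustrate (a sketch, using the table name from the post): access to a
fragmented table goes through the mnesia_frag access module, which hashes
the key to pick a fragment, so key order only holds inside each fragment:

```erlang
%% Sketch: reading from a fragmented table. mnesia_frag routes the
%% operation to the fragment chosen by hashing the key; a traversal
%% with first/next is therefore NOT globally ordered across fragments.
read_profile(Key) ->
    Fun = fun() -> mnesia:read(profile_db, Key) end,
    mnesia:activity(transaction, Fun, [], mnesia_frag).
```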

> Problems: (a few details are added at the end of this mail, but maybe not
> sufficient; if anyone needs more details I can send them)
> 1.) unexpected restarts by heart. I have increased the heartbeat timeout
> from 30 to 60 and 90. That got me from ~100000 records to ~350000
> records, but now it is happening again.
> 2.) some unexpected errors which did not happen earlier (I mean up to the
> current size of the DB)
> 2.1) {aborted,{no_exists,profile_db_frag25}}

This is strange. It suggests that this fragment isn't available yet. Are
all tables fully loaded before you start populating? Check using
mnesia:wait_for_tables/2.
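A sketch of such a check, for the table from the post. The frag_names
property has to be read through the mnesia_frag access module:

```erlang
%% Sketch: wait until every fragment of profile_db is loaded before
%% writing to it. frag_names lists all fragment table names.
wait_for_fragments(Timeout) ->
    GetFrags = fun() -> mnesia:table_info(profile_db, frag_names) end,
    Frags = mnesia:activity(sync_dirty, GetFrags, [], mnesia_frag),
    ok = mnesia:wait_for_tables(Frags, Timeout).
```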

> 2.2) ** exited: {timeout,{gen_server,call,[vm_prof_db_svr,db_backup_once]}}
> **

The backup is taking quite a long time. Have you tried increasing the
timeout on the gen_server call?
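For example (a sketch; vm_prof_db_svr and db_backup_once are the names
from the error above): the default gen_server:call timeout is 5000 ms,
which a long-running backup can easily exceed:

```erlang
%% Sketch: give the backup request a longer timeout than the default
%% 5000 ms. Use 'infinity' if the backup may take arbitrarily long.
backup() ->
    gen_server:call(vm_prof_db_svr, db_backup_once, 600000).  %% 10 min
```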

> 2.3)     error_info: {{failed,{error,{file_error,
> "/usr2/omni_vm_prof/db/vmdb/db/backup/db_back_2006-3-28_14-3-4.BUPTMP",
>                                      enoent}}},
>                   [{disk_log,open,1}]}

enoent - the temporary backup file does not exist. I don't know why; check
that the backup directory exists and is writable by the emulator.

> 2.4) {error,{"Cannot prepare checkpoint (replica not available)",
> [profile_db_frag10,{{1143,528317,121399},vmdb@REDACTED}]}}

Looks like your fragments are spread across a few nodes and one of the
fragments is not available - are all nodes connected to each other?

> 2.5) eheap_alloc: Cannot allocate 122441860 bytes of memory (of type
> "heap").
> Aborted

Your node ran out of memory. You seem to have quite a lot of secondary
indices; bear in mind that each one consumes more memory. It was trying
to allocate about 122MB. Have you tried this on a machine with 4GB of RAM?
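One way to see where the memory goes (a sketch; the fragment name is taken
from the error in 2.4 above) is to compare the emulator-wide totals with
the per-table figures mnesia reports:

```erlang
%% Sketch: inspect memory use. erlang:memory/0 gives emulator-wide
%% totals; mnesia:table_info(Tab, memory) gives the number of words
%% used by one table (here: one fragment of profile_db).
io:format("VM memory: ~p~n", [erlang:memory()]),
io:format("Fragment words: ~p~n",
          [mnesia:table_info(profile_db_frag10, memory)]).
```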

> I have an idea to change the properties below and try, but I don't know
> whether this will be the best way or not.
> disc_copies -> disc_only_copies

Performance will suffer.
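If you do want to try it anyway (a sketch, converting a replica in place
rather than recreating the table), mnesia:change_table_copy_type/3 switches
the storage type of one replica:

```erlang
%% Sketch: move this node's replica from RAM+disc (disc_copies) to
%% disc-only storage. For a fragmented table, each fragment is its
%% own table, so this would have to be done per fragment.
{atomic, ok} =
    mnesia:change_table_copy_type(profile_db, node(), disc_only_copies).
```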

> ordered_set -> set (of course I could not find any direct function for
> this in the mnesia reference manual; is there any way?)

You will have to delete the table and recreate it.
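A minimal sketch of that, reusing the create_table call from the post
(note that delete_table destroys the data, so dump it first, e.g. with
mnesia:backup/1):

```erlang
%% Sketch: change the table type by dropping and recreating the
%% table. All data is lost on delete_table - back it up first.
{atomic, ok} = mnesia:delete_table(profile_db),
{atomic, ok} =
    mnesia:create_table(profile_db,
                        [{disc_copies, NodeList},
                         {type, set},   %% was ordered_set
                         {frag_properties, [{n_fragments, 30},
                                            {n_disc_copies, 1}]},
                         {attributes, record_info(fields, profile_db)}]).
```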


More information about the erlang-questions mailing list