Some new mnesia benchmarking results

Per Bergqvist per@REDACTED
Wed Nov 7 13:59:32 CET 2001

Hi Sean,

I agree that mnesia is now getting ready for prime time, as long as you have small
records ;-).
(The results are even more impressive on my little 1.7 GHz P4 Linux box.)

I have a nasty problem that I still haven't resolved.
If you have more and larger records than you have available RAM, the system is
brought to its knees.

I have tried to use disc_only_copies, but after one hour it had only written ~1M
records, so I aborted it.
Compare this with the 15K records/sec I get with disc_copies.

Does anyone have a solution for storing 10M x 2K records with mnesia without
20 GB of RAM?
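One direction that might help (a sketch only, untested at this scale) is mnesia's
table fragmentation via the mnesia_frag access module, which spreads a
disc_only_copies table over several dets files so that no full copy has to live in
RAM and no single dets file has to hold all 10M records. The table/record name
'big' and the fragment count below are made up for illustration:

```erlang
%% Sketch: a fragmented disc_only_copies table. The frag_properties
%% split the table into 64 dets files on this node.
-record(big, {key, payload}).

create() ->
    mnesia:create_table(big,
        [{disc_only_copies, [node()]},
         {attributes, record_info(fields, big)},
         {frag_properties,
             [{n_fragments, 64},
              {n_disc_only_copies, 1}]}]).

%% Reads and writes must go through mnesia:activity/4 with the
%% mnesia_frag access module, so each key is routed to its fragment.
write(Key, Payload) ->
    mnesia:activity(async_dirty,
                    fun() -> mnesia:write(#big{key = Key, payload = Payload}) end,
                    [], mnesia_frag).
```

This assumes mnesia is already started with a disc schema on the node; whether the
dets repair and RAM behaviour is acceptable at 10M x 2K is exactly the open
question above.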


Sean Hinde wrote:

> Hi all,
> I just spent some time benchmarking mnesia for huge disk_copies tables with
> the latest versions. Measurements on a SPARC 400MHz with Veritas FS.
> Create a table with 1 million rows:
> R7B-1 with mnesia-3.10.0  310 secs
> R8B   with mnesia-4.0     315 secs
> R8B with 10 threads       270 secs
> Beam process size afterwards
> R7B-1                     167M
> R8B                       160M
> R8B threads               158M
> Stop mnesia
> R7B-1 with mnesia-3.10.0  155 secs (and whole node is blocked)
> R8B   with mnesia-4.1     3.7 secs
> R8B threads               15 secs
> Start mnesia and wait for 1M row table to be available
> R7B-1                     25 to 50 secs
> R8B                       30 secs
> R8B threads               56 secs
> Subsequent stop of mnesia
> R7B-1                     1.9 secs
> R8B                       12 secs
> R8B threads               25 secs
> Start up mnesia after CTRL-C CTRL-C stopping node which is writing as fast
> as possible. (Noting disk log repair of DCL file messages on startup).
> R7B-1                     40 secs maximum.
> R8B                       40 secs
> R8B threads               61 secs
> Apart from quite a lot of variability in timer:tc results, it would appear
> that the old behaviour of mnesia, where it could potentially take hours to
> rebuild tables from badly closed dets tables, has been completely cured.
> Perhaps someone could confirm that this is always the case and that I have not
> just been lucky in the timing of my node kills!
> I wonder also if anyone knows of any other reasons not to use mnesia for
> large databases now?
> Very impressive.
> Sean
