Mnesia disk performance (was RE: multi-attribute mnesia indexes?)

Shawn Pearce spearce@REDACTED
Tue Jan 2 21:49:16 CET 2001

Sean Hinde <Sean.Hinde@REDACTED> scrawled:
> > One thing about mnesia is that it's not really prepared for
> > applications that write constantly to disk-based tables.
> It is not optimal I agree. There are some relatively simple things which
> could be done to improve this though.
> One simple idea would be to have independently specified paths to the
> various log and dets files. Certainly having the log file on its own disk
> could substantially increase performance of the dumper.

Oracle does this with their database and it is a big performance
booster.  The other thing they do is allow a table to be striped
across multiple disks by making a table exist in multiple file system
files at once.  (They stripe disk allocations across the files.)  This
does help to manage larger tables as well.
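Until Mnesia grows independently specified paths, a symlink gets much of the same effect today: move LATEST.LOG (Mnesia's transaction log) onto its own spindle and link it back into the Mnesia directory. A minimal sketch, assuming Mnesia is stopped and using made-up example paths:

```erlang
%% Untested sketch -- the paths are illustrations, not from this post.
%% Run with mnesia stopped.  file:rename/2 cannot cross devices, so
%% copy + delete, then symlink the log back into the mnesia dir.
Src = "/var/mnesia/LATEST.LOG",
Dst = "/disk2/mnesia_log/LATEST.LOG",
{ok, _Bytes} = file:copy(Src, Dst),
ok = file:delete(Src),
ok = file:make_symlink(Dst, Src).
```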

> Files could also be striped across multiple disks using RAID-type systems.


We don't have a RAID system on this machine yet, but you're correct: we
should be working with RAID if we're serious.  (We aren't yet, we're
still in development.  I had just hoped for better performance before
RAID was added, as IMHO a good RAID array only adds so much before it
too becomes a bottleneck.)

> Another more complex enhancement would be to treat the log file as a simple
> recovery log and use a memory based store as the actual source for data to
> be propagated into the dets files. This could even just contain a list of
> keys which have been updated in each transaction and the dumper could get
> the data from the main memory table (with some extra stuff for detection of
> multiple updates of the same record. Hmmm).

Again, Oracle does this.  They identify a single record by its physical
disk position (and never, ever move the record), which lets their log
be merely a redo log (indeed, that is its name).  By making the changes
to the data table in memory and writing them to disk in bulk, they avoid
having to replay the log against the table's data file(s) later on.
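The "list of updated keys" idea above could be sketched roughly like
this (module and function names are mine, purely hypothetical): log only
which keys changed, deduplicate them, and have the dumper fetch current
values from the live in-memory table at dump time.

```erlang
%% Hypothetical sketch of the keys-only log idea -- not Mnesia's
%% actual dumper.  disk_write/1 and disk_delete/1 stand in for the
%% real dets updates.
-module(key_log).
-export([new/0, note/2, dump/2]).

new() -> sets:new().

%% Remember that Key changed; repeated updates of the same key
%% collapse into one entry automatically.
note(Key, Pending) -> sets:add_element(Key, Pending).

%% Read the *current* value of each dirty key from the in-memory
%% table and write it out in one pass.
dump(Tab, Pending) ->
    [persist(Tab, K) || K <- sets:to_list(Pending)],
    ok.

persist(Tab, Key) ->
    case ets:lookup(Tab, Key) of
        [Obj] -> disk_write(Obj);
        []    -> disk_delete(Key)   %% key deleted since it was noted
    end.

disk_write(_Obj) -> ok.
disk_delete(_Key) -> ok.
```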

I for one would like to be able to force my transactions (writes, that
is) to wait for the log writer to finish writing them, if the problem is
that mnesia will overload like this.  Should I step up the frequency of
my log writer to force it to write more often, and thereby at least
approximate writes that wait on the log?
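On stepping up the frequency: Mnesia does expose two real application
parameters controlling when the dumper runs, dump_log_write_threshold
(number of writes to the log before a dump) and dump_log_time_threshold
(milliseconds between dumps).  The values below are arbitrary
illustrations, not recommendations:

```erlang
%% Set before mnesia:start().  Equivalently on the command line:
%%   erl -mnesia dump_log_write_threshold 1000 \
%%       -mnesia dump_log_time_threshold 60000
application:set_env(mnesia, dump_log_write_threshold, 1000),
application:set_env(mnesia, dump_log_time_threshold, 60000),  %% ms
mnesia:start().
```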


  ``If this had been a real
    life, you would have
    received instructions
    on where to go and what
    to do.''
