Mnesia disk performance (was RE: multi-attribute mnesia indexes?)
Tue Jan 2 22:31:26 CET 2001
> Oracle does this with their database and it is a big performance
> booster. The other thing they do is allow a table to be striped
> across multiple disks by making a table exist in multiple file system
> files at once. (They stripe disk allocations across the files.) This
> does help to manage larger tables as well.
Mnesia has fragmented tables which, if the directory of each table fragment
could be specified separately, could give much the same gains.
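For reference, fragmented tables are created through mnesia's `frag_properties` option and accessed via the `mnesia_frag` activity module. A minimal sketch, assuming mnesia is started with a disc schema on the local node (the table and field names are purely illustrative):

```erlang
%% Sketch: a 4-fragment mnesia table. Assumes mnesia:start/0 has run
%% and a disc schema exists on node(); names are illustrative.
-record(account, {id, balance}).

create_frag_table() ->
    mnesia:create_table(account,
        [{disc_copies, [node()]},
         {attributes, record_info(fields, account)},
         {frag_properties,
             [{n_fragments, 4},        % base table + account_frag2..4
              {node_pool, [node()]}]}]).

%% Access must go through mnesia:activity/4 with the mnesia_frag
%% module so the key is hashed to the right fragment.
write_account(Id, Balance) ->
    F = fun() -> mnesia:write(#account{id = Id, balance = Balance}) end,
    mnesia:activity(transaction, F, [], mnesia_frag).
```

Note that all fragments still live under the single mnesia directory, which is exactly the limitation lamented above: there is no supported way to point individual fragments at different disks.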
> They identify a single record by
> its physical
> disk position (and never, ever move the record).
This does not fit very well with mnesia, as the size of entries can vary with
each write (with the obvious problem when they grow). Dets can't match
the performance of this Oracle method directly, but with some help from the
RAM copy table (perhaps holding the disk address alongside each record) it
could maybe get closer.
> I for one would like to be able to force my transactions
> (write that is)
> to wait for the log writer to finish writing them if the
> problem is that
> mnesia will overload like this. Should I step up the frequency of my
> log writer in order to force it to write more frequently, and
> at least simulate that the writes are waiting for the log?
As I understand it, Oracle does much the same thing as mnesia. All it does
on a commit is write to the redo log and mark the record as dirty in the
SGA. The DB_WRITER processes propagate the updated records from shared
memory into disk asynchronously sometime later. One can have multiple
DB_WRITERs (maybe running on different processors) to make it faster but the
idea is much the same. The same overload problem with Oracle would manifest
itself if this propagation into the main disk tables couldn't keep up, and
the SGA shared memory segment filled.
Oracle is just much faster as a result of its use of memory as the "log",
its very strong ability to take advantage of SMP and disk arrays, and some
very nice design.
I agree though - given the current limitations it would be nice to have some
synchronous write ability for ram_copies tables.
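Depending on the mnesia version, `mnesia:sync_transaction/1` may offer something close to this: it does not return until the commit has reached the on-disk log on the involved nodes. A hedged sketch (the function name is real mnesia API in later releases; the wrapper and record are illustrative):

```erlang
%% Sketch: a write that waits for the transaction to be logged to
%% disk before returning. Assumes a mnesia version exporting
%% sync_transaction/1; Rec is any record of a known mnesia table.
safe_write(Rec) ->
    case mnesia:sync_transaction(fun() -> mnesia:write(Rec) end) of
        {atomic, ok}      -> ok;
        {aborted, Reason} -> {error, Reason}
    end.
```

The trade-off is throughput: every commit now pays a disk-latency round trip, which is precisely the cost Oracle's redo-log design amortises.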
Just stepping up the frequency of the log writer won't really help. The only
thing I have found that helps is using a journaling file system (I use Veritas,
with pretty stunning results: roughly three times faster), but many new Linux
releases come with open-source versions these days.
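For completeness, the log-dump frequency the poster asks about is governed by two mnesia application parameters. The parameter names are real; the values below are illustrative, not recommendations:

```erlang
%% Sketch: tune how often mnesia dumps its transaction log into the
%% on-disk table files. Must be set before mnesia:start/0.
%% Values here are illustrative only.
tune_log_dump() ->
    %% dump after this many logged writes ...
    application:set_env(mnesia, dump_log_write_threshold, 10000),
    %% ... or after this many milliseconds, whichever comes first
    application:set_env(mnesia, dump_log_time_threshold, 60000).
```

The same parameters can be given on the command line, e.g. `erl -mnesia dump_log_write_threshold 10000`.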
More information about the erlang-questions mailing list