mnesia replication (Are there checksums?)

Hakan Mattsson hakan@REDACTED
Mon Sep 5 11:51:50 CEST 2005

On Thu, 1 Sep 2005, Francesco Cesarini (Erlang Training & Consulting) wrote:

FC> I am amazed we never came across this bug (ok, feature :-) ) before. I
FC> would have expected an alarm to be generated as soon as the databases
FC> became inconsistent. I guess a way to come around the problem is to hash
FC> the dirty writes across the nodes based on the key.
FC> How hard would it be to add a checksum to each table? It should not
FC> generate any major overheads... The subject had been discussed, but
FC> probably before you took over the reins.

So you were involved with Mnesia internals before 1996
and still don't know the semantics of dirty access? ;-)

I don't think that adding a checksum mechanism would
make any substantial improvement. By the time the
system detected a checksum mismatch, it would already
be too late to make the database consistent again
without stopping the system. The only sensible action
at a "sloppy usage of Mnesia" alarm would be to restore
the database from backup.

Adding a table checksum mechanism would imply a
performance penalty for all database updates, even
though checksums are not needed for transaction
protected ones. I assume that the overhead would not be
negligible for disk resident tables, as it would imply
an extra file access.

Of course you could try to reinvent some of the
properties of transaction protected access for dirty
access operations. But by adding all these mechanisms
(checksums, hashed access, tunnel updates through a
central process, buffering updates at re-configuration
etc.) you would lose the single reason (better
performance) for using dirty access. Use transactions
instead. Avoid dirty updates as far as possible.
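To illustrate the advice above, here is a minimal sketch (not from the original post) contrasting a dirty update with a transaction protected one. The module name, function names, and the `account` table with `{account, Id, Balance}` records are all hypothetical and assumed to exist already:

```erlang
-module(account_update).
-export([dirty_deposit/2, safe_deposit/2]).

-record(account, {id, balance}).

%% Dirty write: fast, but takes no locks and gives no consistency
%% guarantees across replicas -- concurrent dirty updates on different
%% nodes can leave the replicas permanently diverged.
dirty_deposit(Id, Amount) ->
    [#account{balance = B} = Acc] = mnesia:dirty_read(account, Id),
    mnesia:dirty_write(Acc#account{balance = B + Amount}).

%% Transaction protected write: acquires a write lock on the record,
%% so the read-modify-write is atomic and all replicas stay consistent.
safe_deposit(Id, Amount) ->
    F = fun() ->
            [#account{balance = B} = Acc] =
                mnesia:read(account, Id, write),
            mnesia:write(Acc#account{balance = B + Amount})
        end,
    mnesia:transaction(F).
```

The transaction version is slower, but it is exactly that locking and replication machinery which the dirty variants skip, which is why reintroducing it piecemeal (checksums, hashing, funnel processes) defeats the point of dirty access.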

