safedets
Klacke
klacke@REDACTED
Wed Sep 29 22:53:13 CEST 1999
I've been hacking madly on the dets server lately.
Basically I've been having two problems with it
that are dependent on each other.
1. It takes a very long time to create a dets file with,
say, 1 million entries in it (see the timing sketch
after this list).
2. If such a file is not properly closed, it takes ages
to repair it.
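For concreteness, the kind of loop I mean for measuring (1)
looks roughly like this. This is a minimal sketch, not the
actual benchmark I ran; the module name and the shape of the
inserted tuples are just examples:

-module(dets_fill).
-export([time_fill/2, fill/2]).

%% Open a fresh dets file and time the insertion of N small
%% tuples, returning the elapsed time in seconds.
time_fill(File, N) ->
    {ok, T} = dets:open_file(fill, [{file, File}, {type, set}]),
    {Micros, ok} = timer:tc(?MODULE, fill, [T, N]),
    ok = dets:close(T),
    Micros / 1000000.

fill(_T, 0) ->
    ok;
fill(T, N) ->
    ok = dets:insert(T, {N, N}),
    fill(T, N - 1).

Calling dets_fill:time_fill("/tmp/big.dets", 1000000) then
shows problem (1) directly.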
My first attempt was to hack the dets server and pull some
of the hash-index structures that are now on disc up into RAM.
This improved speed by about 30% but increased memory
consumption substantially. Not ok.
My second attempt was to scrap dets altogether and use
gdbm instead. So I wrote a linked-in driver to gdbm.
It turns out that gdbm is about as fast as dets on
lookups, and about 4 times faster on insertions for the
first 10,000 items.
However, on the last 10,000 objects (inserting 1 million items)
gdbm is >100 times slower. It starts to degenerate
around 100,000 items. So, this is good news: dets was
better than I thought. Much better.
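To see where the degradation sets in, it helps to time each
window of 10,000 insertions separately. A sketch of such a
loop (a hypothetical module, not my actual harness; InsertFun
wraps either dets:insert/2 or the gdbm driver call, so the
same loop measures both):

-module(win_bench).
-export([run/3]).

%% Insert the keys I..N through InsertFun, printing the
%% elapsed time for every window of 10,000 insertions.
%% Assumes N is a multiple of 10,000.
run(_InsertFun, I, N) when I > N ->
    ok;
run(InsertFun, I, N) ->
    T0 = erlang:now(),
    lists:foreach(fun(K) -> InsertFun({K, K}) end,
                  lists:seq(I, I + 9999)),
    io:format("~w..~w: ~w us~n",
              [I, I + 9999, timer:now_diff(erlang:now(), T0)]),
    run(InsertFun, I + 10000, N).

For dets, something like
win_bench:run(fun(Obj) -> dets:insert(T, Obj) end, 1, 1000000)
prints the per-window times.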
Nevertheless, I attacked the second problem instead
and came up with the idea of using 2 dets files.
I posted the code to
http://www.erlang.org/user.html#safedets-1.0
Each safedets:open_file/2 makes a directory holding the
2 dets files + a log. safedets files are never repaired.
If one of the dets files needs to be repaired, the other
doesn't. A regular file copy is performed instead. This is
much, much faster than the logical dets copy performed by the
repair process.
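The core of the recovery trick, in sketch form (this is not
the posted code; the file names are made up and the log replay
is left out). Opening with {repair, false} makes dets fail
instead of repairing, which tells us which copy is healthy:

%% On open, try both dets files with repair disabled. If one
%% of them was not properly closed, clone the healthy file
%% over it with a plain file copy instead of letting dets
%% repair it.
recover(Dir) ->
    A = filename:join(Dir, "a.dets"),
    B = filename:join(Dir, "b.dets"),
    case {try_open(a, A), try_open(b, B)} of
        {ok, ok} ->
            ok;                          % both copies are sound
        {ok, {error, _}} ->
            {ok, _} = file:copy(A, B),   % clone the good copy
            ok = try_open(b, B);
        {{error, _}, ok} ->
            {ok, _} = file:copy(B, A),
            ok = try_open(a, A);
        {{error, _}, {error, _}} ->
            {error, both_bad}            % only the log can help here
    end.

try_open(Name, File) ->
    case dets:open_file(Name, [{file, File}, {repair, false}]) of
        {ok, Name}      -> ok;
        {error, Reason} -> {error, Reason}
    end.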
Another thought for the mnesia guys (Dan+Hakan): maybe
it's a good idea to apply the same tactic to all
mnesia files that are disc_only_copies. This way we
can have really large mnesia databases.
Any thoughts ... ??
/klacke