[erlang-questions] Dets workarounds for large datasets?

Mikkel Jensen mj@REDACTED
Fri Feb 2 15:11:54 CET 2007


Hi everyone,

I've been following the topic of storing large amounts of data in Mnesia.
The overall problem seems to be the fragmentation of free space when
deleting records, correct?

My question is: Can this issue be avoided if records were not physically
deleted but only _marked_ as deleted by a field in the record?
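
Roughly what I have in mind, as a sketch (the #user{} record and its
fields are just made-up examples):

-record(user, {id, name, deleted = false}).

%% Mark the record as deleted instead of removing it from the table.
soft_delete(Id) ->
    mnesia:transaction(fun() ->
        case mnesia:read(user, Id) of
            [U] -> mnesia:write(U#user{deleted = true});
            []  -> ok
        end
    end).

%% Reads would then treat marked records as absent.
lookup(Id) ->
    mnesia:transaction(fun() ->
        case mnesia:read(user, Id) of
            [#user{deleted = false} = U] -> {ok, U};
            _ -> not_found
        end
    end).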

Alternatively: Would it be possible to have a timer job that takes the
node offline, repairs the table, and puts it back online?
In a clustered environment this could be done round-robin, with no
visible downtime for users.
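
A rough sketch of what I mean, assuming the table is replicated as
disc_only_copies on more than one node, so the local copy can be
rebuilt from a remote replica. Note this uses del/add_table_copy
rather than a literal offline dets repair:

%% Assumes Tab keeps at least one remaining replica on another node
%% while the local copy is dropped and rebuilt.
rebuild_local_copy(Tab) ->
    {atomic, ok} = mnesia:del_table_copy(Tab, node()),
    {atomic, ok} = mnesia:add_table_copy(Tab, node(), disc_only_copies),
    ok = mnesia:wait_for_tables([Tab], infinity).

%% Kick it off periodically, e.g. once a day.
start_rebuild_timer(Tab) ->
    timer:apply_interval(timer:hours(24), ?MODULE, rebuild_local_copy, [Tab]).

If I understand correctly, re-adding the copy rewrites the table file
from scratch, so the fragmented free space would be compacted away.
Please correct me if that assumption is wrong.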


I would really like to use Mnesia in a large web application, but the
size limitation scares me.
If anyone can think of a workaround, I would very much like to know...

- Mikkel

