[erlang-questions] clarify: why is Mnesia not fit for large databases
Dominic Williams
erlang@REDACTED
Tue Dec 11 23:45:13 CET 2007
Hi,
Ulf Wiger wrote:
> Mnesia doesn't, but dets files are currently limited to 2
> GB per file, and mnesia uses dets files for disc_only
> tables. In a fragmented disc_only table, that would
> amount to max 2 GB per fragment (which is not something
> that mnesia will check or enforce, so living close to
> that limit is inadvisable to say the least).
Thanks, that confirms my understanding that the 2GB limit
can be overcome with fragmented tables.
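For reference, setting up the fragmentation itself looks simple
enough. A rough sketch, where the table name, attributes and
fragment count are just placeholders:

  mnesia:create_table(subscriber,
      [{frag_properties,
        [{n_fragments, 16},
         {n_disc_only_copies, 1},
         {node_pool, [node() | nodes()]}]},
       {attributes, [key, value]}]).

All access then goes through the mnesia_frag activity module,
e.g. mnesia:activity(transaction, Fun, [], mnesia_frag).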
Would monitoring the size of each fragment and raising an
alarm when it approaches the 2GB limit be a simple way to
stay out of trouble? A scheme in which additional fragments
get automatically created does not seem all that hard to
imagine, either...
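To make that concrete, here is a rough sketch of what I have in
mind. The module name, the 1.5 GB threshold and the use of
dets:info/2 on the fragment names are all my own assumptions, not
anything mnesia provides out of the box:

  -module(frag_guard).
  -export([check/1]).

  %% Alarm well below the 2 GB dets ceiling; the margin is arbitrary.
  -define(LIMIT, 1500 * 1024 * 1024).

  check(Tab) ->
      Frags = mnesia:activity(sync_dirty,
                  fun() -> mnesia:table_info(Tab, frag_names) end,
                  [], mnesia_frag),
      lists:foreach(fun(F) -> maybe_grow(Tab, F) end, Frags).

  maybe_grow(Tab, Frag) ->
      %% disc_only fragments are dets tables registered under the
      %% fragment name, so on a node holding the copy dets:info/2
      %% can report the file size.
      case dets:info(Frag, file_size) of
          Size when is_integer(Size), Size > ?LIMIT ->
              error_logger:warning_msg(
                  "fragment ~p of ~p is at ~p bytes~n",
                  [Frag, Tab, Size]),
              %% automatic growth: mnesia rehashes records into
              %% the new fragment
              mnesia:change_table_frag(Tab, {add_frag, [node()]});
          _ ->
              ok
      end.

One caveat I can see: if I understand the linear hashing correctly,
add_frag splits fragments in a fixed order, so a single new fragment
does not necessarily relieve the particular fragment that triggered
the alarm.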
> It's reasonable to assume that mnesia can safely handle
> databases of a number of gigabytes, perhaps (tongue in
> cheek) a hundred or so gigabytes, if one thinks
> carefully, and the access patterns are favourable. But I
> don't know of anyone who actually does that, and most
> people who have databases that large tend to not want to
> be guinea pigs. (:
Right. The reason I ask, though, is that we need the nice
real-time and distributed characteristics of mnesia, and our
current use of MySQL is proving to be a performance
bottleneck and a constant source of operational problems. I
am just trying to decide whether the time is better spent
redesigning our MySQL database and replication scheme, or
pushing mnesia's limits (we use mnesia anyway for other,
smaller tables, so having a single tool would be nice).
Ulf, could you elaborate on what you mean by "favourable
access patterns"?
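My own guess, to be corrected: keyed lookups stay cheap however big
the table gets, whereas anything that scans has to touch every
fragment file on disc. Something like (subscriber and Key being
placeholders):

  %% fine at any size: one hash lookup in one fragment
  mnesia:activity(sync_dirty,
                  fun() -> mnesia:read({subscriber, Key}) end,
                  [], mnesia_frag),

  %% presumably unfavourable: walks every fragment on disc
  mnesia:activity(async_dirty,
                  fun() -> mnesia:match_object({subscriber, '_', '_'}) end,
                  [], mnesia_frag).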
Is there actually anyone out there using mnesia in the 100GB
range?
Regards,
Dominic Williams
http://dominicwilliams.net