Mnesia DB size limits
Dan Gudmundsson
dgud@REDACTED
Mon Sep 4 11:06:18 CEST 2006
I just want to point out that disc_copies tables don't use dets at all;
only disc_only_copies tables do.
disc_copies does cache everything in ets, though.
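For anyone comparing the two, here is a minimal sketch (table names and
attributes are invented, and it assumes a disk-based schema already
exists and mnesia is started on this node):

    %% disc_copies: the whole table lives in ets (RAM) with a disk log
    %% behind it -- fast reads, but the table has to fit in memory.
    mnesia:create_table(session,
                        [{disc_copies, [node()]},
                         {attributes, [key, value]}]).

    %% disc_only_copies: stored only in dets on disk -- uses little RAM,
    %% but access is slower and each dets file is limited to 2 GB.
    mnesia:create_table(archive,
                        [{disc_only_copies, [node()]},
                         {attributes, [key, value]}]).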
/Dan
Scott Lystig Fritchie writes:
> >>>>> "pr" == Philip Robinson <philar@REDACTED> writes:
>
> pr> Damir asks:
>
> >> What are the upper limits where a system can still be called
> >> "production quality" in terms of DB speed?
>
> In a few months, a project I've been working on will go "live" with
> approx 6GB total of data in about a half-dozen 'disc_copies' tables,
> using both replication and fragmentation (though only on 2 machines,
> but they have 16GB RAM each :-).
>
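Not Scott's actual schema, but a rough sketch of what a replicated,
fragmented disc_copies table can look like (table name, fragment count
and node names are all made up here):

    Nodes = ['mnesia_a@host1', 'mnesia_b@host2'],
    %% 8 fragments, each kept as a disc_copies replica on both nodes.
    mnesia:create_table(event,
                        [{attributes, [key, value]},
                         {frag_properties, [{n_fragments, 8},
                                            {node_pool, Nodes},
                                            {n_disc_copies, 2}]}]),
    %% Fragmented tables are accessed through the mnesia_frag
    %% activity module:
    mnesia:activity(transaction,
                    fun() -> mnesia:read({event, some_key}) end,
                    [], mnesia_frag).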
> pr> When I wanted to retrieve a specific event there was no noticeable
> pr> delay, but most of my queries were for a date/time range.
>
> It all depends on how the index(es) are defined and whether you were
> using foldl/3 or index_match_object/3 or match_object/1 or
> scary-but-works-quite-well-at-what-it-does select/2.
>
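For Philip's date/time range case, select/2 with a match spec is the
usual tool. A hypothetical sketch, assuming an event record with a
timestamp field (record and function names are invented):

    -record(event, {id, timestamp, data}).

    %% All events with Start =< Timestamp =< End. A secondary index on
    %% timestamp speeds up exact lookups, but a range scan like this
    %% still has to walk the table (or fragment).
    events_between(Start, End) ->
        MatchSpec = [{#event{timestamp = '$1', _ = '_'},
                      [{'>=', '$1', Start}, {'=<', '$1', End}],
                      ['$_']}],
        mnesia:transaction(fun() -> mnesia:select(event, MatchSpec) end).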
> pr> I think the mnesia issues being mentioned on this list were to do
> pr> with database recovery across nodes after a node failure...?
>
> Step 0: Decide when to start the dead node.
> Step 1: Start the dead node.
> Step 2: There is no step 2.
>
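In code, Step 1 usually amounts to restarting mnesia on the recovered
node and waiting for its table copies to sync from a live replica; a
minimal sketch (the table list and timeout are placeholders):

    %% On the node that was down, once it is reachable again:
    ok = mnesia:start(),
    case mnesia:wait_for_tables([session, event], 60000) of
        ok                   -> ready;
        {timeout, StillGone} -> {still_loading, StillGone}
    end.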
> Overall throughput can take a hit when starting a dead node, because
> all out-of-date tables will be copied as-is from a live node. So
> doing Step 1 during your 1% or 5% peak usage time isn't a good idea.
>
> Dealing with network partitions is trickier. But if you're
> clustering, then you've probably also got some kind of infrastructure
> built for dealing with logging messages, events (of whatever sort,
> including 'mnesia_down' events from Mnesia itself), alarms, etc
> ... and all that makes it tricky enough to wuss out with an answer
> like, "It depends."
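The raw material for that kind of infrastructure is mnesia's system
event stream; a rough sketch of a subscriber process (the actual
handling is, of course, site-specific):

    %% Any process can subscribe to mnesia's system events and forward
    %% them to whatever logging/alarm machinery exists.
    watch() ->
        {ok, _Node} = mnesia:subscribe(system),
        loop().

    loop() ->
        receive
            {mnesia_system_event, {mnesia_down, Node}} ->
                error_logger:warning_msg("mnesia_down from ~p~n", [Node]),
                loop();
            {mnesia_system_event, {inconsistent_database, Context, Node}} ->
                %% Reported after a partition heals; deciding which side
                %% of the split "wins" is left to the application.
                error_logger:error_msg("partition: ~p on ~p~n",
                                       [Context, Node]),
                loop();
            {mnesia_system_event, _Other} ->
                loop()
        end.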
>
> -Scott