[erlang-questions] Mnesia Distribution Questions

Ngoc Dao <>
Thu Oct 22 07:04:02 CEST 2009


Mnesia has a 4 GB (or 2 GB?) limit per node. Is there a tutorial or doc
about how to create a fragmented Mnesia DB, so that a very big DB can be
split into pieces and stored on many nodes?
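(The per-table size limit applies to disc_only_copies tables, which are backed by dets files; fragmentation keeps each fragment's file under that limit. A minimal sketch of creating a fragmented table with `frag_properties` — the record, fragment count, and replica settings are illustrative, and it assumes Mnesia is already started on the pooled nodes:

```erlang
-record(person, {id, name}).

create_fragmented_table() ->
    mnesia:create_table(person,
        [{attributes, record_info(fields, person)},
         %% frag_properties enables the mnesia_frag access module:
         {frag_properties,
          [{n_fragments, 8},                %% split the table into 8 fragments
           {node_pool, [node() | nodes()]}, %% nodes the fragments are spread over
           {n_disc_copies, 1}]}]).          %% 1 disc replica per fragment

%% Reads/writes must go through mnesia_frag so each key is hashed
%% to the right fragment:
write(Id, Name) ->
    F = fun() -> mnesia:write(#person{id = Id, name = Name}) end,
    mnesia:activity(transaction, F, [], mnesia_frag).
```

See the "Table Fragmentation" chapter of the Mnesia User's Guide for the full set of `frag_properties` options.)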

Thanks.


On Thu, Oct 22, 2009 at 2:06 AM, Ulf Wiger
<> wrote:
> Rob Stewart wrote:
>>
>> 1. An example - I have a complex query to apply to a dataset of, say,
>> 10,000,000 rows in an Mnesia database. This database is replicated over 10
>> nodes in a network. Will the query be split for equal computation across
>> each of the 10 nodes, or will the query be executed on either one random
>> or one selected Mnesia node?
>
> If it's one homogeneous table that is simply replicated across all nodes,
> the query will be executed on the originating node only.
>
> If the table is fragmented, a subset of fragments will be identified
> (if the whole key isn't bound, this will be all fragments), and the
> query will be executed on all fragments in parallel. The resulting
> sets will be merged, respecting the ordering semantics of the table
> (i.e. sorted if it's an ordered_set table, otherwise not).
>
> BR,
> Ulf W
> --
> Ulf Wiger
> CTO, Erlang Training & Consulting Ltd
> http://www.erlang-consulting.com
>
> ________________________________________________________________
> erlang-questions mailing list. See http://www.erlang.org/faq.html
> erlang-questions (at) erlang.org
>
>
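The parallel fan-out Ulf describes can be sketched as follows, assuming a fragmented table `person` with record shape `{person, Id, Name}` already created with `frag_properties` on a running cluster:

```erlang
%% Because the key (Id) is unbound in this pattern, mnesia_frag runs
%% the match on every fragment in parallel and merges the per-fragment
%% result sets (sorted only if the table is an ordered_set).
select_by_name(Name) ->
    F = fun() ->
            mnesia:match_object({person, '_', Name})
        end,
    mnesia:activity(transaction, F, [], mnesia_frag).
```

Had the pattern bound the key instead, mnesia_frag would hash it and query only the single fragment that owns that key.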

