mnesia usage optimisation

Ulf Wiger etxuwig@REDACTED
Fri Jul 26 00:32:21 CEST 2002


Hmmm, you're of course right. This is a general problem when
wielding transactions with 50,000 updates in them, given the way
mnesia handles the transaction store. (:
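
(For the archives: the kind of transaction in question is roughly the
following -- table and record names made up, of course -- where every
read/match inside the transaction also has to consult the transaction
store, so its cost grows with the number of updates already buffered.)

  %% Schematic only -- a big transaction mixing matches and writes.
  %% 'thing' is a made-up table whose records are {thing, Key, Val}.
  big_txn() ->
      mnesia:transaction(
        fun() ->
                lists:foreach(
                  fun(N) ->
                          %% Each match must also scan the (bag) transaction
                          %% store, which already holds N-1 buffered writes.
                          _ = mnesia:match_object({thing, N, '_'}),
                          mnesia:write({thing, N, N * N})
                  end,
                  lists:seq(1, 50000))
        end).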

Looking at mnesia_tm.erl, I see that it sets up the transaction
store as a bag table, which of course destroys the good match
times in large transactions. I have not attempted to grasp the
complexity of it all (hacking mnesia is not something one should
do after midnight), but one could perhaps modify
mnesia:write_to_store/4 slightly (and lots of other functions,
I'm sure) to use multiple transaction stores: one with
ordered_set semantics and one with bag semantics... and see what
that does for performance in your case. Since the objects in the
transaction store have an object identifier of {Table, Key}, you
should get similarly good performance on matches in the
transaction store as on the original table.
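
To make that a bit more concrete, here is a back-of-the-envelope
sketch (made-up names, not the actual mnesia_tm code) of what a split
store could look like: ordinary writes keyed on {Table, Key} go into
an ordered_set ets table, and only operations that really need bag
semantics go into the bag:

  %% Sketch only; not the real mnesia_tm internals.
  new_store() ->
      {ets:new(ts_ordered, [ordered_set, private]),
       ets:new(ts_bag, [bag, private])}.

  %% Plain writes land in the ordered_set, keyed on the object id
  %% {Tab, Key}, so lookups/matches stay cheap in huge transactions.
  write_to_store({Ordered, _Bag}, Tab, Key, Val) ->
      ets:insert(Ordered, {{Tab, Key}, Val}).

  %% Reading back the buffered value for one object id is then a plain
  %% key lookup rather than a scan over a bag.
  read_from_store({Ordered, _Bag}, Tab, Key) ->
      ets:lookup(Ordered, {Tab, Key}).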

/Uffe

On 25 Jul 2002, Luke Gorrie wrote:

>Ulf Wiger <etxuwig@REDACTED> writes:
>
>> You could perhaps try using two ordered sets:
>
>Nearly works :-)
>
>In the start of the transaction, all operations are very fast, even if
>the table is already big. But as a large transaction proceeds, the
>'match' operations get slower. It looks like it goes O(N)ish on the
>number of table updates. My access pattern is basically a chain of
>read-write-read-write..
>
>I'm guessing it's less efficient to match on the data structure for
>pre-committed values?
>
>Any other ideas?
>
>I'm probably overdoing my Mnesia usage really :-)
>
>Cheers,
>Luke
>
>

-- 
Ulf Wiger, Senior Specialist,
   / / /   Architecture & Design of Carrier-Class Software
  / / /    Strategic Product & System Management
 / / /     Ericsson Telecom AB, ATM Multiservice Networks



