mnesia usage optimisation

Luke Gorrie luke@REDACTED
Fri Jul 26 01:02:36 CEST 2002


Ulf Wiger <etxuwig@REDACTED> writes:

> Hmmm, you're of course right. This is a general problem when
> wielding transactions with 50,000 updates in them, given the way
> mnesia handles the transaction store. (:

For my fifth rewrite of the module this week :-) I switched back to an
older design: one mnesia record per vertex, containing 'to' and 'from'
sets of edges as gb_sets structures.
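
Roughly like this (just a sketch -- the record and function names are
invented for illustration, not the real code):

    -record(vertex, {name,   % key: the vertex identifier
                     to,     % gb_sets set of outgoing edges
                     from}). % gb_sets set of incoming edges

    new_vertex(Name) ->
        #vertex{name = Name,
                to   = gb_sets:empty(),
                from = gb_sets:empty()}.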

The problem with this was that large sets (containing some thousands
of atoms) were slow (>10ms) to read or write in mnesia, and I was
doing that for every edge I added. I don't know where all that time
goes, but I'm guessing it's copying between heaps and garbage
collection (unified heap! unified heap!) - though that doesn't seem
quite right.
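
In other words, each edge insertion was a full read-modify-write of
both endpoint records, something like this (a sketch, with invented
names, running inside the one big transaction fun):

    %% Both endpoint records -- gb_sets and all -- get read and
    %% rewritten in full for every single edge added.
    add_edge(From, To) ->
        [V1] = mnesia:read({vertex, From}),
        [V2] = mnesia:read({vertex, To}),
        ok = mnesia:write(V1#vertex{to   = gb_sets:add(To, V1#vertex.to)}),
        ok = mnesia:write(V2#vertex{from = gb_sets:add(From, V2#vertex.from)}).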

To solve that problem, I put a cache in the process dictionary to
avoid reading/writing a record on each of the thousands of updates. It
now runs, as they say, faster than shit.
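
The trick is roughly this (again a sketch with invented names; the
reads and the final flush still happen inside the transaction fun):

    %% Cache vertex records in the process dictionary for the
    %% duration of the big update, instead of hitting the mnesia
    %% transaction store on every one of the thousands of updates.
    read_vertex(Name) ->
        case get({vertex, Name}) of
            undefined ->
                [V] = mnesia:read({vertex, Name}),
                put({vertex, Name}, V),
                V;
            V ->
                V
        end.

    write_vertex(#vertex{name = Name} = V) ->
        put({vertex, Name}, V).

    %% Called once, just before the transaction fun returns: write
    %% every cached record back to mnesia in one pass.
    flush_vertices() ->
        [ok = mnesia:write(V) || {{vertex, _}, V} <- get()],
        ok.

So each vertex record crosses the mnesia boundary twice per
transaction (one read, one write) instead of twice per edge.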

Maybe I'll think of a cleaner way on the bus; otherwise, at least I
firmly retain my "Bluetail #1 process dictionary abuser" status :-)

(And just in the nick of time, before you tempt me to hack
Mnesia.. I've tried that before with disastrous results :-)

Cheers,
Luke



