beating mnesia to death (was RE: Using 4Gb of ram with Erlang VM)

Ulf Wiger (AL/EAB) <>
Mon Nov 7 18:51:54 CET 2005

Ulf Wiger wrote:
> I've been able to get my hands on some SPARCs (2x 1.5 GHz) 
> with 16 GB RAM.

I thought I'd push mnesia a bit, to see how far I could 
get with 16 GB of RAM and 64-bit Erlang.

(First of all, something seems to happen around 8GB. I get
into lots of page faults (but no swapping). When this
happens, responsiveness goes down the drain. Does anyone
have an idea what it is that's happening?)

I created a ram_copies table and first stuffed it full of 
minimal records (well, almost: {mytab, integer(), integer()}).
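
Roughly like this (a sketch only, not the code I actually ran;
the table layout is taken from the record shape above, and mnesia
is assumed to be started):

```erlang
%% Sketch: assumes mnesia:start() has already been called.
mnesia:create_table(mytab,
                    [{ram_copies, [node()]},
                     {attributes, [key, val]}]).

%% Batch-insert N minimal {mytab, Int, Int} records
%% (helper name is made up):
fill_minimal(N) ->
    lists:foreach(fun(I) ->
                      mnesia:dirty_write({mytab, I, I})
                  end,
                  lists:seq(1, N)).
```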

I pushed in 10 million records easily, with no visible effect
on write()/read() cost. But the memory usage of the table 
was only 485 MB, so I decided to add some payload:

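(The payload snippet seems to have been lost from the archive.
As a rough reconstruction, not the original code: a ~1000-word
term, about 8 KB on a 64-bit VM, per record would approximately
match the 7.6 GB / 900,000-record figure below.)

```erlang
%% Hypothetical reconstruction -- the actual payload code is
%% missing. Payload size is a guess fitted to the numbers below.
fill_payload(N) ->
    Payload = erlang:make_tuple(1000, 0),
    lists:foreach(fun(I) ->
                      mnesia:dirty_write({mytab, I, Payload})
                  end,
                  lists:seq(1, N)).
```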

With this, I could insert 900,000 records, with the table 
occupying 7.6 GB of RAM. 1 million records though, and the
batch function never returned. My perfmeter showed lots of 
page faults, and user CPU utilization went down to zero.

While the node remained responsive, a dirty_write() took
ca 10 usec with no payload, and 87 usec with payload.
dirty_read() took 3.4 usec without and 72 usec with payload.
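
One way to get numbers like these is timer:tc/3 (averaging over
many calls is of course more reliable than a single sample; the
key used here is made up):

```erlang
%% Microseconds for a single dirty write and read:
{WriteUs, ok}  = timer:tc(mnesia, dirty_write, [{mytab, 1, 1}]),
{ReadUs, [_R]} = timer:tc(mnesia, dirty_read, [{mytab, 1}]).
```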

I've now changed the table copy type to disc_copies,
and am trying to find out how a huge disc_copies table performs.
Again, I got a bit impatient and went for the 900,000
record version with payload.
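
The copy-type change itself is a one-liner:

```erlang
%% Converts the table in place; returns {atomic, ok} on success.
mnesia:change_table_copy_type(mytab, node(), disc_copies).
```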

I first ran with default dump_log_write_threshold -- not
a very good idea. I then changed it to 10,000, and after
that to 100,000 writes. At the time of writing, Mnesia
is bending over backwards trying to handle a burst of 
900,000 records with checkpointing to disk. It's not 
liking it, but it keeps going... slowly. I got lots of 
page faults for a while, and several "Mnesia is 
overloaded" messages (mostly time_threshold). The 
transaction log was 1.6 GB at its peak.  (:
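
(dump_log_write_threshold is an ordinary mnesia application
parameter, so it can be raised before mnesia starts, e.g.:

```erlang
application:set_env(mnesia, dump_log_write_threshold, 100000).
```

or with "-mnesia dump_log_write_threshold 100000" on the erl
command line.)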

The whole system is running like molasses. Still some 
60,000 records to insert. The I/O subsystem seems 
saturated. The process mnesia_controller:dump_and_reply()
has a heap of > 24 MB. A disk_log process has a 40 MB heap.

Of course, I ran all this with a thread pool size of 0.
That was probably another mistake...
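
(The async thread pool is set when the VM starts, with
"erl +A <N>"; the current size can be checked from a running
node:

```erlang
%% 0 means file I/O blocks a scheduler thread.
erlang:system_info(thread_pool_size).
```

)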

With 100,000 records (incl payload), the table loaded
from disk in 45 seconds. With 200,000 records, loading
took 87 seconds.

I'm probably going to let it finish overnight. Will
report any further developments.

