Loading Mnesia dbase

tty@REDACTED
Mon Nov 21 16:30:30 CET 2005


It's possible I'm hitting swap. The box isn't running anything else (other than the basic Linux stuff, xinit, etc.).

Enabling threads did help early on, but the final process finished 12.8 hours later.
It took a mere 10 minutes to hit the 10 million record mark, 17 minutes for 10.34 million, and 38 minutes for 10.688 million.

For some reason the 10 million mark seems to be where the slowdown starts. The run reported here is my 6th attempt at this.

Any other hints? I would not discount my code being less than optimal :)
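
In case it helps, here is roughly what each loader process does (a simplified sketch, not the actual module; the entry record, table name, and the file:consult-style input format are placeholders):

-module(load_sketch).
-export([load_files/1]).

%% Placeholder record: the real table has 7 integer fields per record.
-record(entry, {key, a, b, c, d, e, extra = 0}).

%% One loader process per input file, each doing dirty writes.
load_files(Files) ->
    Parent = self(),
    Pids = [spawn_link(fun() -> load_file(F), Parent ! {done, self()} end)
            || F <- Files],
    [receive {done, P} -> ok end || P <- Pids],
    ok.

load_file(File) ->
    %% Assumes one Erlang term per entry, e.g. [I1,I2,I3,I4,I5,I6].
    {ok, Terms} = file:consult(File),
    lists:foreach(
      fun([K, A, B, C, D, E]) ->
              ok = mnesia:dirty_write(#entry{key = K, a = A, b = B,
                                             c = C, d = D, e = E})
      end,
      Terms).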

Thanks
t

-------- Original Message --------
From: Sean Hinde <sean.hinde@REDACTED>
Apparently from: owner-erlang-questions@REDACTED
To: tty@REDACTED
Cc: ulf@REDACTED, erlang-questions@REDACTED
Subject: Re: Loading Mnesia dbase
Date: Fri, 18 Nov 2005 21:20:07 +0000

> >
> > I'll give the thread pool a try. I'm maxing out at 1.8 GB RAM so  
> > everything is still nicely in RAM space.
> 
> That doesn't leave much room for everything else running on the  
> machine. These symptoms do tend to indicate that you are going into  
> swap.
> 
> Sean
> 
> 
> >
> > -------- Original Message --------
> > From: "Ulf Wiger" <ulf@REDACTED>
> > To: tty@REDACTED, erlang-questions@REDACTED
> > Subject: Re: Loading Mnesia dbase
> > Date: Fri, 18 Nov 2005 21:53:37 +0100
> >
> >>
> >> First of all, try enabling the thread pool.
> >> I would pick a nice round number like 255, i.e.
> >> erl +A 255.
> >>
> >> Also, is your erlang node running out of physical
> >> RAM? It seems like it wouldn't, with 2 GB RAM, but
> >> I guess if you keep items both in process space and
> >> in mnesia, there would be a chance... Anyway, you
> >> can easily track memory use using 'top' or 'vmstat'.
> >>
> >> /Uffe
> >>
> >> On 2005-11-18 20:13:39, <tty@REDACTED> wrote:
> >>
> >>> Hello,
> >>>
> >>> I have a total of 14 million entries in several files to load into a
> >>> Mnesia dbase. I used one process per file and found that the  
> >>> first 10
> >>> million entries took around 15 mins to load into Mnesia (ram_copy).
> >>> However after this initial 10 million entries things started to  
> >>> crawl. A
> >>> 'ps' shows beam mainly blocking in I/O with around 3% CPU usage.
> >>>
> >>> I then restarted the test with a table of 15 fragments (ram_copy)
> >>> thinking I hit some Mnesia limit. This dropped my initial 10 million
> >>> entries to 8.5 mins. However it's now back down to a crawl. The last
> >>> 72000 entries took over 40 mins.
> >>>
> >>> I'm running on SuSE 10 (64-bit), dual AMD Opteron 246, with 2 GB RAM.
> >>> I have 5 processes with 3 million entries each, which is around 50 MB
> >>> of disc space per file. Each entry is a list of 6 integers. Each
> >>> record in the dbase has 7 integers. All processes and Mnesia are
> >>> running on one instance of the VM. Also using dirty_writes.
> >>>
> >>> Does anyone have any suggestions on speeding this up?
> >>>
> >>> Thanks
> >>>
> >>> t
> >>
> >>
> >>
> >> -- 
> >> Ulf Wiger
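
(For reference, the 15-fragment ram_copies table mentioned above was created along these lines; the table name, the #entry{} record from the sketch earlier, and the single-node pool are assumptions, not the exact schema.)

create_table() ->
    %% Assumes the #entry{} record definition from the loader sketch above.
    {atomic, ok} =
        mnesia:create_table(entry,
            [{attributes, record_info(fields, entry)},
             {frag_properties, [{n_fragments, 15},
                                {node_pool, [node()]},
                                {n_ram_copies, 1}]}]).

%% Note: writes that should be spread over the fragments have to go through
%% the mnesia_frag access module, e.g.
%%   mnesia:activity(sync_dirty,
%%                   fun() -> mnesia:write(Entry) end,
%%                   [], mnesia_frag).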


