Batch-loading Mnesia

Sean Hinde sean.hinde@REDACTED
Wed Jul 13 23:59:59 CEST 2005


If you are using the same value of datetime for all rows in your test
routine, then that could explain the behaviour you are seeing.

The reason is that every primary key must be added to a single
entry in the index lookup table, which gets slow as that entry grows.
Even so, an hour or more is spectacularly slow compared with my
experience of such small tables.
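(One workaround — a sketch, not something from the original mail — is
to drop the secondary index for the bulk load and rebuild it once at
the end. mnesia:del_table_index/2 and mnesia:add_table_index/2 are the
standard Mnesia calls; the tick_loader module and load_ticks/1 are
hypothetical names:)

```erlang
-module(tick_loader).
-export([load_ticks/1]).

%% Sketch: bulk-load without the datetime index, then rebuild it, so
%% each dirty_write no longer has to extend the single, ever-growing
%% index entry.
load_ticks(Ticks) when is_list(Ticks) ->
    {atomic, ok} = mnesia:del_table_index(tick, datetime),
    lists:foreach(fun(T) -> mnesia:dirty_write(T) end, Ticks),
    %% Rebuilding once at the end scans the table a single time.
    {atomic, ok} = mnesia:add_table_index(tick, datetime).
```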

Sean

On 13 Jul 2005, at 10:22, Joel Reymont wrote:

> Folks,
>
> It's taking me way over an hour (maybe two or three hours) to  
> load 119,275 records into Mnesia. Parsing the same file without  
> adding the records takes about 6 seconds.
>
> I'm doing everything on the same node, adding records sequentially.  
> Is there a way to speed things up?
>
> The table has the following format:
>
> -record(tick, {
>       symbol,
>       datetime,
>       timestamp,
>       price,
>       size
>      }).
>
> Symbol is a string (list) of 4 characters; datetime is a bignum  
> formed from a date and time such as 20040901,09:39:38 by stripping  
> the comma and the colons. Timestamp is now(); price and size are a  
> float and an integer respectively.
>
> The table is created like this:
>
> mnesia:create_table(tick,
>                  [
>                   {disc_copies, Nodes},
>                   {index, [datetime]},
>                   {type, bag},
>                   {attributes, record_info(fields, tick)}])
>
> The function that adds a record to the database looks like this:
>
> add_tick(Symbol, Date, Time, Price, Size) ->
>     Tick = #tick {
>       symbol = Symbol,
>       datetime = Date * 1000000 + Time,
>       price = Price,
>       size = Size,
>       timestamp = now()
>      },
>     %%F = fun() -> mnesia:write(Tick) end,
>     %%mnesia:transaction(F).
>     mnesia:dirty_write(Tick).
>
> --
> http://wagerlabs.com/uptick




More information about the erlang-questions mailing list