Constantly archiving old mnesia records
Mon Dec 7 23:59:15 CET 2009
The solution I came up with:
Every week, create a new table. Use this week's table and last week's
table to check for duplicates. Each week, remove the table from the
week before last.
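The duplicate check across the two live tables could be sketched like
this (a minimal illustration, not the poster's actual code; the table
names are assumptions):

```erlang
-module(dedup).
-export([seen/2]).

%% Returns true if Id already exists in any of the given mnesia tables,
%% e.g. this week's and last week's id tables.
seen(Tables, Id) ->
    lists:any(fun(Tab) -> mnesia:dirty_read(Tab, Id) =/= [] end, Tables).

%% Usage: dedup:seen([ids_week_2088, ids_week_2087], SomeId)
```

dirty_read is enough here because a stale false negative only means one
extra write, and it avoids transaction overhead on the hot path.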
I created a table_manager server with a timer interval to manage the
weekly rotation (creating the new week's table and dropping the oldest).
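A hypothetical sketch of such a table_manager, assuming a gen_server
driven by timer:send_interval/2; the table-naming scheme (ids_ plus an
absolute week number) is illustrative only:

```erlang
-module(table_manager).
-behaviour(gen_server).
-export([start_link/0]).
-export([init/1, handle_info/2, handle_call/3, handle_cast/2]).

-define(WEEK_MS, 7 * 24 * 60 * 60 * 1000).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

init([]) ->
    {ok, _TRef} = timer:send_interval(?WEEK_MS, rotate),
    {ok, current_week()}.

handle_info(rotate, _PrevWeek) ->
    Week = current_week(),
    %% Create this week's table...
    {atomic, ok} = mnesia:create_table(tab_name(Week),
                                       [{attributes, [id, seen_at]}]),
    %% ...and drop the table from the week before last.
    mnesia:delete_table(tab_name(Week - 2)),
    {noreply, Week}.

handle_call(_Req, _From, State) -> {reply, ok, State}.
handle_cast(_Msg, State) -> {noreply, State}.

%% Absolute week number (monotonic, no year-boundary wraparound).
current_week() ->
    calendar:date_to_gregorian_days(date()) div 7.

tab_name(Week) ->
    list_to_atom("ids_" ++ integer_to_list(Week)).
```

Using an absolute week count rather than the ISO week-of-year avoids
the wraparound at new year when computing "the week before last".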
Hope this helps someone else (though I'm guessing it probably won't)!
On Nov 16, 7:34 pm, adam <> wrote:
> I have an application in which I have several million new records
> coming in each day, and I need to make sure that no duplicate ids get
> past a certain point in the system. To do this, I'm using a
> distributed mnesia database.
> Duplicates can only occur within a few days of each other, so I want
> to ship all old ids off to an archive every week. I could do this by
> simply selecting old records, iterating through, saving them somewhere
> else, and deleting them, but I'm afraid of this locking the entire
> table, and I'm also afraid that this will cause the db file to get
> pretty bloated over time.
> The other way to do it is to make a module implementing the
> mnesia_frag_hash callback behavior which would create a fragment
> containing old records when I call mnesia:change_table_frag with
> add_frag. Then perhaps I could just move the data using the filesystem.
> Any suggestions for doing this?
> Thanks in advance!
> erlang-questions mailing list. See http://www.erlang.org/faq.html
> erlang-questions (at) erlang.org