Handling sets in mnesia, in a scalable way

Christian S <>
Thu May 4 17:37:41 CEST 2006


One can easily handle sets in mnesia by storing a list of items. Example:

-record(group, {id, members, ...}).
-record(user, {id, ...}).

and then have #group.members keep a list of #user.id values. This allows one
to quickly find the users in a given group.
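To make the pattern concrete, here is a minimal sketch of the update path this design implies (function names are my own; fields beyond id/members are omitted). Note that adding one member means reading and rewriting the whole members list inside a transaction:

```erlang
-record(group, {id, members = []}).
-record(user, {id}).

%% Add a user to a group by rewriting the entire members list.
add_member(GroupId, UserId) ->
    F = fun() ->
            [G] = mnesia:read(group, GroupId, write),
            Members = G#group.members,
            case lists:member(UserId, Members) of
                true  -> ok;  % already a member, nothing to do
                false -> mnesia:write(G#group{members = [UserId | Members]})
            end
        end,
    mnesia:transaction(F).
```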


However, in certain applications this practice can lead to huge lists:
* search engines mapping keywords to documents
* web-two-dot-oh-ishious tag-your-{bookmark,images,...} sites with many
users

A keyword could easily be used in thousands of documents. Having 10000
documents in the set would lead to huge records that are slower than
necessary to update.
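One workaround I can imagine (a sketch of my own, not something I have tried at scale) is to turn the set inside out: instead of one record holding a huge list, use a bag table with one small record per (keyword, document) pair, so adding a member writes a single record rather than rewriting the whole set. Table and field names here are assumptions:

```erlang
-record(keyword_doc, {keyword, doc_id}).

%% A bag table allows many records with the same key (the keyword).
init() ->
    mnesia:create_table(keyword_doc,
        [{type, bag},
         {attributes, record_info(fields, keyword_doc)}]).

%% Adding a document to a keyword's set is a single small write.
add_doc(Keyword, DocId) ->
    mnesia:transaction(fun() ->
        mnesia:write(#keyword_doc{keyword = Keyword, doc_id = DocId})
    end).

%% Reading the set back collects all records sharing the key.
docs_for(Keyword) ->
    mnesia:transaction(fun() ->
        [D#keyword_doc.doc_id || D <- mnesia:read(keyword_doc, Keyword)]
    end).
```

The trade-off would be that reading the full set now touches one record per member instead of one big record.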

This is mostly a theoretical question, as I haven't hit this limit yet. It
is just a problem I have been thinking about for some time, as it sometimes
pops up when I think through interesting coding projects.

Has anyone experienced a problem with sets in mnesia records growing too
large, and worked around it in some clever way? Or do things run
surprisingly well even with sets that grow to ten thousand or more members?


More information about the erlang-questions mailing list