Mnesia and the performance of variable size records

Sean Hinde sean.hinde@REDACTED
Sun Jul 6 17:23:52 CEST 2003

On Sunday, July 6, 2003, at 11:37 AM, Rudolph van Graan wrote:

> Hi all,
> I have a question regarding the use of records with fields where the 
> content size changes. As an example, take the following record:
> -record(some_record,
>     {key,field1,detailrecords=[]}).
> This is stored in mnesia as an ordered set.
> What would the performance implications be where you have a large 
> number of records, where the content of detailrecords is a list, where 
> the size is not constant? In a relational database, one would normally 
> define a field with a specific size or of variable length and the 
> database will allocate pages for the storage of the extra data. How 
> does mnesia do this? If I  have to guess for best performance in 
> mnesia, the records all need to be more or less the same size? We can 
> potentially have millions of records like this, but the size of the 
> variable field is a couple of entries, usually less than 10.

This is exactly how we use mnesia and it works well. As far as I can 
tell, what happens is that the underlying ets table allocates just as 
much memory as it needs for each new row, regardless of the size of the 
entry - hence there is no additional performance penalty for creating 
variable sized rows.
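A minimal sketch of the setup being discussed (the module name and sample values are mine, not from the original mail) - creating the table from the question as a RAM-only ordered set and writing rows whose detailrecords lists differ in length, with no field sizes declared anywhere:

```erlang
-module(varsize_demo).
-export([start/0]).

%% The record from the question: detailrecords is a list of
%% varying (usually small) length.
-record(some_record, {key, field1, detailrecords = []}).

start() ->
    ok = mnesia:start(),
    %% ordered_set table, default ram_copies; no size declarations needed.
    {atomic, ok} = mnesia:create_table(some_record,
        [{type, ordered_set},
         {attributes, record_info(fields, some_record)}]),
    %% Rows of different sizes coexist in the same table.
    F = fun() ->
            ok = mnesia:write(#some_record{key = 1, field1 = a,
                                           detailrecords = []}),
            ok = mnesia:write(#some_record{key = 2, field1 = b,
                                           detailrecords = lists:seq(1, 10)})
        end,
    {atomic, ok} = mnesia:transaction(F),
    ok.
```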

If the same row is overwritten with a larger entry sometime later then 
ets "flags" the old entry as deleted and creates a new entry. There is 
presumably a slight performance hit at this point but nothing very 
noticeable. If the new entry is the same size (or smaller, I think?) 
then the old one is just overwritten.
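The growth case above can be observed from the outside via ets:info(Tab, memory), which reports the words a table uses. A rough sketch (the table name and payloads are illustrative) showing that replacing a row with a larger one under the same key simply makes the table use more memory, with no preallocation involved:

```erlang
-module(ets_resize_demo).
-export([run/0]).

run() ->
    T = ets:new(demo, [set, public]),
    true = ets:insert(T, {key1, [a]}),
    Small = ets:info(T, memory),          % words used with the small row
    %% Overwrite the same key with a much larger payload.
    true = ets:insert(T, lists:duplicate(1, {key1, lists:seq(1, 100)})),
    Large = ets:info(T, memory),          % words used after the rewrite
    true = Large > Small,                 % the table grew to fit the new row
    true = ets:delete(T),
    ok.
```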

ets re-hashes itself at various trigger points at which times the 
entries flagged for deletion are removed completely.

It all works extremely well in my experience while remaining very fast.
