Mnesia and Memory question
Casper
casper2000a@REDACTED
Tue Jan 4 04:57:26 CET 2005
Hi all,
Thanks for your replies. They helped me a lot.
Cheers,
Eranga
-----Original Message-----
From: Valentin Micic [mailto:valentin@REDACTED]
Sent: Tuesday, January 04, 2005 1:00 AM
To: Casper; erlang-questions@REDACTED
Subject: Re: Mnesia and Memory question
>
> 1. When I created a disc_copies table, the full table resides in memory
> as well. Because of this, if I load about a million records, the memory
> utilization is far too great (a few gigabytes). For an HLR, SMSC kind of
> system, is this a good practice?
>
An HLR "kind of application" does not operate on huge sets of data, but
consists of a series of specific lookups. In my experience a disc_only_copies
table may be a reasonably good fit as well.
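For what it's worth, a disc_only_copies table is declared at creation time;
a minimal sketch (the subscriber record and its fields are made up here for
illustration):

```erlang
%% Hypothetical HLR subscriber table kept on disk only. With
%% disc_only_copies the records live in a dets file rather than in RAM,
%% so memory stays small at the cost of a disk access per lookup.
-record(subscriber, {msisdn, imsi, profile}).

create() ->
    mnesia:create_table(subscriber,
                        [{disc_only_copies, [node()]},
                         {attributes, record_info(fields, subscriber)}]).
```

Lookups then go through the usual mnesia:read/1 inside a transaction or
dirty_read for speed; only the requested record is pulled off disk.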
> 2. Is there any way to limit the usage of memory and keep the rest on
> disk? For example, only the latest loaded pages, worth 600MB, in memory.
>
Not out of the box... At one stage I was developing an ETS-based table
cache that relied on key collisions and hashing to limit the size of the ETS
table (i.e. using the desired size of the table as the Range argument in
erlang:phash/2). I had reasonably good results, but gave up after discovering
that disc_only_copies was fast enough -- there were a few challenges, e.g.
how to provide simple replication and synchronisation between remote table
copies and the local cache.
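The collision-based cache idea can be sketched roughly like this (a minimal
illustration only, not the actual code; the module and function names are
made up):

```erlang
-module(hash_cache).
-export([new/1, put/3, get/2]).

%% Create a bounded cache: Size is later used as the Range argument to
%% erlang:phash/2, so the ETS table can never hold more than Size slots.
new(Size) ->
    Tab = ets:new(hash_cache, [set, public]),
    {Tab, Size}.

%% Store {Key, Value} in the slot the key hashes to. A colliding key
%% simply overwrites the previous occupant -- that collision is what
%% keeps the table size limited.
put({Tab, Size}, Key, Value) ->
    Slot = erlang:phash(Key, Size),
    ets:insert(Tab, {Slot, Key, Value}),
    ok.

%% A hit requires both the slot and the original key to match;
%% otherwise the slot is empty or holds a colliding key (a miss).
get({Tab, Size}, Key) ->
    Slot = erlang:phash(Key, Size),
    case ets:lookup(Tab, Slot) of
        [{Slot, Key, Value}] -> {ok, Value};
        _                    -> miss
    end.
```

The trade-off is that a popular entry can be evicted by any key hashing to
the same slot, which is acceptable for a cache but is also why keeping it
consistent with remote Mnesia copies gets awkward.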
>
> 4. To maintain CDRs, can I use disc_only tables of Mnesia?
>
That depends on what you mean by maintenance... If you mean CDR logging,
I'd say use disk_log instead. We're doing that and we're quite happy with
it. If you want CDRs for analysis, then collect them into a disk_log and
periodically move them into Mnesia (or some relational database, if you must
;-).
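A minimal disk_log sketch for CDR logging (the log name, file path, and CDR
fields are hypothetical; a wrap log bounds disk usage by cycling through a
fixed number of fixed-size files):

```erlang
%% Hypothetical CDR fields for illustration.
CallerId = "27831234567",
CalleeId = "27837654321",
DurationSecs = 42,

%% Open a wrap log of 8 files x 10MB each; terms are written in
%% disk_log's internal (binary) format, so they can be read back
%% later with disk_log:chunk/2 for batch loading into Mnesia.
{ok, Log} = disk_log:open([{name, cdr_log},
                           {file, "/var/log/cdrs/cdr"},
                           {type, wrap},
                           {size, {10*1024*1024, 8}}]),
ok = disk_log:log(Log, {cdr, os:timestamp(), CallerId, CalleeId, DurationSecs}),
ok = disk_log:close(Log).
```

disk_log:log/2 is synchronous; for high CDR rates disk_log:alog/2 (asynchronous)
trades a delivery guarantee for throughput.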
Valentin.