beating mnesia to death (was RE: Using 4Gb of ram with Erlang VM)

Jani Launonen jabba@REDACTED
Wed Nov 9 07:40:49 CET 2005


Hello!

On Wed, 9 Nov 2005, Ulf Wiger (AL/EAB) wrote:
> To sum up my experiences on the 16 GB machine:
>
> - 64-bit erlang seems to work like a charm
> - I stored 70 million records in a table in
>  one benchmark (15 GB of RAM usage), and
>  built a 5-million record replicated disc_copies
>  table in another. Everything worked pretty
>  much as expected, which was more than I
>  expected, actually(*).  ;-)
> - Loading 5 million records from disk took about
>  27 seconds; loading over the network took
>  about as long. Building the table from
>  scratch took over two hours.
> - I ran 6 million simultaneous processes; 20 M
>  when I used hibernate/3. Spawn and send times
>  were unaffected.
>
> My main gripe is that erlang terms take up
> a huge amount of space on the heap, e.g.
> 16 bytes per character in a string -- not
> particularly sexy. In order to get large
> amounts of data in there, you pretty much
> have to use binaries.

Great experiments, pushing Erlang where it hasn't gone before (to my
knowledge)! I also tried to push Erlang a bit in the past, on the
university's 8 x 900 MHz UltraSPARC machine, with a communicating ring of
20 M (or so) processes. What I was trying to find out was how larger page
sizes (4 KB, 64 KB, 512 KB, 4 MB and so on) would affect such a test. The
results didn't reveal anything useful. Perhaps your Mnesia test would
benefit from a larger page size? If you're using Solaris, you could use
the ppgsz command to set different page sizes. Or you could send the test
scripts to me.
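For the record, the ppgsz invocation I have in mind looks roughly like
this (from memory, so please check the Solaris ppgsz(1) man page; the
4M value is just an example):

# ask for 4 MB pages for the heap of the emulator being started;
# the preference may or may not be granted (pmap -s <pid> shows
# which page sizes are actually in use)
ppgsz -o heap=4M erl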
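By the way, on the hibernate/3 figure above: for anyone who wants to
reproduce the per-process saving, a rough sketch along these lines
should do. It is untested and the module and function names are just
made up; it spawns N processes that either hibernate or block in an
ordinary receive, then prints erlang:memory(processes):

-module(hib_bench).
-export([run/2, hib_loop/0, plain_loop/0]).

%% Spawn N processes that either hibernate or block in a plain
%% receive, then print the total memory used by all processes.
run(N, UseHibernate) ->
    Loop = case UseHibernate of
               true  -> fun() -> ?MODULE:hib_loop() end;
               false -> fun() -> ?MODULE:plain_loop() end
           end,
    Pids = [spawn(Loop) || _ <- lists:seq(1, N)],
    timer:sleep(1000),    %% give the heaps a moment to settle
    io:format("~w processes (hibernate=~w): ~w bytes~n",
              [N, UseHibernate, erlang:memory(processes)]),
    [exit(P, kill) || P <- Pids],
    ok.

%% hibernate/3 throws away the call stack and shrinks the heap; the
%% process continues in hib_loop/0 when the next message arrives.
hib_loop() ->
    receive _Msg -> ok after 0 -> ok end,
    erlang:hibernate(?MODULE, hib_loop, []).

plain_loop() ->
    receive _Msg -> ok end,
    plain_loop().

Comparing hib_bench:run(100000, false) with hib_bench:run(100000, true)
on the same node (one run at a time) should show the difference in
resident process memory.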
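And about the 16 bytes per character: that is the two words of a cons
cell on a 64-bit emulator. If your emulator has erts_debug:flat_size/1
(it reports a term's size in words), the difference against a binary is
easy to see in the shell; the comments below are approximate, since the
exact binary overhead depends on the term:

Str = lists:duplicate(60, $a),   %% a 60-character string
io:format("as a list:   ~w words~n",
          [erts_debug:flat_size(Str)]),                 %% one cons cell (2 words) per char
io:format("as a binary: ~w words~n",
          [erts_debug:flat_size(list_to_binary(Str))]). %% bytes packed 8 per word, plus a small header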

> /Uffe
>
> (*) I know that OTP doesn't have this type
> of hardware in their lab, so this stuff isn't
> really tested before release. Nice then to see
> that it works exactly as advertised.

Cheers,

-+-+-+-
Jani Launonen


