[erlang-questions] Any canonical bandwidth benchmarks for clusters?

Allen McPherson mcpherson@REDACTED
Wed Dec 3 19:06:08 CET 2008


Depends on how you define significantly.  We have MPI benchmarks
that use the IP over IB interface and run at 700 MB/sec.  Mine
is 1% of that.  I would expect better.  In any case, I will be
working on the benchmark to test different message sizes.
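For what it's worth, here is a minimal sketch of the kind of size sweep
I have in mind: a point-to-point echo between two distributed Erlang
nodes, timing payloads from 1 KB to 1 MB.  The module and registered
names are made up, and `erlang:monotonic_time/1` assumes a modern
release (on older ones, `timer:tc/1` or `erlang:now/0` would do the
timing instead).

```erlang
%% bw_sweep: hypothetical point-to-point bandwidth sweep sketch.
-module(bw_sweep).
-export([run/1, echo_start/0]).

%% Run echo_start/0 on the remote node first; it registers a process
%% that replies with a tiny ack for every binary it receives.
echo_start() ->
    register(echo, spawn(fun loop/0)).

loop() ->
    receive
        {From, Bin} when is_binary(Bin) ->
            From ! ack,
            loop()
    end.

%% run(Node) sends 100 messages at each payload size (1 KB .. 1 MB)
%% and prints the observed throughput in MB/sec.
run(Node) ->
    lists:foreach(
      fun(Size) ->
              Bin = <<0:(Size * 8)>>,        %% Size bytes of zeros
              N = 100,                       %% messages per size
              T0 = erlang:monotonic_time(microsecond),
              [begin
                   {echo, Node} ! {self(), Bin},
                   receive ack -> ok end
               end || _ <- lists:seq(1, N)],
              T1 = erlang:monotonic_time(microsecond),
              %% bytes per microsecond is numerically MB per second
              MBps = (Size * N) / (T1 - T0),
              io:format("~p bytes: ~.1f MB/sec~n", [Size, MBps])
      end,
      [1 bsl K || K <- lists:seq(10, 20)]).  %% 1 KB .. 1 MB
```

Running `bw_sweep:run('other@host')` from a shell on the local node
(after `bw_sweep:echo_start()` on the remote one) should show whether
throughput really falls off as message size grows.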

We have MPI benchmarks that run at 1.5 GB/sec over the IB DDR
NIC.  I would not expect to achieve that, because Erlang would
not interleave traffic over the two IB interfaces.

--
Al



> First of all, you would always expect IP over IB to be significantly
> slower than the MPI/IB interface, because the latter uses RDMA and
> bypasses the OS's TCP/IP stack.  In my experience with IP/IB, sending
> large messages degrades throughput quite noticeably.  Try to split
> messages into smaller chunks and redo the benchmark for different
> message sizes.
>
> Serge
>
> Allen McPherson wrote:
>> I'm testing some code that is showing terrible bandwidth
>> on our 40+ node Infiniband cluster (7 MB/sec for a ring benchmark!).
>> This code uses the IP over IB interface and sends big (1+ MB)
>> messages around (binaries and non-binaries).  MPI code runs
>> at good rates on this cluster.
>>
>> Before I post my code I thought I'd see if there were
>> existing bandwidth benchmarks for distributed Erlang
>> on a locally connected cluster. Haven't been able to
>> find any via Google.  Does anyone know of, or have, code
>> that I might use to test?
>>
>> Lacking other tests I'll put together a longer post with
>> my code included and maybe someone else with a similar
>> cluster could run it?
>>
>> Thanks
>> --
>> Allen McPherson
>> Los Alamos National Laboratory
