distributed performance test
Mon Apr 25 12:40:50 CEST 2005
On 25 Apr 2005, at 10:30, Matthias Lang wrote:
> Sean Hinde writes:
>> We have seen such problems when networks/NICs are set up incorrectly
>> (e.g. one end forced to 10/half duplex, the other to 100/full duplex).
> Methinks the details have gotten messed up in your memory.
That, or perhaps it was an expression of artistic licence designed to
increase the impact. Clearly such flights of fancy get the drubbing
they deserve on this mailing list :-)
(BTW I have copied the list on your answer because this is actually
quite useful information)
> There's no way you can have one end at 10Mbit and the other at 100 and
> still have communication (10Mbit uses manchester coding, 100Mbit uses
> something completely different).
> You're probably remembering one end forced to 100/full duplex with no
> autonegotiation and other end on autonegotiate. One end then uses
> 100/full and the other 100/half---the standard requires this (broken)
Very interesting. This fits more closely with my actual memory. We
saw this (I was told, by another well-meaning techie) when
auto-negotiation didn't "work properly" with Solaris, meaning that both
ends needed to be pinned to the same settings. I never dug very deeply,
and the UNIX guys always ended up fixing it, but it sounds like the UNIX
guys were fixing a problem caused by the network guys.
On reflection that sounds about right.
> The clearest symptom is that the half-duplex side sees late
> collisions. _Late_ collisions should never be present on an ethernet.
> Up higher, you see TCP connections which have occasional low
> throughput and difficulty getting started.
> I've seen this twice in the field, both times it's involved an
> expensive switch and a well-meaning techie who insisted on configuring
> everything for "maximum performance".
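The symptoms described above can be checked directly from the OS. A minimal sketch for Linux follows; the sysfs paths are Linux-specific and the interface names are examples only. On Solaris (as in the anecdote) the equivalent tools would be ndd or dladm instead.

```shell
# Report the negotiated speed/duplex of every interface via sysfs (Linux).
# Virtual interfaces such as "lo" have no meaningful duplex, so each read
# is guarded and falls back to "n/a".
for iface in /sys/class/net/*; do
    name=$(basename "$iface")
    speed=$(cat "$iface/speed" 2>/dev/null || echo "n/a")
    duplex=$(cat "$iface/duplex" 2>/dev/null || echo "n/a")
    echo "$name: speed=${speed} duplex=${duplex}"
done

# If ethtool is installed, it also reports whether auto-negotiation is on,
# which is the setting at the heart of the mismatch described above, e.g.:
#   ethtool eth0 | grep -Ei 'speed|duplex|auto-negotiation'
```

A half-duplex end of a mismatched link shows climbing collision counters (visible in `ip -s link show <iface>`, or, depending on the driver, a late-collision counter in `ethtool -S <iface>`), which matches the "late collisions" symptom Matthias describes.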
More information about the erlang-questions mailing list