[erlang-questions] why is gen_tcp:send slow?

Per Hedeland <>
Wed Jun 25 08:51:27 CEST 2008

Johnny Billquist <> wrote:
>No. RTT can not be used to calculate anything regarding traffic bandwidth.
>You can keep sending packets until the window is exhausted, no matter what the 
>RTT says. The RTT is only used to calculate when to do retransmissions if you 
>haven't received an ACK.

Well, yes and no - RTT by itself cannot be used to calculate bandwidth,
and TCP itself doesn't need to "know" the bandwidth anyway, but the
possible throughput is dependent on RTT: Since you can have at most one
window size of un-ack'ed data outstanding, and data can't be ack'ed
until it's been received:-), the throughput is bounded by the ratio of
(max) window size to RTT. With only 16 bits of window size available and
an RTT of 300 ms, the theoretical max throughput is 65535/0.3 bytes/s or
~ 1.75 Mbit/s.
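As a quick sanity check on that arithmetic (a sketch of my own, not part of the original mail), the window/RTT bound is easy to compute:

```python
# Throughput ceiling of a TCP connection: at most one window of
# un-acked data can be in flight per round trip.
def max_throughput_bps(window_bytes, rtt_seconds):
    """Upper bound on throughput, in bits per second."""
    return window_bytes * 8 / rtt_seconds

# Unscaled 16-bit window (65535 bytes) with a 300 ms RTT:
bps = max_throughput_bps(65535, 0.3)
print(round(bps / 1e6, 2))  # -> 1.75 (Mbit/s)
```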

Of course this problem, a.k.a. "long fat pipe", was solved long ago as
far as TCP is concerned - enter window scaling (RFC 1323), which allows
for the 16 bit window size to have a unit of anything from 1 to
(theoretically) 2^255 bytes. These days it should also actually work
most everywhere. Nevertheless, the max window size is under the control
of the TCP "user", and if the kernel and/or the application limits the
size of the receive buffer to something less than 64kB, window scaling
can't help.
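For illustration (again my sketch, not from the mail): the application-side limit described above is the ordinary SO_RCVBUF socket option, which caps the window TCP can advertise no matter what scaling negotiates. In Python it looks like:

```python
import socket

# Cap the receive buffer at 32 kB; the advertised TCP window can
# never exceed this, so window scaling buys nothing here.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 32 * 1024)
# The kernel may round the value up (Linux doubles it for
# bookkeeping overhead); query the effective size back:
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
s.close()
```

In Erlang the corresponding gen_tcp option is {recbuf, Size}.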

Whether this is Edwin's problem I don't know - the "fixed packet rate"
observation may actually be more or less correct: As you explained, TCP
doesn't ack packets, it acks bytes - but the actual *sending* of acks is
definitely related to the reception of packets (or "segments" if you
prefer), in particular in a one-way data transfer where there are no
outgoing data packets that can have acks "piggy-backed". The details may
vary, but in general in such a case an ack is sent for every other
packet received, or after a ("long" - 200 ms) timeout if no packets are
received in the meantime.

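To put rough numbers on that ack rate (my own back-of-the-envelope, assuming full-sized 1460-byte segments and an ack for every second segment):

```python
# Approximate ack rate for a one-way bulk transfer, assuming the
# receiver delay-acks every second full-sized segment.
def ack_rate(throughput_bps, mss_bytes=1460, segments_per_ack=2):
    segments_per_second = throughput_bps / (mss_bytes * 8)
    return segments_per_second / segments_per_ack

# At the ~1.75 Mbit/s ceiling computed earlier in the mail:
print(round(ack_rate(1.75e6)))  # -> 75 (acks per second)
```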
--Per Hedeland
