Mon Aug 23 19:11:42 CEST 2010
On Linux (and maybe some others), by default, when binding a socket to
the IPv6 wildcard address, the socket accepts both IPv6 and IPv4
traffic, the latter in the form of IPv4-mapped IPv6 addresses. Trying
to bind another socket to the same port on the IPv4 wildcard then
fails with EADDRINUSE.
On *BSD, by default (and always on Windows I believe), the situation is
the opposite - the IPv6-wildcard socket accepts only IPv6, and you need
to bind another socket for IPv4 if you want to support both.
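The difference can be observed with a small C sketch (the function name
is mine, and which branch you hit depends on the OS default described
above): bind the IPv6 wildcard on an ephemeral port, then attempt a
second bind of the IPv4 wildcard on the same port.

```c
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind the IPv6 wildcard on a kernel-chosen port, then try to bind the
 * IPv4 wildcard on the same port.  Returns 0 if the second bind succeeds
 * (the *BSD/Windows default), EADDRINUSE if it fails the way the Linux
 * default does, and -1 on any unexpected error. */
int second_bind_result(void)
{
    struct sockaddr_in6 a6;
    struct sockaddr_in a4;
    socklen_t len = sizeof a6;
    int result = -1;

    int s6 = socket(AF_INET6, SOCK_STREAM, 0);
    if (s6 < 0)
        return -1;

    memset(&a6, 0, sizeof a6);
    a6.sin6_family = AF_INET6;
    a6.sin6_addr = in6addr_any;            /* the IPv6 wildcard, :: */
    a6.sin6_port = 0;                      /* let the kernel pick a port */
    if (bind(s6, (struct sockaddr *)&a6, sizeof a6) == 0 &&
        getsockname(s6, (struct sockaddr *)&a6, &len) == 0) {
        int s4 = socket(AF_INET, SOCK_STREAM, 0);
        if (s4 >= 0) {
            memset(&a4, 0, sizeof a4);
            a4.sin_family = AF_INET;
            a4.sin_addr.s_addr = htonl(INADDR_ANY);
            a4.sin_port = a6.sin6_port;    /* same port, IPv4 wildcard */
            if (bind(s4, (struct sockaddr *)&a4, sizeof a4) == 0)
                result = 0;                /* second bind allowed */
            else if (errno == EADDRINUSE)
                result = EADDRINUSE;       /* dual-stack socket owns it */
            close(s4);
        }
    }
    close(s6);
    return result;
}
```

On a Linux box with the stock net.ipv6.bindv6only=0 setting this
returns EADDRINUSE; on a *BSD default it returns 0.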
I believe there is some consensus in the networking community that the
first behavior is "bad" for various reasons. Be that as it may, it is
unlikely to change, and applications tend to ensure the second behavior
by unconditionally setting the IPV6_V6ONLY socket option, both to get
consistent behavior across OSes and to avoid the "badness".
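In C that idiom looks roughly like the sketch below (the function name
is mine): set IPV6_V6ONLY on the IPv6 socket before binding, and then a
separate IPv4 socket can be bound to the same port everywhere, since
the IPv6 socket no longer claims the IPv4-mapped address space.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind an IPv6 wildcard socket with IPV6_V6ONLY set, then bind a
 * separate IPv4 wildcard socket on the same port.  With IPV6_V6ONLY
 * the second bind succeeds even where the dual-stack default would
 * give EADDRINUSE.  Returns 0 on success, -1 on any failure. */
int bind_v6only_pair(void)
{
    struct sockaddr_in6 a6;
    struct sockaddr_in a4;
    socklen_t len = sizeof a6;
    int on = 1;
    int result = -1;

    int s6 = socket(AF_INET6, SOCK_STREAM, 0);
    if (s6 < 0)
        return -1;

    memset(&a6, 0, sizeof a6);
    a6.sin6_family = AF_INET6;
    a6.sin6_addr = in6addr_any;            /* :: */
    a6.sin6_port = 0;                      /* kernel picks a free port */
    if (setsockopt(s6, IPPROTO_IPV6, IPV6_V6ONLY, &on, sizeof on) == 0 &&
        bind(s6, (struct sockaddr *)&a6, sizeof a6) == 0 &&
        getsockname(s6, (struct sockaddr *)&a6, &len) == 0) {
        int s4 = socket(AF_INET, SOCK_STREAM, 0);
        if (s4 >= 0) {
            memset(&a4, 0, sizeof a4);
            a4.sin_family = AF_INET;
            a4.sin_addr.s_addr = htonl(INADDR_ANY);
            a4.sin_port = a6.sin6_port;    /* same port, IPv4 wildcard */
            if (bind(s4, (struct sockaddr *)&a4, sizeof a4) == 0)
                result = 0;
            close(s4);
        }
    }
    close(s6);
    return result;
}
```

This is essentially what I am suggesting inet_drv could do on the
programmer's behalf.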
Since one strong point of Erlang/OTP is that it isolates the Erlang
programmer, as far as possible, from these annoying differences between
OSes, could it be considered to make inet_drv always set IPV6_V6ONLY
(where it exists) on IPv6 sockets?