[erlang-questions] On OTP rand module difference between OTP 19 and OTP 20
Thu Aug 31 17:19:26 CEST 2017
On 2017年08月31日 木曜日 14:20:55 Jesper Louis Andersen wrote:
> On Thu, Aug 31, 2017 at 3:42 PM Loïc Hoguin <> wrote:
> > I certainly hope this is not the general policy for OTP. We program
> > against the documentation. The documentation *is* our reality.
> I think it is fair to evaluate on a case by case basis. Sometimes the
> documentation and the implementation do not match up. This means either
> the documentation or the implementation is wrong (not xor here!). Which
> one is wrong depends a bit on the case, and there are definitely
> borderline situations where it is very hard to determine which way you
> should let the thing fall.
> I don't think you can make blanket statements on which way you should lean
> because there are good counterexamples in both "camps" so to speak.
> Another view is that the documentation is the specification. But again,
> both the specification and the implementation can be wrong, and sometimes
> the correct operation is to change the specification. When I worked with
> formal semantics, it was quite common that you altered the specification in
> ways that let you prove a meta-theoretic property about the specification.
> Not altering it would simply make the proof way too complicated and hard.
> Perhaps even impossible. It is an extreme variant of letting the
> documentation match the reality in a certain sense.
The other route is to make existing functions do what they say they will whenever possible, add new functions that provide the prescribed functionality, and deprecate and annotate (with warnings where appropriate) the ones that cannot provide what they originally claimed. And be quite noisy about all of this.
OTP has many, many examples of this. It prevents surprise breakage of old code that depends on some particular (and occasionally peculiar) behavior while forging a path ahead -- allowing users to make an informed decision to review and update old code or stick with an older version of the runtime (which tends to be the more costly choice in many cases, but at least it can be an informed decision).
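The mechanics for this already exist in the toolchain. A minimal sketch (the module and function names here are hypothetical, but `-deprecated/1` is the real attribute that tools such as xref use to warn callers):

```erlang
-module(mylib).
-export([old_fun/0, new_fun/0]).

%% Flag old_fun/0 as deprecated; xref (and newer compilers) will then
%% warn anyone still calling it, without silently changing its behaviour.
-deprecated([{old_fun, 0}]).

%% Kept alive, still honouring its documented contract, but flagged.
old_fun() -> new_fun().

%% The replacement, documented properly from day one.
new_fun() -> ok.
```

This is exactly the "noisy" path: the old function keeps its promise until removal, and callers get an explicit nudge toward the new one.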
Consider what happened with now/0, for example. We now have a richer family of time functions, but it was never considered acceptable to quietly shuffle the documentation around while adding contextual execution features (that is to say, hidden state) that would cause now/0 to behave in a new way. And now/0 is deprecated.
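For reference, a sketch of how that family splits the roles now/0 used to conflate (the module name is hypothetical; the erlang BIFs are the documented OTP 18+ time API):

```erlang
-module(time_demo).
-export([stamp/0]).

%% now/0 was (ab)used for three distinct purposes; the replacement API
%% gives each purpose its own function instead of changing now/0 itself.
stamp() ->
    Wall = erlang:system_time(microsecond),    %% wall-clock reading
    Mono = erlang:monotonic_time(),            %% for measuring intervals only
    Uniq = erlang:unique_integer([positive]),  %% uniqueness, no time meaning
    {Wall, Mono, Uniq}.
```

Old code calling now/0 keeps working (with a deprecation warning), while new code picks the function that matches its actual intent.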
> I think it is fair to evaluate on a case by case basis.
OK. I'll buy that.
In an EXTREMELY limited number of cases you will have a function that simply cannot live up to its spec without a ridiculous amount of nitpicky work that wouldn't really matter to anyone. This is not one of those cases. And in this case we are talking about providing a largely pure API in the standard library, not some meta behavior that acts indirectly through a rewrite system based on proof mechanics, where the effects of improper definitions are magnified with each transformation.
So I get what you're saying, but this is not one of those cases, and for those odd cases it is much safer to deprecate functions, mark them as unsafe, provide compiler warnings and so on if the situation is just THAT BAD, and write a new function that is properly documented in a way that won't suddenly change later. For a functional language's standard library the majority of functions aren't going to be magically tricky, and specs are concrete promises while implementations are ephemeral.
At least this change happened in a major release, not a minor one. If it is forgivable anywhere, it is in a major release. The tricky bit is that the promises a language's standard libs make to authors are a bit more sticky than those made by separate libraries provided in a given language. And yes, that is at least as much part of the social contract inherent in the human part of the programming world as it is part of the technical contract implicit in published documentation. The social part of the contract is more important, from what I've seen. Consider why Ruby and many previously popular JS frameworks are considered to be cancer now -- it's not just that things changed, it is that the way they changed jerked people around.
The issue I am addressing is a LOT more important than whether `0 =< X < 1.0`, of course (yeah, on this one issue, we'll figure it out). It is a general attitude that is absolutely dangerous.
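(For anyone wanting to poke at the concrete question, here is a quick empirical sanity check, a sketch with a hypothetical module name. A passing run is evidence, not proof -- in particular it cannot tell you whether 0.0 is actually reachable, which is the crux of the spec dispute.)

```erlang
-module(rand_check).
-export([check/1]).

%% Draw N samples from rand:uniform/0 and confirm every one falls in the
%% documented half-open interval [0.0, 1.0).
check(N) ->
    lists:all(fun(_) ->
                      X = rand:uniform(),
                      X >= 0.0 andalso X < 1.0
              end,
              lists:seq(1, N)).
```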
>> It is in general safer to change the documentation to match the reality.
This is as corrosive a statement as can be. We need to think very carefully about that before this sort of thinking starts becoming common in other areas of OTP in general.