[eeps] Multi-Parameter Typechecking BIFs
Wed Mar 11 01:10:30 CET 2009
On 7 Mar 2009, at 12:32 am, Bjorn Gustavsson wrote:
> [I am going through my star-marked emails...]
> On Thu, Feb 26, 2009 at 12:52 AM, Richard O'Keefe
> <> wrote:
>> In average/1, what is the reason for
>> V0 = if
>>        V10 =:= V20 -> V10;
>>        is_float(V10) -> 0.5*(V10+V20)
>> rather than
>> V0 = (V10+V20)*0.5
> To save heap space. Instead of allocating another three words on the
> heap, we'll just return an already existing float.
Oddly enough, I was just reading a paper last night about the
TILT compiler for ML. Originally they accessed the intermediate
data structures directly using pattern matching, e.g.,
fun f (ADD (x,y)) = ...
So that they could do experiments, they switched over to an
approach they called "the curtain", which is rather like
signature S = sig
    type t
    datatype x = ADD of t * t | ....
    val expose : t -> x
    val hide : x -> t
end
where the abstract data type is S.t and the type used for pattern
matching is S.x. The code now looked like
fun f tree =
    case expose tree of
        ADD (x,y) => ...
The next thing they did was to try a range of alternative
representations for the intermediate languages.
Their conclusion was that spending a lot of extra time to
save space wasn't a good idea, at least for their problem.
There are two questions about this floating-point example.
(It's only 3 words for doubles? Oh, unaligned doubles.)
(1) This test always costs time. But how much space
does it actually save? I know it saves a boxed double
when the test succeeds, but what proportion of the time
does that actually happen?
Avoiding allocation saves allocation and garbage collection
time. But the avoidance itself costs time, so what's the
reason to believe that it saves more than it costs?
(2) Look at the code again:
V0 = if V10 =:= V20 -> V10
      ; is_float(V10) -> 0.5*(V10+V20)
     end
IEEE 754 arithmetic has been around for a *long* time
now. So does it really fall to me to point out that
the expressions V10 and 0.5*(V10+V20) are *not*
always equivalent when V10 =:= V20 in IEEE arithmetic?
(For one thing, if V10 = V20 is large enough, V10+V20
overflows, so 0.5*(V10+V20) is not V10.)
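A quick demonstration, using Python, whose floats are IEEE 754
doubles (Erlang would behave differently here, since it refuses to
produce infinities, but that difference only reinforces the point
that the two expressions are not interchangeable):

```python
# With x == y, 0.5*(x+y) is usually x, but x+y can overflow to
# infinity even though half the true sum would be finite.
x = 1.0e308
y = 1.0e308
assert x == y                    # the =:= branch would fire
print(0.5 * (x + y))             # inf: x+y overflows before the halving
assert 0.5 * (x + y) != x        # so the two expressions disagree
```

In Erlang the overflow would raise badarith instead of yielding
infinity, so returning V10 on the =:= branch silently changes which
inputs succeed at all.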
Erlang messes with IEEE 754 arithmetic enough to be
dangerous. I do have a fair idea of what laws IEEE
arithmetic satisfies. I *don't* have a clear idea of
what laws Erlang floating-point arithmetic satisfies.
It may well be that Erlang's messing around makes this
particular difference moot, but do you want to bet on it
doing so forever?