Speed of floating point math

Robert Virding <>
Tue Mar 28 18:20:07 CEST 2000

James Hague <> writes:
>Consider this simple function:
>mag_squared(X,Y,Z) -> X*X + Y*Y + Z*Z.
>and this little test program:
>mag_test(0) -> ok;
>mag_test(N) -> mag_squared(0, 1, 0), mag_test(N-1).
>This takes a barely noticeable amount of time for mag_test(100000), less
>than 1/8 second.  Now change the last line to this:
>mag_test(N) -> mag_squared(0.0, 1.0, 0.0), mag_test(N-1).
>Now mag_test(100000) takes 4 seconds on the same machine; at least 32 times
>slower.  Adding type guards to mag_squared makes no difference.  I haven't
>torn into the runtime yet, but it's difficult to come up with a 32x
>difference.  Are floating point values being heap allocated?  Is most of
>the loop being optimized away in the integer version because the arguments
>to mag_squared are constants?

The main reason is that floats are heap allocated.  Type guards are only 
used to determine whether a clause is selected; at the moment they are 
not used for optimisation.
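As a concrete sketch (not from the original post; guard names follow the is_float/1 convention), the guards below only affect clause selection -- the compiler does not use them to unbox X, Y and Z, so every multiplication still produces a heap-allocated float:

```erlang
-module(mag).
-export([mag_squared/3]).

%% The guards only decide whether this clause matches;
%% the arithmetic still goes through boxed, heap-allocated
%% floats, which is where the slowdown comes from.
mag_squared(X, Y, Z) when is_float(X), is_float(Y), is_float(Z) ->
    X*X + Y*Y + Z*Z.
```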

So far the speed, or rather the lack of it, of floating point arithmetic 
has not really been a problem.  I think that very few applications use 
floating point arithmetic heavily.
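For anyone who wants to quantify the gap themselves, one way is timer:tc/3, which returns the elapsed wall-clock time in microseconds. This is a sketch only: it assumes the test loop has been split into two hypothetical functions, mag_test_int/1 and mag_test_float/1, in a module called mag.

```erlang
%% In the Erlang shell, after compiling the module:
%% timer:tc(Mod, Fun, Args) -> {Microseconds, Result}.
{IntUs, ok}   = timer:tc(mag, mag_test_int,   [100000]),
{FloatUs, ok} = timer:tc(mag, mag_test_float, [100000]),
io:format("int: ~p us, float: ~p us~n", [IntUs, FloatUs]).
```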
