# [erlang-questions] Any way to correct the round off errors?

Richard O'Keefe ok@REDACTED
Mon Sep 21 03:50:23 CEST 2009

On Sep 21, 2009, at 4:08 AM, Richard Kelsall wrote:

> Witold Baryluk wrote:
>>> so we probably shouldn't trust more than 12 digits of precision
>>> because
>>> each calculation will lose some precision from the end of the
>>> number.
>> Subtraction (and addition) can lose any number of digits you wish.
> I would be horrified if I added two doubles
>
> 0.111111111111 +
> 0.111111111111
>
> and got
>
> 0.225745048327

Yes, but you missed the point.  Those two numbers have the
same sign.  Witold Baryluk was talking about subtraction.
Adding two numbers of opposite signs does subtraction.
If you have x+y+e and x+z+f, where x is the "common" part
of two similar numbers, y and z are the true differences,
and e and f are errors, then
(x+y) - (x+z) = y-z
but	(x+y+e) - (x+z+f) = y-z + e-f
and the errors  e, f that were small compared with x may
be extremely large compared with y-z.
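The effect is easy to reproduce. A minimal sketch in Python (whose floats are IEEE 754 doubles on every mainstream platform); the values of x, y, z here are chosen purely for illustration:

```python
# Catastrophic cancellation: the common part x dominates, so the rounding
# errors e and f absorbed when storing x+y and x+z dominate y - z.
x = 1.0e8            # large common part
y = 0.123456789      # small true parts...
z = 0.123456788      # ...differing by about 1e-9
a = x + y            # stored with rounding error e
b = x + z            # stored with rounding error f
small = y - z        # about 1e-9, computed at nearly full precision
big = a - b          # the x parts cancel exactly, leaving y - z + (e - f)
print(small, big)    # big is nowhere near 1e-9
```

Near 1e8 the spacing between adjacent doubles is about 1.5e-8, so a difference of 1e-9 simply cannot survive: `a - b` comes out as either 0.0 or one full spacing, depending on how the two sums round.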

See in particular the section beginning "Catastrophic cancellation.
Devastating loss of precision when small numbers are computed
from large numbers by addition or subtraction."

> I have no idea what the IEEE standard specifies,

Well, shouldn't you _find out_?  I mean, before using something
as weird (but widespread) as floating point arithmetic,
shouldn't you take the trouble to find out what it is *supposed*
to do?  The IEEE 754 standard is small and tolerably clear;
there are drafts and summaries and review articles about it
all over the web.
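One thing worth finding out early: a binary64 double stores a binary fraction, so most decimal literals are already approximations the moment they are parsed, and each basic operation then returns the correctly rounded true result. A small probe in Python (assuming its floats are IEEE 754 doubles, true of every mainstream build), using the decimal module to display the exact stored value:

```python
from decimal import Decimal

# The exact binary64 value stored for the literal 0.1:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# +, -, *, / each return the exactly rounded true result, so adding a
# float to itself is exact: it just increments the exponent.
v = 0.111111111111
assert v + v == 2 * v
```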

> but I can't imagine
> anybody ever implementing or using a version that gave this answer.

They don't.  Nobody ever suggested it would.

The original example subtracts two numbers with similar values.
Any fixed-width floating point system ever built is going
to have trouble with that.  IEEE floating-point arithmetic
was designed with exceptional care and a demand for good
behaviour even when that conflicted with speed.
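For a rough sense of how much precision a well-behaved operation on doubles actually carries (the "12 digits" figure quoted earlier is pessimistic for single operations, though cancellation can destroy all of them), the format parameters can be read straight out of Python's sys.float_info, again assuming floats are IEEE 754 binary64:

```python
import math
import sys

# IEEE 754 binary64 parameters, as exposed by CPython:
print(sys.float_info.mant_dig)     # 53 significand bits
print(sys.float_info.epsilon)      # 2**-52, gap from 1.0 to the next double
print(53 * math.log10(2))          # ~15.95 decimal digits carried
```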
