11

Possible Duplicates:
Comparing IEEE floats and doubles for equality
Most effective way for float and double comparison

I'm writing an algorithm which requires testing whether the results of large floating-point calculations are "equal"; obviously just using a == b is silly, since floating-point numbers are infamous for their little inaccuracies.

Traditionally I have written the function something like this:

bool isEqual(float a, float b)
{
    return abs(a - b) < epsilon; // epsilon = a very small number
}

However, as a friend of mine pointed out, this only works properly for certain sizes of floating-point numbers: very large numbers need a different epsilon value than very small ones. His suggested fix is this:

bool isEqual(float a, float b)
{
    return abs((a - b) / (a + b)) < epsilon;
}

I have a feeling you could probably do something clever using bit twiddling, along the lines of this simpler example for integers:

bool isEqual(unsigned char a, unsigned char b)
{
    return ((a ^ b) & ~1) == 0; // all bits except the lowest one have to be equal
}

So, what's the best way to do this?

Extra points are allocated for fast solutions, since this is used many times in an algorithm running in real-time in a game, so speed is really rather important!

Final note: my final implementation will be in C#, which can do pretty much anything C/C++ can in terms of floating point numbers, so feel free to go wild with bit twiddling and strange pointer magic if you really want to.

6

You've covered absolute and relative error, and are basically doing what is covered in Comparing floating point numbers by Bruce Dawson. You can look at bit-twiddling collections like HAKMEMC, HAKMEM, Bit Twiddling Hacks, Hacker's Delight, and The Aggregate Magic Algorithms.

1

One way is to let the caller decide what error margin they are willing to accept:

    bool isEqual(float a, float b, float epsilon)

Allowing the client to choose a value for epsilon on a case-by-case basis can be fast and accurate, although it does mean that the client has a little more thinking to do (not necessarily a bad thing though).

But it'd be nice to have a safe and easy default to fall back on, and I think your friend's solution of testing the percentage difference is a good one. It's better to have a working solution than one that is fast but fails in some cases. Note that his suggestion will fail when a and b are both 0; you should test for exact equality first to prevent this error.
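
Putting those two points together, a hedged sketch (C++ here, but it maps directly to C#) of a caller-chosen epsilon with the zero case handled up front might look like this; the name and structure are illustrative:

```cpp
#include <cmath>

// Sketch only: the caller supplies epsilon, and an exact-equality test
// first keeps a == b == 0 from evaluating 0/0 in the relative comparison.
bool isEqual(float a, float b, float epsilon)
{
    if (a == b)                 // handles both-zero (and any exact match)
        return true;
    return std::fabs((a - b) / (a + b)) < epsilon;
}
```

When a == -b the divisor is zero and the expression yields infinity, which simply compares as "not equal", so no extra guard is needed for that case.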

I'm not going to suggest a bit-twiddling solution, mainly because I think that your actual test will take most of the time, and evaluating the result for correctness will only be a small percentage of the total, so bit-twiddling here is probably a premature optimization.

1

You can exploit the IEEE 754 float format and mask some of the mantissa bits (see Wikipedia). However, it won't solve the problem by itself: imagine a minor inaccuracy that changes the exponent of the floating-point number while the actual value barely changes; how would you deal with that case?
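
For what it's worth, the usual way around that exponent-boundary problem is to compare ULPs (units in the last place) rather than masking mantissa bits directly: map each float's bit pattern onto a monotonically ordered integer and look at the distance. A sketch, assuming IEEE 754 floats (the function name and mapping are illustrative, in the spirit of Bruce Dawson's paper):

```cpp
#include <cstdint>
#include <cstring>

// Adjacent representable floats end up as adjacent integers under this
// mapping, even across a change of exponent, so a one-bit wobble near a
// power of two still counts as a distance of 1.
bool nearlyEqualUlps(float a, float b, uint32_t maxUlps)
{
    uint32_t ua, ub;
    std::memcpy(&ua, &a, sizeof ua);   // well-defined bit copy
    std::memcpy(&ub, &b, sizeof ub);
    // Negatives: flip all bits; positives: set the sign bit.
    uint32_t ka = (ua & 0x80000000u) ? ~ua : (ua | 0x80000000u);
    uint32_t kb = (ub & 0x80000000u) ? ~ub : (ub | 0x80000000u);
    uint32_t dist = (ka > kb) ? ka - kb : kb - ka;
    return dist <= maxUlps;
}
```

Note that under this mapping +0.0 and -0.0 come out 1 ULP apart, which is usually acceptable.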

1

As far as I know, there is not much more you can do in the isEqual function itself; you just vary how large a difference you are willing to ignore. I'd rather do some math on error propagation in the code that produces your big numbers and see what sizes of error are possible depending on the numbers involved.

It is possible that your errors will be largest when a nears some number n, which could be anything depending on what you do, so tolerating larger errors when a is larger may not be right.

Only then can you try to ignore a certain size of error depending on a and b separately.

0

If you can do all your C pointer magic, then simply write the value into memory as a double, and then access that memory as a ulong.

So you could do something similar to your integer example, but require that only, say, the first 50 bits of the mantissa be equal (ignoring the last few).
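
A sketch of that idea, assuming IEEE 754 doubles and using memcpy rather than a pointer cast to keep the bit copy well-defined; ignoring the low 2 of the 52 mantissa bits (keeping 50) is an illustrative choice:

```cpp
#include <cstdint>
#include <cstring>

// Compare two doubles bit-for-bit, except for the lowest 2 mantissa bits.
bool equalTo50MantissaBits(double a, double b)
{
    uint64_t ua, ub;
    std::memcpy(&ua, &a, sizeof ua);
    std::memcpy(&ub, &b, sizeof ub);
    const uint64_t mask = ~UINT64_C(0x3); // clear the lowest 2 mantissa bits
    return (ua & mask) == (ub & mask);
}
```

The exponent-change caveat from the previous answer still applies: two adjacent doubles can straddle a mask boundary and compare unequal.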

0

There is a precision that your application needs and a (hopefully better) precision your computation delivers. Pick the geometric mean between these two as your epsilon.
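
As a made-up numeric example of that rule: if the application needs about 1e-3 and the computation delivers about 1e-7, the geometric mean gives an epsilon of 1e-5. Both precision figures are invented for illustration:

```cpp
#include <cmath>

// Epsilon as the geometric mean of the precision the application needs
// and the precision the computation delivers.
double pickEpsilon(double needed, double delivered)
{
    return std::sqrt(needed * delivered);
}
```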

0

Repeating my answer to http://stackoverflow.com/questions/2242593/when-comparing-for-equality-is-it-okay-to-use/2242682#2242682

if ( std::abs( a - b )
    < std::abs( a ) * ( std::numeric_limits<float_t>::epsilon() * error_margin ) )

This is C++; epsilon should be available from your numeric environment if it complies with IEEE 754.

The units of error_margin are bits of error. epsilon is the magnitude of one bit of error. Multiply the magnitude of one bit by the expected scale of error and you have a scale-invariant margin.
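
Wrapped into a self-contained function (the default margin of 128 is an arbitrary illustrative choice, not from the answer above):

```cpp
#include <cmath>
#include <limits>

// Tolerates roughly error_margin low bits of error, scaled by |a|.
// Two caveats: it scales by |a| only, so it is asymmetric in its
// arguments, and when a is exactly zero the right-hand side is zero
// too, so an exact-equality check first may be wanted.
bool withinMargin(float a, float b, float error_margin = 128.0f)
{
    return std::fabs(a - b)
        < std::fabs(a) * (std::numeric_limits<float>::epsilon() * error_margin);
}
```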

-1

Probably not too useful, but

abs(a-b)/(a+b) < epsilon 

is probably better written as

abs(a/b -1) < epsilon
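One thing worth noting about the rewrite: the two forms are not interchangeable at the same epsilon. For nearby a and b, |a-b|/(a+b) comes out at about half of |a/b - 1| (and the second form divides by zero when b is 0, just as the first does when a == -b). A quick check of the factor-of-two claim:

```cpp
#include <cmath>

// Ratio of the two relative-difference forms; roughly 2 for close a, b.
double formRatio(double a, double b)
{
    double sumForm   = std::fabs((a - b) / (a + b));
    double ratioForm = std::fabs(a / b - 1.0);
    return ratioForm / sumForm;
}
```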

-1

It's not particularly fast, but:

#include <math.h>
#include <stdlib.h> // for abs(int) on the exponents

bool isNearlyEqual(double a, double b, double prec = 1e-6) {
    int mag1, mag2;

    double num1 = frexp(a, &mag1);
    double num2 = frexp(b, &mag2);

    if (abs(mag1-mag2) > 1)
        return false;

    num2 = ldexp(num2, mag2-mag1);

    return fabs(num2-num1) < prec;
}

#ifdef TEST
#include <iostream>

int main() {

    std::cout.setf(std::ios_base::boolalpha);

    // these should yield false -- they differ after 5 digits
    // and we've specified 6 digits by default.
    std::cout << isNearlyEqual(1e-20, 1.00001e-20) << std::endl;
    std::cout << isNearlyEqual(1e200, 1.00001e200) << std::endl;
    std::cout << isNearlyEqual(-1e10, -1.0001e10) << std::endl;

    // and these should yield true.
    // all are equal to at least 6 digits
    std::cout << isNearlyEqual(1e-20, 1.000001e-20) << std::endl;
    std::cout << isNearlyEqual(1e200, 1.000001e200) << std::endl;
    std::cout << isNearlyEqual(1.0, 0.99999998) << std::endl;
    std::cout << isNearlyEqual(-1e10, -1.000001e10) << std::endl;

    // Added to check on magnitude differences:
    std::cout << isNearlyEqual(2.0, 1.9, 1e-1) << std::endl; // should be true
    std::cout << isNearlyEqual(2.0, 1.9, 1e-2) << std::endl; // should be false
    return 0;
}

#endif

I should point out, however, that almost anything you do along this line needs to be used very carefully. Many possibilities are asymmetric, so compare(a, b) might say a and b are equal while compare(b, a) says they are not (this applies to almost any method that derives a magnitude from one number but not the other). Essentially all possibilities break transitivity, so a == b and b == c do not imply a == c. Likewise, almost all that do even basic math (addition, subtraction, multiplication, division) on the raw numbers risk overflow or underflow.
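
The transitivity point is easy to demonstrate with the plain absolute-epsilon test from the question (the numbers here are made up): with epsilon = 0.1, 0.0 compares equal to 0.09, and 0.09 to 0.18, but 0.0 does not compare equal to 0.18.

```cpp
#include <cmath>

// The simple absolute-epsilon comparison from the question.
bool absClose(double x, double y, double eps)
{
    return std::fabs(x - y) < eps;
}
```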

Edit: This forum is (intentionally) unsuited to extended discussion of the techniques. There's an extended discussion (including code similar to that above) on Google's Usenet archives.

-3

If you have problems with big numbers, you can use logarithms.
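
A hedged sketch of what that might mean: comparing logarithms turns relative error into absolute error, at the cost of only working for positive, nonzero values.

```cpp
#include <cmath>

// eps bounds the difference of natural logs, i.e. the relative error.
bool logClose(double a, double b, double eps)
{
    return std::fabs(std::log(a) - std::log(b)) < eps;
}
```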