
I’m writing some Python code using NumPy. Since I got an overflow warning, I decided to check for underflows as well, everywhere in the code, using np.seterr(over='raise', under='raise'), and found that underflows were occurring in unexpected places, which makes me distrust every single operation.

In the past, having written FEM codes in C, C++, Fortran, and CUDA, I do not recall putting in any checks for overflow and underflow (unless the compiler did it for me automatically), and the results of those solvers turned out to be fine.

Also, from browsing open-source codes, I don’t think I’ve noticed floating point checks on every line.

So, my question is: Should every line of a numerical computation be checked for overflow and underflow and other floating-point errors?


1 Answer


I am using the words underflow and overflow here in the sense that np.seterr does. Be aware that this usage is not universal.

Underflows can occur in many cases where they are not a problem. Consider for example:

import numpy as np
np.seterr(under="warn")

some_numbers = np.array([0.5, 0.7, 0.03, 1e-16])
print(np.sum(some_numbers**20))  # triggers an underflow warning

While this is an artificial example, similar operations occur in many situations dealing with probabilities and similar quantities. You get an underflow here because: $$\left(10^{-16}\right)^{20} = 10^{-320} < ε = 2.2·10^{-308},$$ where $ε$ is the smallest positive normal floating-point number (np.finfo(float).smallest_normal); anything below it can only be stored as a subnormal, with reduced precision. For the calculation itself, this has a negligible impact, since you are summing the number with others that are far bigger, so that level of detail would get lost anyway.
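For reference, the limits involved can be inspected directly; the attribute names below are from NumPy's finfo API (available in NumPy ≥ 1.22):

```python
import numpy as np

info = np.finfo(float)
print(info.smallest_normal)     # smallest positive normal double, ~2.2e-308
print(info.smallest_subnormal)  # smallest positive subnormal, ~4.9e-324

# (1e-16)**20 lands below the normal range:
x = (1e-16) ** 20
print(x)                         # 1e-320, only representable as a subnormal
print(x < info.smallest_normal)  # True
```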

Note that the 1e-16 seems like the odd one out here, but such a number can easily appear as the result of another floating-point inaccuracy, e.g.:

import numpy as np
np.seterr(under="warn")

some_numbers = np.array([10.1,-10,-0.1])
should_be_zero = np.sum(some_numbers)
print(should_be_zero) # -3.608224830031759e-16
assert should_be_zero**30 == 0 # Don’t compare floats with == at home, kids.

Here, the last line will raise the warning, even though it is what makes the result exactly correct again.
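As the comment hints, the robust way to ask “is this effectively zero?” is a tolerance-based comparison rather than ==; a minimal illustration with np.isclose (the tolerance 1e-12 is an arbitrary choice for this example):

```python
import numpy as np

should_be_zero = np.sum(np.array([10.1, -10, -0.1]))  # ~ -3.6e-16, not 0.0
# == against 0.0 is fragile; compare against an absolute tolerance instead:
print(np.isclose(should_be_zero, 0.0, atol=1e-12))  # True
```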

Since you mentioned FEM, a more practical example: suppose you run a simulation from initial conditions under which barely anything happens in some region at first. Computing a derivative in such a region may easily involve an underflow, e.g., yielding zero instead of a very small number. That, however, does not make your results problematically inaccurate.
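A toy version of that situation (hypothetical field values and grid spacing, not taken from any particular solver) could look like:

```python
import numpy as np
np.seterr(under="warn")

# A region where the solution is essentially flat, with values
# near the bottom of the normal floating-point range:
u = np.array([1e-300, 1.0000001e-300, 1.0000002e-300])
dx = 1e10

# Central difference: the numerator is ~2e-307, and dividing by dx
# pushes the result below the normal range -> underflow warning.
# The answer "the derivative is ~0 here" is still physically fine.
dudx = (u[2] - u[0]) / (2 * dx)
print(dudx)
```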

This is why most underflows are not a problem and why underflow warnings are off by default. Still, they can be useful at times:

import numpy as np
from scipy.special import binom
np.seterr(under="warn")

some_numbers = np.arange(45,55)
a,b = 106,112
print( np.prod(binom(a,some_numbers)) * np.prod(1/binom(b,some_numbers)) )
# 1.8998344012922272e-16

Here, the underflow tells you that you are losing precision (and it’s worse for b=113). Ideally, this leads to a saner computation such as:

print( np.prod( binom(a,some_numbers) / binom(b,some_numbers) ) )
# 1.9357208784926316e-16
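The reason this formulation is saner: each individual ratio binom(a,k)/binom(b,k) is a moderate number, so no intermediate result comes anywhere near the subnormal range. A quick check:

```python
import numpy as np
from scipy.special import binom

some_numbers = np.arange(45, 55)
a, b = 106, 112

# Each ratio is O(0.01)-O(0.1), comfortably inside the normal range:
ratios = binom(a, some_numbers) / binom(b, some_numbers)
print(ratios.min(), ratios.max())
```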

Overflows, on the other hand, are almost always a problem, or at least something you want to know about; thus they raise a warning by default. For example:

import numpy as np

print( np.prod(10**np.arange(30.)) ) # np.float64(inf)
print( np.uint8(23) - np.uint8(42) ) # np.uint8(237)
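If you would rather have such overflows be hard errors than warnings, the same np.seterr mechanism from the question applies; a small sketch:

```python
import numpy as np
np.seterr(over="raise")

try:
    np.prod(10 ** np.arange(30.0))  # product exceeds the double range
except FloatingPointError as e:
    print("caught:", e)

# Integer wraparound is avoided by computing in a wide enough type:
print(np.int16(23) - np.int16(42))  # -19, no wraparound
```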
