Interval Arithmetic

Mathematical topics covered:

  • Operations on sets
  • Idea of interval arithmetic
  • Error bounds using interval arithmetic

Programming concepts covered:

  • Interval arithmetic with mpmath
  • Comparisons in interval arithmetic
  • Rigorous comparisons using increasing precision
  • Rigorous computation of approximations to desired accuracy

Note: This material is not covered in the various textbooks.

Mathematical underpinnings

If $A$ is a set of real numbers, we write $a \in A$ to denote the statement "$a$ is in $A$".

In mathematics, we can extend the usual operations on real numbers to sets of real numbers.

For example, the addition operation is $$A+B = \{a+b:~\text{$a \in A$ and $b \in B$}\}.$$ So, for instance $$\{1,3\}+\{5,7\}= \{6,8,10\}.$$
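This set-addition rule is easy to check directly in Python with a set comprehension (a throwaway sketch, independent of any interval package):

```python
# Set addition: A + B = {a + b : a in A and b in B},
# written directly as a set comprehension.
def set_add(A, B):
    return {a + b for a in A for b in B}

print(set_add({1, 3}, {5, 7}))  # the set {6, 8, 10}
```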

Interval arithmetic implements these set operations for intervals. Most operations on intervals yield new intervals. For example, $$[a,b] + [c,d] = [a+c, b+d].$$

The one problematic operation is division. For example, $$[1,2] \div [-1,1]$$ is not an interval: the divisor contains $0$, and $1 \div 0$ is undefined, so the resulting set is a union of two unbounded pieces rather than an interval. When implemented as a computer program, this operation therefore results in an error instead of a strange set.
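To see why division must be treated specially, here is an illustrative sketch (our own toy code, not mpmath's implementation, and it omits outward rounding): division is computed from the four endpoint quotients, but only after checking that the divisor does not contain $0$.

```python
# Toy interval division: valid only when the divisor interval
# excludes 0; otherwise the true image is not an interval, so we
# raise an error, mimicking what a real package does.
# (Outward rounding is omitted to keep the sketch short.)
def iv_div(x, y):
    if y[0] <= 0 <= y[1]:
        raise ZeroDivisionError("divisor interval contains 0")
    q = [x[0] / y[0], x[0] / y[1], x[1] / y[0], x[1] / y[1]]
    return (min(q), max(q))

print(iv_div((1, 2), (4, 8)))   # (0.125, 0.5), i.e. [1/8, 1/2]
```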

Programming basic ideas

Recall that most numbers you want to work with can't be stored exactly in Python. When we do arithmetic, we round at every step. As we do more operations, the value stored slowly diverges from the result of the true computation.

Interval arithmetic gives a partial resolution to this. If $x$ is a real number, we use a (hopefully small) interval containing $x$ to represent $x$. When we do computations involving $x$, we round outward (making the intervals bigger), which guarantees that the true value of the computation lies somewhere inside the resulting interval.
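Outward rounding can be sketched in plain Python using math.nextafter (Python 3.9+), which nudges each endpoint one representable float outward. Real packages instead set the rounding direction when computing each endpoint, but the effect is the same kind of over-approximation:

```python
import math

# Sketch of interval addition with outward rounding: compute each
# endpoint in ordinary float arithmetic, then push the lower endpoint
# one representable float down and the upper endpoint one float up,
# so the true sum is guaranteed to lie inside.
def iv_add(x, y):
    lo = math.nextafter(x[0] + y[0], -math.inf)
    hi = math.nextafter(x[1] + y[1], math.inf)
    return (lo, hi)

# A tiny interval enclosing the real number 1/3:
third = (math.nextafter(1/3, 0.0), math.nextafter(1/3, 1.0))
lo, hi = iv_add(third, third)
print(lo < 2/3 < hi)  # True: the widened interval still encloses 2/3
```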

An interval arithmetic package

We will demonstrate the arbitrary-precision interval arithmetic built into the mpmath package. (Warning: the package's documentation states that interval arithmetic support is still experimental.)

The interval arithmetic package can be loaded with:

In [1]:
from mpmath import iv

The multiprecision library was discussed at the end of the Number Representations notebook. It comes from the same mpmath library, so you can load both with:

In [2]:
from mpmath import iv, mp

The iv context does interval arithmetic with endpoints represented by numbers from the multiprecision library.

We will work with $10$ bits of precision just so we can see what is going on easily. You would normally want to set this higher. You can also set the number of digits of precision with iv.dps = ???.

In [3]:
iv.prec = 10     # Use 10 bits of precision, which is about 3 digits.

The following represents the interval $[1, 1]$. That is, $1$ is the only point in the interval.

In [4]:
x = iv.mpf(1)
x
mpi('1.0', '1.0')

Recall that $1/3$ cannot be stored exactly as a floating point number. Here is $1/3$ as an interval:

In [5]:
y = x/3
y
mpi('0.33301', '0.3335')

Observe that $1/3$ is in the interval above. Also if we add $y$ three times, we get an interval containing one:

In [6]:
y + y + y
mpi('0.99902', '1.002')

The number $\pi$ can't be stored exactly and so is represented as an interval.

In [7]:
pi = iv.pi
pi
mpi('3.1406', '3.1445')

The standard functions are included and are applied to intervals with outward rounding. For example, computing $\cos(\frac{\pi}{2})$ results in an interval containing $0$.

In [8]:
iv.cos(pi/2)
mpi('-0.0014706', '0.00048399')

Observe that this rigorously guarantees that the true value has absolute value less than $1.5 \times 10^{-3}$. This isn't the most compelling example, because we already know the answer is zero. By contrast, we don't know the exact value of $\cos(1)$:

In [9]:
iv.cos(1)
mpi('0.54004', '0.54102')

The above guarantees that $$\cos(1)=0.54\ldots$$ is correct to the decimal places displayed (and the next digit is either a $0$ or a $1$).

Manually constructing an interval

The following command constructs an interval with endpoints $2$ and $3$.

In [10]:
iv.mpf([2, 3])
mpi('2.0', '3.0')

Accessing the endpoints

Suppose you want to explicitly work with the endpoints of an interval. Here is an example interval approximating $\frac{1}{3}$.

In [11]:
x = iv.mpf(1)/3
x
mpi('0.33301', '0.3335')

You can access the left endpoint with x.a and the right endpoint with x.b.

In [12]:
x.a
mpi('0.33301', '0.33301')
In [13]:
x.b
mpi('0.3335', '0.3335')

Unfortunately, as seen above, the endpoints are returned as degenerate intervals. (An interval is degenerate if it has the same left and right endpoint.) If we want to work with them as numbers, we can convert them to multiprecision numbers. Before we do so, you should make sure that the mp library is working at the same precision as we are using for iv.

In [14]:
mp.prec = 10
In [15]:
a = mp.mpf(x.a)
In [16]:
b = mp.mpf(x.b)

The width of an interval

The absolute error between an actual value $x_\text{real}$ and an approximation $x_{\text{approx}}$ is $$|x_\text{real} - x_{\text{approx}}|.$$

Often you want to know how good an approximation is in terms of absolute error. This is difficult to assess when working with floats, because errors accumulate with each operation. (If you are really careful, you can keep track of these errors by hand.)

With interval arithmetic, such error computations are built in. Interval arithmetic tells you something like: $x_{\text{real}}$ is in the interval $[a,b]$. Then if $x_{\text{approx}}$ is any other point in $[a,b]$, you know that the absolute error satisfies $$|x_\text{real} - x_{\text{approx}}| \leq b-a.$$ A natural choice for an approximation is the midpoint $x_{\text{approx}} = \frac{a+b}{2}$. In this special case the absolute error is at most $\frac{b-a}{2}$.
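In plain Python, reading off the midpoint approximation and its error bound from an interval $[a,b]$ looks like this (a sketch with ordinary floats; with iv you would use the endpoint and width accessors instead):

```python
# Given an interval [a, b] known to contain x_real, the midpoint
# (a + b) / 2 is the approximation whose worst-case absolute error
# is smallest, namely at most half the width: (b - a) / 2.
def midpoint_approx(a, b):
    mid = (a + b) / 2
    err_bound = (b - a) / 2
    return mid, err_bound

# Using the interval for 1/3 computed earlier in the notes:
mid, err = midpoint_approx(0.33301, 0.3335)
print(mid, err)  # about 0.333255 and 0.000245
```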

Both error bounds involve the width $b-a$ of the interval, so this quantity is very important. You can access an upper bound on the width of an interval as below:

In [17]:
x.delta
mpi('0.00048828', '0.00048828')

Again, the width is typically returned as a degenerate interval, and you can convert it to a multiprecision float.

In [18]:
w = mp.mpf(x.delta)

Intervals are only equal if they have the same endpoints. (Overlapping is not good enough.) This does create some issues. For example $$\frac{2}{7}+\frac{1}{5} = \frac{17}{35}$$ but:

In [19]:
x = iv.mpf("2/7") + iv.mpf("1/5")   # 2/7 + 1/5
print(x)
[0.4853516, 0.4863281]
In [20]:
y = iv.mpf("17/35")
print(y)
[0.4853516, 0.4858398]
In [21]:
x == y
False
(Above we made use of conversion from string to an interval.)

Less than and greater than comparisons return True, False, or None. Here None is reserved for the case where the answer cannot be determined (usually because the intervals overlap).

In [22]:
print( iv.mpf([2,3]) < iv.mpf([4,5]) )
True
In [23]:
print( iv.mpf([2,3]) < iv.mpf([0,1]) )
False
In [24]:
print( iv.mpf([2,3]) < iv.mpf([3,4]) )
None
In [25]:
print( iv.mpf([2,3]) <= iv.mpf([3,4]) )
True

Application: Rigorous comparisons through increasing precision

The number $x = \sqrt[3]{3}$ is very close to $y=26639450/18470763$. Suppose we want to know for sure which is bigger. Below we compute with floats, but what if round-off error affects the comparison?

In [26]:
x = 3.0**(1/3)
In [27]:
y = 26639450/18470763

The following makes the check with intervals:

In [28]:
x = iv.mpf(3)**iv.mpf("1/3")
x
mpi('1.4414', '1.4434')
In [29]:
y = iv.mpf("26639450/18470763")
y
mpi('1.4414', '1.4434')
In [30]:
print(x < y)
None

The intervals overlap, so the above check is inconclusive. We can settle the question by re-checking at higher and higher precision. (Quantities computed at higher precision will have smaller interval widths.)

In [31]:
result = (x < y)
while result is None:
    iv.prec = 2 * iv.prec           # Double the bits of precision.
    x = iv.mpf(3)**iv.mpf("1/3")    # Compute x to the new precision.
    y = iv.mpf("26639450/18470763") # Compute y to the new precision.
    result = (x < y)
print("The statement x<y is {}.".format(result))
print("We had to use {} bits of precision.".format(iv.prec))
The statement x<y is True.
We had to use 80 bits of precision.
In [32]:
iv.dps = 5 # lower the precision to 5 digits

Application: Computation of exact decimal representations

This section assumes you have an understanding of significant digits. See § 2.2 of TAK if you need to review this idea.

Let's say we really wanted to know $$x = \cos\big(\sqrt{3}-\sqrt{2}\big)$$ to $30$ significant digits.

To get a rough idea of what is going on, we can compute the quantity using the math library:

In [33]:
import math as m
m.cos(m.sqrt(3) - m.sqrt(2))
0.9499135278648561

Observe that the most significant digit is the tenths place (value $10^{-1}$). The 30th most significant digit then has value $10^{-30}$, so we want to find an approximation within $10^{-30}$ of the true value.
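The step from "$n$ significant digits" to an absolute tolerance can be written as a small helper (the function name and interface here are ours, purely illustrative):

```python
import math

# Hypothetical helper: given a rough value of x and a desired number
# n of significant digits, return the absolute tolerance that the
# interval width must beat.  The most significant digit of x sits at
# 10**floor(log10(|x|)), and the n-th significant digit sits n - 1
# places below it.
def sig_digit_tolerance(rough_value, n):
    lead = math.floor(math.log10(abs(rough_value)))
    return 10.0 ** (lead - (n - 1))

# For x ≈ 0.9499 the leading digit is in the tenths place, so
# 30 significant digits require an absolute error below 1e-30.
print(sig_digit_tolerance(0.9499, 30))
```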

The following computes an interval containing the true value at $5$ digits of precision.

In [34]:
iv.dps = 5
x = iv.cos(iv.sqrt(3) - iv.sqrt(2))
x
mpi('0.94991302', '0.94991493')

Examining the width of the interval, we see that we do not have enough precision:

In [35]:
x.delta
mpi('1.9073486e-6', '1.9073486e-6')

Below we increase the precision to $53$ bits (the precision of standard floats) and try again.

In [36]:
iv.prec = 53
x = iv.cos(iv.sqrt(3) - iv.sqrt(2))
x.delta
mpi('3.3306690738754696e-16', '3.3306690738754696e-16')

We still only have about $16$ digits of precision. We need roughly double the precision.

In [37]:
iv.prec = 106
x = iv.cos(iv.sqrt(3) - iv.sqrt(2))
x.delta
mpi('2.465190328815661891911651766508707e-32', '2.465190328815661891911651766508707e-32')

This means that the absolute error in our computation is less than $2.5 \times 10^{-32}$. We only needed the error to be smaller than $10^{-30}$, so this is good.

Below we extract the endpoints and convert them to the multiprecision library.

In [38]:
mp.prec = 106
a = mp.mpf(x.a)
b = mp.mpf(x.b)
print(f'a={a} and b={b}')
a=0.9499135278648561104606030334159 and b=0.9499135278648561104606030334159

Let's say we want the digits of precision in a string. We can use the mp.nstr(x, n) function to convert a multiprecision number x to a string with n significant digits. You can learn about this function by running the following in Jupyter:

In [39]:
mp.nstr?
Below we use mp.nstr to convert the left endpoint into a string representation with 30 digits of precision.

In [40]:
a_str = mp.nstr(a,30)

You can see there really are 30 digits of precision because len(a_str)=32. (The two leading characters '0.' are not significant digits and so don't contribute to the precision.)

In [41]:
len(a_str)
32
We do the same with the right endpoint below.

In [42]:
b_str = mp.nstr(b,30)
In [43]:
len(b_str)
32
Below we check that the two approximations agree.

In [44]:
a_str == b_str
True

The above check was probably unnecessary given our knowledge about the length of the interval, but it makes it clear that the common string is a good answer to our question.

Remark: Suppose we are trying to approximate say $\pi$ to four significant digits. Since the most significant digit is in the ones place, the fourth most significant digit would be the thousandths place. So, we might try to compute an interval containing $\pi$ such that the length of the interval is less than $10^{-3}$. Since $\pi=3.14159\ldots$, the interval $[3.1413, 3.1417]$ is a possible outcome. But in this case the lower interval will round to $3.141$ while the upper one rounds to $3.142$. The issue is that the interval contains the point halfway between $3.141$ and $3.142$. But, both $3.141$ and $3.142$ are accurate to four significant digits since $\pi$ differs by less than $10^{-3}$ from each.

In general, there might be two valid approximations to a fixed number of significant digits. They differ by increasing or decreasing the least significant digit by one.

The example of $\pi$ is illustrated below.

In [45]:
iv.prec = 13 # number of bits of precision
In [46]:
pi = iv.pi
pi
mpi('3.14111', '3.1416')
In [47]:
pi.delta < 10**-3
True
In [48]:
mp.nstr(mp.mpf(pi.a), 4)
'3.141'
In [49]:
mp.nstr(mp.mpf(pi.b), 4)
'3.142'


In summary, interval arithmetic is mainly useful for ruling out the accumulation of rounding errors over many operations. It is also useful for computing mathematical quantities to a desired accuracy.