Summation

Rapidly convergent infinite series are easy to approximate by directly summing a finite number of terms:

>>> from mpmath import *
>>> mp.dps = 50
>>> print sum(1/factorial(n) for n in range(100))
2.7182818284590452353602874713526624977572470936999

This is only practical if the series converges at least as quickly as on the order of 2^(-n); for a more slowly convergent series, direct summation gains digits too slowly to be useful (an example follows the list below). Such series can nevertheless often be summed numerically to high precision by applying a suitable transformation that accelerates the rate of convergence. Mpmath implements three such algorithms:

  • Euler-Maclaurin summation, suitable for smooth analytic summands
  • Richardson extrapolation, suitable for some nonalternating series
  • Shanks transformation, suitable for alternating series
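
For instance, direct summation gains digits only very slowly for the Basel series. A rough sketch (the exact final digits of the partial sum depend on rounding, so the doctest is skipped): after roughly a thousand terms, only about three digits are correct, since the truncation error decreases only as 1/N.

>>> mp.dps = 15
>>> print sum(mpf(1)/n**2 for n in range(1, 1000))    # doctest: +SKIP
1.64393356668156
>>> print pi**2 / 6
1.64493406684823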

The Euler-Maclaurin algorithm has the advantage of working for a large class of series. Even in cases where it does not work, it is usually able to produce a fairly reliable error estimate. The method is, however, slow unless the user supplies additional symbolic input.

Since the Richardson and Shanks methods are based purely on extrapolation from a few initial terms, they are comparatively fast. On the other hand, extrapolation assumes that the summand has a specific behavior, so in many cases these algorithms do not work at all.

Euler-Maclaurin summation (sumem)

sumem(f, a, b) calculates the sum of f(n) for n = a...b using Euler-Maclaurin summation, which approximates the sum by the integral of f(x) from a to b and then uses derivatives of f(x) to approximate the error. As an example, we can sum the Basel series:

>>> mp.dps = 15
>>> print sumem(lambda n: 1/n**2, 1, inf)
1.64493406684823
>>> print pi**2 / 6
1.64493406684823
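
For reference, the Euler-Maclaurin formula approximates the sum by an integral plus correction terms involving the Bernoulli numbers B_(2k) and odd-order derivatives of f (shown here schematically, with the remainder term omitted):

    sum_{n=a}^{b} f(n)  ~  integral_a^b f(x) dx  +  (f(a) + f(b))/2
                           +  sum_{k=1}^{K} B_(2k)/(2k)! * (f^(2k-1)(b) - f^(2k-1)(a))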

The method relies on approximating the sum by an integral, so f(x) must be smooth and well-behaved enough to be integrated numerically. High-order derivatives of f(x) are also needed. By default, these are computed using numerical integration, which is the most expensive part of the calculation. A custom nth derivative function fderiv(x, n) can be provided as a keyword parameter; if the derivatives are known analytically, this is generally much more efficient, enabling relatively fast summations to tens or hundreds of digits:

>>> f = lambda n: 1/n**2
>>> fp = lambda x, n: (-1)**n * factorial(n+1) * x**(-2-n)
>>> mp.dps = 50
>>> print sumem(f, 1, inf, fderiv=fp)
1.6449340668482264364724151666460251892189499012068
>>> print pi**2 / 6
1.6449340668482264364724151666460251892189499012068

If b = inf, f(x) and its derivatives are all assumed to vanish at infinity (this method is therefore not good for alternating sums). It is assumed that a is finite, so doubly infinite sums cannot be evaluated directly.
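
A doubly infinite sum can still be handled by splitting it manually into two one-sided sums plus the central term. A minimal sketch, assuming a summand that decays in both directions (here the sum of 2^(-|n|) over all integers n, whose exact value is 3; the doctest is skipped since the last printed digit may vary):

>>> mp.dps = 15
>>> f = lambda n: mpf(2)**(-abs(n))
>>> print f(0) + sumem(f, 1, inf) + sumem(lambda n: f(-n), 1, inf)    # doctest: +SKIP
3.0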

The keyword argument N optionally specifies the number of terms to compute directly before using the Euler-Maclaurin formula to approximate the tail. It must be set high enough; often roughly N ~ dps is the right size.
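
For instance, the Basel sum from above can be computed with an explicitly chosen N (the value 30 is only an illustrative guess for dps = 15, so the doctest is skipped):

>>> mp.dps = 15
>>> print sumem(lambda n: 1/n**2, 1, inf, N=30)    # doctest: +SKIP
1.64493406684823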

With error=True, a tuple (s, err) is returned where s is the calculated sum and err is the estimated magnitude of the error. With verbose=True, detailed information about progress and errors is printed.
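
A brief sketch of the error-estimate interface (the estimated error should be small for the Basel sum, but its exact magnitude depends on internal details, so the doctests are skipped):

>>> s, err = sumem(lambda n: 1/n**2, 1, inf, error=True)
>>> print s    # doctest: +SKIP
1.64493406684823
>>> err < mpf(10)**(-10)    # doctest: +SKIP
True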

Richardson summation (sumrich)

Richardson extrapolation is an alternative to Euler-Maclaurin summation that avoids the expensive computation of high-order derivatives. The following computation is almost instantaneous:

>>> mp.dps = 200
>>> print sumrich(lambda n: 1/n**2, 1, inf)    # doctest: +SKIP
1.644934066848226436472415166646025189218949901206798437735558229370007470403200
87383362890061975870530400431896233719067962872468700500778793510294633086627683
17333093677626050952510068721400547968116
>>> print pi**2 / 6    # doctest: +SKIP
1.644934066848226436472415166646025189218949901206798437735558229370007470403200
87383362890061975870530400431896233719067962872468700500778793510294633086627683
17333093677626050952510068721400547968116

Generally, the algorithm tends to work well when the summand is a rational function:

>>> mp.dps = 50
>>> print sumrich(lambda n: (n + 3)/(n**3 + n**2), 1, inf)
2.9348022005446793094172454999380755676568497036204
>>> print pi**2/2 - 2
2.9348022005446793094172454999380755676568497036204

However, with a slight change to the summand, sumrich may fail to provide any useful acceleration:

>>> mp.dps = 15
>>> print sumrich(lambda n: 1/n**2.5, 1, inf)
1.34151163359885
>>> print zeta(2.5)
1.34148725725092

Shanks summation (sumsh)

The Shanks transformation is especially useful for accelerating the convergence of alternating series (it sometimes works for nonalternating series as well, but less often than Richardson extrapolation).

The Maclaurin series for log(1+x), for example, converges very slowly for x close in magnitude to 1. With sumsh, we can quickly sum it at x = 1:

>>> mp.dps = 40
>>> print sumsh(lambda n: (-1)**(n+1) / n, 1, inf)
0.6931471805599453094172321214581765680755
>>> print log(2)
0.6931471805599453094172321214581765680755

A related sum is Leibniz's slowly convergent series for pi:

>>> mp.dps = 50
>>> print 4 * sumsh(lambda n: (-1)**n / (2*n+1), 0, inf)
3.1415926535897932384626433832795028841971693993751

In fact, the Shanks transformation can sometimes even be used to sum divergent alternating series, such as the Maclaurin series for log(1+x) outside its region of convergence:

>>> mp.dps = 15
>>> print sumsh(lambda n: (-1)**(n+1) / n * 9**n, 1, inf)
2.30289174139398
>>> print log(10)
2.30258509299405

Note that only the first few digits of the result are correct in this case. Here are some additional examples of divergent series for which sumsh does find the appropriate values:

>>> print sumsh(lambda n: (-1)**n, 0, inf)
0.5
>>> print sumsh(lambda n: n * (-1)**n, 0, inf)
-0.25

In general, sumsh is capable of summing divergent geometric series (even nonalternating ones). The general formula for the sum of a^n for n from 0 to infinity is 1/(1-a), and the Shanks transformation does give the expected -1/6 if we put a = 7:

>>> print sumsh(lambda n: 7**n, 0, inf)
-0.166666666666667

The Shanks method extrapolates from a table of n terms, and the transformation is applied recursively m times to improve its effectiveness. Custom values for n and m can optionally be passed to sumsh as keyword arguments to improve performance or accuracy, as in the sketch below.
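
For example, the alternating harmonic series from above can be summed with explicit table sizes (the particular choice n = 10, m = 3 is only an illustrative guess, so the doctest is skipped):

>>> mp.dps = 15
>>> print sumsh(lambda k: (-1)**(k+1) / k, 1, inf, n=10, m=3)    # doctest: +SKIP
0.693147180559945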

Comparison

Let us try applying each method to the Basel series:

>>> mp.dps = 15
>>> print sumem(lambda n: 1/n**2, 1, inf)
1.64493406684823
>>> print sumrich(lambda n: 1/n**2, 1, inf)
1.64493406684823
>>> print sumsh(lambda n: 1/n**2, 1, inf)
1.64277092999229

The Shanks transformation is not useful at all here. If we instead consider the alternating Basel series, only the Shanks transformation is good:

>>> print sumem(lambda n: (-1)**(n+1) / n**2, 1, inf)
(0.822148701224003 - 0.000683411047277414j)
>>> print sumrich(lambda n: (-1)**(n+1) / n**2, 1, inf)
243859856691.745
>>> print sumsh(lambda n: (-1)**(n+1) / n**2, 1, inf)
0.822467033424113
>>> print pi**2 / 12
0.822467033424113

The Euler-Maclaurin method is capable of providing an error estimate in this case:

>>> sumem(lambda n: (-1)**(n+1) / n**2, 1, inf, error=True)[1]
mpf('0.0017926587291892238')

Here is an example for which all three algorithms give different answers:

>>> mp.dps = 15
>>> print sumem(lambda n: sqrt(n) / n**2, 1, inf)
2.6123753486854
>>> print sumrich(lambda n: sqrt(n) / n**2, 1, inf)
2.55616488912461
>>> print sumsh(lambda n: sqrt(n) / n**2, 1, inf)
2.58951117865808

The Euler-Maclaurin algorithm should have no problem with a smooth summand like this, so it is likely that this series has the wrong rate of convergence for either the Richardson or the Shanks transformation to be effective. Re-computing at higher precision shows that the value obtained with the Euler-Maclaurin method is indeed (nearly) correct:

>>> mp.dps = 25
>>> print sumem(lambda n: sqrt(n)/n**2, 1, inf)
2.612375348685488342465765

In fact, the series has the exact value zeta(3/2):

>>> print zeta(1.5)
2.612375348685488343348568

However, if the series is turned into its alternating equivalent, the Shanks method again becomes effective:

>>> mp.dps = 20
>>> print sumsh(lambda n: (-1)**(n+1) * sqrt(n) / n**2, 1, inf)
0.76514702462540794537

The series can be summed analytically:

>>> print (2-sqrt(2))*zeta(1.5)/2
0.76514702462540794537
