# The story of e

## Overview

What do we mean by exponential growth? How quickly does your bank balance grow? How quickly does a cup of tea cool, or radium decay? What shape is a washing line and what is the link with Gresham College?

## Transcript of the lecture

**THE STORY OF *e***

In my last two lectures I outlined the story of π, a geometrical story, and *i*, an algebraic story. Today I'll tell you about the exponential number *e*, an analytic story, and I'll combine the three numbers into one of the most famous results in mathematics.

*How fast do things grow?*

We have all heard of the phrase 'exponential growth', to indicate something that grows very fast. But how fast is that, and what does the word 'exponential' mean? What is the exponential number *e*?

To answer this, let's see how fast different sequences can grow. A very slow form of growth is the sequence of natural numbers 1, 2, 3, 4, ..., which is 'linear growth', symbolised by *n*. Somewhat faster is 'quadratic growth', symbolised by n²: 1², 2², 3², 4², ... , or 1, 4, 9, 16, ... , involving the perfect squares. Next is 'cubic growth' n³: 1³, 2³, 3³, 4³, ... , or 1, 8, 27, 64, ... , involving the cubes. These are all examples of 'polynomial growth', since they involve powers of *n*. On the other hand, we could look at powers of 2, or of any other number. The sequence 2^{n} of powers of 2 starts off rather slowly: 1, 2, 4, 8, ... , but gradually gathers pace, since each term is twice the previous one, and the sequence of powers of 3 takes off even more quickly: 1, 3, 9, 27, 81, 243, 729, ... . These are examples of 'exponential growth'.

Just to see how much faster exponential growth becomes, let's draw up a table comparing the running times for *n* = 10 and *n* = 50, for various polynomial and exponential expressions on a computer performing one million operations per second:

|  | *n* = 10 | *n* = 50 |
| --- | --- | --- |
| **polynomial** |  |  |
| *n* | 0.00001 seconds | 0.00005 seconds |
| n² | 0.0001 seconds | 0.0025 seconds |
| n³ | 0.001 seconds | 0.125 seconds |
| n^{5} | 0.1 seconds | 5.2 minutes |
| **exponential** |  |  |
| 2^{n} | 0.001 seconds | 35.7 years |
| 3^{n} | 0.059 seconds | 2.3 × 10^{10} years |
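These running times can be checked with a few lines of code. A minimal sketch, assuming (as in the table) a machine performing one million operations per second:

```python
# Reproduce the running-time table: f(n) operations at one million per second.
OPS_PER_SECOND = 1_000_000

def running_time(operations):
    """Return the running time in seconds for a given operation count."""
    return operations / OPS_PER_SECOND

growth_rates = [
    ("n",   lambda n: n),
    ("n^2", lambda n: n ** 2),
    ("n^3", lambda n: n ** 3),
    ("n^5", lambda n: n ** 5),
    ("2^n", lambda n: 2 ** n),
    ("3^n", lambda n: 3 ** n),
]

for label, f in growth_rates:
    print(f"{label:>4}: {running_time(f(10)):.6g} s   {running_time(f(50)):.6g} s")
```

Converting the largest figures into minutes or years recovers the table's 5.2 minutes, 35.7 years and 2.3 × 10^{10} years.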

A familiar example of this is simple and compound interest. Suppose that we have £100 to invest at 5 per cent over a number of years. With simple interest the amount increases linearly, giving us after each successive year £100, £105, £110, £115, £120, and so on. After *k* years we have £100 + 5*k*.

With compound interest we recalculate the extra 5 per cent every year, giving us £100, £105, £110.25, £115.76, £121.55, and so on - initially only a little more than before, but gradually starting to increase very quickly. There are tables of compound interest going back at least to the early 17th century.

Another calculation we could do is to find out how much more we should earn in a year if the compound interest were calculated more frequently - say, *n* times per year. We obtain the following table:

annually (*n* = 1): £105.00

every 6 months (*n* = 2): £105.06

every month (*n* = 12): £105.12

every day (*n* = 365): £105.13

In fact, if the year is divided into *n* parts and the interest rate is *r* (here *r* = 0.05), then the amount we receive after a year is £100(1 + *r*/*n*)^{n}. Does this increase without bound as the year is further subdivided, or does it settle down to a limiting value? In fact, it tends to a limit, which is obtained by multiplying £100 by *e*^{0.05}, where *e* is the exponential number. So what is this number *e*?

To find out, we'll increase the rate of interest to 100 per cent every year, so that we have the limit of (1 + 1/*n*)^{n} as *n* increases. This sequence increases steadily, but can never exceed 3. In fact, it tends to a limiting value, the number

*e* = 2.718281828459... .
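Both limits are easy to explore numerically. A short Python sketch (the amounts £105.00 to £105.13 and the value of *e* are the figures quoted above):

```python
import math

# The compound-interest amounts from the table: £100 * (1 + 0.05/n)^n.
for n in (1, 2, 12, 365):
    print(n, round(100 * (1 + 0.05 / n) ** n, 2))

# As n grows, (1 + 1/n)^n increases steadily towards e = 2.718281828459...
# but never exceeds it.
for n in (1, 10, 1000, 1_000_000):
    print(n, (1 + 1 / n) ** n)

print(math.e)  # 2.718281828459045
```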

We can remember *e* by counting the number of letters in each word of the following sentences:

*By omnibus I traveled to Brooklyn,*
*In Gresham I lectured on geometry,*

or, for more decimal places:

*To disrupt a playground is commonly a practice of children,*
*In showing a painting to probably a critical or venomous lady, anger dominates - O take guard, or she raves and shouts.*

*Logarithms*

Early ideas of logarithms are given in works of Chuquet and Stifel around the year 1500. They listed the first few powers of 2 and noticed that to multiply any two of them it is enough to add their exponents - for example, to multiply 8 and 32 we write:

8 × 32 = 2^{3} × 2^{5} = 2^{3+5} = 2^{8} = 256,

and 16 × 128 = 2^{4} × 2^{7} = 2^{4+7} = 2^{11 } = 2048.

So the idea of a logarithm is to turn complicated sums involving multiplication into simpler ones involving addition, and similarly, complicated sums involving division into simpler ones involving subtraction.
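In modern notation, log (*ab*) = log (*a*) + log (*b*). A short Python sketch of the worked example above, using the standard library's `math` module:

```python
import math

# Multiplying 8 and 32 by adding exponents of 2, as Chuquet and Stifel noticed:
a, b = 8, 32
ea = int(math.log2(a))  # exponent of a: 3, since 8 = 2^3
eb = int(math.log2(b))  # exponent of b: 5, since 32 = 2^5
assert 2 ** (ea + eb) == a * b == 256

# The same trick works with any base: log turns multiplication into addition.
assert math.isclose(math.log(6), math.log(2) + math.log(3))
print(ea, eb, 2 ** (ea + eb))
```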

But the idea wasn't developed until the early 17th century, when John Napier (1550-1617), Eighth Laird of Merchiston near Edinburgh, produced his *Mirifici logarithmorum canonis descriptio* in 1614. This work contains extensive tables of the logarithms of the sines and tangents of all the angles from 0 to 90 degrees, in steps of 1 minute; the emphasis on trigonometry arises because he intended them to be used by navigators and astronomers.

Napier's logarithms are not the ones that we use now. In order to avoid decimal fractions he took the radius of his circle to be very large - 10 million. He considered two points moving along straight lines. The first travels at constant speed for ever; the second, representing its logarithm, moves from *P* along a finite line *PQ *in such a way that its speed at each point is proportional to the distance it still has to travel. It follows that the logarithm of 10,000,000 is 0, and that we have

Nlog (*ab*) = Nlog (*a*) + Nlog (*b*) - Nlog (1).

**Enter Gresham College**

Shortly after their invention, Henry Briggs, first Gresham Professor of Geometry, heard about logarithms and became wildly excited, including them in his Gresham lectures, saying:

*[John Napier] set my Head and hands a Work with his new and remarkable logarithms. I never saw a Book which pleased me better or made me more wonder. *

However, he felt that they could be redefined so as to avoid subtracting the term Nlog (1), which was 161,180,956: *I myself, when expounding this doctrine to my auditors in Gresham College, remarked that it would be much more convenient that 0 should be kept for the logarithm of the whole sine.*

Another problem was that multiplication by 10 involved the addition of log 10 = 23,025,842.

Briggs went up to Edinburgh two summers running to stay with Napier. The outcome was Briggs's *Logarithmorum chilias prima* ('the first thousand logarithms') of 1617, a small pamphlet of sixteen pages of 'logs-to-the-base-10' in which log 1 = 0, log 10 = 1, and log (*ab*) = log (*a*) + log (*b*).

Seven years later, after he had left Gresham College to become the first Savilian Professor of Geometry in Oxford, he was to follow this with his *Arithmetica logarithmica*, an extensive collection of logarithms to base 10 of the integers from 1 to 20,000 and 90,000 to 100,000, calculated by hand to fourteen decimal places. The gap in the tables between 20,000 and 90,000 was filled in by the Dutch mathematician and publisher Adriaan Vlacq in 1628.

Meanwhile another Gresham professor had become involved. Edmund Gunter, whom Briggs defeated for the Savilian Chair in Oxford, became Gresham Professor of Astronomy and in 1620 published his *Canon triangulorum*, the first table of logarithms of trigonometrical functions to base 10.

In the 1630s a number of people created mechanical instruments based on logarithmic scales, which could be used for complicated calculations. Most notable was the slide rule, one version of which was Gunter's sector of 1636.

*Enter the calculus*

Throughout the 17th century, mathematicians tried to find the area under various curves, a process known as integration. In 1647 the Flemish mathematician Grégoire de Saint-Vincent showed that the area under the hyperbola *y* = 1/*x*, measured from 1 to *a*, is precisely the logarithm of *a*.

This was a great breakthrough, and was developed by Isaac Newton who showed how a logarithm could be written as an infinite series:

log (1 + *x*) = *x* - *x*^{2}/2 + *x*^{3}/3 - *x*^{4}/4 + ... .

He then used this to calculate a logarithm to no fewer than 55 decimal places.
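Newton's series converges for -1 < *x* ≤ 1, quickly when *x* is small. A short Python sketch of the computation (the 100-term cutoff is an arbitrary choice for the example):

```python
import math

def log1p_series(x, terms=100):
    """Approximate log(1 + x) by Newton's series x - x^2/2 + x^3/3 - ...
    (valid for -1 < x <= 1)."""
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, terms + 1))

# Compare the partial sum with the standard library's logarithm.
print(log1p_series(0.5))
print(math.log(1.5))
```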

*Leonhard Euler*

But it was in the mid-18th century that the greatest advances were made, when Leonhard Euler linked the logarithmic function with the exponential function in his 1748 book *Introductio in analysin infinitorum*, which brought together results from various of his earlier works.

A detailed description of his achievements in this area would take many lectures, but the following list summarises some of the most important of the results that he obtained:

(1) *e* is the limit of (1 + 1/*n*)^{n}; more generally, e^{x} is the limit of (1 + *x*/*n*)^{n};

(2) the slope at each point of the curve *y* = e^{x} is e^{x};

(3) the exponential function can be expanded as an infinite series:

e^{x} = 1 + *x*/1! + *x*^{2}/2! + *x*^{3}/3! + ...,

and in particular, *e* = 1 + 1/1! + 1/2! + 1/3! + ... ;

(4) the exponential function *y* = e^{x} and the log function *y* = log_{e}(*x*) are inverses of each other: log_{e}(e^{x}) = *x* and e^{log x} = *x*.
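Euler's series for the exponential function converges remarkably quickly, because the factorials in the denominators grow so fast. A short Python sketch, summing term by term:

```python
import math

def exp_series(x, terms=20):
    """Sum the first `terms` terms of Euler's series 1 + x/1! + x^2/2! + ..."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= x / (k + 1)  # next term: x^(k+1) / (k+1)!
    return total

# Twenty terms already agree with e = 2.71828... to machine precision.
print(exp_series(1))
print(math.e)
```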

Euler then turned his attention to complex numbers, and produced one of the most famous results in the whole of mathematics. At first sight the trigonometrical functions sine and cosine may seem to have nothing in common with the exponential function e* ^{x}*. But starting from the series

sin *x* = *x* - *x*^{3}/3! + *x*^{5}/5! - *x*^{7}/7! + ... and cos *x* = 1 - *x*^{2}/2! + *x*^{4}/4! - *x*^{6}/6! + ... ,

he investigated what would happen if you replace the number *x* by the complex number *ix*. He deduced the fundamental formula linking them,

e^{ix} = cos *x* + *i* sin *x*,

from which we deduce on putting *x* = π (though Euler did not state this specifically) that

**e^{iπ} + 1 = 0.**

This famous formula includes the five great constants of mathematics.
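The formula can be checked numerically with Python's standard `cmath` module, which handles complex arithmetic:

```python
import cmath
import math

# Euler's formula: e^(ix) = cos x + i sin x, checked at an arbitrary x.
x = 0.7
lhs = cmath.exp(1j * x)
rhs = complex(math.cos(x), math.sin(x))
print(lhs, rhs)

# Putting x = pi gives e^(i*pi) + 1 = 0, up to a rounding error of order 1e-16.
print(cmath.exp(1j * math.pi) + 1)
```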

Using this formula, one can obtain various new functions. Starting from the exponential functions *y* = e^{x} and *y* = e^{-x}, we can add or subtract them to give the 'hyperbolic functions' cosh *x* = (e^{x} + e^{-x})/2 and sinh *x* = (e^{x} - e^{-x})/2.

Although they are defined in terms of the exponential function, their properties are remarkably similar to those of the trigonometrical functions:

for example, cos^{2} *x* + sin^{2} *x* = 1 and cosh^{2} *x* - sinh^{2} *x* = 1;

this follows since cosh *ix* = cos *x* and sinh *ix* = *i* sin *x*.

The curve *y* = cosh *x* is called a *catenary*, and is the shape taken by a hanging chain.
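The definitions and the hyperbolic identity are easy to verify directly from the exponential function; a short sketch:

```python
import math

# Build cosh and sinh from the exponential function, then check the identity
# cosh^2 x - sinh^2 x = 1 at an arbitrary point.
x = 1.3
cosh_x = (math.exp(x) + math.exp(-x)) / 2
sinh_x = (math.exp(x) - math.exp(-x)) / 2
assert math.isclose(cosh_x, math.cosh(x))
assert math.isclose(sinh_x, math.sinh(x))
assert math.isclose(cosh_x ** 2 - sinh_x ** 2, 1.0)

# The catenary y = cosh x: the height of a hanging chain at position u.
for u in (-2, -1, 0, 1, 2):
    print(u, math.cosh(u))
```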

Another question that was causing difficulties in Euler's lifetime was how to define the logarithm of a negative number. Euler took this much further and defined the logarithm of any complex number: if *z* = *r*e^{it} = *r*(cos *t* + *i* sin *t*) is any complex number (written in polar form), then log *z* = log *r* + *it*.

Since, for each complex number *z*, the angle *t* can take infinitely many values (all differing by multiples of 360°), the complex logarithm also takes infinitely many values. In particular, we can show that the logarithm of *i* is π*i*/2 (plus multiples of 2π*i*), and we can deduce the remarkable result that *i*^{i} = e^{-π/2}.
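Python's standard `cmath` module computes the principal value of the complex logarithm (taking *t* in the range -π < *t* ≤ π), so both results can be checked directly:

```python
import cmath
import math

# The principal complex logarithm: log z = log r + i*t for z = r e^(it).
print(cmath.log(1j))  # (pi/2) i, i.e. 1.5707963...j
assert cmath.isclose(cmath.log(1j), 1j * math.pi / 2)

# Hence the principal value of i^i is e^(i log i) = e^(-pi/2): a real number!
print(1j ** 1j)  # approximately 0.2079
assert cmath.isclose(1j ** 1j, cmath.exp(-math.pi / 2))
```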

*Some applications of the exponential function*

The exponential function *y* = e^{x} arises when we have something that is growing or shrinking exponentially.

We have already seen how it arises in the study of compound interest. It also arises with the cooling of a cup of tea, where by Newton's law of cooling, the rate at which the tea cools is proportional to the difference in temperature between the tea and the surrounding room: if T is the temperature of the tea, T_{0} is the temperature of the room, and t is the time, then dT/dt, the rate of cooling, is -K(T - T_{0}), where K is a constant.
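Solving this differential equation gives T(t) = T_{0} + (T_{start} - T_{0})e^{-Kt}. A short sketch with illustrative numbers (the starting temperature, room temperature and the constant K below are invented for the example, not taken from the lecture):

```python
import math

def tea_temperature(t, T_start=95.0, T_room=20.0, K=0.1):
    """Solution of Newton's law of cooling dT/dt = -K(T - T_room):
    T(t) = T_room + (T_start - T_room) * e^(-K t).
    All three parameter values are illustrative assumptions."""
    return T_room + (T_start - T_room) * math.exp(-K * t)

# The tea cools quickly at first, then ever more slowly towards room temperature.
for minutes in (0, 5, 10, 30, 60):
    print(minutes, round(tea_temperature(minutes), 1))
```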

Another application concerns radium, which decays exponentially according to a similar formula. We can then solve the equation to find the 'half-life', the time taken for the radium to shrink to half its original size.
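For decay N(t) = N_{0}e^{-kt}, setting N(t) = N_{0}/2 and solving gives the half-life t = (log 2)/*k*. A sketch with an invented decay constant (the value of *k* below is illustrative, not radium's actual constant):

```python
import math

def half_life(k):
    """Half-life of exponential decay N(t) = N0 * e^(-k t): the time at which
    N has fallen to N0/2, namely ln(2) / k."""
    return math.log(2) / k

k = 0.05  # illustrative decay constant (per year)
t_half = half_life(k)
print(t_half)

# Check: after one half-life, exactly half the original amount remains.
assert math.isclose(math.exp(-k * t_half), 0.5)
```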

**Conclusion**

I have only begun to describe the enormous range of properties and applications of the exponential function, but I hope that I have given at least an idea of them. This concludes my series of three lectures on 'Three remarkable numbers', π, *i* and *e*, and Euler's remarkable result e^{πi} + 1 = 0, which relates them. I cannot think of a better way to finish the series.

© Robin Wilson, 2007