Daniel S Silver. American Scientist. Volume 104, Issue 5. Sep/Oct 2016.
London was cold in March 1837. Indeed, the spring months of that year still retain the record for the lowest average temperature ever recorded in the English Midlands. But on a Saturday evening of that frigid month, mere cold could not discourage those lucky souls invited to the home of mathematician Charles Babbage.
Parties at his abode, Number One Dorset Street, were famous. They promised amusements, from dancing and fine food to the sight of titled aristocrats chatting with noted savants. On this particular night, the most popular amusement was to be Babbage’s demonstration of his Difference Engine.
At this time, Babbage was Lucasian Professor of Mathematics at the University of Cambridge, the position that Isaac Newton had once occupied and Stephen Hawking held for 30 years. Unlike Newton or Hawking, however, Babbage spent scant time in Cambridge. Instead, he spent his time in London, obsessed with the idea of building mechanical calculators that performed flawlessly and automatically.
A small working section of Difference Engine No. 1, the “beautiful fragment,” as Babbage called it, rested on a polished walnut stand, waiting to draw gasps of astonishment that evening.
As Babbage turned the handle in front of the assembled guests, numbered wheels clicked into place. At first, it appeared as if Babbage had predirected the machine to produce a simple numerical list, likely the following:
Then he turned the crank again, and to the guests’ surprise, a new pattern emerged.
What single repeated calculation could produce results that transform from one orderly sequence to a completely different, yet still orderly, sequence?
Actually, the answer didn’t matter to Babbage’s larger point. If a mere man can arrange this demonstration, he told his onlookers, then certainly the Almighty can design infinitely shifting patterns of even more subtle complexity, which to mere mortals would appear miraculous. To Babbage, then, the performance of the Difference Engine suggested that miracles were not the consequence of celestial tinkering. Rather, miracles were prearranged when the world began. (Today, we might say that they were “programmed.”)
Those words must have resonated with at least one partygoer that night: a young Charles Darwin, just returned from his historic voyage aboard the HMS Beagle and working through his thoughts on the origin of species.
Another idea was on Babbage’s mind that night, an idea that was already familiar to mathematicians but not yet well formed. It was the idea of mathematical induction.
Newly transcribed early essays from Babbage reveal his reflections on induction not just as a mathematical or mechanical construct, but also as a metaphysical construct, one that transcends our knowledge of a finite human world and allows us to reach for something more.
Grasping the Infinite
In science, induction is the process of guessing a general law from particular instances, such as “the Sun rises every day.” In mathematics, induction is a method of proof found in the toolkit of every professional and, one hopes, every college mathematics student. It lets us establish the truth of an infinite collection of mathematical statements without having to prove each separately, laboring until the end of time.
In short, induction is a tool that lets us probe the infinite, despite our disappointingly finite mortal existence.
Induction might be the fuel of science, as early 19th-century Cambridge philosophy professor William Whewell once proclaimed, but it is a dangerous fuel for mathematics. History is replete with examples of mathematicians asserting general claims about infinite sequences of numbers that were later proven false. One of Babbage’s manuscripts, The Philosophy of Analysis, cites a cautionary example given by the great 18th-century Swiss mathematician Leonhard Euler. It considers a polynomial function, f(n) = n² + n + 41, which when evaluated for the numbers 1 to 39 yields values that are all prime numbers, divisible only by 1 and themselves. Does Euler’s polynomial function yield only prime numbers? Alas, no. The value yielded by the number 40 is 1,681, which is not prime but divisible by 41.
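Euler’s cautionary example is easy to reproduce. The sketch below, a minimal check using plain trial division (no libraries, function names are mine), confirms that the polynomial yields primes for the inputs 1 through 39 and fails at 40:

```python
def is_prime(m):
    """Trial division: true if m > 1 has no divisor between 2 and sqrt(m)."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def f(n):
    """Euler's prime-generating polynomial."""
    return n * n + n + 41

# Primes for every input from 1 through 39...
assert all(is_prime(f(n)) for n in range(1, 40))
# ...but f(40) = 1681 = 41 * 41 breaks the pattern.
assert f(40) == 1681
assert not is_prime(f(40))
```

Thirty-nine confirmations in a row, and yet the pattern fails on the fortieth, which is exactly Babbage’s warning about trusting inductive evidence.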
The example, which had a profound and lasting effect on Babbage, is found in his essay On Induction, one of a bundle of 200-year-old, mostly unpublished works that were, until recently, languishing in the British Museum Manuscripts Room:
To place in a strong point of view the danger of trusting to induction evidence, I shall show that even in a very simple case, two expressions may coincide with each other for any given number of terms and yet fail at some other.
To use Babbage’s formula, we must first fix on a positive integer, say 100,000. His formula then computes values from successive inputs. If the number 1 is entered, then the output is 1. Similarly, if 2 is entered, then 2 is received, and so forth. A very dull formula, one would think. When the number 100,000 is entered, however, the value returned is 100,001. Watching successive inputs and outputs without any knowledge of the formula, we would assume that what had occurred was an error-or possibly a miracle.
We could replace 100,000 with a number that is a bit larger, say 10²⁰, and arrange for a very durable computer to generate a new value every second. Then the initial, predictable pattern won’t alter until long after our Sun has become dim. (In fact, 10²⁰ seconds is longer than 3 trillion years.)
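Babbage does not spell out his formula in the excerpt, but its behavior is easy to imitate. The sketch below is a hypothetical stand-in of my own, not Babbage’s rule: it returns its input unchanged for every number below a chosen threshold, then quietly begins adding 1.

```python
N = 100_000  # the threshold at which the "miracle" occurs

def f(n):
    """Agrees with the identity function for n < N, then deviates.

    The floor division n // N contributes 0 for every n below N,
    and 1 once n reaches N -- so the pattern holds 99,999 times
    before breaking.
    """
    return n + n // N

assert [f(n) for n in (1, 2, 3)] == [1, 2, 3]   # the dull, predictable run
assert f(N - 1) == N - 1                         # still dull at 99,999
assert f(N) == N + 1                             # 100,000 in, 100,001 out
```

An observer watching only inputs and outputs would have no way to anticipate the jump, which is the point of Babbage’s demonstration.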
A remarkable feature of On Induction is its careful description of what one must prove to conclude safely that a pattern will continue forever.
Babbage realized that we know much less about the world generally than we do about numbers. In most investigations about nature, mathematical induction is useless. As much as we wish to do so, we cannot prove that any repeating natural phenomenon will repeat forever. Unlike mathematics, nature gives no guarantees.
Early in his career, Babbage attempted to publish these essays in the Edinburgh Journal of Science. The rejection letter that he received in 1821 from the journal’s editor, Sir David Brewster, was full of regret. Brewster wrote: “The subjects you propose for a series of Mathematical and Metaphysical Essays are so very profound, that there is perhaps not a single subscriber to our Journal who could follow them.”
Such was the sad state of British mathematics at the time. The controversy between 17th-century mathematicians Isaac Newton (in England) and Gottfried Leibniz (in Germany) over who had first discovered the fundamental ideas of calculus continued to have its effects: Stubborn British defense of Newton’s claim had led to Britain’s isolation from the European continent, where mathematicians had already begun making exciting discoveries using symbolic algebraic methods, that is, analytics, rather than Newton’s geometric methods. Fear of France, of both its military and its deistic ideas, helped to justify Britain’s continued seclusion. Analytics without direct mercantile applications was thought at Cambridge to be useless, and possibly even subversive.
In 1812, a group of ambitious, mathematically minded Cambridge students managed to form a club, the Analytical Society, so that they wouldn’t miss out on the new mathematics. They hired a meeting room, read papers, published a memoir, argued, and enjoyed themselves. Its members included Babbage, aspiring astronomer John Herschel, and aspiring mathematician George Peacock. William Whewell was another, more conservative member of this club, one who was less willing to abandon Newton. The son of a carpenter, Whewell made contributions to a bewildering array of fields including geology, crystallography, tidal theory, economics, architecture, philosophy, and theology.
Whewell is also responsible for coining a host of new words, such as anode, cathode, and ion, which he gave to physicist Michael Faraday. In 1833, at a meeting of the British Association for the Advancement of Science, a new organization that Whewell had recently helped found, the word scientist was first spoken. Whewell thought of it by analogy with the word artist.
During the same period in which the Society met formally, Babbage, Herschel, Whewell, and future economist Richard Jones also met in Herschel’s rooms at Cambridge on Sunday afternoons after compulsory chapel for food, wine, and discussions about science. Peacock often joined in.
Breakfast parties were common at Cambridge. They were often frowned upon by the faculty as a waste of time and good claret. But these “philosophical breakfasts,” as Herschel and his friends called them, were not frivolous. The friends dedicated themselves to the ideas of Francis Bacon, the late-16th-century father of the scientific method. Specifically, they embraced Bacon’s methods of induction, convinced that they fostered discovery much more easily than did the ancient deductive methods of Aristotle.
The spirited gatherings went on for only a couple of years, but their inspiration can be felt in Babbage’s essays, as well as in Whewell’s two masterworks, History of the Inductive Sciences in 1837 and Philosophy of the Inductive Sciences in 1840. Their spirit lives also in Peacock’s A Treatise on Algebra, a fearless 685-page volume published in 1830 with the goal of freeing algebra from the confines of arithmetic and placing it on a solid scientific foundation. In it, Peacock provided the first clear description of mathematical induction in the English language:
We advance considerably upon the scale of evidence when the facts upon which our inductions are founded are connected with each other: thus, if a formula be found to be true for several successive numbers, and there is a discoverable dependence of the several cases upon each other when taken in the order of their succession, we may assume it to be true for any number; the demonstration, however, is not complete, unless we can shew, that assuming its truth for any given number, it must be true for the number immediately succeeding. We can then pass upwards from the known cases, by means of this proposition, to any extent we please.
The same idea is introduced to college freshmen today: Imagine the mathematical statements that we wish to prove as dominoes, standing upright in an infinite row. Think of proving a single statement as toppling a domino. If the falling of any one domino causes the next to fall, then all the dominoes must fall. All the statements will be proved.
Peacock offered a somewhat complex example involving binomial factors (such as x + a, x + b, etc.) to prove his point. A simpler example is at the heart of Euclid’s proof, from around 300 BCE, of the infinitude of prime numbers. Proposition 20 of Book 9 of his Elements can be paraphrased as follows:
If three primes a, b, c are given, then abc + 1 must be divisible by a prime other than a, b, c. Hence four primes exist. Therefore prime numbers are more numerous than any assigned multitude.
In effect, Euclid said that, having shown that there are three primes (that is, having toppled three dominoes) there must be a fourth prime (down goes domino number four). Of course, there is nothing special about three primes. Euclid’s argument is completely general. (Down go all the dominoes!) Hence there are infinitely many prime numbers.
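Euclid’s step can be played out directly. The sketch below (plain trial division; the function names are mine) takes any finite list of primes and produces a prime not on the list, exactly the move that topples the next domino:

```python
def smallest_prime_factor(m):
    """Return the smallest prime dividing m (m itself if m is prime)."""
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m

def new_prime(primes):
    """Euclid's construction: a prime outside the given list.

    The product of the listed primes, plus 1, leaves remainder 1 when
    divided by any of them -- so its smallest prime factor must be new.
    """
    product = 1
    for p in primes:
        product *= p
    return smallest_prime_factor(product + 1)

p = new_prime([2, 3, 5])   # 2*3*5 + 1 = 31, which happens to be prime
assert p == 31
assert p not in [2, 3, 5]
```

Feeding the enlarged list back in repeats the step forever, which is why the primes can never be exhausted.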
But what motivated Peacock to explain in detail a concept of induction that today seems so intuitively clear? Surprisingly, many mathematicians of Peacock’s time indulged in what some called “incomplete induction,” relying on only a few cases to draw a general conclusion (much like the previous example of the polynomial function evaluated for 1 to 39). According to Babbage, incomplete induction led the French mathematician Pierre de Fermat to claim in 1650 that 2^(2^n) + 1 is a prime number for any positive integer n. (Euler later showed that the claim is false for n = 5.) In his essays, Babbage remarked that Euler, too, was guilty of the charge in other cases.
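Fermat’s claim concerns what are now called the Fermat numbers, F(n) = 2^(2^n) + 1. The claim holds for the first few values and then fails, just as Babbage warned; Python’s arbitrary-precision integers make Euler’s counterexample a short check (trial division suffices for the small cases):

```python
def is_prime(m):
    """Trial division primality test, adequate for these small checks."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def fermat(n):
    """The nth Fermat number, 2^(2^n) + 1."""
    return 2 ** (2 ** n) + 1

# The first five Fermat numbers (3, 5, 17, 257, 65537) are all prime...
assert all(is_prime(fermat(n)) for n in range(5))
# ...but Euler found that 641 divides F(5) = 4,294,967,297.
assert fermat(5) % 641 == 0
```

Five confirmations were enough to convince Fermat; they were not enough to make the claim true.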
The sloppy practice continued despite Peacock’s admonishment. Even 28 years later, the British mathematician Arthur Cayley published a theorem about square matrices of arbitrary size with only the following proof:
I have verified the theorem, in the next simplest case, of a matrix of order 3… but I have not thought it necessary to undertake the labour of a formal proof of the theorem in the general case of a matrix of any degree.
The general result that Cayley claimed was finally proven by the German mathematician Ferdinand Frobenius in 1878. (Confusingly, it is known today not as the Cayley-Frobenius Theorem but rather as the Cayley-Hamilton Theorem, a tribute to the colorful Irish mathematician William Rowan Hamilton, who proved a special case earlier in 1853.) The theorem is used widely in mathematics and science generally, in particular for generalized matrix inversion.
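The theorem states that every square matrix satisfies its own characteristic polynomial. A minimal check in the 2×2 case, one of the small cases Cayley verified by hand, using plain lists rather than any matrix library (the example matrix is mine):

```python
def matmul(X, Y):
    """Multiply two 2x2 matrices represented as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
trace = A[0][0] + A[1][1]                       # 5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]     # -2
A2 = matmul(A, A)
I = [[1, 0], [0, 1]]

# Cayley-Hamilton for 2x2 matrices: A^2 - trace(A)*A + det(A)*I = 0.
result = [[A2[i][j] - trace * A[i][j] + det * I[i][j] for j in range(2)]
          for i in range(2)]
assert result == [[0, 0], [0, 0]]
```

Verifying one matrix, or even every matrix of orders 2 and 3, is precisely the incomplete induction Boole objected to; the general proof for matrices of any order had to wait for Frobenius.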
Standards of acceptable proof in mathematics were beginning to tighten. George Boole, a mathematical amateur whose work in the mid-19th century was finally recognized in 1937 by mathematician Claude Shannon of the Massachusetts Institute of Technology for its utility in computer science, voiced his criticism of Cayley’s cavalier approach. He wrote:
If generally true [the theorem] ought to admit of a symbolical proof not involving much more complexity but resulting from first principles of symbolic algebra, this being the kind of proof, which according to analogy and from the intrinsic character of the theorem ought to be sought for.
In fact, Cayley could have proven his theorem using mathematical induction.
O! Be Some Other Name
Babbage was not the only British mathematician worrying about induction. Another was Augustus De Morgan. Born in India, where his father was in the service of the East India Company, De Morgan later attended mediocre schools in England, where his mathematical abilities went unnoticed until he was 14 years old. Nevertheless, De Morgan eventually came to Cambridge University, graduated in 1827, and became a professor one year later, at the age of 21, at the newly created University College London.
De Morgan despised teaching methods that emphasized problem-solving over discovery and comprehension. One learns best, he believed, by inductive learning-generalizing observations that one should later prove rigorously.
University College London was a secular university that, unlike Cambridge or Oxford, would admit students regardless of their religion (or lack of religion). Lord Henry Brougham, a scientifically minded liberal leader in Parliament at the time, helped found the university as well as an organization called the Society for the Diffusion of Useful Knowledge (SDUK), which was committed to promoting morality in the lower and working classes through secular education.
The SDUK shunned politics and religion as it tried to elevate the working and middle classes. Its name was a poke-in-the-eye of the Society for Promoting Christian Knowledge (SPCK). Predictably, members of the conservative Tory party as well as the Church of England viewed them as a threat.
Although the SDUK was nonsectarian, Brougham believed that knowledge of mathematics could lead to spiritual understanding. The idea was not new. René Descartes expressed a similar opinion some 200 years earlier. Others had doubts, including the conservative Whewell, who had decided to take the Holy Orders in the Anglican Church in 1826.
Availing itself of new steam printing that made cheap publications possible, the SDUK produced short booklets on a variety of subjects that included history, navigation, electricity, the art of brewing, and, of course, mathematics. Each was 32 pages long and sold for a mere sixpence, roughly the cost of a pint of beer.
In 1830, De Morgan published the first volume of On the Study and Difficulties of Mathematics. In Part II of the volume, he wrote:
It is certain that most great discoveries have been made by means of [induction]; and the mathematician knows that one of his most powerful engines of demonstration is that peculiar species of induction which proves many general truths by demonstrating that if the theorem be true in one case, it is true for the succeeding one.
The statement functions well as the skeleton of mathematical induction conveyed in the least technical language possible. The skeleton requires some muscle in order to stand, however. Consider, for amusement, the following “proof” that all ponies are red, a flawed induction serving as a cautionary example of the concept:
Assume that in any corral with n ponies, all ponies have the same color. Suppose that we have a corral with n+1 ponies. Ride the last pony away. Now we have n ponies, and by our hypothesis all are the same color. Return the last pony to the corral but now ride the first pony away. Again we have n ponies, and all must have the same color. We conclude that all of our n+1 ponies have the same color. Finally, we appeal to the title of a classic novel of John Steinbeck, The Red Pony, to assert that there is at least one red pony. Hence all ponies must be red.
The trouble with our equine argument is that the inductive step needs at least three ponies. (Toppling the first or even the second domino will not cause the third to fall.) In an entry in Volume 14 of the Penny Cyclopaedia, published by the SDUK in 1838, De Morgan gave a more careful explanation with illustrative examples:
There is however one particular method of proceeding which is extremely common in mathematical reasoning, and to which we propose to give the name of successive induction.
At the very end of his article, De Morgan let slip a term that would endure:
An instance of mathematical induction occurs in every equation of differences, in every recurring series…
Many books on the history of mathematics, too many web pages, and even the Oxford English Dictionary erroneously credit De Morgan with inventing the term mathematical induction. But the term, usually meaning a type of incomplete induction, was already in common use.
A notable example is found in the second volume of Elements of the Philosophy of the Human Mind, published more than 20 years earlier by the popular Scottish mathematician Dugald Stewart of the University of Edinburgh. Although he recognized the risks, Stewart believed that this type of induction, when used in mathematics, is generally safe, because our expectations are based on “a process of thought” rather than “instinctive expectation of the continuance of the laws of nature.”
Regardless, the term induction gradually lost its earlier meanings as it became the property of the mathematical community.
The Nature of Miracles
Francis Henry Egerton, the eighth and last Earl of Bridgewater, was a scholar and an antiquarian. He was also disturbingly eccentric. At his Paris mansion in Rue St. Honoré, dogs sat at the dinner table, dressed in the finest handmade clothes and booties, each served by a footman. Egerton’s cats were rolled through the streets in carriages. Partridges and pigeons with clipped wings were stocked in the garden so that he could have the pleasure of shooting game.
The aged nobleman’s death would bring about the destruction of whatever friendship remained between Babbage and Whewell.
In his will, the earl directed that 8,000 pounds be paid for the publication of works “On the Power, Wisdom and Goodness of God as Manifested in the Creation.” After his death in 1829, his estate decided to commission eight volumes in a set to be called The Bridgewater Treatises. Each author would be paid 1,000 pounds, a fabulous sum of money at the time.
Many who reviewed the resulting works, including Edgar Allan Poe, were unimpressed. Indeed, the surgeon Robert Knox referred to the volumes as The Bilgewater Treatises. The scientific ideas used by the authors to buttress their religious arguments quickly became outdated.
One volume would carry a lasting effect, though. Whewell was asked to write a treatise, and he was delighted to accept the handsome remuneration. His contribution, On Astronomy and General Physics, was the third in the series. In a chapter titled “On Deductive Habits,” Whewell compared mathematicians unfavorably to scientists. Although scientists ascend to principles of nature by induction, he contended, mathematicians descend from those principles by deduction. Locked into a mechanical view of the universe, mathematicians are blinded to higher truths, Whewell concluded:
We may thus, with the greatest propriety, deny to the mechanical philosophers and mathematicians of recent times any authority with regard to their views of the administration of the universe; we have no reason whatever to expect from their speculations any help, when we ascend to the first cause and supreme ruler of the universe.
Perhaps not surprisingly, Babbage, who had not been invited to write a treatise, felt insulted by Whewell’s declaration. His response was to write his own, unauthorized Ninth Bridgewater Treatise and publish it at his own expense. On the bottom of the title page, Babbage reproduced Whewell’s scornful paragraph verbatim, as if repeating aloud an insult that must be avenged. He further noted in the preface that he received no pecuniary reward for his work, as if to suggest that Whewell’s scholarly motives were actually financial, and thus not honorable. Babbage was known to have a bad temper at times. Here it was displayed for all the world.
In his book, Babbage offered much more than a defense of mathematicians. He described his calculating engine in plain terms, and he propounded his theory about the nature of miracles. It was the same theory that he gave at his soirées, a theory that had been inspired by his early thoughts about mathematical induction. Applying it to evolution, Babbage imagined “one comprehensive law” that had governed the creation and destruction of species on Earth for all time. In modern parlance, God was the Divine Programmer.
As it happens, The Ninth Bridgewater Treatise was already in proof stage and much talked about prior to the night that Charles Darwin visited Babbage’s home. The young geologist might have imagined that his host was speaking directly to him. It is more likely that Babbage was addressing the imaginary presence of his former friend Whewell.
One question that remains unanswered is how far back in time the origin of mathematical induction extends. For example, in 1654, French mathematician Blaise Pascal explored the properties of the arithmetic triangle, known today as Pascal’s triangle. The Twelfth Consequence of his Traité du triangle arithmétique contains what is essentially a proof by mathematical induction.
However, other 17th-century mathematicians, including Jakob Bernoulli, Pierre Fermat, and John Wallis, have each in turn been named the originators of mathematical induction. Franciscus Maurolycus, a 16th-century Italian astronomer, has been cited as well.
The list goes on, further back in time. Rabbi Levi ben Gershon, a 13th-century Jewish savant, has been described by one historian as the “earliest writer known to have used induction systematically in all generality and to have recognized it as a distinct mathematical procedure.” Another has nominated the 10th-century Iranian mathematician and engineer Abu Bakr Muhammed al-Karaji.
In fact, deciding who first discovered mathematical induction is hopeless. As the American historian of science Florian Cajori and others have noted, the seed of the idea was planted in ancient soil. It had already sprouted in Euclid’s Elements.
Closely related to mathematical induction is the method of recursion, defining numbers in terms of previous ones. The Fibonacci sequence 1, 1, 2, 3, 5, 8, … is the most famous example. It was introduced in 1202 by Leonardo of Pisa (known today as Fibonacci) in his book Liber Abaci. Each term is the sum of the two previous terms (the first term, 1, has nothing before it, so the second term is 1+0, and thus also 1). Recursion such as this is the foundation of many computer programs today.
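In code, the recursion is a two-term update. This sketch computes the sequence iteratively, carrying the two most recent terms forward at each step:

```python
def fib(n):
    """Return the nth Fibonacci number (fib(1) = fib(2) = 1).

    Each iteration replaces the pair (a, b) with (b, a + b), the
    recursive rule that defines the sequence.
    """
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

assert [fib(n) for n in range(1, 7)] == [1, 1, 2, 3, 5, 8]
```

Proving facts about such recursively defined sequences, for instance, closed formulas for their terms, is a natural job for mathematical induction.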
Another example of recursion, one that particularly pleased Babbage, is the sequence 1, 3, 6, 10, …, numbers that appear in Pascal’s arithmetic triangle. Here the nth term is the sum of the numbers from 1 to n. Pythagoras and his followers called these numbers triangular because they can be visualized as the numbers of dots filling equilateral triangles of increasing size, like stacked cannonballs. Notice that the fifth triangular number is the fourth triangular number plus 5. A similar recursive pattern holds for the others.
Mathematical induction can be used to prove that the nth triangular number has a polynomial formulation, ½(n² + n). For example, the 100th triangular number is 5050, found simply by plugging n = 100 into the polynomial rather than laboriously adding all of the numbers from 1 to 100.
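The two views of the triangular numbers, the recursive sum and the closed form that induction justifies, can be compared directly:

```python
def triangular(n):
    """The nth triangular number, built up by the recursion T(k) = T(k-1) + k."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

# The recursive pattern: each triangular number is the previous one plus n.
assert triangular(5) == triangular(4) + 5

# The closed form that induction certifies: T(n) = (n^2 + n) / 2.
assert triangular(100) == (100 ** 2 + 100) // 2 == 5050
```

The loop verifies any single case; only induction guarantees that the closed form holds for every n at once.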
To wow his party guests on that cold March night in 1837, Babbage had first directed his Difference Engine to reach the number 1,000,000 by increments of 1. Then he threw in Pascal’s successive triangular numbers multiplied by 10,000.
The idea of mathematical induction, or “reasoning by recurrence,” as still others have preferred to call it, is more than simple recursion. It asserts that a property holds for all members of an infinite set, not just any large finite subset. In his popular book Science and Hypothesis, published in 1902, the mathematician Henri Poincaré declared that without such a tool, mathematics would be nothing other than verification of cases. To prove even the smallest theorem, Poincaré asserted, one “must use reasoning by recurrence, for it is the only instrument which enables us to pass from the finite to the infinite.” He continued:
Induction, as applied in the physical sciences, is always uncertain, because it rests on the belief in a general order in the universe, an order outside of us. On the contrary, mathematical induction, that is, demonstration by recurrence, imposes itself as a necessity, because it is solely a property of the mind itself.
According to Poincaré, mathematical induction does not follow from ordinary principles of arithmetic. An Italian mathematician, Giuseppe Peano of the University of Turin, went even further by regarding mathematical induction as an axiom. In his 1889 pamphlet, Arithmetices principia, nova methodo exposita, Peano derived the theory of natural numbers from just three primitive ideas and five axioms. The ideas are zero, number, and successor. The first four axioms keep mathematics earthbound. The fifth allows it to take flight. The axioms are: 0 is a natural number; the immediate successor of a natural number is a natural number; 0 is not the immediate successor of any number; no two natural numbers have the same immediate successor; and finally, any property belonging to 0 and to the immediate successor of any number that also has the property belongs to all natural numbers. Peano’s fifth axiom is really mathematical induction.
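Peano’s primitives can even be modeled in a few lines of Python: a number is represented purely by how many times the successor has been applied to zero, and addition is defined by the recursion that the induction axiom licenses. (The encoding below is an illustration of mine, not Peano’s notation.)

```python
# Zero is a bare object; the successor wraps a number in a tuple.
ZERO = ()

def succ(n):
    """The immediate successor of n."""
    return (n,)

def add(m, n):
    """Addition defined by recursion on the second argument:
    m + 0 = m;  m + succ(k) = succ(m + k)."""
    if n == ZERO:
        return m
    return succ(add(m, n[0]))

def to_int(n):
    """Count the successor applications to recover an ordinary integer."""
    count = 0
    while n != ZERO:
        n = n[0]
        count += 1
    return count

two = succ(succ(ZERO))
three = succ(two)
assert to_int(add(two, three)) == 5
```

That the recursive definition of addition is total, that it yields an answer for every pair of naturals, is itself a fact one proves by appeal to the fifth axiom.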
British mathematician and philosopher Bertrand Russell noted that Peano’s primitive notion of number is problematic. We think of numbers as 0,1, and so on, but we assume that we know what numbers are. Russell continued:
The key to our problem lies in mathematical induction. It will be remembered that this was the fifth of the five primitive propositions which we laid down about the natural numbers. …This was then presented as a principle, but we shall now adopt it as a definition.
Russell went on to define natural numbers in terms of 0, successor, and mathematical induction. From this perspective, mathematical induction is part of the very definition of the term number. Nobody can rightfully claim the credit for its discovery.
Our Will to Permanence
Mathematical induction remains an alluring tool for constructing proofs. Like any tool, however, it is effective only in certain situations. Recognizing those situations requires intuition and experience.
Why does mathematical induction remain so compelling, even today? One reason is that it transcends our knowledge of a finite world, reaching for something more. The mathematician Tobias Dantzig proposed such an answer in his 1954 book Henri Poincaré, Critic of Crisis, as he wandered the same metaphysical realm that captured the imaginations of Whewell and Babbage:
That a physical or physiological execution of such an interminable series of operations is impossible is readily recognized by us; but we cannot or will not subject to the same limitations the power of our mind. Indeed, when our mind contemplates the future, it flees its mortal shell to find refuge within an infinite being endowed with limitless memory and eternal life.
Dantzig attributed our urge to carry mathematical expressions onward to our will to permanence: We will the future to resemble the past, the universe to obey immutable laws, and the Sun to rise tomorrow and again every day, forever.