What is a mathematical proof? Way back when I was a college freshman, I could give you a precise answer: A proof of a statement S is a finite sequence of assertions S(1), S(2), …, S(n) such that S(n) = S and each S(i) is either an axiom or else follows from one or more of the preceding statements S(1), …, S(i-1) by a direct application of a valid rule of inference.

But I was so much older then, I’m younger than that now.

After a lifetime in professional mathematics, during which I have read a lot of proofs, created some of my own, assisted others in creating theirs, and reviewed a fair number for research journals, the one thing I am sure of is that the definition of proof you will find in a book on mathematical logic or see on the board in a college level introductory pure mathematics class doesn’t come close to the reality.

For sure, I have never in my life seen a proof that truly fits the standard definition. Nor has anyone else.

The usual maneuver by which mathematicians leverage that formal notion to capture the arguments they, and all their colleagues, regard as proofs is to say a proof is a finite sequence of assertions that *could be filled in* to become one of those formal structures.

It’s not a bad approach if the goal is to give someone a general idea of what a proof is. The trouble is, no one has ever carried out that filling-in process. It’s purely hypothetical. How then can anyone know that the purported proof in front of them really is a proof?

I wrote about this dilemma in my MAA “Devlin’s Angle” column way back in 1996, in an article titled *Moment of Truth*.

I picked up the theme again in 2003 with my Devlin’s Angle piece *When is a Proof?*

These days I have a very pragmatic perspective on what a proof is, based on the way people use them in the day-to-day world of mathematics:

*Proofs are stories that convince suitably qualified others that a certain statement is true.*

If I present you with a proof, and you have the appropriate background knowledge and ability, you can – usually with some time and effort – as a result of reading my story, become convinced that what I claim is true.

But if you take that as your working definition of proof, you have to acknowledge it is fundamentally about communication, not truth. In particular, whether an argument qualifies as a proof depends as much on the intended reader as on its creator.

Of course, in order to function in that way, the “story” has to be pretty heavily constrained.

Moreover, the creators and the consumers of those stories have to be familiar with the genre. That part takes time to acquire.

On the other hand, once a person becomes familiar with both the genre and the particular mathematical focus, reading and understanding those stories becomes natural and fluent.

The system works – as any professional mathematician will affirm. It’s how mathematics advances.

To an outsider, however, the whole thing is usually incomprehensible.

Today, many proofs stretch over several pages, not infrequently hundreds of pages. A key feature that allows such proofs to function effectively in the mathematical community is that many steps are left out.

In some cases this is because the step has already been established, either by the same author in a previous piece of work, or by someone else. In such cases, the author simply refers the reader to that source.

In other cases, the author judges that the intended reader should be capable of supplying the missing steps on the fly. The author may provide a hint to help the reader provide the missing steps, but not always.

There is, then, a huge element of audience design in constructing effective proofs. A proof designed for an undergraduate mathematics class is in general very different from one constructed to present at a research seminar.

To the beginner, trying to make the transition from high school mathematics to university level, coming to terms with real proofs is not only difficult, it can be traumatic, with a once comforting illusion of crisp, clean certainty rapidly giving way to a panicked feeling of sinking into shifting quicksand.

At this point, it can be of some comfort to learn that Euclid screwed up big-time when he penned his famous geometry proofs in *Elements*. Yes, those iconic proofs may seem logically sound, and indeed for two thousand years were held up as models of logically sound reasoning. But as David Hilbert observed in the late Nineteenth Century, Euclid’s arguments are riddled with logical holes.

To give just one example, he often tells you to construct a point by intersecting an arc of a circle with a straight line. But how do you know there is an intersection? Sure, when you draw the arc and the line on a sheet of paper, the arc may cross over the line. But do they actually *intersect*? That is, do they have an actual (dimensionless) point in common?

That is not only not obvious, it takes a lot of work to answer. (The answer is, it depends on the underlying number system. But it requires some deep machinery not developed until the Nineteenth Century.)
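To make the dependence on the number system concrete: in the “rational plane” (only points with rational coordinates), the unit circle and the line y = x cross over each other on paper yet share no point, because a common point would need x² = 1/2, and 1/2 is not the square of any rational. A quick check of that last fact (a toy sketch of my own, not part of the original argument):

```python
from fractions import Fraction
from math import isqrt

def is_perfect_square(n: int) -> bool:
    r = isqrt(n)
    return r * r == n

def is_rational_square(q: Fraction) -> bool:
    # Fraction is kept in lowest terms, and a rational in lowest terms is a
    # square in Q exactly when its numerator and denominator are both
    # perfect squares (by unique factorization).
    return is_perfect_square(q.numerator) and is_perfect_square(q.denominator)

# The unit circle x^2 + y^2 = 1 meets the line y = x where x^2 = 1/2.
print(is_rational_square(Fraction(1, 2)))   # False: no rational intersection point
print(is_rational_square(Fraction(9, 16)))  # True, for comparison: (3/4)^2
```

Over the real numbers, of course, the intersection point exists; that is exactly the “deep machinery” of the Nineteenth Century construction of the continuum.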

Of course, high school teachers rarely, if ever, tell their students that the geometry proofs presented to them as models are at best sketches of how proofs can be constructed. As a result, those students typically enter university with a totally false impression of what a proof is. In particular, they believe proofs are fundamentally and exclusively about truth, and that they are either right or wrong.

In reality, proofs *are* about truth, but not fundamentally, and definitely not exclusively. The key property of a proof is not that it is logically correct (it almost certainly is not, but more pertinent, how could you ever be *sure* it is?), rather that it is expressed in a manner that enables a suitably qualified reader to fill in any holes they notice, to check any steps they have any doubt about, and to correct any errors they find (as they surely will if they dig deep enough).

It’s very much like software engineering, where the most important thing about a program is not that it is bug free (it almost certainly is not), rather that – in addition to working – it is structured and annotated so that someone else can come along later and either fix bugs or else modify the code to do something else.

Ridding high school graduates of the “proofs are about logical correctness” misconception is generally a difficult (for both instructor and student) and painful (for the student) process. Just what it entails has been a focus of a study I have been making in my MOOC Introduction to Mathematical Thinking, currently being offered for the fifth time. I describe my most recent observations in a new post on my other blog, MOOCtalk.org,

*where this account continues…*

If it were decided by popular vote whether it is obvious that two lines intersect in a point, there is little doubt that the vote would be for “obvious”. What is true is that it does take a lot of work to answer.

The article is very much to the point of the difficulty of agreeing on what constitutes a proof. However, for good or bad, the disagreement becomes practically moot when the question is about HS proofs.

>The trouble is, no one has ever carried out that filling-in process.

I think you do a big disservice to the formal proof verification community here. Large parts of mathematics are being formalized in this way, and many believe that the “gold standard” of mathematical proof will someday be a proof which has been checked by a computer.
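For readers who have never seen such a formalization, here is its flavor: a proof is a term that the proof checker verifies, step by step, against the rules of a formal calculus. A toy sketch in Lean (purely illustrative, not drawn from any of the formalization projects mentioned):

```lean
-- A complete formal proof: the kernel checks every inference.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun h => ⟨h.right, h.left⟩

-- With a library behind it, even "1 + 1 = 2" is one checked line.
theorem one_plus_one : (1 : Nat) + 1 = 2 := rfl
```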

I got a number of comments like this, so I’ll make this response more general (and hence longer) to cover the various points raised.

I think you misread my article, which is about proofs as understood and used in modern mainstream mathematics. Namely, they are devices we use to understand properties of, and relationships between, mathematical objects.

True, the theorem-proof fashion in which we typically present mathematical discoveries gives the impression that the focus is on what it true, but it’s really about why something is true.

In short, mathematics is about the why, not the what.

Proofs (as used by mathematicians) are stories humans tell to humans to explain the why.

Computer constructed formal proofs are something entirely different. They can provide us with a high degree of certainty that something is true, but by their nature they don’t satisfy our thirst to understand.

For instance, the general reaction in the mathematical community to Appel and Haken’s computer proof of the Four Color Theorem was, “That’s nice, but it just shows that the question was not a good math problem.”

The computer proof of the 4CT provided a mathematical fact, but no new insights into coloring networks. (That had already come from the initial reduction.) Likewise, the subsequent formal proof did not increase our understanding of the problem. It was not intended to, the goal was to increase our confidence in the result. It certainly did to me. But then, I follow those kinds of development (as an interested outsider).

These reactions by the mathematical community don’t constitute a dismissal. They are just an acknowledgement that formal proofs simply do not interest (most) mathematicians.

A computer proof generator or proof checker can increase our confidence in the truth of an assertion. But mathematicians view proofs as a means of understanding, which means the moment they suspect a problem might have to be attacked by a computer, they are likely to look for another problem.

Hales’ proof of Kepler’s Conjecture is an obvious exception, but he backed himself into that. There will always be problems that yield partially to an understanding-driven investigation, but result in having to check a set of possibilities too large to be done by a human. I for one am glad Hales kept going at that point (with the Flyspeck Project). But the way his result was published, and the difficulties he encountered, illustrate very well how the mathematical community views computer-aided proofs. (See his articles referenced here.)

Hales is not the only example, of course. But mathematically, they are all outliers. I predict that the points of productive contact between mainstream mathematics and the formal (computer) proofs community are going to remain few and far between for the foreseeable future. Mathematicians are not going to fire up a proof generator or proof checker when they start work each day. That’s not how the mathematical community works.

The best we can do is meet for beer on a regular basis. After all, we both share the predicament that no one else really understands what we do.

This sounds very scary: how do we know that such and such a “theorem” is true, if the proofs are almost certainly full of errors?

Well, we don’t. In mathematics, as in large parts of natural science, it is an unfair fight. We can “easily” show a purported proof is false by finding an error, but we can never know for sure that it is correct. (Even if you apply a formal proof checking technique, you can never know for sure there was not an error in that process.)

What happens in practice is that once several experts in the domain have examined the argument and declared themselves to be satisfied it is correct, then the rest of the mathematical community accepts that fact and moves on. The more people look at it, the more confident we get.

However, there have been times when this process was eventually found to have gone wrong. Arguments that have been accepted for some time are subsequently found to have a fatal flaw. That’s why mathematicians are reluctant to ever say something is totally correct.

There is nothing wrong with this situation. Mathematicians don’t lose sleep worrying if “proved” really means “true”. We proceed as if it does. We stake our livelihoods on the fact (actually, an assumption!) that all the theorems in the standard undergraduate canon are correct.

There’s a spectrum here. No mathematician would question Euclid’s theorem that there are infinitely many primes. But some very good mathematicians do still question whether the Poincare Conjecture is true. That may change over time. (The second one, not the first.)

For sure, our knowledge of mathematical “truth” is more accurate and reliable than the “truth” in any other parts of our lives. But if you live by the sword of precision, you should be prepared to die by it. As mathematicians, we have to admit we can never be totally sure a proof is correct.

OK, here’s an amateur’s question: What about Euclid’s Elements?

I’ll admit that some of his proofs are not exactly crystal clear – he didn’t have the benefit of degrees of arc, and he restricted his constructions to straightedge and compass (I think that was to show that size had no bearing on the proofs – and besides, you cannot measure an angle accurately).

And, of course, that’s why he had to prove that all right angles are equal.

(On another forum, someone claimed to be able to construct a 20 degree angle. He said that’s what he got when he measured it.)

The famous I.47 is probably the most-proved theorem in mathematics (President Garfield even came up with a proof while he was in Congress). Considering that you only need one good proof, that’s probably a bit of overkill.

I like your characterization of proof as communication. Early on, I learned that a proof is something you can show to someone else – someone suitably versed in the subject – and if he agreed with all the steps, he was bound to agree with the conclusion.

That’s one of the reasons Lincoln shut himself in a room with a copy of Book I until he learned it by heart – so he could learn to construct legal arguments that would take a jury step by step to a logical conclusion.

I don’t think the Hilbert criticisms are either applicable or destructive. Euclid didn’t know – or care – about infinitesimals, or about the difference between continual and continuous. For him, there was a point everywhere you looked on a line.

The notion of “point” is a bit elusive. “Has no parts” could easily mean “isn’t there”. I’ve come to think of a point as a location, the most precise location imaginable. A bit like stars – no matter how powerful the telescope, a star is just a point.

Then along came Lobachevsky &c, who said, suppose we take a different set of definitions and axioms, and see what we get.

I will check into your MOOC sites.

You mention some proofs that are pages long. Is there a site where I could view an example of said proofs, or could you at least link to a proof so we can see what a “real” proof looks like? Thanks!

Sure. How about a modern classic, Andrew Wiles’ proof of Fermat’s Last Theorem.

And when you get to the point where Wiles draws on a result he obtained with his student Richard Taylor, you will need to check out that paper. I thought you might need to know that.😉

BTW, I have never read either paper. I take it on trust that the theorem is proved.

There’s another: In 2012, “Shinichi Mochizuki, a mathematician at Kyoto University, has released four papers on the internet describing his proof of what is known as the abc conjecture. The paper, which is 500 pages long, …”

Both that and Wiles’ contradict the notion that a proof be understandable by “someone versed in the subject” – as there is no one else with those mathematicians’ grasp of the subject.

Suppose we change the idea to read “… and I understand and accept each step, so therefore I accept the conclusion”.

People being fallible (mathematicians included), he just might be wrong about step 798 – just as the guy who calculated pi to lots of places, but made an error somewhere around the 700th digit.

http://mathworld.wolfram.com/ has a good selection of articles on proofs.

Here’s a telling quote from a math site (http://m-phi.blogspot.com/):

“However, in the sense that in fact matters for mathematicians, Mochizuki’s ‘proof’ is not (yet) a proof because it has not been able to convince the mathematical community of its correctness;…”

Prof. Devlin: you’ve given the world a new bumper sticker:

“Mathematics is about the why, not the what.”

That opens a lot of doors here. Even when I’m doing elementary calculus, I’m just following rules, manipulating symbols – something computers do rather well.

Yup, these days we have to take most new mathematical results on trust. I believe Wiles proved FLT because Katz and Coates and a bunch of other mathematicians I trust said he did.

So we are still in the realm of stories told by mathematicians to mathematicians, but in some cases only a handful are able to fully understand those stories.

There’s also a leap of trust involved with a computer proof. Let’s not forget the Pentium FDIV math bug.

Certainty is still restricted to death and taxes. Mathematics does not make the cut.

I think this overstates the case. Indeed, the *Elements* certainly passes the standard of rigor modern logicians require.

See Avigad, Dean and Mumma’s paper “A Formal System for Euclid’s Elements” (in the Dec 2009 issue of the Review of Symbolic Logic). In this paper, they give a formal calculus (named E) for Euclidean reasoning, and show that (a) it is sound and complete for ruler-and-compass constructions, and (b) nearly everything in the Elements can be directly transcribed into their calculus.

The only exceptions they found are that Euclid is sometimes cavalier about considering all the cases in a case analysis (a fact already noted in the fifth century by Proclus), and that proposition I.9 needs a few more words of justification for its construction than is given in the Elements.

Remember we are talking about the present day concept of proof here, as used by the mathematical community. In that light, an argument that makes use of [the point where a circle intersects a line] without proving that such a point exists is not rigorous.

It never occurred to Euclid or indeed anyone back then that this was something that requires proof. In fact, it required the construction of the real number continuum, which came centuries later. So in this case, filling in the missing details was a lot more than a few little niceties.

This does not render Euclid’s arguments logically incorrect. They were not. But just because they were logically correct does not mean they are rigorous proofs, since they omit steps that the intended reader (say, a professional mathematician) would find non-trivial.

If we are going to allow proofs that miss out significant steps, then I can give you a logically correct proof of Fermat’s Last Theorem that will fit in a tweet, let alone a margin. Here it is:

Let n>2. Suppose there are positive integers x, y, z such that x^n + y^n = z^n. Then 0 = 1. This is a contradiction, proving FLT.

That sequence of statements is logically correct. It just happens to miss out some steps. Some fairly hefty steps, as it turned out.

Most of us would say my tweet is not a proof, and rightly so. But now we are talking about providing enough steps for the reader to be able to follow it, perhaps with some effort, filling in any missing steps as she goes. Andrew Wiles likely requires a lot less filling in of missing steps than the rest of us. I would need fewer steps than the average high school student. And so on.

In Euclid’s time, it was not recognized there were any missing steps in his arguments, so what he wrote was regarded as a proof. Back then, it was. But it surely was not a proof in our present day sense. We would probably say it is a strategy for constructing a proof. It can be built out to yield a proof. The standard mathematical way is to construct the real (Euclidean) plane.

I don’t believe I am disagreeing with what you say. But I think the different interpretation I put on it is significant if we want to understand what mathematicians today (really) mean by proof.

Not exactly mathematics (and not something the average person can read), but perhaps worth mentioning that Russell/Whitehead famously took 300+ pages to “prove” that 1+1=2. Anyway, I think what you’re saying here about proofs is so subtle, and at variance with people’s entrenched notion of the term, that it’s hard for many to grasp. The bottom line (seems to me) is that mathematics is still integrally tied up with language and semantics, more than some might admit.

It is a realization that took me many years of living with proofs most of my working days. We absorb the standard story we are told about proofs back in high school, and those of us that choose to pursue mathematics often carry it with us through college and beyond. Part of the problem is that it offers a comforting certainty. We humans tend to hang on to anything that offers certainty. (Exercise for the reader: Find five examples of this from different walks of life. Okay, maybe that is too easy.)

A number of almost identical comments came in over the past couple of days, all on the theme that I did not discuss automated proof checkers. The reason I did not was that my focus was on proofs not as mere binary truth indicators, but as a way to achieve understanding why something is the case.

Understanding, I noted, is what mathematicians primarily look to proofs to achieve.

I did, however, address the issue of proof checkers when I responded to the first round of comments. I guess it’s possible the later commentators did not read through those initial exchanges. Since almost all in that second wave accused me of being unaware of proof checkers, it is also clear they had not read my 2005 article about Georges Gonthier’s use of Coq to obtain a verified proof of the Four Color Theorem.

Since a Google search on “keith devlin proof checker” brings that article up (for me) as number 2 behind this current blog post, it appears those aficionados of automated proof-checking do not have the same affinity for automated web search.

Even more strange, those second wave proof-checker fans also did not bother to click on the second Web-link in my original post, which brings up my 2003 article “When is a Proof?” in which I discussed Thomas Hales’ Flyspeck Project for verifying his proof of the Kepler Conjecture.

Other than making specific references to what are arguably the two best known, and spectacular, examples of automated proof checker use, I am, frankly, at a loss as to how to effectively acknowledge the role that can be played by proof checkers. (As it happens, I met and talked with proof-checking pioneer Larry Wos many years ago when I was a wide-eyed graduate student in logic, but that is not public knowledge, and I doubt Wos remembers the occasion.)

But to get back to the issue of knowing whether a proof is correct, it makes no difference whether the argument is checked by a human or a machine, we can never achieve 100% certainty that it is logically correct. The most we can obtain is greater confidence.

What this means is that proofs are *evidence that humans evaluate* in order to reach a conclusion about the truth of a mathematical statement.

If we proceed carefully, and in sufficient detail, we can often achieve so much confidence in our conclusion that we no longer harbor any doubt about the truth of a particular theorem. (Almost all the proofs encountered in a typical undergraduate mathematics course are of that nature.)

By far the most common approach mathematicians take to gain that greater confidence is to subject the proof to repeated human attempts to find an error. In general, the longer it stands up, the more confident we become.

Very occasionally, that approach proves inadequate, however, and then, as the Gonthier and Hales examples illustrate, our increased confidence may result from the use of automated techniques.

But either way, history has shown us repeatedly that we should never make the mistake of believing we, or our machines, have achieved infallibility.

Does that mean that we’re really confident about the Pythagorean theorem, but that there still just might be an error somewhere, and it’s still just a theorem (in the mathematical sense), and that someday, someone might find a mistake in one of the hundreds of proofs?

In another place, you mentioned software programs. There’s been a fair amount of work on proving programs correct, but since any program that does anything useful is bound to be thousands of lines long, with millions of possible paths, and an almost infinite number of initial conditions, I don’t think that’s going to happen any time soon.

In fact, there is no general method to decide whether an arbitrary program will eventually stop or run forever (in an infinite loop). I think it was Turing who established that.

That seems to say that the halting problem is a different sort of beast from the mathematical proof.

On another level, how do we deal with the fact that some machine proofs found better solutions to complex problems than had been established by mathematicians?

That mathematics is a form of communication, in particular a method of persuasion, has profound implications for mathematics education, even at the lowest levels.

Personally, I use the argument that the point is not to get the right answer, but to show others that your answer is correct, to justify showing each step for my freshman and remedial math students.

Hi kevin,

I feel it has more to do with perception. You present a person with a proof given out by an established professor from a good college. The gut feel is to go with the proof if it makes mathematical and logical sense.

Well, that’s not the issue I was looking at. I was articulating how proofs work irrespective of the status of the author, as in blind refereeing. But you are absolutely right that, like it or not, most people are influenced by those factors, when they are available.

My reaction here: http://mrchasemath.wordpress.com/2014/12/24/what-does-it-mean-to-truly-prove-something/

Prof. Devlin, please feel free to stop by and comment!

John

John, I think the following passage from my November 29 comment above addresses the issues you express in your blogpost:

“[I]t makes no difference whether the argument is checked by a human or a machine, we can never achieve 100% certainty that it is logically correct. The most we can obtain is greater confidence.

What this means is that proofs are *evidence that humans evaluate* in order to reach a conclusion about the truth of a mathematical statement.”

Thanks for the reply, Prof. Devlin.

“[I]t makes no difference whether the argument is checked by a human or a machine, we can never achieve 100% certainty that it is logically correct.”

So when you say this, are you making a passing reference to Gödel’s incompleteness theorems–that the consistency of a formal system can’t be checked without going outside the system?

Is this also why you say, “you’ve never seen a proof that truly fits the standard definition”?

No, my point has nothing to do with Gödel’s theorems. Those are about the limits of formal systems. I am talking about proofs as used in present-day mathematics.

You can read a proof and conclude, with certainty, “This proof is wrong, and here is why.” But there is no way you can read a proof and say “This is correct.” All a proof can do on the positive side is increase your confidence in the truth of its conclusion. Proofs are evidence that you weigh. Nothing more.

Using computers cannot eliminate this inherent uncertainty. For instance, in 1994 computers with Intel’s original Pentium chip gave this answer

4,195,835/3,145,727 = 1.333739068902037589

This identity was thus “proved” by a computer. Indeed, it was “proved” by means of an extremely simple argument, namely an arithmetic computation. But a human spotted that it was wrong at the fourth decimal place.
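Anyone can redo that division today with exact arithmetic; a quick sketch (the flawed value below is the one quoted above):

```python
from decimal import Decimal, getcontext

getcontext().prec = 25
correct = Decimal(4195835) / Decimal(3145727)      # exact decimal division
flawed = Decimal("1.333739068902037589")           # what the flawed chip returned

print(correct)           # 1.3338204491... (the true quotient)
print(correct - flawed)  # the two values part company at the 4th decimal place
```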

Keith: Would you say then that the Pythagorean Theorem is not proved, but only that we have extreme confidence that it’s true?

(Proof by Appeal to Authority: A U.S. President came up with a proof. Therefore it must be true.)

Strictly speaking, yes. But that kind of knowledge is way more certain than pretty well anything else. I’m not proposing solipsism; on the contrary, I am trying to be totally practical. Hilbert’s observations notwithstanding, it would be unreasonable to say the theorems in Euclid’s *Elements* are not proved. The arguments are so simple and short. But that just is not the case in much of present day mathematics. I have confidence in the Wiles-Taylor proof of Fermat’s Last Theorem and Perelman’s proof of the Poincare Conjecture. (Based on what knowledgeable experts have said.) But I would wager that in both cases the published proofs have errors, and are thus not, strictly, valid proofs. I don’t see how anyone can conclude with total certitude that the claimed theorems are indeed correct. But on a practical level, both are now accepted into the mathematical canon, and reasonably so. It’s all a matter of degree of certainty.

My interpretation of mathematical proof is related to my view of mathematicians as gifted programmers. Imagine a programmer who uses the ancient method of exhaustion to calculate the circumference of a circle. Unfortunately, such programs tend to be long, hence prone to breakdowns. Then a gifted programmer comes along and reduces the lengthy code to one line, C = π × d. To demonstrate that the results produced by such shortened programs are reliable, he uses logic-based proofs. What the proof demonstrates is WHY the proposed algorithm is reliable. One can then say that “Proofs are stories that convince suitably qualified others that a certain statement (method?) is true.”
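The “long program versus one-liner” contrast can be made concrete. Here is a toy sketch (my illustration, not the commenter’s code): circumference by Archimedes-style side-doubling next to the one-line formula.

```python
import math

def circumference_by_exhaustion(d: float, doublings: int = 20) -> float:
    # The "long program": inscribe a hexagon in a circle of radius 1,
    # then repeatedly double the number of sides, Archimedes-style.
    n, s = 6, 1.0  # a hexagon inscribed in a unit circle has side length 1
    for _ in range(doublings):
        # Numerically stable form of s' = sqrt(2 - sqrt(4 - s^2))
        s = s / math.sqrt(2 + math.sqrt(4 - s * s))
        n *= 2
    return n * s * (d / 2)  # polygon perimeter, scaled to diameter d

def circumference_by_formula(d: float) -> float:
    return math.pi * d  # the gifted programmer's one-liner: C = pi * d

print(circumference_by_exhaustion(1.0))  # ≈ 3.14159265358...
print(circumference_by_formula(1.0))
```

The proof that the one-liner is reliable is precisely the “why” the comment describes: the polygon perimeters converge to π·d, so the short program can replace the long one.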

A question about proofs in general. I’ve been of the opinion that your statement “.. that convince suitably qualified others …” is the mark of a good proof. But then, Prof Devlin’s opening “a statement S is a finite sequence of assertions S(1), S(2), …, S(n)” seems to suggest that the sequence itself (provided that S(n) follows from S(n-1) and S(n-k)) might be independent of another examiner.

On the other hand, Euclid seems to be trying to convince the reader (or someone): “I say that …. therefore …”

There are two notions of proof being considered here, the formal model of a proof as a logically connected sequence of statements, independent of a reader, and the social construct used by one mathematician to convince others. This distinction is what my post is all about.

Thank you, Professor Devlin – great stuff here. I am looking forward to listening to your lectures & have signed up for this Fall’s Coursera class.

As for “proofs”, this always gives me a chuckle:

http://quod.lib.umich.edu/cgi/t/text/pageviewer-idx?c=umhistmath&cc=umhistmath&idno=aat3201.0002.001&frm=frameset&view=image&seq=126

“The above proposition is occasionally useful.”

I haven’t read Russell & Whitehead’s *Principia*, but Russell’s *Introduction to Mathematical Philosophy* inspired me to broaden my mathematical thinking.

Best!

In today’s news: Terence Tao (UCLA) published a proof of the Erdős discrepancy conjecture. Previously: “Last year, Alexei Lisitsa and Boris Konev of the University of Liverpool, UK used a computer to prove that the discrepancy will always be larger than two. The resulting proof was a 13 gigabyte file.” That seems a bit extreme.

I understand, prof, that with the work of Thomas Kuhn nothing is now for sure. But is this situation going to last forever, or could we, with better knowledge and understanding (and maybe the use of computer proofs, as one comment mentioned), move forward in the “truthfulness” of proofs? I of course love and agree that “if you live by the sword of precision you should be prepared to die by it as well”, but my query is whether there is any scope for the situation to “improve” in the coming years. Your comment would be highly helpful. Thanks.