110

There are many theorems in mathematics that have been proved with the assistance of computers; take the famous four color theorem, for example. Such proofs are often controversial among some mathematicians. Why is that?

In my opinion, shifting from manual proofs to computer-assisted proofs is a giant leap forward for mathematics. Other fields of science rely heavily on computation. Physics experiments are simulated on computers. Chemical reactions are simulated on supercomputers. Even evolution can be simulated on a sufficiently powerful computer. All of this can help us understand these phenomena better.

But why are mathematicians so reluctant?

Gerard
  • How to deal with infinity? – newbie Jan 09 '14 at 16:09
  • @newbie: What do you mean? – Gerard Jan 09 '14 at 16:11
  • 26
    A proof is a social process. – copper.hat Jan 09 '14 at 16:12
  • 2
    @copper.hat: Define "social process", exactly. – Gerard Jan 09 '14 at 16:14
  • I don't see how a computer can be used in proofs involving infinity, for example density arguments. I think the point is that many statements simply cannot be proved by algorithms iterating on a computer. – newbie Jan 09 '14 at 16:24
  • 8
    @newbie You are assuming the only thing a computer can do is check numerical examples. It could also do formal logic, just like a human, and deal with infinity as we do. – Ryan Reich Jan 09 '14 at 16:28
  • 27
    The part of a proof you let the computer do is extremely hard to check for correctness (except in simple cases). Software has a pronounced tendency to have bugs. You have to check the programme for correctness, and you have to check the compiler for correctness. And the hardware too, it is not unheard-of that hardware is buggy. – Daniel Fischer Jan 09 '14 at 16:33
  • @RyanReich I did not mean $\infty$, but rather proofs involving statements such as '$\forall x \in X$', where $X$ has infinitely many elements. – newbie Jan 09 '14 at 16:35
  • 25
    The veracity of a conjecture is often less important than the techniques required to demonstrate it. We could, for instance, happily assume the Riemann Hypothesis is true (many already believe it is true on faith alone). Yet doing so does nothing to enhance our understanding of it. – Emily Jan 09 '14 at 16:35
  • 1
    @newbie My point is that whatever technique you imagine a human would use to prove this statement, you can program a computer to use as well. After all, we don't check infinitely many cases in our heads. – Ryan Reich Jan 09 '14 at 16:42
  • 47
    @DanielFischer Our heads have a pronounced tendency to have bugs too. – Red Banana Jan 09 '14 at 16:55
  • 3
    @GustavoBandeira Yes. But those bugs tend to be easier to find (and rectify). Well, those that are merely typos and like mistakes, and holes in a "proof". Widely shared conceptual bugs aren't. – Daniel Fischer Jan 09 '14 at 17:02
  • 5
    @Gerard *Why are mathematicians so reluctant?* - Mathematicians aren't reluctant: **some** mathematicians are reluctant. The field of computer-assisted proofs has some people who believe in it. This is kinda the same for everything: you have a proposition, then you have one belief team and one disbelief team (each armed with a set of beliefs to defend its cause), and then you have a predominant trend. Problems happen (one side could judge the other with superficial understanding). Try both sides and be happy. – Red Banana Jan 09 '14 at 17:29
  • 12
    @DanielFischer "The part of a proof you let the computer do is extremely hard to check for correctness". That depends. E.g., Metamath and HOL-Light have a really simple easy to understand trusted kernel with multiple cleanroom implementations in different programming languages, some with independent correctness proofs, which leaves negligible room for incorrectly-approved theorems. See for example section 4.1 in http://us.metamath.org/#book, and http://www.cl.cam.ac.uk/~jrh13/papers/holhol.pdf, and http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.4179. – MarnixKlooster ReinstateMonica Jan 09 '14 at 18:45
  • 4
    [Here is a question](http://math.stackexchange.com/questions/601940) with several answers, one giving a computer proof and one giving a human proof. The answers may serve to illustrate the pros and cons of either approach. – ccorn Jan 09 '14 at 23:42
  • 4
    "a good mathematical proof is like a poem—this is a telephone directory!"; Kenneth Appel on the four color theorem. [http://en.wikipedia.org/wiki/Kenneth_Appel] – K. Rmth Jan 10 '14 at 20:29
  • 2
    @RyanReich Actually, as in the four colours proof, the computer did not prove it by itself. The computer showed that for this particular set of maps (1936 of them) the theorem holds. This can be brute-forced. Then they proved by hand that if it holds for all of these, then it must hold for all maps. – Cruncher Jan 10 '14 at 21:06
  • 3
    For the same reason bringing a gun to a knife fight is controversial! – wim Jan 10 '14 at 22:01
  • 1
  • @GustavoBandeira, so add the bugs in programmers' heads to the ones in the hardware and see what you're left with. `:)` – JMCF125 Jan 11 '14 at 16:03
  • @JMCF125 You have no idea of what you're talking about. *Bugs in heads + bugs in computers* is meaningless. – Red Banana Jan 11 '14 at 16:28
  • @Gustavo, oh is it? If the programmers think wrongly ("bugs in heads", making bad algorithms), and besides that make mistakes implementing those algorithms ("bugs in computers", of course caused by "bugs in heads", but "bugs in computers" they are nonetheless), then there are both. How is that meaningless? – JMCF125 Jan 11 '14 at 16:34
  • 2
    @JMCF125 My argument is a refutation of Daniel Fischer's proposition: *Computers may have bugs*. I argued that our heads can have the same issue, so it's a poor reason for avoiding the use of computers for such a task. The meaninglessness of your sentence comes from its construction: you're presuming that bugs will happen: bug in head = 1; bug in computer = 1; bugs in both = 2. But that scenario is actually not that relevant. – Red Banana Jan 11 '14 at 16:42
  • @GustavoBandeira, what I said is more of a play on words than anything else, and certainly not a refutation. I did agree with what you say, and upvoted it in fact, I was just noting that the "bugs in heads" also end up as "bugs in computers" (although not as often). – JMCF125 Jan 11 '14 at 16:49
  • @JMCF125 There are some scenarios that are open to exploration, for example: human bugs could be corrected with the aid of a computer; computer bugs could be fixed through human verification; etc. Your scenario presumes only that bugs will happen in both, so that by avoiding the computer we would have 50% fewer bugs. – Red Banana Jan 11 '14 at 16:49
  • @GustavoBandeira, that's quite a stretch from my short comment. `:)` I didn't say the likelihoods were 50-50, and didn't even suggest which would likely be smaller, since the repetitive work where humans tend to err is where computers excel. I'd like to discuss further, but this is getting long. – JMCF125 Jan 11 '14 at 17:10
  • I think computers do not understand INFINITY; am I wrong? Also, unfortunately, a numerical proof is not valid, otherwise I would have found an operator which yields the Riemann zeros :D :D – Jose Garcia Jan 12 '14 at 21:43
  • some analysis/insights on this question with many refs/leads (incl related se questions) in this essay [adventures & commotions in automated thm proving](https://vzn1.wordpress.com/2013/05/03/adventures-and-commotions-in-automated-theorem-proving/) – vzn Feb 13 '14 at 03:38

12 Answers

115

What is mathematics? One answer is that mathematics is a collection of definitions, theorems, and proofs of them. But the more realistic answer is that mathematics is what mathematicians do. (And partly, that's a social activity.) Progress in mathematics consists of advancing human understanding of mathematics.

What is a proof for? Often we pretend that the reason for a proof is so that we can be sure that the result is true. But actually what mathematicians are looking for is understanding.

I encourage everyone to read the article On Proof and Progress in Mathematics by the Fields Medalist William Thurston. He says (on page 2):

The rapid advance of computers has helped dramatize this point, because computers and people are very different. For instance, when Appel and Haken completed a proof of the 4-color map theorem using a massive automatic computation, it evoked much controversy. I interpret the controversy as having little to do with doubt people had as to the veracity of the theorem or the correctness of the proof. Rather, it reflected a continuing desire for human understanding of a proof, in addition to knowledge that the theorem is true.

On a more everyday level, it is common for people first starting to grapple with computers to make large-scale computations of things they might have done on a smaller scale by hand. They might print out a table of the first 10,000 primes, only to find that their printout isn’t something they really wanted after all. They discover by this kind of experience that what they really want is usually not some collection of “answers”—what they want is understanding.

Some people may claim that there is doubt about a proof when it has been proved by a computer, but I think human proofs have more room for error. The real issue is that (long) computer proofs (as opposed to something simple, like checking a numerical value with a calculator) are hard to keep in your head.

Compare these quotes from Gian-Carlo Rota's Indiscrete Thoughts, where he describes the mathematicians' quest for understanding:

“eventually every mathematical problem is proved trivial. The quest for ultimate triviality is characteristic of the mathematical enterprise.” (p.93)

“Every mathematical theorem is eventually proved trivial. The mathematician’s ideal of truth is triviality, and the community of mathematicians will not cease its beaver-like work on a newly discovered result until it has shown to everyone’s satisfaction that all difficulties in the early proofs were spurious, and only an analytic triviality is to be found at the end of the road.” (p. 118, in The Phenomenology of Mathematical Truth)

Are there definitive proofs? It is an article of faith among mathematicians that after a new theorem is discovered, other simpler proofs of it will be given until a definitive one is found. A cursory inspection of the history of mathematics seems to confirm the mathematician’s faith. The first proof of a great many theorems is needlessly complicated. “Nobody blames a mathematician if the first proof of a new theorem is clumsy”, said Paul Erdős. It takes a long time, from a few decades to centuries, before the facts that are hidden in the first proof are understood, as mathematicians informally say. This gradual bringing out of the significance of a new discovery takes the appearance of a succession of proofs, each one simpler than the preceding. New and simpler versions of a theorem will stop appearing when the facts are finally understood. (p.146, in The Phenomenology of Mathematical Proof).

In my opinion, there is nothing wrong with, or doubtful about, a proof that relies on a computer. However, such a proof is in the intermediate stage described above: it has not yet been rendered trivial enough to be held in a mathematician's head, and thus the theorem being proved should be considered still a work in progress.

ShreevatsaR
  • 1
    If computer-assisted proofs are an intermediate stage on the journey to a definitive proof, then why the controversy? Surely mathematicians must realize that once a conjecture becomes a theorem, no matter the manner of proving, we can always get a 'trivial proof' (after enough thought), as you mentioned above. The computer-assisted proof is just the first step of the process, in essence assuring us that the theorem is true. They are just replacing the 'clumsy' first proofs (although, I admit, computer-assisted proofs are no less clumsy). – Gerard Jan 10 '14 at 02:32
  • 11
    It's about new, elegant maths in the proof. The 4 color theorem proof was mostly about enumerating a very large set known to be representative of all cases and showing that none of them violate the theorem. And that's all good and fine, it is indeed a proof, but ultimately the interesting part of that proof was the part where the large set of possibilities represented all possibilities, and that was understood long before the final proof, so no really new understanding emerges that way. What is preferred is a small elegant *principle*, not a search of input spaces – Shayne Jan 10 '14 at 02:53
  • @Shayne +1 I really agree that the most elegant part is often the narrowing of the solution space, but there are those of us who take great joy in the "last mile". The jump from the statement "It is possible to do this" to "I can do this in one month with a normal computer" can also be a source of grand elegance. Methods like the FFT are beautiful in themselves, and are sometimes part of the engine that allows brute force to be practical for the last mile of a proof. In a way, it's narrowing the solution as well (in # steps rather than space). There is one mathematics, and it's everywhere! – user Jan 10 '14 at 11:04
  • 3
    @Gerard: Partly, the controversy may be from "old-school" mathematicians who don't understand or are sceptical of computers. Let's set that aside. For the rest, though some people may cast the question as being whether the conjecture has been proved, the question may be whether the person can be considered to have proved it. A proof is a social process: the "prover" also has the responsibility of supplying understanding/enlightenment to the rest of the community. See [this great article](http://projectwordsworth.com/the-paradox-of-the-proof/) on Mochizuki's claimed proof of the ABC conjecture. – ShreevatsaR Jan 10 '14 at 11:18
  • 8
    I'm reminded of the times when I see someone working on an algebra problem, eventually narrowing the solution down to two cases... and then giving up and asking "how do we find the answer?" rather than just checking them. Sure, 2 is much smaller than 2000, but it *is* just a matter of scale -- at some point, trying to narrow things down further just makes them more complicated and abstruse. I boldly assert that there don't exist enough simple proofs for there to be a simple proof of every theorem! –  Jan 10 '14 at 16:37
  • @Shayne: My understanding of the story of Appel-Haken proof differs from your description. The bulk of their efforts were spent on finding the set of unavoidable configurations. The computer was used to check these configurations for reducibility. When configurations proved not to be reducible, or when the configuration was too hard for the computer to check quickly, the unavoidable set was modified so as to replace the offending configurations with, hopefully, better ones. There was no guarantee, of course, that this process would terminate. The finding of the unavoidable set... – Will Orrick Jan 10 '14 at 18:15
  • @Shayne: …that works came very close to the completion of the proof. – Will Orrick Jan 10 '14 at 18:15
  • @Hurkyl `I boldly assert that there don't exist enough simple proofs for there to be a simple proof of every theorem!` That's interesting actually. Pigeonhole principle :) – Cruncher Jan 10 '14 at 21:14
  • 3
    I really like this answer. I would add though that the time scale in which a computationally intensive proof is an "intermediate stage" could be very long, e.g. longer than one human lifetime. – Pete L. Clark Jan 11 '14 at 04:32
  • @Gerard "we can always get a 'trivial proof' (after enough thought)" Uhm, can you prove that to me? I'll accept a computer-assisted proof ;) No seriously, there are things we can't prove, there are even things we can't prove to be unprovable. So I doubt there's a simple explanation for everything. Something mathematicians had to realize a while ago is that their world is almost as messy as the "real" one. – Christian Jan 12 '14 at 15:12
  • It's entirely plausible I misunderstood the nature of the proof. – Shayne Jan 19 '14 at 21:57
  • "Every mathematical theorem is eventually proved trivial." This is hardly clear, because it's not clear what "trivial" means. The last paragraph of the second quote talks about simpler proof, and then this answer concludes by saying that a proof that relies on a computer is in an intermediate stage. This isn't true. There *do* exist computer produced proofs which are simpler than any humanly produced proofs. And were weren't at an intermediate stage when McCune got EQP to prove the Robbins conjecture or when Wos, Fitelson, and Ulrich got OTTER to prove that XCB was a single axiom. – Doug Spoonwood Jul 02 '14 at 19:49
  • Shorter and simpler proofs of those *may* get found, but the question of what comes as the shortest or simplest proof of a theorem consists of a different question than whether a system has a theorem or does not have a theorem. This "Often we pretend that the reason for the proof is so that we can be sure that the result is true. But actually what mathematicians are looking for is understanding." sets up a false dichotomy when a mathematician can have an interest in both. – Doug Spoonwood Jul 02 '14 at 19:53
24

This answer began as a sequence of comments to a great article linked by Joseph Geipel in his answer. Eventually I decided to leave it as an answer in its own right.

It is a great article: I found it to be very balanced and accurate, and they did a good job of finding the right people to interview: thoughtful, veteran mathematicians at several points on the computers/no-computers spectrum. (Well, to be fair the "no computers guy" was rather gentle in his expression of his views. If they had looked hard enough they could probably have found some eminent mathematician to take a more extreme "computers are gross" stance. I didn't miss hearing that viewpoint!) But I didn't see anything in the article defending the position that computer-assisted proofs are inherently controversial. I think that is in fact a backward position that very few contemporary mathematicians take and that the profession as a whole has moved past.

My former colleague Jon Hanke makes a great point in the article about not blindly trusting computer calculations. But (as Jon well knows) that point can and should be made much more widely: it applies to any long, difficult calculation or intricate casewise argument for which the full details have not been provided or with which the community of referees chooses not to fully engage.

Let me respond briefly to some of the arguments expressed here (and certainly elsewhere) against computer assisted proofs.

The goal of mathematical proof is increased insight and understanding, and computer-assisted proofs just give us the answer.

There is something to this sentiment, but it is aiming at the wrong target. Mathematics has a venerable tradition of long, intricate computational arguments. Would it be better to have shorter, more conceptual, more insightful proofs instead? Of course it would! Everyone wants that. But proving theorems is really, really hard, and the enterprise of mathematics should be viewed as an accretion of knowledge over time rather than a sequence of unrelated solutions to problems.

"Pioneer work is clumsy" someone famous said: the first proof of an important result is very often not the best one in any way: it is more computational, or longer, or harder, or uses extraneous ideas,...,than the proofs that the community eventually finds. But finding the first proof -- even a very clumsy one indeed -- is clearly one of the most important steps in the accretion process. Gauss's first proof of quadratic reciprocity was by induction! But he proved it, and in so doing it opened the door to later and better proofs (including another six or so of his own). Gauss's Disquisitiones Arithmeticae is full of what to contemporary eyes looks like long, unmotivated calculations: nowadays we can give proofs which are shorter and less computational (most of the time: Gauss was such a blazing pioneer that he found many results that he had "no right to find", and his work retains a distinctly nontrivial aura to this day).

People who argue against computer assisted proof for this reason seem to have the wrong idea about the culture of mathematics. One might naively worry that the Appel-Haken long computational proof of the Four Color Theorem would have stopped people from thinking about this problem and that for ever after we would be stuck with this long, uninsightful (apparently; I am not an expert here) argument. But of course that's not what happened at all: their work increased the interest in the Four Color Theorem, and by now we have a still-computer-assisted but much simpler proof by Robertson, Sanders, Seymour, and Thomas, as well as a fully formalized proof by Gonthier. (One of the main differences in their proof from the original proof -- which I did not fully appreciate until I got some helpful comments on this answer -- is that the Appel-Haken proof also had extensive and poorly documented hand computations. This then becomes an instance of "doing one's computational business properly" as discussed elsewhere in my answer.)

A key point is that you don't need a computer to give a long, uninsightful, calculational proof: lots of mathematicians, including some of the very best, have offered such proofs using hand calculation. Because computers are so much better at calculations than human beings, we can use computers to calculate in different and deeper ways than is possible by hand.

Doron Zeilberger likes to exhibit single screens of code that prove more general versions of theorems that appear in published papers, often with lengthy (and certainly computational) proofs. Sometimes though Zeilberger (whom I have corresponded with and I hold to be a truly great mathematician; he has, though, decided to take the extreme stance in this particular debate, which I feel is not always to the credit of his cause) can be disingenuous in describing the origin of this code: he looks forward to the day when his computer will itself read and respond to the work of these human colleagues and figure out for itself what calculations to do. It is reasonable to expect that something like this will occur at some point in the future. But he sometimes likes to facetiously pretend that that's what's happening in the present. Of course it isn't: Zeilberger is brilliantly insightful on how to find exactly the right routine calculation which, if it works, will essentially immediately prove a big theorem. What he's doing is actually much more impressive than he makes it look: he's a little like Harry Potter using real magic to do what appears to you to be a card trick. But he's making a very important point: what you can do with computers is absolutely confluent with the goals of proof and progress in mathematics, especially increased conceptual understanding.

Computer proofs are less trustworthy than human proofs, because we cannot check what the computers are doing.

This argument really is twaddle, it seems to me. First of all, computers are as good as the people who program them, and there are good and bad ways to do one's programming business (again, this is Jon Hanke's point). What is frustrating (to Jon, and also to me) is that the mathematical community has been slow to regulate this programming business, to the extent that it is possible to publish a paper saying "Proof: I got my computer to do this calculation. (You can write to me if you want to see the details.)" Is that good practice? Of course not. But it's not particular to computers. It is indicative of a larger pattern of carelessness and disdain that we have for calculations over (easier, in many cases) big ideas. Many of us when refereeing a paper -- even an important one -- are too lazy or too snobbish to actually roll up our sleeves and check a calculation. (I have been a referee for a paper which turned out to have a significant error, coming from a calculation that I was too snobbish to check. At first I was actually relieved to hear that the mistake was "only in a calculation": in some sense it hadn't really gotten by me. And then I realized how thoroughly stupid that was, and that the error "hadn't really gotten by me" in the same sense that a ball doesn't really get by the shortstop who has decided that he won't even make a play for it: in some narrow technical sense it may not be an error on my part, but the outcome is the same as if it were. Defensive indifference should not be a proud mathematical position.)

If you do your computer proof well it becomes so much more reliable than a long hand calculation that probably no human being will even seriously claim to have checked. Just ask Thomas Hales, who proved the Kepler Conjecture and published it in the Annals...but the referees admitted that they couldn't check all the details, so eventually they gave up and published it anyway. He has spent a lot of time since then working on a proof that can be checked by a computer proof assistant in the technical sense. Of course this type of proof is more reliable than a messy hand calculation: isn't it just silly to think otherwise?

The flip side of this coin is that many results which are held up as being pillars of conceptual mathematics have proofs that are not perfectly reliable. In mathematics we have a tradition of not speaking badly of each other's work in a subjective way. I can't publish a paper saying that I find your paper to be fishy and I encourage others not to accept it. I need to actually say "Theorem X.Y is faulty". That gentility has negative consequences when you combine it with other bad practices; nevertheless overall I really like it. But for instance when in the 1980's people talked about the grand completion of the classification of finite simple groups, they were being very gentle indeed. There was a key part of the proof that was never published but appeared only privately in an incomplete manuscript about 200 pages in length. In an interview about ten years later (I believe), Serre made the point -- nicely, of course -- that a 3000 page proof in which 200 pages are missing [and missing because there may be a snag somewhere!] is really not a complete proof, and he speculated rather wryly that the question was not whether there was a gap in the proof -- of course there was -- but whether the gap was large enough to allow a further finite simple group to pass through it! Things were later patched up better in the "second generation proof", and there is a better -- shorter, more conceptual, more powerful -- "third generation proof" coming. Pioneer work is clumsy.

I honestly think that if circa 1982 you believed more in the classification of finite simple groups than the four color theorem then you would simply be showing religious prejudice. There is no rational justification.

Again, the key point here is that the dichotomy between computers / not computers is a very false one: if we do our business correctly, we can use computers to make many (I won't say "most" or "all", but who knows what the future will bring) of our arguments more reliable.

Added: Thanks to some comments from Will Orrick, I looked (for the first time) at the RSST paper. Their introduction raises some issues that I was unaware of. In particular they say that they were motivated by doubts about the validity of the Appel-Haken proof. For this they cite two reasons (I may as well quote):

(i) Part of the A&H proof uses a computer and cannot be verified by hand, and (ii) even the part of the proof that is supposed to be checked by hand is extraordinarily complicated and tedious, and as far as we know, no one has made a complete independent check of it. Reason (i) may be a necessary evil, but reason (ii) is more disturbing...

I find it remarkable how closely these comments parallel the points I made in my answer. As for (i): sure, yes, we would like a pure thought proof. As yet we don't have one, and a computationally intensive proof is much better than no proof at all. As for (ii): that is exactly the point that Jon Hanke is still trying to make today! In fact, the real issue with their proof is that given that it is highly computational, they did not use computers as much as they should have. (I hope it is clear that I am not really criticizing Appel-Haken. As I said several times, pioneer work is clumsy, and their work was pioneer work in a very strong sense.) The latter computer assisted proofs really do let the computers do the computation, which especially in the case of Gonthier's proof is a big improvement.

Pete L. Clark
  • 2
    Dear Pete, Thanks for posting this interesting and thoughtful answer. Cheers, – Matt E Jan 11 '14 at 02:21
  • Thanks for this. You've made the points that, I think, needed to be made. The assertion in the Wikipedia article that RSST were motivated by doubts surrounding the AH proof comes from the RSST article itself. Here's a quote from the introduction: – Will Orrick Jan 11 '14 at 03:01
  • "Unfortunately, the proof by Appel and Haken (briefly, A&H) has not been fully accepted. There has remained a certain amount of doubt about its validity, basically for two reasons: (i) part of the A&H proof uses a computer and cannot be verified by hand, and (ii) even the part of the proof that is supposed to be checked by hand is extraordinarily complicated and tedious, and as far as we know, no one has made a complete independent check of it." – Will Orrick Jan 11 '14 at 03:02
  • 2
    I'm not sure to what extent the RSST proof is more insightful. They improved the AH proof but left the essence the same, namely a discharging procedure to generate a set of unavoidable configurations, combined with a computer proof of reducibility of these configurations. The sad fact is that it may actually be true that the AH proof put a damper on work in the field. AH (and some rival groups) published in 1977 and 1978. RSST, publishing in 1997, refer in their bibliography to only one result published after that time. – Will Orrick Jan 11 '14 at 03:17
  • @Will: thank you for your comments. Unfortunately in that portion of my answer I chose the most (in)famous example of a computer proof, but one in which I have little expertise. I did look up the RSST paper just now. It confirms what you've said, and I've made changes accordingly. – Pete L. Clark Jan 11 '14 at 03:31
  • As to why the RSST proof is more "insightful": well, that's a vague term, so I took it out. But it certainly incorporates mathematical improvements: namely a quadratic time coloring algorithm rather than the quartic time algorithm of Appel-Haken. – Pete L. Clark Jan 11 '14 at 03:32
  • "The sad fact is that it may actually be true that the AH proof put a damper on work in the field." I wonder why you say that. Their proof was revisited and simplified by RSST. More recently a third proof has been given by Gonthier using the Coq proof assistant: this is a major advance. It seems hard to document a lack of activity, but...what do you have in mind? Certainly I don't believe that these computer proofs have stopped people from searching for more conceptual proofs: RSST are some of the very top people in the field, and they clearly were motivated to spend time on the problem. – Pete L. Clark Jan 11 '14 at 03:36
  • 1
    @Pete: I find it curious that we see things so differently. According to the account on page 224 of Robin Wilson's book, "Four Colors Suffice", Frank Allaire, who had been working along similar lines as Appel and Haken, actually completed his own proof, and had a method that was more efficient in some respects, but his work was never published in full. This is discussed by E. R. Swart in his [1980 article, "The philosophical implications of the four-color problem"](http://www.maa.org/sites/default/files/pdf/upload_library/22/Ford/Swart697-707.pdf). There were... – Will Orrick Jan 12 '14 at 12:55
  • ...other groups who had been working towards a proof as well; it's not clear what became of their efforts. – Will Orrick Jan 12 '14 at 12:55
  • The way it looks to me is that here was a field in which incremental progress was being made every few years, improvements in reducibility testing, for example. After the appearance of the final proof, however, it took nearly 20 years - a generation - for a full verification to appear. (Robin Wilson does mention Ulrich Schmidt, who spent a year in the early 80s verifying 40% of the discharging calculations in the AH paper.) Contrast this with the efforts that were made after the appearance of Perelman's proof, or Zhang's result on gaps between primes. – Will Orrick Jan 12 '14 at 12:56
  • One way of understanding this is that the need to assemble the necessary hardware and software infrastructure was too high a barrier to entry. Those who already possessed the tools were understandably exhausted at the time, and the required effort was just too much for anybody new to pick up the baton. It had to wait for a new generation, and much better and cheaper hardware, for this to happen. – Will Orrick Jan 12 '14 at 12:56
  • At this point, nearly four decades on, there do not appear to be hints of an alternative proof strategy. I don't see that Gonthier's proof changes things much. That work is clearly a major advance in computer assisted proof, but it's not clear that it's an advance in map coloring. As I understand it, it is a formalization of the RSST proof. (I don't mean to imply that the formalization process didn't require non-trivial changes in the proof.) – Will Orrick Jan 12 '14 at 12:57
  • 1
    @Will: Thanks for your additional comments. I don't see that they express a view so different from mine. First of all I agree that after all this time we still do not have a short, conceptual proof of 4CT. That such a proof will one day come is sort of one of the dogmas of orthodox mathematics (so I would like to believe it too, and maybe I do), but the situation is definitely a good test case for the reality that many mathematical theorems will not in our lifetime have accessible, non-computational proofs. – Pete L. Clark Jan 12 '14 at 17:44
  • To me this makes the value of computational proofs greater, not less: in practice, most individual contemporary mathematicians need "black boxes" -- i.e., things that they trust to be true and don't have time to check -- and we would like to be sure that these black boxes are not going to explode. – Pete L. Clark Jan 12 '14 at 17:47
  • 1
    Imagine for instance that there was a completely computational but similarly conceptually opaque proof of the Riemann Hypothesis. This would be amazing, and it would change the way mathematicians work. Whenever I state a theorem that says "conditionally on RH" I feel bad about it and have a temptation to omit it entirely: we are trying to figure out what is actually true, not just what is true conditionally on the truth of other difficult things. – Pete L. Clark Jan 12 '14 at 17:48
  • About comparing the proof of 4CT (which I don't know as much about as you do, honestly; so when we differ in our views, I take yours seriously) to other results: yes, Poincaré-Perelman and Zhang were jumped on and checked quickly. The former was not instantaneous: it took at least a year, the way I remember it. The latter was surprisingly fast: this is an unusual instance of a major breakthrough that was not announced to the community until it was actually published (and I hear that Annals acted very quickly in order to do so). – Pete L. Clark Jan 12 '14 at 17:51
  • But I think that that means that Zhang's work in particular, although of the highest level of importance, was in fact *relatively easy to understand* (which makes it even better, of course!). Compare to the situation of finite simple groups, which has a very similar "20 years of not being checked by anyone" phenomenon, and compare to the case of Kepler-Hales, which the same journal published but admitted that it didn't check. – Pete L. Clark Jan 12 '14 at 17:53
  • So 4CT really is *harder* than Poincaré-Perelman, it seems. Moreover, the 4C problem is not as esteemed within the mathematical community as Poincaré: you must admit that the average mathematician probably knows about the problem and the proof mainly because of the notoriety of the Appel-Haken proof. In my answer I said that Appel-Haken's work made people much more interested in the problem: isn't that clearly true? – Pete L. Clark Jan 12 '14 at 17:57
  • You emphasize that the progress on the proof itself has been rather meager. Yes, I agree with that. What I don't (yet) understand is your claim that the Appel-Haken proof put a damper on the work in this field. Okay, some other people did some things that were very close to Appel-Haken and decided not to publish them. Let's up the ante: are you suggesting that without Appel-Haken, more people would be trying to *prove* 4CT? If so, why do you think so? – Pete L. Clark Jan 12 '14 at 18:00
  • @Pete: I want to make sure you know that I really liked your answer and agree with most of it, as well as with most of your subsequent comments. My work sometimes involves proving things, and these are often computer proofs. My question for you is, do you think that, in general, the proof of a major open conjecture increases or decreases efforts to find alternative proofs? I would think that it would almost certainly decrease the amount of interest in finding other proofs. The rewards for being first are much greater than the rewards for finding improvements... – Will Orrick Jan 14 '14 at 16:32
  • ...to an existing proof. Of course, interest in the techniques used in the proof often goes way up. After all, those techniques were powerful enough to solve this big problem. Further development of those techniques may ultimately lead to a simpler proof. But even that isn't always true, especially if the techniques appear to be specific to the particular problem and are seen as having shut down the whole area of inquiry. – Will Orrick Jan 14 '14 at 16:32
  • Now I can see you arguing that if a previously obscure question suddenly became famous because of the scandal of having been proved by a computer, that might be an exception. I could imagine a mathematician never having heard of the four-color problem, upon hearing that 1800+ cases had to be checked, saying to themselves "I could do better than that." – Will Orrick Jan 14 '14 at 16:33
  • While I agree that Poincaré was certainly more celebrated among professional mathematicians, I don't think four-color qualifies as a previously obscure problem. It was a staple of the recreational mathematics literature for more than a century. Henry Dudeney wrote about it; Martin Gardner wrote about it - in fact it appeared in one of his columns in the year before the proof was announced, and not in an expository way, but as the subject of an April Fools joke that only worked if the audience was already well-acquainted with the problem. He stated that he received many manuscripts... – Will Orrick Jan 14 '14 at 16:34
  • ...from amateur mathematicians claiming to have proved it. Many professional mathematicians grew up reading the recreational literature, and were certainly aware of it. There were two false proofs in the 19th century, both of which were accepted for more than a decade, and both of which opened new research areas. Many areas of graph theory grew, in part, out of efforts to solve the problem. Some big names worked on it, including Birkhoff, Whitney, and Lebesgue. Coxeter apparently thought about it. – Will Orrick Jan 14 '14 at 16:34
  • By the time the proof appeared, people knew the problem was hard, and not likely to yield without new ideas. Because of the nature of the AH proof, I imagine that, if someday someone does come up with an elegant conceptual proof, that achievement will be rightly celebrated, perhaps as much as the original proof. Having said that, I wonder why - it seems to me - there is less being published about the problem now than there was in the decades prior to the proof. I think that some people have taken the view that the AH proof shows the problem wasn't a good problem after all - I may have... – Will Orrick Jan 14 '14 at 16:35
  • …seen someone having been quoted to that effect somewhere. Certainly I've seen a quote from a mathematician to the effect that no good mathematician will work on the problem now that it's been solved. – Will Orrick Jan 14 '14 at 16:35
  • I may be arguing against a straw man here. But then I'm not sure what you mean. Certainly interest in the problem among philosophers of mathematics and among mathematicians interested in philosophical issues has tremendously increased. But I don't see that mathematical interest *per se* has increased. Rather the opposite. – Will Orrick Jan 14 '14 at 16:56
  • It just occurred to me that - assuming I'm right that interest in proving the four-color theorem has decreased since the appearance of the AH proof - that could be taken as evidence that most mathematicians, in their hearts, accept computer proofs. After all, if a computer proof is not a proof, then the problem is still open and, as you say, more famous than ever. – Will Orrick Jan 14 '14 at 17:07
  • 1
    @Will: Thanks for all your comments. I'm not sure I have good enough responses to fill more little comment boxes. The main point that you've convinced me of is that whether interest in 4CT has increased or decreased "because of AH" is a very tricky question (in part because it has counterfactual aspects, but for other reasons as well). And one should make a distinction between "I know about that problem" and "I'm working on that problem". No, 4CT was not obscure, but compared to Poincaré I think it is fair to say that fewer top people in the field were working on it. – Pete L. Clark Jan 14 '14 at 17:14
  • Also I think one must look at interest and activity as a function of time. Honestly, yes, I do see a downward dip in activity among "competitors" when the first proof of a well-known problem emerges. This somehow discounts the value of subsequent proofs, but after some time the community places a premium on improving and streamlining the arguments and then subsequent proofs do have clear value. – Pete L. Clark Jan 14 '14 at 17:17
  • This temporal aspect is I think the real brunt of what I was getting at in that comment of mine that now looks a bit dubious even to me: one might think that the way mathematics works is that we solve problems *and then don't revisit them*. But in fact a big part of what **research mathematicians do** is go back over old problems and proofs and try to refine and improve them in almost every way. So "accepting" the computer proof does not close the door on finding a better proof, of course. – Pete L. Clark Jan 14 '14 at 17:20
  • 1
    "I think that some people have taken the view that the AH proof shows the problem wasn't a good problem after all." Yes, someone definitely said that. As far as the value judgment expressed by this: well, talk to Doron Zeilberger. Taking this as a prognostication: it seems totally, totally silly, like people speculating in 1992 that Fermat's Last Theorem might be undecidable, then in 1994 changing it to Goldbach. It is our job to look for a better proof! Everything else is just coffee talk. – Pete L. Clark Jan 14 '14 at 17:25
14

Here's a great article on why.

There are several reasons. One of the biggest ones is that how one solves the problem is frequently more interesting and useful than the actual result. If that how is "we did an exhaustive search of all the possibilities," you don't really get much other than the result. There's also the risk that relying on computers removes the need to improvise with novel ideas.

Another reason is that bugs and faults in the computing are sometimes considered too much of a risk to mathematical certainty.

Joseph Geipel
  • 10
    Bugs and faults in human reasoning are also a nontrivial risk to mathematical certainty, even with the most brilliant of minds. I recall a rather infamous case of von Neumann having proved something in formal logic that was accepted as being true by the mathematical community at the time for over 30 years, until in the 1960's someone realized that it was false after encountering a counterexample. – DumpsterDoofus Jan 09 '14 at 21:25
  • 5
    @DumpsterDoofus (and the upvoters): What is the statement you talk about? – boumol Jan 10 '14 at 09:21
  • 2
    @boumol: I guess it may be Gödel rather than von Neumann, and the example the one by MJD at this question: [In the history of mathematics, has there ever been a mistake?](http://math.stackexchange.com/questions/139503/in-the-history-of-mathematics-has-there-ever-been-a-mistake/). – ShreevatsaR Jan 10 '14 at 16:51
  • @ShreevatsaR: Ah, yes, you're correct, I mistakenly thought it was von Neumann, although Gödel was pretty smart as well. There are dozens of more examples of such humorous embarrassments on the thread at http://mathoverflow.net/questions/35468/widely-accepted-mathematical-results-that-were-later-shown-wrong/35679. My point is, humans themselves need debugging just as much as human-generated computer algorithms, and are subject to the same oversights when tackling complicated procedures as computer codes are. – DumpsterDoofus Jan 10 '14 at 17:42
  • Yeah, but papers contain non-typo mistakes far more often than I thought before I read any papers. Sometimes I really wonder what proportion of all proofs in mathematics contain flaws. Especially in applied branches, it is so easy to make a mistake which goes unchallenged, because very often no one will sit down and try to understand every line of it; people just apply the result or try to get a rough idea of the technique so that they can use it in their own research – Lost1 Jan 11 '14 at 01:47
  • 1
    A quote of D.E.Knuth may fit in here well, which I only recall approximately: "I have only proved that the above code is correct, not tested it. So it may contain errors." – Hagen von Eitzen Jan 11 '14 at 19:39
  • @boumol: it looks like DumpsterDoofus had someone else in mind, but there is a famous von Neumann mistake where he published a no-go theorem in quantum physics purporting to show that hidden variable theories are impossible. Although used in physics, the theorem was a purely mathematical result on Hilbert spaces. Bohm, following work by de Broglie, showed that the use of the result as offered by von Neumann and commonly taken was incorrect, though the technical result was still correct. – ex0du5 Jan 12 '14 at 20:49
  • I'm writing a paper on this subject for school and I would like to read the article. Unfortunately, the link no longer works. Would someone mind linking a new one or posting the title? Thank you. – CaptainAmerica16 Jul 06 '20 at 01:13
  • https://www.quantamagazine.org/in-computers-we-trust-20130222/ is the current live URL, Wayback Machine of the original is here: http://web.archive.org/web/20131024154208/https://www.simonsfoundation.org/quanta/20130222-in-computers-we-trust/ – Joseph Geipel Jul 06 '20 at 09:11
11

See also this article:

Ugly Mathematics: Why Do Mathematicians Dislike Computer-Assisted Proofs?
The Mathematical Intelligencer, December 2012, Volume 34, Issue 4, pp 21-28

Here is an abstract found here:

The author discusses an analogy between narratives and mathematical proofs that tries to account in a simple manner for the ugliness of computer-assisted proofs. He mentions that the ugliness is not essentially associated with methodological or epistemic problems with the evidence. He states that a nonbeautiful proof may simply be uninspiring, something toward which mathematicians reveal indifference.

For another summary, see this ZMATH review.

lhf
5

I think the big issue here is not whether computer assistance is used, but whether the resulting proof is human-comprehensible: it is quite unsatisfying to have a proof that you cannot actually wrap your head around, and can certainly leave one wondering if there aren't bugs in the software.

I don't think anybody really minds the use of a proof assistant where it can generate reasonably-comprehensible proofs, and is just used out of laziness and/or in order to prevent stupid mistakes.

(And of course a proof which cannot itself be comprehended by humans really shouldn't be trusted at all unless the code to generate it is available so that the output can be checked using various proof-checking tools, like the Coq kernel.)
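To make "checkable by proof-checking tools" concrete, here is a minimal sketch of my own (in Lean 4 rather than Coq, chosen only for brevity; it is not part of the original answer): however a proof term is produced, whether by hand, by a tactic, or by blind search, the only component that has to be trusted is the small kernel that re-checks it.

```lean
-- A toy machine-checkable statement. The kernel re-verifies the proof
-- term below regardless of how it was found, so trusting the result
-- reduces to trusting the (small) kernel, not the search procedure.
example : ∀ n : Nat, n + 0 = n :=
  fun n => rfl
```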

Note that the 4-colour theorem has now been proven using Coq, not just with that implausibly-large "hand-checked" computer output.

SamB
3

here is an interesting/deeper perspective/angle in addition to those worthwhile/more standard/surface answers so far.

there is known to be a strong correspondence between proofs and programs/algorithms. this was formalized decades ago in the Curry-Howard correspondence.
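to make the correspondence concrete, here is a minimal sketch of my own in Lean 4 (an illustration, not part of the original answer): a proposition is read as a type, and a proof of it is literally a program of that type.

```lean
-- Under Curry–Howard, proving "p ∧ q → q ∧ p" is the same act as
-- writing the program that swaps the two components of a pair.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun h => ⟨h.right, h.left⟩
```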

a proof in many ways is similar to "an algorithm that runs in a mathematician's head". they both have divide-and-conquer aspects (where theorems/lemmas are similar to subroutines), iteration/loops, etcetera. unfortunately some proofs are so complex that they span thousands of pages, and cannot be checked/conceptualized by single mathematicians, very much like complex computer programs. so there is an aversion in this case that may be related to the large complexity of the proofs.

in other words, if a "proof is like a program that runs in a mathematician's head", unfortunately human brains are limited in the range/span of what they can conceptualize. the recognized psychological concept of chunking is highly leveraged in mathematics but has limitations. so any aversion may be related to the natural human tendency to try to conceptualize as much as possible, and then running into limitations.

mathematics has been carried out for thousands of years by mathematicians, and by that measure, computers are an upstart.

another possible angle is that there is somewhat of a split in mathematics between continuous/topological type approaches and combinatorics, the latter being more prone to automated theorem proving (and, eg, the area where major computer-assisted proofs have occurred, such as the 4 color conjecture/theorem). this split is characterized in a famous quote attributed to Whitehead, quoted by a student, Higman[1]:

Combinatorics is the slums of topology. —Whitehead

the controversiality of automated theorem proving has been lessening somewhat in recent years due to major advances, and in many ways the CS/mathematics fields are becoming more interlinked, and it's plausible to expect that trend to continue and heighten this century.

an excellent ref envisioning the future of man/machine interaction in this area is [2], where a hypothetical conversation is given of a mathematician engaging in a Turing-test-like dialogue with a computer to derive new results, in nothing less than a collaboration. the author Gowers, a Fields medalist, has also recently been researching this area (automated theorem proving) and collaborating with a computer scientist, Ganesalingam, in a way hinting at/indicative of a paradigm shift.[3]

[1] Combinatorics entering the third millennium, Peter J. Cameron

[2] Rough structure and classification, W. T. Gowers

[3] A fully automatic problem solver with human-style output, M. Ganesalingam, W. T. Gowers

vzn
2

I think a computer-assisted or computer-generated proof can be less convincing if it consists of a lot of dense information that all has to be correct. A proof should have one important property: a person reading it must be convinced that the proof really proves the proposition it is supposed to prove.

Now with most computer-generated proofs that is in fact not the case. What often happens is that the proof is the output of an algorithm that, for instance, checks all possibilities in a huge state space. And then we say the computer proved it, since all states have been checked and they were all OK.

An important question here is: can there be a simple, elegant proof if we have to check a million cases separately and cannot use an elegant form of induction or other technique to reduce the number of logical cases?

I would say no, so in this case it does not matter if the output is from a computer or not. However, most mathematicians would look for a more elegant solution where the number of distinct cases to check fits one page of paper, and would only consider a proof of that size a nice proof.
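For a toy sense of what such an exhaustive check looks like when it is at least machine-verified (a hypothetical sketch in Lean 4, not taken from this answer, and assuming the standard decidability instances), a decidable claim over a finite range can be discharged by brute-force enumeration; the kernel certifies the check, but the resulting proof carries no insight into why the claim holds.

```lean
-- `decide` evaluates all 100 cases and the kernel accepts the result,
-- yet the "proof" explains nothing about why squares avoid 3 mod 4.
example : ∀ n : Fin 100, n.val * n.val % 4 ≠ 3 := by decide
```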

  • "However, most mathematicians would look for a more elegant solution where the number of distinct cases to check fits one page of paper, and would only consider a proof of that size a nice proof." We all dream of an elegant solution where the amount of calculation/cases/whatever fits on one page of paper. But that is not the reality of contemporary mathematical practice 90% of the time. If what you say is true, then mathematicians in practice almost never find "nice proofs". Are you willing to make that assertion? – Pete L. Clark Jan 11 '14 at 01:25
  • 1
    The fallacy here is, "a person reading it must be convinced that the proof really proves the proposition it is supposed to prove". That's not true. Proofs found by machines are so rigorously formal that they can be checked automatically. Once you are convinced that the proof checker is valid (a more reasonably scoped task) you can be sure that all verified proofs are valid. I'd estimate that such approaches produce fewer erroneous results than traditional high-level, human-peer-reviewed work. – Raphael Jan 25 '14 at 15:34
1

Proofs are supposed to be understood, which is impossible if they are non-repetitive and extremely long, as is possible for computers but not as much for humans. I think a big source of controversy also is that people fear that computers will take over at some point, forcing them to give up maths. Also, computers give connected people an advantage, while without computers math is very open.

Jacob Wakem
  • 6
    "Also computers give connected people advantage, while without computers math is very open." I don't think this is a good reason at all. It is like saying math is more "open" if nobody referred to books, journals, or colleagues because books, journals, and colleagues are accessible only to "connected people". – Apprentice Queue Jan 10 '14 at 00:31
  • It depends on if you see math as the pursuit of a priori truths or if you see it as a way to create happiness for quirky, intelligent people who want to pursue a priori truths... Personally, I agree with you. – Jacob Wakem Jan 10 '14 at 01:09
  • I also enjoy how democratized math is. I think it's difficult to draw the line between ourselves and tools we use. ME + PEN + NOTEBOOK feels close to ME, but ME + PEN + PAPER + GENIUS MATHEMATICIAN feels less. What about a calculator to avoid careless arithmetic errors? Maple / Mathematica? Perhaps sometimes it's elegant to use tools: perhaps the quote that "the best mathematician is a lazy one" is somehow akin to "the deadliest weapon in the world is a Marine and his rifle." But even as I write this, I'm reminded that flying un-manned death robots are actually the deadliest weapons... – user Jan 10 '14 at 11:20
  • 6
    I find the last sentence rather ridiculous. Of course mathematics is not a proletarian activity in the sense that any activity which requires many hours of time devoted to incorporeal objects necessarily privileges people who have the luxury to spend their time that way. But in contemporary life the number of people who have ready access to computing power numbers in the billions, whereas the number of people who have ready access to the intellectual/social/cultural network of "conventional" elite mathematics is much, much smaller. (So says the conventional, elitish mathematician.) – Pete L. Clark Jan 11 '14 at 00:07
  • @ Pete L. Clark, discoveries made with a computer are not just a function of programming know-how; supercomputers are used for much of this. I think you are right though that affluence is necessary; I guess I was just thinking within my American perspective, where relative affluence is NOT necessary. Within a global context "very open" is probably incorrect. – Jacob Wakem Jan 11 '14 at 05:04
  • @Jacob: I have written four papers which have a serious computational component. In no case did I use a "super computer", although occasionally it was useful to use a computer that would be guaranteed to run for a relatively long time without stopping. In every case the difference between what we could compute using the first version of the code and the last version of the code was much more significant than what we could have achieved by running it on a faster computer. Is your experience different? – Pete L. Clark Jan 11 '14 at 05:42
  • Let me also say that I spend much more money on textbooks than on computer equipment. If I had to pay for access to journals -- which I don't, thank goodness, but my institution certainly does -- I would have to spend much, much more. There are always things to buy if you have lots of money, but the amount of capital necessary to acquire access to a *sufficient* amount of computing resources is meager compared to the buy-in necessary for the rest of academic mathematics (and some of it you cannot buy, e.g. exposure to top people who give you nice problems to work on). – Pete L. Clark Jan 11 '14 at 05:48
  • @Pete L. Clark, I have never used computers for math, but the four color theorem used supercomputing. Maybe I am just naïve about research mathematics with regards to journals and "top people", but if you need to hear about a problem before the "nonelite" mathematical community does in order to solve it, even if it helps a "genius" in his research, doesn't that make your contribution not elite? – Jacob Wakem Jan 11 '14 at 07:04
  • 1
    @Jacob: I am not intimately familiar with the case of Appel-Haken, but their computation was done in 1976. Back then "personal computers" barely existed. I wonder how you think the "supercomputer" they calculated on would compare to a $500 laptop you could buy today? – Pete L. Clark Jan 11 '14 at 07:16
  • Also, may I ask: what is your experience with the math research community? As I understand it, you are claiming that computers are a source of inequity in this community. That's quite a strong thing to say, and I happen to believe exactly the opposite. So I am trying to figure out where your claims are coming from... – Pete L. Clark Jan 11 '14 at 07:19
  • Also, I didn't quite understand your last question. What I was getting at with my remarks was: before the popularity of the internet and the importation of lots of freely available mathematics there, it was nearly impossible to gain enough familiarity with modern mathematics to do serious research without enrolling in undergraduate and graduate academic programs. It is still the case that enrolling in such programs helps immeasurably, and more depending upon which program. Access to these programs is like access to a secret society compared to access to reasonable computing technology. – Pete L. Clark Jan 11 '14 at 07:25
  • 1
    @Pete L. Clark, I wikipedia'd it and found that out. I am not a member of the research community. I think when I wrote that, I didn't mean it has caused (much) inequity already, but that strong reliance on computers WOULD cause inequity. As mathematicians learn to use computing power, computing power itself will become more important. – Jacob Wakem Jan 11 '14 at 07:31
  • @Jacob: Thanks for providing additional information. Everyone is entitled to an opinion, and what the future will bring is certainly a matter of opinion. But your opinion may change if/when you begin to do computations yourself or experience other aspects of the math research community. Let me also say that the trend in computing is towards open access. *My* prediction is that in the not-too-distant future, almost anyone with internet access will be able to remotely access powerful computers for their work in much the same way that anyone can check out books from a public library now. – Pete L. Clark Jan 11 '14 at 07:40
  • 1
    @ Pete L. Clark, maybe that wasn't very clear though. And I thought about computers DOING mathematics, not computers used to share mathematics! – Jacob Wakem Jan 11 '14 at 07:40
1

I am going to give a contrary answer.

There have been some misguided claims that computers can't do certain proofs except by brute force. Those are obviously wrong, for much the same reason that it is incorrect to say that computers cannot represent $\sqrt{2}$ exactly: computers are great at symbol manipulation.
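
As a minimal illustration (a sketch assuming the Python SymPy library, not part of any proof system discussed here), a computer can carry $\sqrt{2}$ around as an exact symbol and reason with it, rather than rounding it to a float:

```python
# Minimal sketch: exact symbolic handling of sqrt(2) with SymPy (assumed installed).
import sympy

root2 = sympy.sqrt(2)                      # stored as a symbol, not 1.41421356...
print(root2 ** 2)                          # -> 2, exactly
print(root2.is_rational)                   # -> False
print(sympy.simplify(root2 * root2 - 2))   # -> 0
```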

But there have been several comments on how mathematicians like beauty and simplicity, which try to explain some other dimension that mathematicians look for. Unfortunately, these are also horribly misguided answers. There is no reason that computers cannot find elegant answers among the many they find. There are already a number of good metrics and heuristics computers can use (a toy scoring sketch follows this list), including:

  • proof size (computers can already find shorter proofs than those in the literature in a number of areas, and can clearly search for reductions faster than a human can)
  • use of ideas from other areas (computers are good both at classification and at ignoring irrelevant classification when choosing candidate proof "moves"; so a proof may be valued by metrics of the distance between the fields used in its steps)
  • metrics of consonance with other results
  • metrics of generality or counts of earlier results generalized

and so on.
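
To make the idea concrete, here is a purely hypothetical scoring function over such metrics; the names, weights, and example data are invented for illustration and do not come from any real theorem prover:

```python
# Hypothetical sketch: rank candidate proofs by the metrics listed above.
# All names, weights, and example numbers are invented for illustration.
def elegance_score(num_steps, fields_used, results_generalized):
    """Higher is 'more elegant': short proofs that connect several fields
    and generalize earlier results score best."""
    brevity = 1.0 / (1 + num_steps)       # proof size
    breadth = len(set(fields_used))       # ideas drawn from other areas
    reach = results_generalized           # earlier results generalized
    return 10 * brevity + 2 * breadth + reach

candidates = [
    ("brute-force case check", elegance_score(5000, {"combinatorics"}, 0)),
    ("structural argument", elegance_score(40, {"algebra", "topology"}, 3)),
]
print(max(candidates, key=lambda c: c[1])[0])  # -> "structural argument"
```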

I fear the real answer to this question is that there is no valid reason why some mathematicians distrust the general notion of computer proofs. There are reasons why particular proofs may be disliked (no matter their origin), but when you see mathematicians straining to give reasons that are so clearly false, it is almost certain that they are really hiding very human and natural fears that they may one day become less relevant. The same reaction can be seen in manufacturing workers in areas that are moving towards greater robotic manufacturing; generally, people want to believe that there is something uniquely great about humanity, so that their field will always require the skills they have trained for.

I personally think we will always have a fundamentally important relationship with the automation we create, but I think the capabilities we give that automation will change the nature of that relationship over time. That kind of speculation, though, is more off-topic to the question.

ex0du5
  • 1,338
  • 8
  • 14
  • *Mathematicians must secretly fear becoming obsolete by computers*. Heh, you must be a computer person rather than a math person. – anon Jan 12 '14 at 22:13
  • 1
    @anon: I make my living working on the internals of operating systems, but I have mathematical results ranging from a class of results relating hypergeometric series to the algebraic structure of certain classes of function rings to generalizations of Bessel functions, work on nonclassical logics, and a bunch of new results in algorithms (related to my profession). I have a great respect for the mathematical process and enjoy pulling out blank paper and just working out something interesting. You are free to interpret that how you like. I am free to notice you said nothing about my points. – ex0du5 Jan 13 '14 at 00:37
  • I agree that much of the negative sentiment towards computer proofs that has been expressed by mathematicians over the years seems "phobic" in nature. What the phobia may be seems less clear. Speaking for myself, I am not afraid that computers will put an end to the mathematical profession: I do not see anything like that happening in my lifetime. I have at times worried that my lack of ability to program and disdain for calculations will affect my career success (to the extent of having improved on these fronts, at least a bit). I will speculate that my fears are the more common ones. – Pete L. Clark Jan 14 '14 at 17:33
  • I agree. In my estimation/experience, part of the problem is that many mathematicians are not (intuitively) aware of how computers work and what they can do. Also, the way a mathematician approaches finding a proof is not structured but requires ingenuity; naturally, we don't attribute this to computers. But once you realize that finding proofs (that are far more rigorous than anything mathematicians usually publish) *can* be approached systematically and -- more importantly -- that *checking* such proofs is easily automated, much of the skepticism goes away. – Raphael Jan 25 '14 at 15:31
1

While I totally agree with ShreevatsaR and other answers and comments in the same school of thought, I wanted to draw your attention to the fact that the phrases "proofs relying on computers" and "computer assisted proofs" should be used with care.

There are two ways that computers help in proving mathematical theorems. One is (as the discussions above allude to) having a computer check all the possible cases in order to affirm or reject a certain property (or theorem). In this case I agree that such proofs are more like a "phone book" than a "poem" (as said by Kenneth Appel and referred to by K. Rmth in a comment on the question itself). These kinds of proofs are worth only as an assurance that the theorem is indeed true, so that mathematicians need not try to refute it and can instead work towards finding a well-formalized and understandable proof.
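
To illustrate the first kind of assistance, here is a toy sketch (my own illustration, unrelated to the four color theorem computation) of exhaustively checking a finite range of cases: it gives assurance that no counterexample exists below the bound, but no insight into why.

```python
# Toy sketch of "check all cases" computer assistance: verify Goldbach's
# conjecture for all even numbers up to a small bound. (Illustration only.)
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds_up_to(limit):
    for even in range(4, limit + 1, 2):
        if not any(is_prime(p) and is_prime(even - p) for p in range(2, even)):
            return False  # a counterexample below the bound
    return True

print(goldbach_holds_up_to(1000))  # -> True, but it tells us nothing about *why*
```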

On the other hand, we can have computers help in the process of proving mathematical theorems in a very different way, namely as proof checkers, e.g., Coq, Isabelle, etc. These proof checkers are tools in which a mathematician can write their proofs formally and have them checked by the computer for validity. The proof-checking mechanism used in these systems is based on the Curry-Howard correspondence and therefore demands a high level of rigour and guarantees the correctness of the proofs. This correctness rests on De Bruijn's criterion, which requires the system to have a tiny trusted kernel (whose correctness is verified by hand) through which all proofs are checked.

In the latter case, computer-assisted proofs are not only uncontroversial but much desired, as they leave no room for human error in the proofs of mathematical theorems.
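
For a feel of the second kind, here is a tiny sketch in Lean (the answer names Coq and Isabelle; Lean is a proof assistant in the same family, sharing the Curry-Howard, small-trusted-kernel design): the human writes the statement and the proof term, and the kernel checks it.

```lean
-- Minimal Lean 4 sketch: statements and proofs checked by a small kernel.
-- A trivial fact, verified by the kernel via computation:
example : 2 + 2 = 4 := rfl

-- Commutativity of natural-number addition, delegating to a library lemma:
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```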

0

"The part of a proof you let the computer do is extremely hard to check for correctness (except in simple cases). Software has a pronounced tendency to have bugs. You have to check the programme for correctness, and you have to check the compiler for correctness. And the hardware too, it is not unheard-of that hardware is buggy"

Today's Microsoft .NET software is maintained and updated constantly by thousands of people worldwide. It is common practice for well-designed code to build in testing facilities as the code is written, known as Built-In Self-Test (BIST). These facilities are built in to run on PCs as well as on embedded microcontroller targets. As you develop the code side by side with the test code, numerical calculations and their results can use the BIST to ensure that, e.g., windowed values, tolerances, floating-point calculations, and other parameters are what they should be. Of course, all of this is quantified based on the design specification, since the specification drives the design of the application and of the electronic hardware itself.

Provided that the design has been implemented according to the design specification, and the design itself has been correctly defined to do the intended job, the expectation is that the developed system will work. Of course, there are differences between hardware and software design practices. As signals travel from the hardware performing data acquisition and are converted into software quantities, there is 'loss' in the conversion: not just quantization in the analog-to-digital conversion, but also passive filtering, active filtering, DSP processing, firmware processing, and software post-processing, to name a few.
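
As a concrete (and purely illustrative) sketch of the built-in self-test idea, in Python rather than .NET: the numerical routine ships with a check that compares known inputs against expected outputs within a stated tolerance.

```python
# Illustrative BIST-style sketch (not from the answer): a numerical routine
# ships with its own self-test comparing known cases within a tolerance.
import math

def rms(samples):
    """Root-mean-square of a list of samples."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def self_test(tolerance=1e-9):
    """Check rms() against hand-computed expectations within tolerance."""
    cases = [
        ([3.0, 4.0], math.sqrt(12.5)),
        ([1.0, 1.0, 1.0], 1.0),
        ([0.0], 0.0),
    ]
    return all(abs(rms(x) - want) <= tolerance for x, want in cases)

if __name__ == "__main__":
    assert self_test(), "BIST failed: rms() outside tolerance"
```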

  • Could part of the problem then be that computers can transfer math from the a priori to the a posteriori domain? You would have to repeat a computer check to verify repeatability, which would make math into science. – Jacob Wakem Jan 10 '14 at 20:14
  • @JacobWakem: Proofs have always needed rechecking; this is not introduced by the use of computers. – SamB Jan 11 '14 at 03:50
  • @SamB, this is true but with computers the checking is a repeated experiment if its too big to check manually. Having another mind check a proof, and possibly discuss the proof with others is qualitatively very different from experimentation. – Jacob Wakem Jan 11 '14 at 04:49
  • You had (1) the circuit's calculated values, (2) the measured values on the protoboard/PCB, and (3) the computer-simulated values to compare against. Take this further, factoring in the in-circuit emulator tools, JTAG tools, USB tools, etc. All of these taken into consideration give at least a ballpark of how close or far the design is from the intent. – Ron Harding Apr 18 '15 at 08:05
0

Mathematicians seek not only the answer to a question, but the reason for that answer. Human limitations, and to a lesser extent computer limitations, require us to reduce, and reduction is achieved by the discovery of structure that characterizes large or infinite numbers of arbitrary cases. The discovery of the structure providing the reason for the yes-or-no answer is often more important than the answer itself.

Much as a constructive proof is more valuable than one merely of existence, a method of reduction that manages to reduce a problem to human scale is more valuable than one that merely manages to reduce a problem to computer scale.

So computer proofs are valuable, but the greater prestige and utility comes from human proofs.

William Muenzinger
  • 676
  • 2
  • 6
  • 18