I am just a high school student, and I haven't seen much mathematics beyond some calculus and abstract algebra.

Mathematics is a system of axioms that you choose yourself for a set of undefined entities, so that those entities satisfy certain basic rules you laid down yourself in the first place.

Now, using these laid-down rules together with the rules of a subject called logic (which was established in a similar way), you define certain quantities in terms of the undefined entities, name them, and then go on to prove certain statements called theorems.

Now what is a proof, exactly? Suppose in an exam I am asked to prove Pythagoras' theorem. Then I prove it using only one particular system of axioms and logic; it isn't proved in all the axiom systems in which it could possibly hold true. And what stops me from making another set of axioms that has Pythagoras' theorem as an axiom, and then just stating in my system/exam, "this is an axiom, hence it can't be proven"?

EDIT: How is the term "wrong" defined in mathematics, then? You can say that proving Fermat's Last Theorem from the number-theory axioms was a difficult task, but it could be taken as an axiom in another set of axioms.

Is mathematics as rigorous and as thought-through as it is believed and expected to be? It seems to me that there are many loopholes in problems as well as in the subject itself, but there is a false backbone of rigour that seems true until you start questioning the very fundamentals.

    This question reminds me a quote from Jean-Pierre Serre [(at 3:30)](http://www.dailymotion.com/video/xf88g3_jean-pierre-serre-writing-mathemati_tech#.UZs03MqSLXs) where he explained the difference between a proof and a Bourbaki's proof : "A proof is accepted by experts. A Bourbaki's proof is accepted by non-experts". – user10676 May 21 '13 at 08:53
    A more realistic dictum would be: "A proof is accepted by experts. A Bourbaki proof is accepted by Pierre Deligne". – Mikhail Katz May 21 '13 at 13:17
    @dkbose The only thing that really stops you from doing that on an exam is that your professor will probably fail you :) I took an MIT OCW course on discrete math where the professor said basically that: "You can use any basic rules of math that you already knew coming into this course as an axiom in your proofs, as long as you don't claim to 'already know' everything we're asking you to prove." :) – KutuluMike May 21 '13 at 18:37
    @dkbose -- You might be interested in reading about the "Axiom of Choice". It's an axiom that is assumed by most mathematicians, but not all. The disbelievers don't go to jail, but they have a lot of problems with their socks :-). The Wikipedia page is a good place to start, I guess: http://en.wikipedia.org/wiki/Axiom_of_choice. – bubba May 22 '13 at 13:52
    “a false backbone of rigour that seems true until you start questioning the very fundamentals” This questioning usually comes from people who do not write formal proof by themselves, they just understand it philosophically. Practice dissolves doubts; you can find formal proofs written by others; you can eat as much rigor as you want until you are bursting. – beroal May 22 '13 at 14:57
    @beroal Getting more and more formal at some point stops increasing rigor and simply continues mathematical development in a different direction. – dfeuer May 24 '13 at 16:07
    I think I am getting crazy – chndn May 26 '13 at 06:35
    Usually the words used in the thing to prove already refer to some set of axioms. For example, if you are supposed to prove something about vector spaces, then "vector space" is, by definition, anything which fulfils the vector space axioms. – celtschk May 29 '13 at 23:27
  • @beroal, "*This questioning usually comes from people who do not write formal proof by themselves, they just understand it philosophically*", meaning, meaning and then meaning. You can eat as much rigor as you like and yet be empty (enlightenment itself :) was it Goedel or Buddha?) – Nikos M. May 28 '14 at 01:01
  • "It seems to me that there many loopholes in problems as well as the subject in-itself." If there are loopholes we would have noticed by now; all it would take is one counterexample or one contradictory proof from flawless logic and axioms. – Stephen Fratamico Jul 14 '16 at 03:13

14 Answers


There are really two very different kinds of proofs:

  • Informal proofs are what mathematicians write on a daily basis to convince themselves and other mathematicians that particular statements are correct. These proofs are usually written in prose, although there are also geometrical constructions and "proofs without words".

  • Formal proofs are mathematical objects that model informal proofs. Formal proofs contain absolutely every logical step, with the result that even simple propositions have amazingly long formal proofs. Because of that, formal proofs are used mostly for theoretical purposes and for computer verification. Only a small percentage of mathematicians would be able to write down any formal proof whatsoever off the top of their head.

With a little humor, I should say there is a third kind of proof:

  • High-school proofs are arguments that teachers force their students to reproduce in high school mathematics classes. These have to be written according to very specific rules described by the teacher, which are seemingly arbitrary and not shared by actual informal or formal proofs outside high-school mathematics. High-school proofs include the "two-column proofs" where the "steps" are listed on one side of a vertical line and the "reasons" on the other. The key thing to remember about high-school proofs is that they are only an imitation of "real" mathematical proofs.
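To make the contrast concrete, here is a tiny formal proof written in the Lean proof assistant (the example is mine, chosen only for illustration; it is not from the answer). Every step is verified mechanically by the computer:

```lean
-- A formal, machine-checked proof that p ∧ q implies q ∧ p.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p := by
  intro h           -- assume h : p ∧ q
  exact ⟨h.2, h.1⟩  -- swap the two components
```

Even this two-line proof elaborates into many primitive steps inside Lean's kernel, which is exactly why fully formal proofs of nontrivial theorems become so long.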

Most mathematicians learn about mathematical proofs by reading and writing them in classes. Students develop proof skills over the course of many years in the same way that children learn to speak - without learning the rules first. So, as with natural languages, there is no firm definition of "what is an informal proof", although there are certainly common patterns.

If you want to learn about proofs, the best way is to read some real mathematics written at a level you find comfortable. There are many good sources, so I will point out only two: Mathematics Magazine and Math Horizons both have well-written articles on many areas of mathematics.

Carl Mummert
    +1 for humor, and for writing in a style that the OP is likely to understand. – bubba May 21 '13 at 12:43
  • "Formal proofs contain absolutely every logical step, with the result that even simple propositions have amazingly long formal proofs. " I think a good example of this would be a proof of EEEpqrEpEqr... the association of logical equivalence... in a natural deduction style system (even longer... a "pure" natural deduction system which disallows derived rules of inference). The proof I wrote and the proof someone else wrote me took up plenty of lines: http://spoonwood.xanga.com/746689198/luaksiewiczs-theorem-the-associativity-of-logical-equivalence-eexeyzeexyz/ – Doug Spoonwood May 24 '13 at 16:07
    I like this answer overall, but I will contest that "If you want to learn about proofs, "the best" way is to read some "real mathematics" written at a level you find comfortable." whatever "the best" and "real mathematics" mean. If this came as a personal suggestion, that would come as more modest. But, unfortunately, it comes across like some sort of well-tested psychological fact about how to learn about proofs. On top of that, you can find evidence that people have learned about proofs in other ways rather well, such as how students have used the WFF 'N Proof set of games. – Doug Spoonwood May 24 '13 at 16:17
  • Thinking more on this, I also do NOT agree that formal proofs model informal proofs. Formal proofs present a sequence of evidence that a logical formula does hold in a theory. Informal proofs present a sequence of rhetoric to psychologically affect oneself or others (I'll emphasize that informal proofs **convince**). At the very least, informal proofs work psychologically, though formal proofs (being those that actually use logical formulas) work objectively, at least in principle. With that in mind, I don't think it makes sense to say that objective proofs model psychological proofs. – Doug Spoonwood May 28 '13 at 01:25
  • Still a fairly good answer though. – Doug Spoonwood May 28 '13 at 01:26
  • Doug Spoonwood, I think you're not being fair to informal proofs. First off, an informal proof can generally be turned into a formal one, with enough tedious work and reams of paper. Second, the notion of formal proof certainly arose from logicians trying to model informal proofs. They ended up having to change a lot of things, but the essential concept is similar. – dfeuer Jun 01 '13 at 08:22
  • Can you provide a reason why the mocked (by the author ) *high-school proofs* are only imitations of the other *real mathematical* proofs? Are they not valid? Are they informal? (you already answered that) Is it because they are in 2 columns? Is it because Euclid's proof that there are infinite primes (and other highly-respected proofs) is a (imitation of a) high-school proof? Not for an instant think that validity or not is a matter of who makes the proof and what her affiliations are – Nikos M. May 27 '14 at 23:55

Starting from the end: if you take Pythagoras' theorem as an axiom, then proving it is very easy. A proof just consists of a single line, stating the axiom itself. The modern way of looking at axioms is not as things that can't be proven, but rather as statements that we explicitly declare to hold.

Now, exactly what a proof is depends on what you choose as the rules of inference in your logic. It is important to understand that a proof is a typographical entity. It is a list of symbols. There are certain rules of how to combine certain lists of symbols to extend an existing proof by one more line. These rules are called inference rules.
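As an illustration (this example is mine, not the answerer's), here is what a single inference rule, modus ponens, looks like when checked mechanically by the Lean proof assistant; the checker only verifies that the symbols on the page fit the rule:

```lean
-- Modus ponens as a mechanical step: given hp : p and hpq : p → q,
-- applying hpq to hp yields a proof of q, checked purely symbolically.
example (p q : Prop) (hp : p) (hpq : p → q) : q :=
  hpq hp
```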

Now, remember that all of this happens just on a piece of paper: the proof consists of nothing but marks on paper, and what you accept as a valid proof is anything obtained from the axioms by following the inference rules. We would somehow like to relate this to properties of actual mathematical objects. To understand that, another technicality is required. If we are to write a proof as symbols on a piece of paper, we had better have something telling us which symbols we are allowed to use, and how to combine them to obtain what are called terms. This is provided by the formal concept of a language.

Now, to relate symbols on a piece of paper to mathematical objects, we turn to semantics. First the language needs to be interpreted (another technical step). Once the language is interpreted, each statement (a statement is a bunch of terms put together in a certain way, trying to convey a property of the objects we are interested in) becomes either true or false.

This is important: Before an interpretation was made, we could still prove things. A statement was either provable or not. Now, with an interpretation at hand, each statement is also either true or false (in that particular interpretation). So, now comes the question whether or not the rules of inference are sound. That is to say, whether those things that are provable from the axioms are actually true in each and every interpretation where these axioms hold. Of course we absolutely must choose the inference rules so that they are sound.

Another question is whether we have completeness. That is, if a statement is true under each and every interpretation where the axioms hold, does it follow that a proof exists? This is a very subtle question since it relates semantics (a concept that is quite elusive) to provability (a concept that is very trivial and completely mechanical). Typically, proving that a logical system is complete is quite hard.

I hope this satisfies your curiosity, and thumbs up for your interest in these issues!

Ittay Weiss
    Could you please justify that provability is "a concept that is very trivial and completely mechanical"? A very good mathematician I know never calls anything trivial, whereas he has "spent many hours trying to prove things someone else called trivial". – Douglas B. Staple May 25 '13 at 20:34
    @DouglasB.Staple Good comment. I meant to say that checking if a formal proof (a list of symbols) is a proof of some statement (another list of symbols) is a trivial matter and can be done by a computer. One simply checks if each step of the proof follows typographically from previous steps using the inference rules. I, of course, did not attempt to insinuate that actually *finding* proofs is an easy matter. Nor am I claiming that checking the validity of a proof in the standard sense (ie is not a formal proof, but rather an informal proof, a rather bad approximation of a formal one) is easy. – Ittay Weiss May 26 '13 at 03:46
  • I hereby offer a trivial proof: $\top$ – dfeuer Jun 01 '13 at 08:24
  • @IttayWeiss I'm not so sure that checking a formal proof is a trivial matter (though a computer can check any formal proof in principle). Suppose our only axiom is CCpqCCqrCpr, and our rules of inference are uniform substitution, and detachment. Then, as a meta-theorem the derivable rule of condensed detachment **D** holds. Consequently, (CCpqCCqrCpr, CCCCqrCprtCCpqt, CCpqCCCprsCCqrs) is a formal proof in the sense of using **D**. I can see how getting from the first step to the second step is trivial, but I don't see how getting from the second step to the third step is trivial. – Doug Spoonwood Jun 13 '13 at 12:40
  • +1, although i dont agree that a proof is (just) a typographical entity, in a realist interpretation (or a semi-constructive) one, a proof is an algorithm or problem solution for going from one point to another (given the initial data and relations). In other words a step-by-step process from problem statement and data to solution – Nikos M. May 28 '14 at 00:13
  • @NikosM. in mainstream mathematics non-constructive proofs are considered proofs. Your interpretation is much closer to constructivism or intuitionism, branches of mathematics that demand more constructive arguments and reject non-constructive ones. A constructivist might say that a proof is: a step-by-step process from problem statement and data to solution. However, most mathematicians are not constructivists. – Ittay Weiss May 28 '14 at 00:25
  • Yes true but i used the term semi-constructive on purpose to cover this aspect. A reduction-ad-absurdum proof can still be considered as part of problem solution interpretation (presumably, eg [negation as failure](http://en.wikipedia.org/wiki/Negation_as_failure)). – Nikos M. May 28 '14 at 00:30
  • what about proofs using the axiom of choice? – Ittay Weiss May 28 '14 at 00:31
  • what about them? (ok i get an idea of what you mean AC in not constructive), but i wait to see where you are heading – Nikos M. May 28 '14 at 01:05
  • I'm trying to understand your objection to "a proof is just a typographical entity". Proofs using AC essentially certainly can't be considered "an algorithm or a step-by-step process...". Proofs using AC are more like a step-by-step-by-step-by-step-by-step ... (and I mean lots and lots of dots here). – Ittay Weiss May 28 '14 at 01:11
  • Yes i see, although i wouldnt like to turn this into a chat at this time, i will try to answer it. i think the problem is in the literal use of the AC in the example you provided. i tended to think of it like reductio-ad-absurdum. As in the example of negation as failure it can be un-countable. Hope this answers the objection. – Nikos M. May 28 '14 at 01:15
  • In order to turn failure into negation one will have to test various cases; this can be un-countable (and this justifies the term semi-constructive). For a further discussion over un-countability and trans-finite numbers, it will have to wait for another time. – Nikos M. May 28 '14 at 01:19
  • Also let me remind that AC is a theorem in intuitionistic formulations. There is always choice – Nikos M. May 28 '14 at 01:22

Since you're a high-school student, here's an answer that's less sophisticated and much less rigorous:

I suppose you could make up any set of axioms you want, and start using them to prove theorems. So, as you say, you could make Pythagoras' theorem an axiom in your world, and then you wouldn't need to "prove" it.

But, if you're going to start making up your own system of axioms, and doing mathematics in this private world, there are a few things you need to worry about:

(1) If no-one else uses the same axioms as you, then no-one will be very interested in your "theorems", since they are only true in your private world. Your private world might be a bit lonely. So, better to use the same axioms as everyone else.

(2) It's useful (though not absolutely necessary) to have a system of axioms that bears some relationship to reality. That way, the theorems you prove will sometimes give you information that has value in the "real" world -- in fields like economics and engineering, for example. Your private world might be quite different from physical reality, if you don't choose the axioms carefully. So, your results could be misleading or even dangerous, even though they are provably "true" in your world.

(3) If you're not careful, the system of axioms you invent might lead to contradictions, or it might have other fundamental logical flaws. The axioms can't be completely arbitrary (as far as I know).

There are some areas of mathematics where part of the game is making up modified systems of axioms and seeing what happens. But most of us play by a fairly well established set of rules, for the reasons outlined above (and for other reasons, too, I expect).


Regarding your added comment that "there is a false backbone of rigour that seems true until you start questioning the very fundamentals". It seems to me that the rigour is in the reasoning that's used to derive theorems from the chosen set of axioms. I don't think this rigour is "false".

What's bothering you, I suppose, is that there is some freedom when choosing the set of axioms, and, depending on what choices you make, you get a different set of theorems -- a different version of the truth, and different statements of what is "right" and "wrong". I understand your concern -- I can see how it might be disturbing to find out that the axioms of mathematics are not universally agreed. One example of a debatable axiom is the "Axiom of Choice" (read more here). Most mathematicians assume that this axiom is true, but some don't, and, of course, the two groups get a different set of theorems. Not entirely different, but different.

But, on the other hand, the choice of axioms is not completely arbitrary, and there is a very large overlap in the sets of axioms that are in common use. So, in practice, things typically work just fine, despite the fact that the foundations are not entirely cast in stone.

Questioning the fundamentals, as you are doing, is a valid thing to do, and mathematicians have been doing it for a long time. If you want to know more about this, from sources that are at least somewhat "credible and reliable", then this Wikipedia page might be a good place to start.

    Your second point is somewhat misleading. How is an axiomatic system in which infinite sets exist anything like the real world? – Asaf Karagila May 21 '13 at 10:10
  • @bubba May I ask in which part of mathematics is the main aim making up new systems of axioms and seeing what happens? – Ittay Weiss May 21 '13 at 10:14
  • @ittay -- I didn't say "the main aim", I said "a major part", but perhaps that's too strong, too. I was thinking of things like propositional logic and non-euclidean geometries of various sorts. – bubba May 21 '13 at 10:21
  • @bubba if I may suggest thus expressing it as "in some areas of mathematics one carries out an analysis of the axiomatics of a certain concept to see which axioms are more or less important, and what happens if one (or more) get replaced or dropped". Pardon my intrusion into your answer, but I think it is important to emphasize that a random choice of axioms is nothing that is ever done. – Ittay Weiss May 21 '13 at 10:26
  • @Asaf -- I guess you could say that the real numbers are removed from reality because they are infinite in number. But they are close enough to reality that reasoning with them leads to useful results. They provide a fairly faithful model of "real" things like time and distance, it seems to me. – bubba May 21 '13 at 10:27
  • I have heard that some ultrafinitists refer to the real numbers and classical analysis as "approximation of real analysis", which happens over a finite subset of $\Bbb Q$ which is "dense enough" (whatever that means) that physical applications should follow through without a hitch. So fine, the real numbers are justifiable. How can you justify infinite cardinals much much - oh so much - larger than $2^{\aleph_0}$ then? – Asaf Karagila May 21 '13 at 10:30
  • @Asaf -- "How can you justify infinite cardinals much much - oh so much - larger than 2ℵ0 then?" I can't. Those kinds of things were not what I had in mind when I wrote "the systems of axioms that are widely used". I'll modify my answer -- I don't think it matters which systems are widely used; the point is that the models we use need to match reality if we're going to apply our results to real-world problems. – bubba May 21 '13 at 10:39
    That is true, indeed if we want to apply something to the real world then working with an unrealistic model doesn't make sense. But mathematics isn't about application to the real world, it hasn't been for quite a long time now. While mathematics is partially a blacksmith shop for physics, economics and computer games; it is also a blacksmith shop for other blacksmiths. That is to say, mathematics is for mathematicians, like jazz is for musicians rather than the layman. It doesn't mean that the layman can't enjoy jazz, just in a different way and the less experimental stuff. – Asaf Karagila May 21 '13 at 10:43
  • @Ittay -- "a random choice of axioms is nothing that is ever done". I don't know if it's ever done or not, but I agree that random is silly. – bubba May 21 '13 at 10:46
    @Asaf -- "mathematics isn't about application to the real world". Well, please don't tell my employers. They pay me a lot of money to use mathematics to solve real-world problems :-) – bubba May 21 '13 at 11:04
    @bubba Who decides these *fairly well established rules* ? **Math-Police ?** :) –  May 22 '13 at 12:02
  • @Asaf - do a survey of physicists for a percentage who think that densish subsets of the rationals are (a) a sensible way to model the world, or (b) how the world actually works. It seems to me you are deciding how physics should be done by asking a tiny community of mathematicians - in order to conclude that other mathematicians and physicists are all probably wrong. I don't see why ultrafinitists should be the first people to go to.. – not all wrong May 22 '13 at 12:43
  • @dkbose -- no, there is no "math police", as far as I know. Maybe "conventions" would be a better word than "rules". You don't go to math jail if you decide to play by different rules, but you'd want to derive some benefits from your non-conformance, to make up for the disadvantages that I outlined. – bubba May 22 '13 at 13:08
    @dkbose: No, it's like *conformism* and invisible rulers of the math world that covertly vaporize those who disagree… A *majority* does not dare to shake the *foundations* and knows nothing about the conspiracy… :) – beroal May 22 '13 at 14:45
  • @bubba: IMHO, other criteria for axiom systems are 4) they are trying to decrease the number of axioms; 5) convenience (whatever that means). – beroal May 22 '13 at 15:02
  • "If no-one else uses the same axioms as you, then no-one will be very interested in your "theorems", since they are only true in your private world." Research in modern logic indicates the contrary. When Meredith found/made-up his 21-letter axiom for two-valued propositional logic, hardly anyone used it, but there still existed interest since it comes as so short. We don't know if it's the shortest axiom using both a conditional and negation connectives. Perhaps even better with the notion of a variable function Lesniewski found/invented a 6-symbol axiom for two-valued logic. – Doug Spoonwood Jul 01 '13 at 15:16
  • And there still exist other open questions (and possible some we might not even realize exist) about shortest axioms in other logical systems. Or we could ask if we have a fixed number of axioms n for our axiom set, what is the shortest possible length? For example, does Lukasiewicz's preferred axiom set for two-valued logic {CCpqCCqrCpr, CpCNpq, CCNppp}, with 23 symbols, have the shortest possible with three axioms using only C, N, and variables? Are there other axiom sets, under the same conditions, with 3 axioms which have only 23 symbols? – Doug Spoonwood Jul 01 '13 at 15:20

I'm not sure, but it seems to me your specific question hasn't yet received the simple answer to why assuming Pythagoras' theorem as an axiom is wrong in that situation.

The reason is: because you're actually being asked "Given the set of axioms you've been taught, derive Pythagoras." The question implicitly assumes some particular axiom system.

In general, a proof could be considered formally as a sequence of symbols obeying certain rules (of logic), which begins with a set of axioms and assumptions and ends with the statement you want to prove.

not all wrong

A proof is a completely convincing argument. Thus, a proof of the Pythagorean theorem would be a completely convincing argument that the Pythagorean relation is correct as stated. The notion of "proof" is arguably more fundamental than this or that axiom system or system of formal logic. This point of view is that of Errett Bishop. It is the underlying theme of his 1967 book "Foundations of Constructive Analysis" (for a review see http://www.ams.org/journals/bull/1970-76-02/S0002-9904-1970-12455-7/home.html as well as http://www.jstor.org/stable/2314383?origin=crossref).

Mikhail Katz

There is not complete agreement among mathematicians about which axioms or rules to use in every case. There are, however, no "loopholes" in a consistent set of axioms and rules. If they are inconsistent, then you can prove absolutely anything!

Quite often, however, mathematicians use axioms and rules that have been found to be very useful, but it is not known with certainty whether they are consistent or not, e.g. the ZFC axioms for set theory. After over a century of intensive study by the experts, no inconsistencies have been found in ZFC. As a foundation for mathematics, it just seems to work.

As for exam questions, the "usual axioms and rules" of logic and mathematics (those most widely used) can be assumed to be available to you unless otherwise stated on the exam paper or in the course materials. Yes, that is a bit vague, but you exploit such ambiguity at your peril. Most examiners have no sense of humour in this regard.

Dan Christensen

A proof is a rigorous mathematical argument which unequivocally demonstrates the truth of a given proposition. A mathematical statement that has been proven is called a theorem.

In mathematics, a proof is a demonstration that if some fundamental statements (axioms) are assumed to be true, then some mathematical statement is necessarily true. Proofs are obtained from deductive reasoning, rather than from inductive or empirical arguments; a proof must demonstrate that a statement is always true (occasionally by listing all possible cases and showing that it holds in each), rather than enumerate many confirmatory cases. An unproven proposition that is believed to be true is known as a conjecture. Proofs employ logic but usually include some amount of natural language which usually admits some ambiguity.

In fact, the vast majority of proofs in written mathematics can be considered as applications of rigorous informal logic. Purely formal proofs, written in symbolic language instead of natural language, are considered in proof theory. The distinction between formal and informal proofs has led to much examination of current and historical mathematical practice, quasi-empiricism in mathematics, and so-called folk mathematics (in both senses of that term).

See http://en.wikipedia.org/wiki/Mathematical_proof, and go to the section on methods of proof.


You might be interested to learn that we didn't have a very good definition of what a proof is until fairly recently. Before the birth of modern logic and set theory, mathematicians had managed to use accepted proof methods to derive contradictions (a famous example is Russell's paradox). A great deal of work was put into defining proof systems that didn't allow for such inconsistencies. Among other things, this involved creating much stricter rules for how you can construct sets.

Most people are content with our current proof methods, even though we rarely (if ever) use completely rigorous formal proofs. Unfortunately, it has been proven that any sufficiently powerful proof system can't prove its own consistency. In fact, if our current proof methods could prove their own consistency (i.e. that they can't generate contradictions), that would imply that they are inconsistent. So in that sense, yes, the "false backbone of rigour" does fall apart a bit when you start digging. If our proof system is consistent, we will never know it from within the system...

There are lots of strange and interesting phenomena related to proofs and logic. One thing you might have heard of is that certain statements can't be proven or disproven from a given set of axioms; they are, in that sense, independent. Unless I'm mistaken, the continuum hypothesis and the axiom of choice are two such statements relative to the usual ZF axioms of set theory. You can choose to assume they're true or false and write perfectly valid proofs in either case.

The same can't be said for provable statements, though. If we take your example and add Fermat's Last Theorem to our axioms, we could face a problem. What if Fermat's Last Theorem weren't true? (It is, but suppose for the sake of argument that it isn't.) You could then use the other axioms to prove it false. You would now have an inconsistent system of logic: one in which you can derive contradictions. And once you can do that, you can literally prove anything. Not good...
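That last step, "a contradiction proves anything", is the principle of explosion. As an illustration (my example, not part of the original answer), here it is as a one-line machine-checked proof in the Lean proof assistant:

```lean
-- Principle of explosion (ex falso quodlibet): from a proof of p
-- and a proof of ¬p, `absurd` derives any proposition q whatsoever.
example (p q : Prop) (hp : p) (hnp : ¬p) : q :=
  absurd hp hnp
```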

(I'm essentially parroting what my logic prof said during our first lecture. I've fact-checked this to the best of my abilities, but this is still mostly from memory. If something is wrong, please let me know.)

  • "In fact, if someone could prove that our current proof methods are consistent (i.e. can't generate contradictions), then that would imply that our proof system is inconsistent." That strongly reminds me of the disproof of God's existence by proving his existence in The Hitchhiker's Guide to the Galaxy. :-) – celtschk May 30 '13 at 09:37

Excellent question. The concept of a proof carries authority; the conclusion of a proof is demonstrated from the premises. Proofs of theorems still assume a proof theory, which, as you've noted, depends on accepting a logic and some axioms. One can think of a proof as a convincing demonstration (informal sense) or as a mathematical object (formal sense). I take it that you are more interested in the idea of a convincing demonstration. All demonstrations must make certain assumptions. An opponent can't be put in checkmate unless they accept the rules of chess. Similarly, one can't offer a convincing demonstration unless one accepts the premises from which the demonstration proceeds. The more general question of whether anything is ever convincingly demonstrated is a philosophical question, but first-order logic and the Peano axioms provide many natural cases of convincing demonstrations. The Stanford Encyclopedia of Philosophy has a nice article on the development of proof theory here.


Well, of course you're free to adopt the Pythagorean theorem as an axiom in your system. But that doesn't remove the burden of proof; it merely transforms it, as now you have to prove a) that your system as stated is consistent; and, almost as important, b) that your new axiom is not redundant, i.e. that it could not itself be derived from the other axioms in that system. That is, you'll have to prove that the Pythagorean theorem can neither be proved nor disproved using the other axioms of your system. If I were your teacher I'd definitely take you up on that. But if I were you, I might not want to go there. Unless you really are game.

Basically, axioms aka postulates aren't universal truths; they're Lego bricks. At the end of the day it doesn't matter what bricks you have or decide to use, so much as what you can build with them, how that relates to other builds, and occasionally whether it's useful for anything.

I could adopt as my only axiom that I'm God or Chuck Norris, and hence everything I say is true just because I say so - or rather: it is true that I said it, since "things Jesper said" is the only universe for which this system is valid. And even then it is fairly useless. You can't build things with it. Even I can't build things with it. You may note, say, that Jesper said A, and then he said not-A. But you can't even take that and derive a proper contradiction. Systems with that property are called dialetheic - and they may not all be as useless as this one.

Or I might adopt as axiomatic that only those propositions that have been proved or disproved are allowed to have a truth value, so that there will be propositions that have none; and consequently the classical dictum that a proposition is either true or false, known as the law of the excluded middle, is itself rejected - in this kind of system, called intuitionistic or constructivist. And that has some very interesting consequences, or builds, or meta-builds known as category theory and topos theory.
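One crude way to play with propositions that lack a truth value is a three-valued logic in the style of Kleene, with `None` standing for "no truth value yet". This is not intuitionistic logic proper, just an illustration of how the excluded middle can fail to be a tautology (a sketch in Python, my choice of language):

```python
# Kleene-style three-valued logic: truth values are True, False,
# and None ("undetermined"). Connectives propagate None.
def NOT(p):
    return None if p is None else (not p)

def OR(p, q):
    if p is True or q is True:
        return True
    if p is None or q is None:
        return None
    return False

# The law of the excluded middle, "p or not-p", is no longer a
# tautology: for an undetermined p it is itself undetermined.
assert OR(None, NOT(None)) is None
assert OR(True, NOT(True)) is True
```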

All I'm saying is, you're on the right track. Challenge everything up to and including your ability to challenge everything. But derive the consequences. Cuz that's where the fun is. You're absolutely right that you can take anything at all to be your axioms in your system. The challenge is still to do something interesting with it, and hopefully tell us something of how your system relates to everything else.

But right now you're at school. And school tells you what system to use. Use it, then. It doesn't have to become part of your identity.

You can prove the Pythagorean theorem given Euclid's postulates. Once you've done that, you'll probably just go right on using it wherever those postulates hold, without even bothering to remember or recount your proof. And hence it is, in a sense, axiomatic for your new build.

You can also quite easily disprove the Pythagorean theorem: just do your geometry on a sphere or a saddle (hyperboloid). In non-Euclidean geometry, while still incredibly important, the Pythagorean theorem is only true in the limit of infinitesimal triangles. And may that be a lesson to us all: today's universal truth is likely tomorrow's special case. Maybe.


Very interesting question. You certainly seem to have at least one of the major qualities a good mathematician possesses, namely, the drive for rigor.

To answer your question, and to paraphrase what you have said, you have to get to the very fundamentals. Let's expand a little more.

Although I want to avoid a technical and philosophical discussion of what Mathematics is (is not or can be), let's just say any system with a set of strictly, i.e. rigorously, predefined rules and some kind of fundamental object defines a mathematical system. So, for example, just rules would make up a logical system; we're treading on thin ice here too, but let's move on.

An example of a mathematical system is topology. This is one of the prime examples of what most people don't think mathematics to be, since most of the time you'll be dealing with sets and operations on them, which aren't operations you're used to seeing up till now. However, we may still use ideas and operations from another system, say the system of numbers (I want to avoid the phrase "number theory" for fear of a technical inaccuracy), on objects here as long as we can verify those objects satisfy all the requirements those ideas and operations state for them to be put in use. So for example, we can count how many sets we're working with, and forgoing any concerns regarding the applicability of operations on numbers in this context, we can even make statements such as, "There are $2n$ sets." And we need not prove here that $2n = n + n$, since we have already confirmed that the system of numbers can, although perhaps not entirely, be used in the system of topology. In other words, what's proven once is proven totally. For more on this, you may wish to see this question, and in particular, I also talk about this in my answer there.

How can we be completely sure that something is true? Never. Indeed, we agree to set up axiomatic systems, as you have mentioned, and then proceed to determine certain truths in such systems. And we can only be sure that the truths we have obtained are true given that our axiomatic system is valid and that our process of reason holds. In the absolute sense, there is no way to know for sure if those two things are actually true. But there are a few things I want to mention here. First, Mathematics was never to concern itself with such matters. Mathematics simply stands on the premise, let's say this and that is true; given these assumptions, what else can we say is true? And the various other answers to the question I linked to provide an excellent discussion of this. If you still are concerned about such things, however, you should know that now you're entering the domain of philosophy; namely the philosophy of reality (Ontology) and knowledge (Epistemology); interestingly enough there is also Philosophy of Mathematics. Furthermore, despite all this, there are still certain propositions in Mathematics which cannot be shown to be either true or false; this is known as Gödel's incompleteness theorem.

Lastly, I want to address what might be an underlying concern here. I ask you, where is the concern in accepting the validity of such a statement as,

$$2 + 2 = 4.$$

Is it in determining what ideas, exactly, are referred to by the symbols "$2$" or "$4$"? Or maybe the "$+$" and "$=$"? We have already agreed on the definitions of the ideas referred to by those symbols, and we can agree in our application of those definitions and ideas. Indeed, some may disagree with this application, but that's ok. That is not necessarily to say our conclusion is incorrect, but just that not everyone's on the same page. And disagreements like this can happen in higher mathematics; in fact, that is precisely why we have proofs of conjectures peer reviewed.
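Once the definitions are fixed, a statement like $2 + 2 = 4$ really does become a consequence of them. This can even be mechanized: below, natural numbers are built from zero and a successor operation, addition is defined by the usual Peano-style recursion, and the equation follows. A sketch in Python (my choice of language; the encoding of numbers as nested tuples is just one convenient representation):

```python
# Peano-style naturals: zero is the empty tuple, successor wraps in a tuple.
ZERO = ()

def succ(n):
    return (n,)

def add(m, n):
    # Defined by recursion on the second argument:
    #   m + 0    = m
    #   m + S(n) = S(m + n)
    return m if n == ZERO else succ(add(m, n[0]))

TWO = succ(succ(ZERO))
FOUR = succ(succ(succ(succ(ZERO))))

# "2 + 2 = 4" is now a theorem of the definitions, not an act of faith.
assert add(TWO, TWO) == FOUR
```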


A proof is a chain of statements $p_1\implies\cdots\implies p_n$ that can be broken down so that each implication essentially corresponds to an inference of type modus ponens or a substitution - a formalization due to Kurt Gödel.

From a formal point of view, $p_1\iff a\wedge c$ is the conjunction of all axioms and the conditions formulated in the theorem to be proved ($c\implies q_n$ is the theorem), and the chain has links like $(p_k\wedge(p_k\implies q_k))\implies p_k\wedge q_k$.

Example. Goldbach's conjecture can be formulated:
If $m>2$ is an even natural number (the condition $c$), then there exist two prime numbers $p,q$ such that $m=p+q\;$ (the conclusion $q_n$).

Here the axioms are the Peano axioms ($a$), and by finding each modus ponens and each substitution needed in the chain, the conclusion $q_n$ could be reached.

From a more informal point of view the axioms and a lot of theorems are supposed to be known by the reader and don't have to be pointed out.
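The formal picture above, where every line is either an axiom or follows from earlier lines by modus ponens, is concrete enough to implement as a toy proof checker. Here implications are encoded as tuples `("->", A, B)`; this encoding and the checker are my own sketch in Python, not part of the answer:

```python
# A toy proof checker. A "proof" is a list of formulas; each must be an
# axiom or follow from two earlier lines by modus ponens
# (from A and ("->", A, B), conclude B).
def check(proof, axioms):
    derived = []
    for formula in proof:
        ok = formula in axioms or any(
            impl == ("->", a, formula)
            for a in derived
            for impl in derived
        )
        if not ok:
            return False        # an unjustified step invalidates the proof
        derived.append(formula)
    return True

axioms = {"p", ("->", "p", "q"), ("->", "q", "r")}

# A valid chain p, p->q, q, q->r, r ... and an unjustified leap to r.
assert check(["p", ("->", "p", "q"), "q", ("->", "q", "r"), "r"], axioms)
assert not check(["r"], axioms)
```

Real proof assistants add substitution, quantifiers, and far richer axiom schemes, but the skeleton - check each step against a small set of mechanical rules - is the same.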


I don't know if the following is an adequate characterization of wrong statements in mathematics and at least some parts of logic.

That said, plenty of wrong statements have counterexamples. For instance, if someone were to claim that "all (Euclidean... all triangles are assumed Euclidean in this answer) triangles are isosceles" or "all triangles are equilateral", these statements are wrong, because there exist triangles which are not isosceles and triangles which are not equilateral. To convince someone that a mathematical statement is wrong thus requires either constructing a counterexample, indicating how in principle a counterexample can be constructed, or indicating how a counterexample can exist within the theory.

You can use already proven theorems when constructing a counterexample, or when indicating how one could be constructed, for the theory you work with. With the triangle example, you could use Thales' theorem, which in effect gives you a method to construct several triangles, and then confirm that at least one of such triangles is not isosceles or not equilateral.
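A single concrete counterexample settles the matter. The 3-4-5 right triangle refutes both "all triangles are isosceles" and "all triangles are equilateral"; checking this is a one-liner. A sketch in Python (my choice of language):

```python
# Refuting "all triangles are isosceles" needs exactly one counterexample.
def is_triangle(a, b, c):
    # triangle inequality: each side shorter than the sum of the others
    return a + b > c and b + c > a and a + c > b

def is_isosceles(a, b, c):
    return a == b or b == c or a == c

sides = (3, 4, 5)   # the classic 3-4-5 right triangle
assert is_triangle(*sides)
assert not is_isosceles(*sides)   # hence not all triangles are isosceles
```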

Doug Spoonwood

Proving something is no more than just saying why something is true mathematically. There are many ways to prove something. Here are the three most used:

Logic: Using established rules and thinking to get to a coherent answer.

Contradiction: Assuming that what we want to prove is false and deriving an absurdity. Since the assumption leads to a contradiction, what we are trying to prove must be true.

Alternate thinking: Putting the problem in a different way to clarify previously unseen things.

Advanced mathematical proofs generally come in abstract and obscure forms of rationalization. But a proof can be as simple as a few lines of plain logic.

Anonymous Pi