I am about to finish my first year of studying mathematics at university and have completed the basic linear algebra/calculus sequence. I have started to look at some real analysis and have really enjoyed it so far.

One thing I feel I am lacking is motivation. That is, the jump in rigour between the usual introductory calculus class and real analysis is quite stark. While I appreciate rigour for aesthetic reasons, I have trouble understanding why the transition from the 18th-century Euler-style calculus to the rigorous "delta-epsilon" formulation of calculus was necessary.

Is there a book that provides some historical motivation for the rigorous development of calculus? Perhaps something that gives several counterexamples that arise when one is equipped only with a non-rigorous (i.e. first-year undergraduate) formulation of calculus. For example, were there results that were thought to be true but turned out to be false when the foundations of calculus were strengthened? I suppose if anyone knows good counterexamples themselves they could list them here as well.

Stella Biderman
  • Yours is a very weird question… What would be the motivation for *not knowing for sure* if a result we think is true is true? It is not that "rigour" is optional. If someone comes and tells you «I know that X holds but I can only prove it non-rigorously» he is just paraphrasing the sentence «I have a hunch». – Mariano Suárez-Álvarez Mar 29 '17 at 17:15
  • (If the one having a hunch is Euler, that is something. If it is Random Guy In The Street, another…) – Mariano Suárez-Álvarez Mar 29 '17 at 17:19
  • @MarianoSuárez-Álvarez I find this response a weird comment. Mathematicians were fine doing things with minimal rigor for over 2000 years. From the modern viewpoint that seems weird, but to them it made perfect sense. In fact, there was an influential camp that was against the work on set theory of Cantor, Whitehead, Russell, et al. at the turn of the 20th century because it was **unnecessary abstract nonsense** – Stella Biderman Mar 29 '17 at 17:29
  • Yes, one can imagine that Cauchy had great fun when Abel observed that his theorem that a series of continuous functions has a continuous sum had «exceptions». There are many such examples… The opposition to the arithmetization of calculus is explained not so much by the arithmetization itself as by the fact that it was a change. Change is usually resisted. Great mathematicians objected to Hilbert's proof of his basis theorem, too, and to many other things. – Mariano Suárez-Álvarez Mar 29 '17 at 17:35
  • Here's a post which has a link to an article by Hammersley attacking overly abstract mathematics as "soft intellectual trash": http://math.stackexchange.com/questions/2093593/reference-request-finding-an-op-ed-by-j-hammersley Although I'm firmly in the camp that rigor is useful for both theory and application (see the answer below), I think everyone should read Hammersley's article in order to fully appreciate the change in mindset which rigorous mathematics effected. – JMJ Mar 29 '17 at 17:38
  • It does not help that the question probably is confusing rigour with the arithmetization of calculus, of course. – Mariano Suárez-Álvarez Mar 29 '17 at 17:38
  • There is no such thing as "non-rigorous mathematics", unless one is trying to refer politely to hand-waving. – Mariano Suárez-Álvarez Mar 29 '17 at 17:39
  • @MarianoSuárez-Álvarez What does that even mean? Do you think almost no one did mathematics before 1900? Fermat didn't prove his results, does that make him not a mathematician? What about Ramanujan? Or Euler's work on infinite series? From a modern POV most of Euler's work on infinite series is non-rigorous. So therefore it's not mathematics to you? – Stella Biderman Mar 29 '17 at 17:48
  • Are you confusing arithmetization with rigour, too? – Mariano Suárez-Álvarez Mar 29 '17 at 17:49
  • @MarianoSuárez-Álvarez No, I'm not. – Stella Biderman Mar 29 '17 at 17:52
  • If we are going to pretend that the meaning of "doing mathematics" has remained constant since Euler's time, then this is not going to get anywhere. Euler and friends (and Ramanujan, after an initial period) published proofs of their results, which were held (and in most cases still are) to be rigorous and correct. But criteria have changed, and today's Abel would not observe that Cauchy's theorem had exceptions but flat-out write a negative report as a referee. – Mariano Suárez-Álvarez Mar 29 '17 at 17:57
  • When I saw the title, I thought you were talking about things like constructing the real numbers, or proving the least upper bound property. I'm surprised it's just the $\epsilon$-$\delta$ level of things that you don't see the point of, it's absolutely necessary to be able to do calculus. How would you even define a limit, much less a derivative, without it? – Jack M Mar 29 '17 at 18:08
  • @JackM Use the word "gets close to" a lot. Or infinitesimals. – Stella Biderman Mar 29 '17 at 18:44
  • It's worth mentioning explicitly that analysis had to adapt primarily due to the broadening of what constituted a function, that occurred gradually from approximately Euler (it's a thing we have a single expression for) to approximately Weierstrass (it's a thing that has a value for each real number). If all your functions are *implicitly* continuous, differentiable, ..., as essentially all of Euler's functions are, you can comfortably manipulate them without worrying if the operations are actually valid, and indeed, you won't notice they could *not* be valid. – Chappers Mar 29 '17 at 20:26
  • @StellaBiderman That made me laugh much. Thank you.:) – DRF Mar 29 '17 at 20:51
  • May I just say how weird it is that one can now go through an entire "calculus sequence" and not see epsilon-delta proofs? 25 years ago I took calculus at a semi-cruddy second-string state school (by video) and we still had epsilon-delta proofs in 1st-semester calculus. – Daniel R. Collins Mar 29 '17 at 22:25
  • @DanielR.Collins Agreed. At my undergraduate institution, anyone taking calculus learns this, even people in the computational non-major sequence. – Stella Biderman Mar 30 '17 at 03:11
  • @StellaBiderman: When one is actively involved in creating mathematics (as opposed to studying mathematics) one can safely ignore the rigor part, but only on the assumption that once the final creation is available one can support it with full rigor. Ramanujan had proofs for many of his results (barring a few exceptions); it was only that he did not have the time to communicate these proofs to the world. For those studying mathematics the situation is different, and rigor cannot be avoided here. – Paramanand Singh Mar 30 '17 at 07:28
  • @ParamanandSingh Did he? Huh. I was not aware of this fact. – Stella Biderman Mar 30 '17 at 14:13
  • @StellaBiderman: you should study Collected Papers of Ramanujan and see some of the proofs he offered there. He was very clear about what he proved and what he conjectured. – Paramanand Singh Mar 30 '17 at 20:12
  • AFAIK, the **necessity** for rigour in analysis has its roots in Fourier analysis. – Aloizio Macedo Apr 01 '17 at 03:52

10 Answers


In general, the push for rigor is usually a response to a failure to be able to demonstrate the kinds of results one wishes to. It's usually relatively easy to demonstrate that there exist objects with certain properties, but you need precise definitions to prove that no such object exists. The classic example of this is non-computable problems and Turing machines. Until you sit down and say "this, precisely, and nothing else is what it means to be solved by computation," it's impossible to prove that something isn't a computation; so when people start asking "is there an algorithm that does $\ldots$?" for questions where the answer "should be" no, you suddenly need a precise definition. Similar things happened with real analysis.

In real analysis, as mentioned in an excellent comment, there was a shift in people's conception of the notion of a function. This broadened conception suddenly allowed a number of famous "counterexample" functions to be constructed, ones that often require a reasonably rigorous understanding of the topic to construct or to analyze. The most famous is the everywhere continuous, nowhere differentiable Weierstrass function. If you don't have a very precise definition of continuity and differentiability, demonstrating that that function is one and not the other is extremely hard. The quest for weird functions with unexpected properties and combinations of properties was one of the driving forces in developing precise conceptions of those properties.

Another topic that people were very interested in was infinite series. There are lots of weird results that can crop up if you're not careful with infinite series, as shown by the now-famous cautionary theorem:

Theorem (Riemann Rearrangement Theorem): Let $a_n$ be a sequence such that $\sum a_n$ converges conditionally. Then for every $x \in \mathbb{R}$ there is some rearrangement $b_n$ of $a_n$ such that $\sum b_n = x$.

This theorem means you have to be very careful dealing with infinite sums, and for a long time people weren't and so started deriving results that made no sense. Suddenly the usual free-wheeling algebraic manipulation approach to solving infinite sums was no longer okay, because sometimes doing so changed the value of the sum. Instead, a more rigorous theory of summation manipulation, as well as concepts such as uniform and absolute convergence had to be developed.
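The theorem is easy to see in action numerically. Here is a small sketch (in Python; my illustration, not part of the original answer) that greedily reorders the alternating harmonic series $1 - \tfrac12 + \tfrac13 - \tfrac14 + \cdots$, whose usual sum is $\ln 2 \approx 0.693$, so that its partial sums approach any target you like:

```python
import math

def rearranged_sum(target, steps=200_000):
    """Greedily reorder the alternating harmonic series so its partial
    sums approach `target`: while below the target, add the next unused
    positive term (1, 1/3, 1/5, ...); otherwise add the next unused
    negative term (-1/2, -1/4, -1/6, ...)."""
    s = 0.0
    p, q = 0, 0  # counters for positive / negative terms used so far
    for _ in range(steps):
        if s < target:
            s += 1.0 / (2 * p + 1)
            p += 1
        else:
            s -= 1.0 / (2 * q + 2)
            q += 1
    return s

print(math.log(2))           # the "usual" order sums to ~0.6931
print(rearranged_sum(1.5))   # same terms, reordered: ~1.5
print(rearranged_sum(-1.0))  # or any other target:   ~-1.0
```

The greedy rule works precisely because the positive and negative parts each diverge while the terms themselves shrink to zero, which is the content of the theorem's proof.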

Here's an example of a problem surrounding an infinite product created by Euler:

Consider the following formula: $$x\prod_{n=1}^\infty \left(1-\frac{x^2}{n^2\pi^2}\right)$$ Does this expression even make sense? Assuming it does, does it equal $\sin(x)$ or $\sin(x)e^x$? How can you tell (notice that both functions have the same zeros as this product, and the same relationship to their derivative)? If it doesn't equal $\sin(x)e^x$ (which it doesn't; it really does equal $\sin(x)$), how can we modify it so that it does?
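As a quick sanity check (a numerical sketch of my own, not Euler's argument), one can truncate the product and compare it with the two candidates at a sample point; the function name and truncation level below are my own choices:

```python
import math

def sine_product(x, n_terms=100_000):
    """Truncation of Euler's product x * prod_{n>=1} (1 - x^2 / (n pi)^2)."""
    p = x
    for n in range(1, n_terms + 1):
        p *= 1.0 - x * x / (n * n * math.pi * math.pi)
    return p

x = 1.0
print(sine_product(x))            # ~0.84147, matching sin(1)
print(math.sin(x))                # 0.84147...
print(math.sin(x) * math.exp(x))  # 2.2874..., clearly different
```

Of course, a numerical check is not a proof; settling rigorously *which* function the product converges to is exactly the kind of question that demanded the more careful analysis described above.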

Questions like this were very popular in the 1800s, as mathematicians were notably obsessed with infinite products and summations. However, most questions of this form require a very sophisticated understanding of analysis to handle (and weren't handled particularly well by the tools of the previous century).

Stella Biderman
  • There are many, many problems with the Collingwood article as it tends to regurgitate unthinkingly the Boyer-Grabiner line on Cauchy, for example. For a discussion of these issues see the publications [here](http://u.cs.biu.ac.il/~katzmik/infinitesimals.html). – Mikhail Katz Jun 26 '17 at 13:46

One good motivating example I have is the Weierstrass function, which is continuous everywhere but differentiable nowhere. Throughout the 18th and 19th centuries (until this counterexample was discovered) it was thought that every continuous function was also (almost everywhere) differentiable, and a large number of "proofs" of this assertion were attempted. Without rigorous definitions of concepts like "continuity" and "differentiability", there is no way to analyze these sorts of pathological cases.
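A numerical sketch (mine, not part of the original answer) can hint at why differentiability fails. Below, with the illustrative parameters $a = 1/2$ and $b = 13$ (chosen so that $ab > 1 + 3\pi/2$, Weierstrass's sufficient condition), the difference quotients of a partial sum at $x = 0$ grow without bound as the step shrinks, instead of settling toward a slope:

```python
import math

A, B = 0.5, 13  # illustrative Weierstrass parameters; A * B > 1 + 3*pi/2

def weier(x, terms=30):
    """Partial sum of the Weierstrass function: sum_n A^n cos(B^n pi x)."""
    return sum(A ** n * math.cos(B ** n * math.pi * x) for n in range(terms))

def diff_quotient(m):
    """Difference quotient of the partial sum at x = 0 with step h = B^-m."""
    h = float(B) ** -m
    return (weier(h) - weier(0.0)) / h

for m in range(1, 9):
    # The magnitudes keep growing (roughly like (A*B)^m); no limiting slope.
    print(m, diff_quotient(m))
```

This is only suggestive (a partial sum is itself smooth), but it shows the behavior the rigorous proof pins down: along steps $h = b^{-m}$ the quotients blow up rather than converge.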

In integration, a number of functions which are not Riemann integrable were discovered, paving the way for the Stieltjes and, more importantly, the Lebesgue theories of integration. Today, the majority of integrals considered in pure mathematics are Lebesgue integrals.

A large number of these cases, especially pertaining to differentiation, integration, and continuity were all motivating factors in establishing analysis on a rigorous footing.

Lastly, the development of rigorous mathematics in the late 19th and early 20th centuries changed the focus of mathematical research. Before this revolution, mathematics--especially analysis--was extremely concrete. One did research into a specific function or class of functions--e.g. Bessel functions, Elliptic functions, etc.--but once rigorous methods exposed the underlying structure of many different classes and types of functions, research began to focus on the abstract nature of these structures. As a result, virtually all research in pure mathematics these days is abstract, and the major tool of abstract research is rigor.

  • People long knew that continuous functions could fail to be differentiable at finitely many points: just take any piecewise linear function. The issue is failure of differentiability *everywhere*, or even just at *most* points. – Ian Mar 29 '17 at 17:39
  • Indeed. But that point is rather trivial and is included in every high school calc course. It is clear that "differentiable" in this context means "differentiable, except possibly at a finite number of points". – JMJ Mar 29 '17 at 17:45
  • For a rare book in English (virtually all are in German or French) from the mid 1800s that discusses concerns with rigor *from the perspective of that time*, see this translation (published in 1843) of Ohm's [**The Spirit of Mathematical Analysis, and its Relation to a Logical System**](https://archive.org/details/spiritmathemati00ohmgoog) (original German version published in 1842), especially the translator's *Introduction* on pp. 1-17. – Dave L. Renfro Mar 29 '17 at 18:55
  • @ALB: That is not at all clear to somebody who hasn't come across this kind of thing before. Such a person will think they have failed to understand something. Especially in the context of questions like this, which ask about the need for rigour, it is vital to get these things right! – TonyK Mar 30 '17 at 12:24
  • @TonyK OK. You asked for it so I'll rant a little ;-P. I think points like yours come from being cloistered in a math department for too long. What is the derivative of $|x|$? Clearly $|x|/x$ if you are a scientist, engineer, programmer, etc. If you are a mathematician you preface this with a lecture about how $|x|$ technically doesn't *have* a derivative at zero. Not just that it is "infinite" -- as the formula suggests-- but that there's no function, infinite or not, which has *any* business calling itself the derivative of $|x|$ at 0. To someone just seeing differentiation, making (1, cont) – JMJ Mar 30 '17 at 14:45
  • I think you're being overly defensive here. Just go edit in an "almost everywhere" and no one has anything to complain about. (I do find as a lecturer that any time I try to gloss over fine print, at some future points it comes back to bite me. Maybe that's apropos for this question.) – Daniel R. Collins Mar 30 '17 at 14:53
  • mountains out of these small technical details actually confuses them further. I've heard no end of stories from people who took a calc class and thought it was pointless. I used to be surprised since calc is useful for just about everyone --who doesn't deal with rates?! But what became clear is that these people could only see the trees, not the forest, due to the thousand petty points, such as this, which pedantic instructors endlessly expounded upon. There is an obvious difference between $|x|$ and the Weierstrass function. One "has a derivative" the other does not. I think (2, cont) – JMJ Mar 30 '17 at 14:55
  • the indirection of petty rigor can sometimes cause more harm than good. (3) – JMJ Mar 30 '17 at 14:56
  • @ALB, as a matter of fact, I have not seen the inside of a maths department since I left university 35 years ago. – TonyK Mar 30 '17 at 15:00
  • @DanielR.Collins I understand your point and probably will. However, I wanted to use this opportunity to make a larger point. I'm not trying to act defensive, just engaging in a polite discussion (note the smiley faces). – JMJ Mar 30 '17 at 15:04
  • @TonyK So perhaps you can understand where I'm coming from? If you've ever had a boss who is nonmathematical, making a big deal out of these small points makes you unpopular fast. I know that because I've been that person. – JMJ Mar 30 '17 at 15:06
  • No, I'm afraid I still disagree with you! I would never have posted your second sentence as it originally appeared. – TonyK Mar 30 '17 at 15:39

Some other answers have already provided excellent insights. But let's look at the problem this way: where does the need for rigor originate? I think the answer lies behind one word: counter-intuition.

When someone is developing or creating mathematics, they mostly need to have an intuition about what they are talking about. I don't know much about the history, but for example, I bet the notion of derivative was first introduced because people needed something to express "speed" or "acceleration" in motion. I mean, first there was a natural phenomenon, for which a mathematical concept was developed. This math could perfectly describe the thing they were dealing with, and the results matched expectation/intuition. But as time passed, new problems popped up that led to unexpected/counter-intuitive results. So they felt the need to provide more rigorous (and consequently, more abstract) concepts. This is why the further we develop math, the harder its intuition becomes.

A classic example, as mentioned in other answers, is the Weierstrass function. Before knowing calculus, we may have some sense of the notion of continuity, as well as of slope, and this helps us understand calculus more thoroughly. But the Weierstrass function is something unexpected and hard to imagine, which leads us to the fact that "sometimes mathematics may not make sense, but it's true!"

Another (somewhat related) example is the Bertrand paradox in probability. In the same manner, we may have some intuition about probability even before studying it. This intuition is helpful in understanding the initial concepts of probability, until we are faced with the Bertrand paradox and are left wondering: what can we do about that?
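To make the paradox concrete, here is a small simulation (my sketch in Python, not part of the original answer): three reasonable ways of drawing a "random chord" of the unit circle give three different probabilities that the chord is longer than $\sqrt{3}$, the side of the inscribed equilateral triangle.

```python
import math
import random

random.seed(0)
N = 200_000
THRESHOLD = math.sqrt(3)  # side length of the inscribed equilateral triangle

def chord_from_endpoints():
    """Method 1: pick two uniform endpoints on the circle."""
    t = abs(random.uniform(0, 2 * math.pi) - random.uniform(0, 2 * math.pi))
    return 2 * math.sin(min(t, 2 * math.pi - t) / 2)

def chord_from_radius():
    """Method 2: pick a uniform point along a radius as the chord's midpoint."""
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d)

def chord_from_midpoint():
    """Method 3: pick the chord's midpoint uniformly in the disk."""
    d = math.sqrt(random.uniform(0, 1))  # sqrt gives uniform density by area
    return 2 * math.sqrt(1 - d * d)

for method in (chord_from_endpoints, chord_from_radius, chord_from_midpoint):
    p = sum(method() > THRESHOLD for _ in range(N)) / N
    print(method.__name__, round(p, 3))  # ~1/3, ~1/2, ~1/4 respectively
```

The point of the paradox: until "random chord" is given a rigorous definition, all three answers are equally defensible.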

There are some good questions on this site and mathoverflow about some counter-intuitive results in various fields of mathematics, some of which were the initial incentive to develop more rigorous math. I recommend taking a look at them as well.


You may enjoy these books. The first one is a classic.


I list here a few excellent texts on real analysis; have a look at them.

1) Understanding Analysis by Stephen Abbott

2) Real Mathematical Analysis by Pugh

3) Counterexamples in Analysis by Gelbaum

For a historically inclined yet mathematical treatment, you may try The Calculus Gallery by William Dunham.

Coming to your question of why there was a need for epsilon-delta proofs, have a look at this: https://en.m.wikipedia.org/wiki/Non-standard_analysis


You can try reading this https://en.wikipedia.org/wiki/Fluxion to understand the motivation for introducing the definition of the limit.

The example on the indicated web page is important:

If the fluent $y$ is defined as $y=t^{2}$ (where $t$ is time), then the fluxion (derivative) at $t=2$ is:

$$\dot{y}=\frac{\Delta y}{\Delta t}=\frac{(2+o)^{2}-2^{2}}{(2+o)-2}=\frac{4+4o+o^{2}-4}{2+o-2}=4+o$$

Here $o$ is an infinitely small amount of time and, according to Newton, we can now ignore it because of its infinite smallness. He justified the use of $o$ as a non-zero quantity by stating that fluxions were a consequence of movement by an object.
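In modern terms, the limit definition replaces "ignore $o$ at the end" with "let $o \to 0$". A tiny numerical sketch (my illustration, not from the linked page):

```python
def fluxion_quotient(t, o):
    """Newton's quotient ((t + o)^2 - t^2) / o, which simplifies to 2t + o."""
    return ((t + o) ** 2 - t ** 2) / o

# As o shrinks, the quotient approaches the derivative 2t = 4 at t = 2,
# with no need to treat o as a mysterious nonzero-yet-negligible quantity.
for o in (0.1, 0.01, 0.001):
    print(fluxion_quotient(2.0, o))  # approaches 4
```

The epsilon-delta formulation makes "approaches" precise, which is exactly what Berkeley's objection below demanded.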


Bishop George Berkeley, a prominent philosopher of the time, slammed Newton's fluxions in his essay The Analyst, published in 1734. Berkeley refused to believe that they were accurate because of the use of the infinitesimal $o$. He did not believe it could be ignored and pointed out that if it was zero, the consequence would be division by zero. Berkeley referred to them as "ghosts of departed quantities", a statement which unnerved mathematicians of the time and led to the eventual disuse of infinitesimals in calculus.

Towards the end of his life Newton revised his interpretation of $o$ as infinitely small, preferring to define it as approaching zero, using a definition similar to the concept of the limit. He believed this put fluxions back on safe ground. By this time, Leibniz's derivative (and his notation) had largely replaced Newton's fluxions and fluents, and it remains in use today.

You can also find more information in The American Mathematical Monthly, March 1983, Volume 90, Number 3, pp. 185–194.

Daniel R. Collins

Rigor is essential in mathematics, there is just no other way to do math than to proceed on the basis of rigorously proven theorems. This does not mean that calculus necessarily needs to be set up in the same way as it currently is. You may rail against the rigorous definition of limits, but you need to come up with an alternative if you don't like the way things are done. There are plenty of examples for imperfections in mathematics as was practiced in previous centuries, see Stella Biderman's and ABL's answers for details.

A more compelling objection to the way real analysis is done is, i.m.o., that we haven't gone far enough in neutralizing infinitely large or infinitely small objects. So, as is pointed out in asv's answer, the limit procedure does away with the ill-defined fluxions. To make such quantities well defined requires setting up the formalism of non-standard analysis, which is extremely complicated to do. But we still have not excised all infinite objects; take e.g. the set of real numbers, which, as pointed out here, is an extremely complicated matter that's easily glossed over.

Therefore, it's worthwhile to explore the opposite idea, where limits are also used at a higher level to get rid of all infinite objects. This has not yet been done, but there have been mathematicians who have railed against the idea of infinite sets, which has led to formalisms such as finitism and ultrafinitism. A proper finitist foundation could allow one to always work on a discrete set and then approach the continuum only in a proper scaling limit, where the set is made larger and larger but the functions defined on that set are also coarse-grained, so that they become smooth functions in that continuum limit (we then don't get exotic objects like non-measurable functions). This more elaborate limiting procedure would, i.m.o. at least, lead to a much simpler and much more natural setup of real analysis.

I have no doubt that the mathematicians who have fallen in love with exotic objects would strongly disagree with me, but it's difficult to explain to an engineering student why he/she has to navigate around all these exotic mathematical artifacts in order to study a topic such as fluid dynamics.

Count Iblis
  • "Rigor is essential in mathematics, there is just no other way to do math than to proceed on the basis of rigorously proven theorems." The OP does not question this and seems to have no problem with it; the question is "*for what reason*, historically speaking, did old mathematicians decide to be more rigorous". – AnoE Mar 30 '17 at 09:21
  • As an alternative to finitism, one could use so-called "nonstandard analysis" to put the "infinitesimal" approach to calculus on a rigorous foundation, and then just get on with doing physics or engineering, since most functions used in applied science *are* "sufficiently well behaved" not to cause any serious problems. (String theorists may have to look after their mathematical hygiene more carefully than engineers, of course). – alephzero Mar 30 '17 at 14:12
  • @alephzero " since most functions used in applied science are "sufficiently well behaved"" Yes, but there is a reason why that's the case and that reason can be formalized which isn't done in analysis. This is because in the 19th century the classical physics notion of a continuum was taken as a model that inspired mathematicians at the time to define the mathematical notion of a continuum. But in modern physics the continuum only exists "effectively" as a result of coarse graining and scaling. – Count Iblis Mar 30 '17 at 19:19
  • @AnoE I did refer to the other answers given here that go into the details of problems with the old approaches making rigor necessary. My answer basically boils down to saying that you're ill so you're taking medicines to deal with that illness, the main problems have been cured, but there is still some pain left because you are only taking half the dose. So, you're not getting the full benefit of the rigor. – Count Iblis Mar 30 '17 at 19:23

@TheGreatDuck This is a fascinating thread. Let me comment on your aeronautical contribution from the viewpoint of an aeronautical engineer.

There are times when rigor is important and times when it isn't. For the first situation, consider the design of software to undertake air traffic control. Much attention is being given at the moment to "verifiable" algorithms, where it can be "rigorously" established that no possible situation has been overlooked. I am not sure that the standard of rigor would convince a modern analyst, but there is a recognition that intuition can be misleading and that formal analysis has considerable value.

An example where the search for rigor would be misplaced would be the calculation of the airflow by solving the Navier-Stokes equations. Insistence on rigor would require waiting around until the Navier-Stokes equations are shown to be well-posed, which is probably not going to happen soon. Until that day comes, designers will rely on wind-tunnel experiments, flight tests, and decades of accumulated experience. For now, this is MUCH safer than attempting to prove theorems. In fact, if I knew that the designers were trying to rely on theorems I would think very seriously before buying an airline ticket.

The value of rigor depends entirely on what you are trying to do, and how quickly you need to do it. This is true within mathematics as much as in its applications. Without Euler's gleeful nonchalance the pace of mathematical advance would have been greatly delayed.

Philip Roe

The purpose of rigor is not so much to make sure something is true. It is to make sure we know what we are actually assuming. If one forces specificity about what is assumed, then new ways to define things may also become clearer.

The parallel axiom of Euclidean geometry is a good example. By forcing ourselves to try to prove it (which we now know was not possible), we gradually realized that other paths to building theory were possible. Without bothering to try to prove it, just taking it for granted, maybe other possibilities would not have occurred to us.

For each added piece of specificity there is always an "in what other ways could this be done?" that has a chance to pop up, leading to new theories.


The purpose of "rigor" is to prove that when you claim something in mathematics it actually is legitimately true. If you wish to ask "why" then it is a fairly simple answer:

When we use calculus in machinery, programming, and science at a much larger scale than just a handful of expert scientists*, can we risk not being able to know for certain whether mathematics (the fundamental tool we use to reason about theoretical scientific concepts) is actually correct? Imagine if the mean value theorem were not always true but we assumed it to be. What if we built an airplane with an auto-piloting system relying on that theorem being true (maybe it turns upward at full throttle at a certain dropping velocity which we know it must pass through to be considered 'crashing')? We know that position is continuous (obviously we do not teleport), but without proof that the derivative is continuous because position is a "smooth" curve, we have no basis to claim velocity is not a step function.

And well, without rigor there would be a risk that the plane would crash.

tl;dr Science relies much more heavily on calculus to do riskier jobs with safety concerns, and so our scrutiny must rise to meet the occasion.

*Of course it wasn't just experts that did calculus in the late 1800's to early 1900's but one has to admit that a college education is more widespread than many decades and/or centuries ago and so more people have that knowledge. Therefore, the number of people using it rises. With that, the need for quality control rises. You wouldn't buy a broken device at the store. Mathematics isn't a product that can be bought, but it's the same way. If it's broken, people won't accept it. Therefore, we scrutinize everything in a much deeper manner than before so that we can be justified in saying "yes, this statement is true!" and people will agree with us. We don't want to be blamed for something failing because we simply ignored cases where an equation wasn't true.
