
The question is the title of a 2013 publication in the Notices of the American Mathematical Society, by twelve authors (of which I am one). The contention is that traditional history of mathematics is based on the assumption of an inevitable evolution toward the real continuum-based framework as developed by Cantor, Dedekind, Weierstrass (referred to as the "great triumvirate" by Carl Boyer here) and others. Taking some seminal remarks by Felix Klein as their starting point, the authors argue that the traditional view is lopsided and impoverishes our understanding of mathematical history. Have the historians systematically underplayed the importance of the infinitesimal strand in the development of analysis? Editors are invited to submit reasoned responses based on factual historical knowledge, and to refrain from answers based on opinion alone.

To be even more explicit, we ask for additional examples from history that support either Boyer's viewpoint or the NAMS article viewpoint. That is, limit the question to facts and not opinions (based on a comment by Willie Wong at meta).

Note 1. For a closely related MO thread see this.

Note 2. A reaction to the Notices article by Craig Fraser was published here.

Note 3. Another would-be victor, Gray, is analyzed in this MSE thread.

Note 4. The Notices article originally contained a longish section on Euler, which was eventually split off into a separate article. That article shows, using the writings of Ferraro as a case study, how an assumption of default Weierstrassian foundations deforms a scholar's vision of Euler's mathematics. It was published in 2017 in the Journal for General Philosophy of Science.

Note 5. A response to Craig Fraser's reaction was published in 2017 in Mat. Stud.; see this version with hyperlinks.

Mikhail Katz
  • The history of Linear Algebra is almost certainly written by the vectors. – Emily Jul 16 '13 at 16:37
  • I think Claude Lobry would like this paper. I was introduced to non-standard analysis by a little book of his: ["Et pourtant ils ne remplissent pas N"](http://books.google.cz/books/about/Et_pourtant_ils_ne_remplissent_pas_N.html?id=LwPvAAAAMAAJ&redir_esc=y). Excellent read, even if you are a die-hard Weierstrassian. – Raskolnikov Jul 16 '13 at 16:55
  • +1 for a damn good question; all the math history I ever did might be just propaganda. – jimjim Jul 17 '13 at 08:12
  • I see the following question: "Have the historians systematically underplayed the importance of the infinitesimal strand in the development of analysis?" I'm no expert on the history of the subject, but considering you had to find your source material for your article somewhere, I'd say the answer is an emphatic **no**. What else is there to discuss? – Raskolnikov Jul 17 '13 at 14:29
  • @Raskolnikov: The NAMS article is based largely on primary sources. Indeed the article argues that such sources have not been given sufficient attention. – Mikhail Katz Jul 17 '13 at 14:32
  • If it is not too much trouble, can you give an example of the sort of "examples" (perhaps from your paper or Boyer's book) you are looking for? I expect that will help focus the discussion some what. – Willie Wong Jul 17 '13 at 15:08
  • One of the first examples in the NAMS text is from David Mumford, who wrote about overcoming his own prejudice (about what he was taught concerning infinitesimals) in the following terms: "In my own education, I had assumed that Enriques [and the Italians] were irrevocably stuck.… As I see it now, Enriques must be credited with a nearly complete geometric proof using, as did Grothendieck, higher order infinitesimal deformations.… Let’s be careful: he certainly had the correct ideas about infinitesimal geometry, though he had no idea at all how to make precise definitions." There are many other examples. – Mikhail Katz Jul 17 '13 at 15:13
  • Following on from Raskolnikov's comment, what are you trying to achieve? No one is stopping anyone from writing a book on the history you cite, and no one is stopping anyone from trying to convince educators to give infinitesimals another look. There may be others who *disagree* with you, but that's a different issue. For the record, I have no strong opinion. –  Jul 17 '13 at 17:33
  • I am puzzled by Raskolnikov's comments. On the one hand, he wrote that Lobry's book is excellent and Lobry would enjoy this article/question. On the other, he wrote that the NAMS article was based on published material, and therefore historians must not be "guilty as charged", so to speak. I pointed out that the NAMS article (as well as a number of other articles) is based on 17th, 18th, and 19th century sources that were indeed given insufficient attention by modern historians, and tend to go counter to Boyer's philosophical approach. Have other editors come across data supporting either approach? – Mikhail Katz Jul 17 '13 at 17:45
  • Is there a secret society of math-historians called Victor? – Asaf Karagila Jul 17 '13 at 23:25
  • I don't know, but there is apparently one called Vector, with at least 41 members :-) – Mikhail Katz Jul 18 '13 at 08:18
  • @Arkamis Damn! I'm rolling on the floor! – Metin Y. Jul 24 '13 at 09:13
  • @Metin Y.: Sorry, that's not enough to join. To join the secret society, you must also do all the epsilon-delta exercises in Thomas-Finney, and sign a pledge of allegiance to $\frac{dy}{dx}$ not being a ratio. – Mikhail Katz Jul 25 '13 at 14:52
  • @Raskolnikov, thanks for your comment. Lobry and I are currently writing a paper together :-) I can send you a copy if you are interested. – Mikhail Katz Apr 19 '16 at 13:57
  • Oh, hello! Nice to see you came into contact with Claude Lobry. Is he still working at Sophia-Antipolis? Or is he already retired? I'm certainly interested in a copy. Just tell me when you can send it to me. – Raskolnikov Apr 19 '16 at 17:29
  • He is still very active as the current collaboration shows though officially I think he may be semi-retired. Is there an email address I can send this to? @Raskolnikov – Mikhail Katz Apr 20 '16 at 07:21
  • You can send it to tuvegeto137@gmail.com . – Raskolnikov Apr 20 '16 at 09:08
  • @Raskolnikov, the paper by Lobry et al is online [here](https://arxiv.org/abs/1703.00425). – Mikhail Katz May 17 '18 at 10:29

4 Answers


Certainly the victors write the history, generally. But when the victory is so complete that there is no further threat, the victors sometimes feel they can beneficently tolerate "docile" dissent. :)

Srsly, folks: having been on various sides of such questions, at least as an interested amateur, and having wanted new-and-wacky ideas to work, and having wanted a successful return to the intuition of some of Euler's arguments ... I'd have to say that at this moment the Schwartz-Grothendieck-Bochner-Sobolev-Hilbert-Schmidt-BeppoLevi (apologies to all those I left out...) enhancement of intuitive analysis is mostly far more cost-effective than various versions of "non-standard analysis".

In brief, the ultraproduct construction and "the rules", in A. Robinson's form, are a bit tricky (for people who have external motivation... maybe lack training in model theory or set theory or...) Fat books. Even the dubious "construction of the reals" after Dedekind or Cauchy is/are less burdensome, as Rube-Goldberg as they may seem.

Nelson's "Internal Set Theory" version, illustrated very compellingly by Alain Robert in a little book on it, achieves a remarkable simplification and increased utility, in my opinion. By now, having spent some decades learning modern analysis, I do hopefully look for advantages in non-standard ideas that are not available even in the best "standard" analysis, but I cannot vouch for any ... yet.

Of course, presumably much of the "bias" is that relatively few people have been working on analysis from a non-standard viewpoint, while many-many have from a "standard" viewpoint, so the relative skewing of demonstrated advantage is not necessarily indicative...

There was a 1986 article by C. Henson and J. Keisler, "On the strength of nonstandard analysis", in J. Symbolic Logic, maybe cited by A. Robert?... which follows up on the idea that a well-packaged (as in Nelson) version of the set-theoretic subtlety of the existence of an ultraproduct is (maybe not so-) subtly stronger than the usual set-theoretic riffs we use in "doing analysis", even with AxCh as usually invoked, ... which is mostly not very serious for any specific case. I have not personally investigated this situation... but...

Again, "winning" is certainly not a reliable sign of absolute virtue. Could be a PR triumph, luck, etc. In certain arenas "winning" would be a stigma...

And certainly the excesses of the "analysis is measure theory" juggernaut are unfortunate... For that matter, a more radical opinion would be that Cantor would have found no need to invent set theory and discover problems if he'd not had a "construction of the reals".

Bottom line for me, just as one vote, one anecdotal data point: I am entirely open to non-standard methods, if they can prove themselves more effective than "standard". Yes, I've invested considerable effort to learn "standard", which, indeed, are very often badly represented in the literature, as monuments-in-the-desert to long-dead kings rather than useful viewpoints, but, nevertheless, afford some reincarnation of Euler's ideas ... albeit in different language.

That is, as a willing-to-be-an-iconoclast student of many threads, I think that (noting the bias of number-of-people working to promote and prove the utility of various viewpoints!!!) a suitably modernized (= BeppoLevi, Sobolev, Friedrichs, Schwartz, Grothendieck, et al) epsilon-delta (=classical) viewpoint can accommodate Euler's intuition adequately. So far, although Nelson's IST is much better than alternatives, I've not (yet?) seen that viewpoint produce something that was not comparably visible from the "standard" "modern" viewpoint.

paul garrett
  • This is the key, really: "I've not (yet?) seen that viewpoint produce something that was not comparably visible from the "standard" "modern" viewpoint." – Andrés E. Caicedo Jul 18 '13 at 00:56
  • If paul garrett is looking for applications of NSA, one could mention Goldbring's proof of a local version of Hilbert's 5th problem; see http://mathoverflow.net/questions/16312/how-helpful-is-non-standard-analysis for more. – Mikhail Katz Jul 18 '13 at 07:34
  • I think that Cantor invented set theory after stumbling on the idea of ordinals when he was working on trigonometric series, or something similar. I think that his construction via Cauchy sequences came later. – Asaf Karagila Jul 18 '13 at 07:50
  • @user72694 Thanks for the pointer! Goldbring's work is a very interesting example! – paul garrett Jul 18 '13 at 12:53
  • @AsafKaragila, I believe Cantor was working on "sets of uniqueness" for Fourier series, and was looking at processes akin to "taking the derived set (limit points)" of a given set, and iterating the process. I believe he conceived of the notion of transfinite iteration of such. Such taxonomy of "derived sets" appeared in some British texts in "analysis" of the early 20th century, which is where I saw it (though their treatment of ordinals was hazy). – paul garrett Jul 18 '13 at 12:55
  • @paulgarrett No, not Fourier series. For Fourier series we have good uniqueness results. The point of Cantor's results is that they apply to trigonometric series even if they are not Fourier series, so we cannot just compute the coefficients via the integral formulas. – Andrés E. Caicedo Jul 18 '13 at 14:31
  • @AndresCaicedo I must not understand your use of "Fourier series" versus "trigonometric series". Also, I never did look first-hand at what Cantor did in that regard, but surely not all pointwise issues for any sort of eigenfunctions expansions are so clear, e.g., as the Carleson-Hunt theorem about almost-everywhere pointwise convergence. But, anyway, I was not intending to make a precise statement in my previous comment. – paul garrett Jul 18 '13 at 15:17
  • @paulgarrett It is a delicate matter, when a trigonometric series (a series of sines and cosines) is a Fourier series ($\displaystyle \frac{a_0}2+\sum_{n=1}^\infty(a_n\cos(nx)+b_n\sin(nx))$ is the Fourier series of $f(x)\in L^1(0,2\pi)$ iff $\displaystyle a_n=\frac 1{2\pi}\int_0^{2\pi} f(t)\cos(nt)\,dt$ and $\displaystyle b_n=\frac1{2\pi}\int_0^{2\pi} f(t)\sin(nt)\,dt$ for all $n$). The classic counterexample is $\displaystyle \sum_{n=2}^\infty\frac1{\log n}\sin(nx)$. This is a trigonometric series that converges everywhere, but it is not the Fourier series of any function. – Andrés E. Caicedo Jul 18 '13 at 15:31
  • @paulgarrett The point is that trigonometric series are considered formal objects, regardless of whether the series actually converges. Carleson's theorem and its extension by Hunt are about when the formal Fourier series of an $L^p$ function (for some appropriate $p$) actually converges. But there are no uniqueness issues here, as the $a_n$ and $b_n$ are given by the formulas above. By contrast, Cantor's results are about when the set of points where a formal trigonometric series converges suffices to uniquely identify its coefficients. – Andrés E. Caicedo Jul 18 '13 at 15:35
  • @AndresCaicedo Aha! Thanks for the clarification! And for the (counter-) example! (In recent years, I've developed a fondness for a Levi-Sobolev-space viewpoint, and emphasis on "generalized functions", so I would interpret that $\sin(nx)/\log n$ series as being the Fourier series of a distribution in $-{1\over 2}-\epsilon$ Levi-Sobolev space... the point being that I would think of it as the Fourier expansion of _something_... thus my confusion about usage...) Thanks! – paul garrett Jul 18 '13 at 15:36
  • @paulgarrett I see. Now that you point this out, the study of sets of uniqueness naturally leads to distributions (and "pseudomeasures"). There are many interesting questions still left in the area! – Andrés E. Caicedo Jul 18 '13 at 15:39
  • @paul garrett: I don't really follow your Schwartz-Grothendieck-Bochner-Sobolev-Hilbert-Schmidt-BeppoLevi comment so well. Judging from [Beppo Levi](http://en.wikipedia.org/wiki/Beppo_Levi), you are talking about Lebesgue integration. Is this correct? Can you elaborate? – Mikhail Katz Jul 23 '13 at 12:29
  • @user72694 I meant Beppo Levi's observation (c. 1906) that the Dirichlet principle needed the Hilbert space completion with respect to the energy norm (nowadays called Sobolev +1). The other names refer to the subsequent development of Fourier integrals, distributions, Sobolev spaces, nuclear spaces, pseudo-differential operators, wavefront sets, and such. I didn't intend so much to refer to measure theory. – paul garrett Jul 23 '13 at 12:51
  • In other words, you are talking about the calculus of variations, the connection with Euler being the Euler-Lagrange equations? – Mikhail Katz Jul 23 '13 at 13:38
  • @user72694, Well, calculus of variations, but/and also Euler's intuitive use of infinitesimals and "unlimited" numbers, much as Leibniz. – paul garrett Jul 23 '13 at 13:44
  • I don't really see how Schwartz-Grothendieck-...-BeppoLevi, for all their great achievements, helped with Euler's infinitesimals (unless you mean Grothendieck's nilpotent infinitesimals in algebraic geometry). Take for example the distributional definition of the Dirac delta function, a la Schwartz. This is a great definition, but its "double dual" nature is a far cry from intuitive infinitesimals. I don't know about Euler but Cauchy had an intuitive definition of the Dirac delta using infinitesimals, as a function with LOCAL VALUES (which is not to say he was as precise as Schwartz). – Mikhail Katz Jul 23 '13 at 13:50
  • @user72694, I myself would tend to take the Gelfand-et-al "generalized functions" viewpoint, rather than think about duals, much less double duals. Many expressions which "don't make sense" 19-century-wise do make sense, e.g., as Fourier transforms of tempered distributions, or spectral expansions of compactly-supported distributions in Sobolev spaces. Even $\sum_n 1\cdot e^{2\pi inx}$ is a perfectly fine Fourier expansion, namely, of the periodic Dirac delta, despite its not converging pointwise, etc. Spectral expansions work for distributions, and can be manipulated. – paul garrett Jul 23 '13 at 15:47
  • @paul garrett: What do you mean by "the Gelfand-et-al 'generalized functions' viewpoint"? I thought they use the term "generalized function" in the sense of a Schwartz distribution. I agree with you that once one understands this formalism, one starts viewing distributions as "generalized functions", and they begin to seem intuitive. However, at the level of the definitions, one needs first to pass to the space of all functions, and then in turn to consider a functional on that space. That's what I was meant by the "double dual". This formalism great but it's some distance away from intuition. – Mikhail Katz Jul 30 '13 at 09:30
  • @user72694, in the "rigged Hilbert space" discussion in volume 4 of Gelfand-et-al, they take up a more refined viewpoint, "grading" topological vector spaces in a fashion that abstracts Levi-Sobolev spaces, etc. This is finer structure than just "distribution". And, independently of that, in all these spaces, in concrete examples, test functions are dense, so these spaces can be viewed as completions of test functions with respect to topologies of various strengths/weaknesses. Thus, one can put various topologies on test functions, and complete, to obtain these spaces (yes, of distributions). – paul garrett Jul 30 '13 at 12:51
  • @paul garrett: Approximating a distribution/generalized function by a sequence $(f_n)$ of test functions is certainly fine. These seem to be called "nascent delta functions" in the case of the delta distribution, for example. The question arises, however: once we acknowledge that we need such a sequence $(f_n)$ to represent an ideal object we are interested in (such as the Dirac delta), why not take the next logical step and consider the equivalence class of such a sequence $(f_n)$ (modulo a suitable maximal ideal)? This results in an object with local values over an extended field, which is.. – Mikhail Katz Aug 04 '13 at 07:42
  • ... closer to our intuition of what the desired ideal object should look like. You implied in your answer that one needs "model theory" and "fat books" to understand this, but arguably the construction is more accessible than it is made out to be. Again, the main point of my original post was to question certain whiggish historical interpretations, rather than to argue for the efficacy of working over the hyperreals. Robinson's work was merely a *catalyst* for a historical re-evaluation, and the merits or otherwise of Robinson's approach are a separate issue... – Mikhail Katz Aug 04 '13 at 07:43
  • ... Briefly, what Robinson revealed was that there was a "method to their madness" as far as the work of such giants as Fermat, Leibniz, Euler, and Cauchy is concerned. – Mikhail Katz Aug 04 '13 at 07:44
  • @user72694, Oh, I heartily agree that "whiggish interpretations" are too common, too shallow, and, yes, Robinson (and J. Los'?) work was a singularly helpful catalyst for re-consideration. No doubt in my mind. – paul garrett Aug 04 '13 at 16:37
  • @paul garrett: thanks for acknowledging this explicitly. You may recall that this question was closed at first, and had to be restored through an additional vote count. This was clearly not because this question is a duplicate of earlier similar questions, since I have not seen such questions here on SE. Note that a similar question at MO was not only closed but subsequently deleted altogether. This would seem to indicate that not everybody in the community realizes there is a need for such a historical re-evaluation. – Mikhail Katz Aug 04 '13 at 19:20
  • This is one of the best answers I have seen on the site. @user72694, what do you mean by a function "with local values" in relation to Dirac delta or constructions thereof? – zyx Dec 03 '13 at 05:10
  • @zyx, To explain what is meant by "local values", recall that the difference between a function and a distribution is that a function is defined by its value at each point, whereas a distribution, being a functional on the space of functions, is a more nebulous object. The Dirac delta function is generally considered to be of the latter type from a mathematically rigorous viewpoint. However, there is a different point of view on Dirac delta, as was pointed out already by Robinson in his 1966 book. Extending the field of scalars from the reals to the hyperreals gives an additional flexibility.. – Mikhail Katz Dec 03 '13 at 08:10
  • ... that allows one to define a type of Dirac delta function which has values at hyperreals points (rather than being a distribution). The idea of an infinitely tall, infinitely thin Dirac delta function can be implemented rigorously in this framework. The idea was already present in Cauchy's work from the 1820s though of course not in a form that would meet current standards of rigor. – Mikhail Katz Dec 03 '13 at 08:12
  • It is easy to define formally in this context some reasonable and useful meanings for something to be "local" or to have support concentrated at or near a point. The thing I was hoping to understand is what notion of locality might apply when writing that Cauchy (or Robinson) talked about a function "with local values". If you just mean that it is possible to define a notion of infinitesimal neighborhood, and talk about functions supported on the infinitesimal neighborhood, this seems like something that any theory that allows for (analytic) infinitesimals and infinities would accomplish. – zyx Dec 03 '13 at 08:38
  • @zyx, I am glad to see you are interested in this, but why are we discussing it here? This question concerns certain misrepresentations in the historical literature, and therefore in the views of mathematicians who are affected by this literature. As far as your question on locality is concerned, I would say the strength of Robinson's framework is the existence of the transfer principle which makes this useful in analysis rather than mere "notions" of infinitely small neighborhood. To understand the "local" issue, it may be helpful to examine the notion of... – Mikhail Katz Dec 03 '13 at 09:16
  • ...uniform continuity which similarly has a local definition in Robinson's framework. Drop me an email if you would like to discuss this further. – Mikhail Katz Dec 03 '13 at 09:17
  • I'm way out of my depth on the technicalities here but it seems obvious to me that most talk of infinities, infinite limits, infinitesimals etc. is describing the boundaries of what is possible, which any good mathematician knows is not what is actually possible. As such, maths using these concepts may reach further initially but ultimately be misleading while precise descriptions restricted to what is actually the case (Euler etc.) will be more formal, more difficult, but ultimately yield only *what IS*. As long as we keep our assumptions front of mind we are not apt to mislead ourselves. – samerivertwice May 30 '18 at 04:22
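
To make the comment-thread discussion of "local values" concrete, here is a minimal sketch of a hyperreal Dirac delta, in the spirit of Robinson's 1966 treatment; the particular rectangular representative chosen below is just one standard illustration, not the unique construction:

```latex
% Fix an infinite hyperreal H > 0 and define, at every hyperreal point x,
\delta(x) =
  \begin{cases}
    H, & |x| \le \tfrac{1}{2H},\\
    0, & \text{otherwise.}
  \end{cases}
% The (internal) integral is exactly 1:
\int_{-\infty}^{\infty} \delta(x)\,dx \;=\; H \cdot \frac{1}{H} \;=\; 1,
% and for any continuous real function f, taking the standard part
% recovers the usual sifting property:
\operatorname{st}\!\left(\int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx\right) = f(0).
```

Unlike the Schwartz distribution, this $\delta$ has a value at each point of its (hyperreal) domain: the infinitely tall, infinitely thin picture is taken literally.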

To give an example of the kind of answer requested here, note that one of the first examples in the NAMS text is from David Mumford, who wrote about overcoming his own prejudice (stemming from what he was taught concerning infinitesimals) in the following terms: "In my own education, I had assumed that Enriques [and the Italians] were irrevocably stuck.… As I see it now, Enriques must be credited with a nearly complete geometric proof using, as did Grothendieck, higher order infinitesimal deformations.… Let’s be careful: he certainly had the correct ideas about infinitesimal geometry, though he had no idea at all how to make precise definitions."

I enjoyed paul garrett's answer though it is steered in a slightly different direction, namely the effectiveness of NSA in cutting-edge research, whereas my question is mostly concerned with historical interpretation and getting an accurate picture of the mathematical past.

To give another example, Fermat's procedure of adequality involves a step where Fermat drops the remaining "E" terms; he carefully chooses his terminology and does not set them equal to zero. Similar remarks apply to Leibniz. Yet historians often assume that there is a logical contradiction involved at the basis of their methods, which can be summarized in the notation of modern logic as $(dx\not=0)\wedge(dx=0)$. Such remarks often go hand-in-hand with claims that the alleged logical contradiction was finally resolved around 1870. Without detracting from the greatness of the accomplishment around 1870, such criticism of the early pioneers of the calculus may not be on target.
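
To illustrate, here is Fermat's own first example (finding the maximum of $bx - x^2$), transcribed into modern notation; the symbol $\sim$ for adequality is a transcription convention, not Fermat's:

```latex
\begin{align*}
bx - x^2 &\sim b(x+E) - (x+E)^2 && \text{adequate the expression at } x \text{ and } x+E\\
bE - 2xE - E^2 &\sim 0          && \text{expand and cancel the common terms}\\
b - 2x - E &\sim 0              && \text{divide through by } E \text{ (so here } E \neq 0\text{)}\\
x &= \tfrac{b}{2}               && \text{drop the remaining } E \text{ term.}
\end{align*}
```

The last two steps are precisely the pair that gets summarized as $(dx\not=0)\wedge(dx=0)$; on the reading above, the remaining $E$ term is dropped, not set equal to zero.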

Mikhail Katz
  • I'm a little unsure of why this quote by Mumford is relevant to your question, which is about whether historians of mathematics have given short shrift to infinitesimals in the development of analysis. The "infinitesimals" Mumford describes are not infinitesimal numbers; they are infinitesimal deformations. Although there are some analogies between them (and Mumford makes an *analogy* between Enriques and Leibniz), they are really not the same. – Pete L. Clark Jul 18 '13 at 07:57
  • In particular there is nothing "nonstandard" (i.e., ultraproducts, internal sets, transfer principles...) in the formalism of infinitesimal deformations. Rather it requires the enlargement of the class of rings which are viewed as rings of algebraic functions to include rings with nilpotent elements. So for instance this involves replacing $\mathbb{C}$ by $\mathbb{C}[t]/(t^n)$. Could you clarify whether you are saying that you view this as part of "the infinitesimal strand in the development of analysis", and if so, why? – Pete L. Clark Jul 18 '13 at 08:00
  • Of course they are not the same, but there does tend to exist an attitude of distrust toward all historical forms of infinitesimal reasoning, as illustrated by Mumford's comment. This is usually accompanied by a belief that historical infinitesimals were allegedly "logically inconsistent". Whether or not this is the case was discussed at length in [this thread](http://mathoverflow.net/questions/124998/was-the-early-calculus-inconsistent). – Mikhail Katz Jul 18 '13 at 08:06
  • Hmm. I don't disagree with anything in your last comment. But I feel like broadening the scope of your question to include all instances of things called "infinitesimal" in the history of mathematics dilutes it, maybe to the point of making it less interesting. For instance, there are many uses of the word "infinitesimal" in classical Lie theory, but I am not aware of any similar sociological backlash against this. A big difference here is that the formalism was there from early on: i.e., Lie algebras and their connection to Lie groups. – Pete L. Clark Jul 18 '13 at 08:19
  • Two points in response to Pete's last comment: (1) I would be happy to provide additional examples from the history of infinitesimal analysis if you like (though I was mostly hoping to elicit additional examples I am not yet familiar with); (2) From what I heard, Lie himself did present the theory in infinitesimal terms. This was "sanitized" by Killing and others, but caused some loss of insight, so there was a bit of "sociological backlash" as you call it in this area as well. – Mikhail Katz Jul 18 '13 at 08:24
  • Regarding (1): no, that's not necessary, much as you suggest. I was trying to understand the scope of your question. (2): That's interesting; I'd be happy to hear more about it, especially as to what insight was lost. Any infinitesimals that appear in Lie theory would be nilpotent infinitesimals, as far as I can see (and indeed the modern definition of the Lie algebra of an algebraic group uses $\mathbb{C}[t]/(t^2)$ very directly). To me that seems very different from NSA, because the algebraic formalism is right there for you to use: no heroic efforts a la Robinson are required. – Pete L. Clark Jul 18 '13 at 08:37
  • Pete, I did not say a *word* about NSA in my question (though I did add the NSA tag later, since it is relevant). Leibniz and Euler had no more notion of ultrafilter than Enriques. An important distinction here is that between procedures (or "inferential moves" as a postdoc from Chicago would put it) and entities (constructed within a suitable foundational framework such as ZFC). Modern infinitesimal theories give us adequate "proxies" for the procedures of the historical infinitesimalists, without of course implying any ultrafilter wrongdoing on their part. – Mikhail Katz Jul 18 '13 at 08:45
  • "I did not say a word about NSA in my question (though I did add the NSA tag later, since it is relevant)." What a strange thing to say. You included it as a tag in the question, which means that you regard it as one of the keywords. The question asks for a response to your article, which has NSA all over it. And your question is all about infinitesimals in the history of analysis, a subject in which NSA is one of the most important, if not the single most important, development. – Pete L. Clark Jul 18 '13 at 09:54
  • "Leibniz and Euler had no more notion of ultrafilter than Enriques." I know you know I know that, so I'm missing the point you're trying to make. (In fact my feeling is that some people draw too straight a line from Newton/Leibniz to Robinson. I don't view Robinson's work as an explication of 17th century analysis in the slightest: it is much more impressive than that, and conversely it is quite clear to me that the 17th century analysts did not have a firm logical foundation for their manipulations with infinitesimals.) – Pete L. Clark Jul 18 '13 at 10:03
  • "your question is all about infinitesimals in the history of analysis, a subject in which NSA is one of the most important .. development[s]": The position you expressed here is hotly disputed by many historians; this was discussed for example in [this thread](http://mathoverflow.net/questions/126986/eulers-mathematics-in-terms-of-modern-theories). My claim is that the helpfulness of NSA in understanding what the historical infinitesimalists were doing only involves the procedures (syntax, if you like) rather than the actual models (or semantics). I will add a Fermat example to my answer. – Mikhail Katz Jul 18 '13 at 10:10
  • "the 17th century analysts did not have a firm logical foundation for their manipulations with infinitesimals": I think this issue is less clear-cut than it appears to be. Here again it is important to keep in mind the distinction between procedures and entities (or between syntax and semantics). Claiming that Leibniz had a firm basis (by modern standards) for his entities such as infinitesimals would be absurd; on the other hand, we published a 50-page paper arguing that his procedures were not as contradictory as they are made out to be; see http://dx.doi.org/10.1007/s10670-012-9370-y – Mikhail Katz Jul 18 '13 at 10:23
  • I feel like you keep putting strange spins on things. The idea that NSA is not a highly important development in the history of infinitesimals strikes me, as a mathematician, to be utterly without merit: how can the discovery that a type of mathematical reasoning that had been made for hundreds of years then discarded for hundreds of years can be made fully rigorous not be highly important?? If there is a historian of mathematics who feels that way, I am not so interested. (I looked at the link you gave and didn't find any support for that.) – Pete L. Clark Jul 18 '13 at 11:02
  • Also I said that Leibniz did not have a firm logical foundation for his work on infinitesimals. You say that's not so clear, but *then* you say "Claiming that Leibniz had a firm basis (by modern standards) for his entities such as infinitesimals would be absurd;" Right: so it's totally clear that he did not have a firm logical basis (of course I mean this in the contemporary sense rather than the 17th century sense unless I say otherwise) for his ideas! You go on to say other things which, please note, rebut things that I have not said. – Pete L. Clark Jul 18 '13 at 11:06
  • I agree with you: NSA is an important development in this history. Yet there are many historians of math who deny this. They claim NSA is irrelevant to historical infinitesimals because it is an altogether different development involving ultrafilters. The link I provided above served a different purpose, namely to argue that Leibniz's *procedures* were not inconsistent. If you want an example of a historian who questions the relevance of NSA to the history of infinitesimals (and there are many of them), consider [Ferraro](http://dx.doi.org/10.1016/S0315-0860(03)00030-2)... – Mikhail Katz Jul 18 '13 at 11:11
  • ... and [this thread](http://mathoverflow.net/questions/126986/eulers-mathematics-in-terms-of-modern-theories) – Mikhail Katz Jul 18 '13 at 11:11
  • Also, the link on Leibniz I provided above does document a school of Leibniz interpretation, called the "syncategorematic interpretation" and originating with Ishiguro. These scholars not only deny that NSA has any relevance to Leibnizian infinitesimals, they also claim that Leibnizian infinitesimals were mere shorthand for a kind of Weierstrassian paraphrase with a hidden quantifier (one of them, Levey, actually spells this out in terms of quantifiers). – Mikhail Katz Jul 18 '13 at 11:19
  • Now I think we're definitely talking about different things. When I say that NSA is a highly important development in the history of infinitesimal calculus, I'm not looking back at Euler's work to see to what extent it can be rewritten in the modern language. I'm saying that putting infinitesimals on a rigorous basis is itself a highly important development in the history (20th century history) of mathematics. – Pete L. Clark Jul 18 '13 at 11:26
  • 1
    Honestly your comments are convincing me that I am not so interested in what mathematical historians have to say about this issue. (Should I be? Why?) Let me also reiterate that I have never accused Leibniz of inconsistent procedures. As a research mathematician, I well know that any method that you use on a large variety of problems and get correct answers must have a high degree of procedural consistency. So that's not such a fascinating issue to me either... – Pete L. Clark Jul 18 '13 at 11:29
  • First of all, I agree with you: the fact that Terry Tao or Isaac Goldbring come along and prove new important results using NSA is more significant than having a better appreciation of Euler. Yet I wonder if it is only the historians who feel that Leibniz's procedures were inconsistent; I assume that many participants in [this discussion](http://mathoverflow.net/questions/124998/was-the-early-calculus-inconsistent) were mathematicians. – Mikhail Katz Jul 18 '13 at 11:37
  • Notwithstanding my last comment I looked at Ferraro's paper. I found that he has about three sentences on NSA, amounting to "There are aspects of Euler's thinking that are not consistent with NSA." This seems very far from "questioning the relevance of NSA to the history of infinitesimals". You see a lot more controversy here than I do... – Pete L. Clark Jul 18 '13 at 11:40
  • Ferraro's paper refers to Laugwitz's work as introducing aspects that are "quite extraneous to that of Euler", and concludes that "the attempt to specify Euler’s notions by applying modern concepts is only possible if elements are used which are essentially alien to them, and thus Eulerian mathematics is transformed into something wholly different". This amounts to a rejection of the relevance of NSA to understanding Euler. Moreover, Ferraro goes on to ignore important work on Euler done by a number of scholars in addition to Laugwitz, but his dismissal of Laugwitz is particularly odious. – Mikhail Katz Jul 18 '13 at 11:48
  • I looked back at the article and...sorry, you're quite right. (I had looked quickly and searched for "standard", so I missed some stuff.) I had better retreat to my earlier position: how close a mapping one can make between Euler's work and NSA is not really very important to me: it sounds like an issue well designed to support protracted argument across many publications, frankly. I have never read Euler directly, but rather (like everyone else) assimilated his work through the chain of mathematical breakthroughs and refinements. Viewed in that light, he is obviously linked to NSA. – Pete L. Clark Jul 18 '13 at 12:03
  • Now that I have you on my side, let me play the devil's advocate: why exactly is it that Euler is linked to NSA? Perhaps he is linked to [SIA](http://en.wikipedia.org/wiki/Smooth_infinitesimal_analysis)? – Mikhail Katz Jul 18 '13 at 12:07
  • "why exactly is it that Euler is linked to NSA?" Both Euler and Robinson contributed greatly to mathematical analysis. Euler was one of a number of great mathematicians who thought in terms of infinitesimal quantities, and Robinson's work made such thinking fully rigorous and legitimate. But how -- and really even whether -- Euler used infinitesimals is not so key here. I also see a link between, e.g., Weierstrass and NSA: the link is the progression of mathematical ideas. (I don't know anything about SIA.) – Pete L. Clark Jul 18 '13 at 12:28
  • If you mean to imply a *linear* progression, then it could be argued that the picture is more complicated in at least the following aspect: even in the first approximation, history of analysis is not a single thread but rather a pair of parallel threads. This is how we interpret Felix Klein's comment cited at the beginning of the NAMS text. Both threads were already present in Leibniz: he used infinitesimals to do calculus, but he also sought to underwrite the legitimacy of the calculus by means of "exhaustion" arguments (such translation works in certain cases but not in others). – Mikhail Katz Jul 18 '13 at 12:40
  • "If you mean to imply a linear progression" No. Whether a progression of mathematical ideas is "linear" or not is not the sort of question that interests me. (Again it sounds like the type of question designed to be argued about, not to be definitively answered.) I am not even sure that it is a meaningful question, at least not by my professional standards in that regard. – Pete L. Clark Jul 18 '13 at 19:08
  • My reply was getting long so I posted it as a separate "answer". – Mikhail Katz Jul 19 '13 at 07:53
2

(This is meant as a response to a comment by Pete L. Clark on whether the history of analysis was a "linear progression". Due to its length I decided to post it as a separate answer.) I agree that focusing on the term "linear" is not the issue. What does seem to be a meaningful issue is the following closely related question.

Is it accurate to view the formalisation of analysis around 1870, an extremely important development by all accounts, as having established a "true" foundation of analysis by working in the context of the Archimedean continuum and eliminating infinitesimals?

An alternative view is that the success of the Archimedean formalisation in fact incorporated an aspect of failure as well: a failure to formalize a feature ubiquitous in analysis as it had been practiced since the 1670s, namely the infinitesimal.

According to the alternative view, there is not one strand but two parallel strands for the development of analysis, one in the context of an Archimedean continuum, as formalized around 1870, and one in the context of what could be called a Bernoullian continuum (Johann Bernoulli having been the first to base analysis systematically and exclusively on a system incorporating infinitesimals). This strand was not formalized until the work of Edwin Hewitt in the 1940s, Jerzy Los in the 1950s, and especially Abraham Robinson in the 1960s, but its sources are already present in the work of the great pioneers of the 17th century.
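To illustrate what the Bernoullian procedure looks like once formalized (a modern sketch, not a claim about what Bernoulli or Robinson actually wrote), consider the Leibniz-style computation of the derivative of $f(x)=x^2$, made rigorous by Robinson's standard part function:

```latex
% Leibniz-style derivative of f(x) = x^2, formalized a la Robinson.
% Here \varepsilon is a nonzero infinitesimal in the hyperreals
% ^{*}\mathbb{R}, and st denotes the standard part (shadow) of a
% finite hyperreal.
\[
\frac{f(x+\varepsilon)-f(x)}{\varepsilon}
  = \frac{2x\varepsilon+\varepsilon^{2}}{\varepsilon}
  = 2x+\varepsilon,
\qquad
f'(x) = \operatorname{st}(2x+\varepsilon) = 2x.
\]
% The step of "discarding" the leftover \varepsilon, informal in the
% 17th century, becomes the application of st.
```

The point of the sketch is that the pre-Weierstrassian procedure survives intact; only the justification for the final step changes.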

To give an example, in his recent article (Gray, J.: A short life of Euler. BSHM Bulletin. Journal of the British Society for the History of Mathematics 23 (2008), no. 1, 1–12), Gray makes the following comment:

"At some point it should be admitted that Euler's attempts at explaining the foundations of calculus in terms of differentials, which are and are not zero, are dreadfully weak" (p. 6). He provides no evidence for this claim.

It seems to me that Gray's sweeping claim comes from a "linear progression" school of thinking in which Weierstrass is credited with eliminating logically faulty infinitesimals, so of course Euler, who used infinitesimals galore, would necessarily be "dreadfully weak", with no further explanation needed.

Mikhail Katz
  • 5
    To my eye your question is primarily one lying at the border of history, philosophy and personal taste. The quotation marks around *true* point to this, as the truth you are talking about is not mathematical truth: it is more like a political truth. If you construe the questions purely mathematically the answers are easy: was Weierstrassian analysis correct and sound? Yes, of course. Moreover it has been very useful in all the years since. Did it fail to formalize some of the earlier forms of analytic reasoning including infinitesimals? Yes, of course. – Pete L. Clark Jul 19 '13 at 09:19
  • Is there something "missing" in the lack of treatment of infinitesimals in Weierstrassian analysis? That's equivalent to asking whether it is possible to add to W. analysis a rigorous and useful theory of infinitesimals, to which work of Robinson,...,Gromov,Green-Tao,Goldbring answers: yes, of course. – Pete L. Clark Jul 19 '13 at 09:23
  • I agree that this is not an issue of purely mathematical correctness, but issues of evaluation and appreciation of mathematical results arise in a daily fashion both in journal editorial boards and in hiring decisions. They also affect our view of history. To give an example, see my addition on Gray's comment with regard to Euler's alleged "dreadful weakness". – Mikhail Katz Jul 19 '13 at 09:31
  • "but issues of evaluation and appreciation of mathematical results arise in a daily fashion both in journal editorial boards and in hiring decisions." Sure, but in these cases the evaluations and appreciations are being made by other mathematicians. OK, some historian of mathematics called Euler "dreadfully weak": people say the darndest things. I would still hire Euler! – Pete L. Clark Jul 19 '13 at 09:54
  • I do take your point that historians of mathematics are, in part, the custodians of the history of mathematics, so it would be nice if they did a good rather than a poor job. But here are two honest facts about the attitudes of mathematicians towards historians (and other academics who study mathematics from the perspective of their own fields):.... – Pete L. Clark Jul 19 '13 at 09:57
  • 3
    (i) Most mathematicians are extremely uninterested in the Xology of mathematics. Mathematics is one of the most ahistorical academic fields: as a profession we do not regard history or historical scholarship as important. It might be an interesting thing to talk about over drinks, but we have much less of a commitment to primary source material, accurate scholarship, and so forth. (Most of us are openly contemptuous of philosophy and education...) – Pete L. Clark Jul 19 '13 at 10:01
  • 3
    (ii) Mathematicians who do get involved in other academic aspects of mathematics usually do so in a highly critical, individualistic way. If we somehow get interested in Euler's mathematics, *we will read Euler*. And we will think we understand Euler ever so much better than the historians/philosophers/educators did and thus not place much truck in what they say about him. – Pete L. Clark Jul 19 '13 at 10:05
  • 2
    Final comment: the professional painter, be he journeyman or star, will find the work of the old masters to be masterful, more so than the critics. When I read an old paper by a master, I'm always struck by how knowledgeable and smart they are *despite* not having sophisticated modern tools. I've watched Gauss prove theorem after theorem about class groups of quadratic forms. This should be impossible: *the notion of a group did not exist back then*! So he works harder than we would have to in some places. But he does it, better than almost any of us could do in his place. – Pete L. Clark Jul 19 '13 at 10:23
  • I share your admiration for Gauss, and have a similar admiration for Fermat whose work on maxima and minima I recently studied, as well as Euler. I have no problem having a discussion in the assumption that math is more important than history, but this happens to be a historical question and there is room for these in SE (over 500 of them in fact). I would like to respond to your comment earlier that you "also see a link between, e.g., Weierstrass and NSA: the link is the progression of mathematical ideas." This is of course true: Robinson's work was done in the context of 20th century math... – Mikhail Katz Jul 20 '13 at 20:46
  • ... and in this sense he can be viewed as a successor of Weierstrass. Note, however, that this connection was not that important to Robinson himself; he named his antecedents as Skolem, and Hardy and Borel before him, and cited a seminal work of Borel linking his theory of growth rates to earlier work by Paul du Bois-Reymond and back to Cauchy's work on growth rates of infinitesimals. There is a table summarizing these developments in a recent paper by Borovik and myself [here](http://dx.doi.org/10.1007/s10699-011-9235-x) (section 4). I describe these developments as occurring in the ... – Mikhail Katz Jul 20 '13 at 20:52
  • ... Bernoullian strand rather than the Archimedean strand, as explained in my answer above. I think on a deeper level, the view of Robinson as successor of the infinitesimalist Cauchy is more meaningful than a view of him as a successor of Weierstrass. – Mikhail Katz Jul 20 '13 at 21:18
  • Carl Mummert: Once the derivative is defined at the real points, the new function $g=f'$ has its own natural extension ${}^*g$ which is therefore defined at all hyperreal points. This situation is not analogous to a complex analysis book that defines the derivative on the real axis only. The reason is that the goal of complex analysis is to study complex functions, but the goal of the analysis over the hyperreals is still to study the real functions. One doesn't replace the real line by the hyperreal line, but rather works with the *pair* $(\mathbb{R}, {}^*\mathbb{R})$. This approach enables ... – Mikhail Katz Jul 23 '13 at 11:32
  • ... a richer syntax, including infinitesimals. My impression is that the concern about the derivative supposedly not being defined at hyperreal points is a misconception, but perhaps other participants can comment, as well. This comment was a response to a pair of comments by Carl Mummert that seem to have disappeared in the meantime... – Mikhail Katz Jul 23 '13 at 11:34
  • ... and reappeared [here](http://mathoverflow.net/questions/124998/was-the-early-calculus-inconsistent#comment354555_125072) – Mikhail Katz Aug 25 '13 at 15:19
0

A recent discussion at https://math.stackexchange.com/questions/455871/cauchys-limit-concept is a good illustration of the influence of feedback-style ahistory (to borrow Grattan-Guinness's term), whereby Weierstrass's ideas are read into an earlier author whether or not they belong there. To be sure, there is a considerable amount of historical controversy concerning Cauchy. J. Grabiner emphasizes the importance of the germs of epsilon, delta procedures that can be found in certain arguments in Cauchy's oeuvre. However, the actual epsilon, delta definition of limit (as opposed to procedures found in certain arguments) was not introduced by Cauchy but rather by later authors (usually Weierstrass is credited, even though an earlier occurrence is found in Dirichlet).

In any case, the formalisation of the epsilontic limit concept certainly does not lie with Cauchy. What Cauchy did write about limits is that a variable quantity has limit $L$ if its values indefinitely approach $L$. With Cauchy, the primitive notion is that of a variable quantity, and limits are defined in terms of the latter in a fashion almost identical to what Newton wrote a century and a half earlier. Recently, younger scholars like Bråting and Barany have challenged received views on Cauchy.

Meanwhile, the discussion at https://math.stackexchange.com/questions/455871/cauchys-limit-concept proceeds under the explicitly stated assumption of an alleged "Cauchy's formalization of limits", an assumption contrary to fact. It was not challenged by any of the participants. This indicates that the community is often not aware of the true nature of Cauchy's work in analysis, including his definition of continuity, expressed in terms of infinitesimals rather than epsilon, delta.
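For comparison (a modern paraphrase; whether any such rendering is faithful is itself part of the historical controversy described above), Cauchy's 1821 definition of continuity and the later epsilon, delta definition can be set side by side:

```latex
% Cauchy (Cours d'Analyse, 1821), paraphrased: f is continuous at x
% if an infinitesimal increment \alpha always produces an
% infinitesimal change in the function:
\[
\alpha \text{ infinitesimal}
  \;\Longrightarrow\;
  f(x+\alpha)-f(x) \text{ infinitesimal}.
\]
% The later Weierstrassian definition, by contrast, quantifies over
% real tolerances only:
\[
\forall\varepsilon>0 \;\exists\delta>0 \;\forall x'\colon
  \;|x'-x|<\delta \;\Longrightarrow\; |f(x')-f(x)|<\varepsilon.
\]
```

The first formulation mentions no quantifier alternation at all; the dispute over whether it secretly contains one (the "syncategorematic" reading) is precisely the kind of issue at stake here.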

Mikhail Katz
  • That question is so muddled that it is a terrible example to use to illustrate your point. Currently it has two votes to delete. No doubt the remarks you mention would have been rightly addressed if a reasonable version of the question had been asked instead. – Andrés E. Caicedo Aug 06 '13 at 19:35
  • @Andres Caicedo: are you referring to the question on "Cauchy's limit concept"? It could have been stated more clearly, but the thrust of the question is clear. The OP is asking what came out of what is reputed to be Cauchy's foundational contribution to formalizing analysis, in terms of recognizable breakthroughs. The OP compares this with breakthroughs in physics that resulted from contributions of Einstein, Dirac, and others. I think other editors have also interpreted his question this way; one of them went on to answer that important developments in analysis stemmed from Cauchy's rigor. – Mikhail Katz Aug 06 '13 at 19:43