It often happens in mathematics that the answer to a problem is "known" long before anybody knows how to prove it. (Some examples of contemporary interest are among the Millennium Prize problems: E.g. Yang-Mills existence is widely believed to be true based on ideas from physics, and the Riemann hypothesis is widely believed to be true because it would be an awful shame if it wasn't. Another good example is Schramm–Loewner evolution, where again the answer was anticipated by ideas from physics.)

More rare are the instances where an abstract mathematical "idea" floats around for many years before even a rigorous definition or interpretation can be developed to describe the idea. An example of this is umbral calculus, where a mysterious technique for proving properties of certain sequences existed for over a century before anybody understood why the technique worked, in a rigorous way.

I find these instances of mathematical ideas without rigorous interpretation fascinating, because they seem to often lead to the development of radically new branches of mathematics$^1$. What are further examples of this type?

I am mainly interested in historical examples, but contemporary ones (i.e. ideas which have yet to be rigorously formulated) are also welcome.

  1. Footnote: I have some specific examples in mind that I will share as an answer, if nobody else does.
    I am not educated on the topic, so I'll leave this as a comment, but I think the field with one element $\mathbb{F} _{1}$ is an example in the making. The possibility has been discussed for decades, and still nobody can say what the "correct" definition should be. – Will R Feb 03 '18 at 08:33
    Ideas from what we now call generalised functions or distributions were around for a while before being formalised – Alessandro Codenotti Feb 03 '18 at 08:52
  • Possibly related: https://mathoverflow.net/questions/56677/ – Watson Feb 03 '18 at 09:54
  • I suppose [Fermat's last theorem](https://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem) falls into the first category and doesn't answer your queston, right? – Eric Duminil Feb 03 '18 at 10:01
    It's not my area, so I'd rather someone more knowledgeable give it as an answer, but I know [motives](https://en.wikipedia.org/wiki/Motive_(algebraic_geometry)) are a modern example. Only very recently have pure motives seen a formal definition, and mixed motives are still yet to be defined! – Wojowu Feb 03 '18 at 10:34
    Generalised functions (Schwartz). Integration (Riemann). – AccidentalFourierTransform Feb 03 '18 at 13:50
    The notion of a function. See [history of the function concept](https://en.wikipedia.org/wiki/History_of_the_function_concept) – Sri-Amirthan Theivendran Feb 03 '18 at 17:21
  • @AlessandroCodenotti That is a perfect example. I suggest posting it as an answer. – Yly Feb 03 '18 at 17:27
  • @EricDuminil Correct. Fermat's Last Theorem was a well-formulated conjecture, and so not within the scope of this question. – Yly Feb 03 '18 at 17:28
    Shouldn't this be community wiki? – Anton Fetisov Feb 03 '18 at 19:33
  • Almost everything in mainstream mathematics. I think it's the reverse question which is nearly devoid of examples, especially if we consider that important ideas were developed to satisfy some specific need(s). – pjs36 Feb 03 '18 at 22:25
  • Fermat's Last Theorem It was first conjectured in 1637. It was solved by Andrew Wiles and published finally and completely in 1995. – KingLogic Feb 03 '18 at 20:06
    Calculus. Co-invented by Newton and Leibniz, but the idea of what happens in the limit was unsatisfactory to Bertrand Russell, hence *Principia Mathematica*. – user207421 Feb 03 '18 at 23:43
  • Assuming something is true solely because of assumed consequences of the converse is a fallacy. – WGroleau Feb 04 '18 at 03:12
    A corollary of this question is prompted by the observation that the 19th and 20th centuries seemed to do a lot of "cleaning up" of older mathematics by putting it on a rigorous foundation. Is it possible that in years to come, people will look back at 21st-century math and scoff at our impreciseness? Or are we "done" tidying the house? – AlwaysLearning Feb 04 '18 at 05:48
    I might have guessed **fractal** would be one such example. Perhaps someone better versed in its history as an "idea" before the formalized definition (in terms of Hausdorff dimension) can comment in this direction. – Benjamin Dickman Feb 04 '18 at 06:24
    Sets were simply known as random aggregations of possibly no entities long before Zermelo and Fraenkel defined their modern meaning. Still, the old definition is good enough for math basics. – rexkogitans Feb 04 '18 at 12:44
    Zero? (It's too short even to be a comment, much less an answer) – ShadSterling Feb 05 '18 at 23:59
    I was astonished to learn that the Fundamental Theorem of Arithmetic is barely over 200 years old. – ShadSterling Feb 06 '18 at 00:05
    @AlwaysLearning: We are done, because we have reached the notion of an **algorithm**, which we have described in terms of **syntactic** manipulation of **finite symbol strings**. Prior to that, these were only vaguely known, and so firm foundation of mathematics was essentially impossible. After that, there is nothing more, because **every formal system** that can ever be conceivable by humans will have to be an algorithmically describable one. All that is left is to find out what are the theorems proven by what formal systems. In that sense mathematics is now 100% precise (in theory). – user21820 Feb 06 '18 at 17:07
  • Related: https://math.stackexchange.com/a/4431348/169085 – Alp Uzman May 18 '22 at 17:49

19 Answers


The notion of probability has been in use since the Middle Ages, or maybe before. But it took quite a while to formalize probability theory and give it a rigorous basis, which only happened in the middle of the 20th century. According to Wikipedia:

There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation, sets are interpreted as events and probability itself as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (that is, not further analyzed) and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details.

There are other methods for quantifying uncertainty, such as the Dempster–Shafer theory or possibility theory, but those are essentially different and not compatible with the laws of probability as usually understood.

    In fact, I would argue that that effort is not over, even as of today; see https://en.wikipedia.org/wiki/German_tank_problem and the XKCD take on a similar problem, https://xkcd.com/1132/ – DomQ Feb 12 '18 at 08:42
  • @DomQ I think the examples you mentioned are not so relevant as [this one](https://en.wikipedia.org/wiki/Bertrand_paradox_(probability)#Recent_developments), especially the part mentioning: _Nicholas Shackel affirms that after more than a century the paradox remains unresolved_ – polfosol Feb 12 '18 at 09:31
  • It is definitely not over, there remains today considerable controversy about the philosophy (and hence the formalism) of probability--start with Wikipedia "Interpretations of probability," and then Google to see the many papers and book chapters about the frequentist vs. Bayesian and belief based (e.g., Dempster-Shafer) models. – E. Douglas Jensen Jul 09 '18 at 19:50

Natural transformations are a "natural" example of this. Mathematicians knew for a long time that certain maps--e.g. the canonical isomorphism between a finite-dimensional vector space and its double dual, or the identifications among the varied definitions of homology groups--were more special than others. The desire to have a rigorous definition of "natural" in this context led Eilenberg and Mac Lane to develop category theory. As Mac Lane allegedly put it:

"I didn't invent categories to study functors; I invented them to study natural transformations."

    The way I heard it was that "I didn't invent functors to study categories; I invented them to study natural transformations", which makes more sense, since Functors are less important than the other two concepts. – PyRulez Feb 04 '18 at 22:17
    Fun fact I learned in grad algebra: Functors form a category, the morphisms of which are natural transformations. – Matthew Leingang Feb 06 '18 at 16:31

Euclidean geometry. You think calculus was missing rigorous understanding? Non-Euclidean geometry? How about plain old Euclidean geometry itself? You see, even though Euclid's Elements invented rigorous mathematics, even though it pioneered the axiomatic method, even though for thousands of years it was the gold standard of logical reasoning - it wasn't actually rigorous.

The Elements is structured to seem as though it openly states its first principles (the infamous parallel postulate being one of them), and as though it proves all its propositions from those first principles. For the most part, it accomplishes the goal. In notable places, though, the proofs make use of unstated assumptions. Some proofs are blatant non-proofs: to prove side-angle-side (SAS) congruence of triangles, Euclid tells us to just "apply" one triangle to the other, moving them so that their vertices end up coinciding. There's no axiom about moving a figure onto another! Other proofs have more insidious omissions. In the diagram, does there exist any point where the circles intersect? It's "visually obvious", and Euclid assumes they intersect while proving Proposition 1, but the assumption does not follow from the axioms.

[figure: two allegedly intersecting circles]

In general, the Elements pays little attention to issues of whether things really intersect in the places you'd expect them to, or whether a point is really between two other points, or whether a point really lies on one side of a line or the other, etc. We all "know" these concepts, but to avoid the trap of, say, a fake proof that all triangles are isosceles, a rigorous approach to geometry must address these concepts too.
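
In the Cartesian model, the point Euclid needs really does exist, but only because the reals are complete (so that $\sqrt{3}$ exists). A quick numerical sanity check (a Python sketch, not anything found in the Elements, of course): the unit circles of Proposition 1, centered at $(0,0)$ and $(1,0)$, meet at $(1/2, \sqrt{3}/2)$.

```python
import math

# Proposition 1 draws unit circles about (0, 0) and (1, 0); in the
# Cartesian model they intersect at (1/2, sqrt(3)/2) -- a point whose
# existence rests on the completeness of the reals, not on Euclid's axioms.
p = (0.5, math.sqrt(3) / 2)

on_first = abs(p[0] ** 2 + p[1] ** 2 - 1) < 1e-12          # circle about (0, 0)
on_second = abs((p[0] - 1) ** 2 + p[1] ** 2 - 1) < 1e-12   # circle about (1, 0)
print(on_first, on_second)  # → True True
```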

It was not until the work of Pasch, Hilbert, and others in the late 1800s and early 1900s that truly rigorous systems of synthetic geometry were developed, with the axiomatic definition of "betweenness" being a key new fundamental idea. Only then, millennia after the journey began, were the elements of Euclidean geometry truly accounted for.

    Maybe instead of *Euclidean geometry*, it is the very idea of *rigor* that this is about – Hagen von Eitzen Feb 03 '18 at 11:15
  • Did the Cartesian plane formalise it? – PyRulez Feb 04 '18 at 22:20
    @PyRulez: $\mathbb{R}^2$, once constructed, formalizes *one model of* a Euclidean plane. It doesn't formalize what a Euclidean plane *is*. What if you declare, "I say a Euclidean plane is whatever is isomorphic to $\mathbb{R}^2$"? Then you still have to say what structure the "isomorphisms" have to preserve. The idea that needs formalization is the structure - the lines/circles/angles/etc. system that Euclid describes (but with gaps). $\mathbb{R}^2$ is far from being the end of the story. – echinodermata Feb 05 '18 at 02:48
  • Calling the parallel postulate *infamous* sounds like it was a mistake to include it. – JiK Feb 06 '18 at 09:25
  • @PyRulez: You may be interested in a recent conversation I had in the Logic room starting [here](https://chat.stackexchange.com/transcript/44058?m=42256549#42256549). Note that different mathematicians have had different axiomatizations of geometry, some of which fall to the incompleteness theorems, while others that don't naturally can't prove some basic 'geometrical facts'. – user21820 Feb 06 '18 at 17:15
    @JiK, for a long time, it seemed like it *was* a mistake. Euclid's other axioms have a "fundamental" feel to them, but the parallel postulate seems like it should be provable from the others, and many efforts were made to develop such a proof. – Mark Feb 08 '18 at 01:03
  • @PyRulez : Descartes certainly didn't formalize it (actually the Cartesian plane goes back to the ancients (Ptolemy the astronomer used it), but Descartes pioneered using it to study equations through their graphs), because he didn't formalize $\mathbb{R}$. The difficulties in formalizing $\mathbb{R}$, which were resolved around the same time, are much better known, so this is still a good answer, regardless of which was finished first. – Toby Bartels Feb 09 '18 at 19:58

Following from the continuity example, in which the $\epsilon$-$\delta$ formulation eventually became ubiquitous, I submit the notion of the infinitesimal. It took until Robinson in the 1950s and early 60s before we had "the right construction" of infinitesimals via ultrapowers, in a way that made infinitesimal manipulation fully rigorous as a way of dealing with the reals. They were a very useful tool for centuries before then, with (e.g.) Cauchy using them regularly, attempting to formalise them but not succeeding, and with Leibniz's calculus being defined entirely in terms of infinitesimals.

Of course, there are other systems which contain infinitesimals - for example, the field of formal Laurent series, in which the variable may be viewed as an infinitesimal - but e.g. the infinitesimal $x$ doesn't have a square root in this system, so it's not ideal as a place in which to do analysis.
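
Yet another small system in this spirit, far weaker than Robinson's hyperreals but trivial to make rigorous, is the ring of dual numbers, where a nilpotent "infinitesimal" $\varepsilon$ (with $\varepsilon^2 = 0$) mechanizes Leibniz-style differentiation. A minimal Python sketch (the `Dual` class and its API are mine, purely for illustration):

```python
class Dual:
    """Dual numbers a + b*e with e*e = 0: a ring with a nilpotent infinitesimal."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*e)(c + d*e) = ac + (ad + bc)e, since e*e = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

e = Dual(0.0, 1.0)       # the infinitesimal
x = Dual(3.0) + e        # 3 + e
y = x * x                # (3 + e)^2 = 9 + 6e: Leibniz reads off f'(3) = 6
print(y.a, y.b)          # → 9.0 6.0
```

This is essentially forward-mode automatic differentiation; it captures the algebra of first-order infinitesimals, though unlike the hyperreals it has no order structure or transfer principle.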

Patrick Stevens
  • What was Cauchy's formulation exactly? – Mikhail Katz Feb 03 '18 at 19:37
  • @MikhailKatz The $\epsilon$-$\delta$ formulation is what I understand by those words. – Patrick Stevens Feb 03 '18 at 20:09
    Patrick, the fact is that Cauchy never gave an epsilon-delta definition of continuity. Cauchy defined continuity in terms of *infinitesimals*. Some historians have been engaged in a [quest for departed *quantifiers*](http://matstud.org.ua/texts/2017/47_2/115-144.pdf) in Cauchy which yielded meager results, because the thrust of Cauchy's approach to the calculus was via infinitesimals. – Mikhail Katz Feb 03 '18 at 20:12
  • In that case I'll remove Cauchy's name from this :) – Patrick Stevens Feb 03 '18 at 20:14
  • On the contrary you should mention Cauchy as one of the pioneers of the infinitesimal method. Many publications on this subject can be found at my page on the history and philosophy of infinitesimals. – Mikhail Katz Feb 03 '18 at 20:15
  • The point of my answer is to highlight that infinitesimals are something that everyone worked with, but we didn't really have a good rigorous foundation for them for a long time. Does Cauchy's work contradict that? – Patrick Stevens Feb 03 '18 at 20:17
    No, Cauchy's work is an outstanding confirmation of this principle. Of course, Leibniz did it two centuries earlier already. – Mikhail Katz Feb 03 '18 at 20:19

Sets. As late as the early twentieth century, Bertrand Russell showed that one leading theory of them was self-contradictory, because it led to Russell's Paradox: Does the set of all sets that do not contain themselves, contain itself? The accepted solution was ZF set theory.
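
The paradox can even be "run". If one loosely models a set by its membership predicate (which is emphatically not a ZF set; this sketch is for illustration only), the question of whether Russell's $R$ contains itself has no stable truth value, which shows up operationally as non-termination:

```python
# Model a "set" as its membership predicate. Russell's R contains exactly
# those predicates that do not contain themselves.
def R(P):
    return not P(P)

# R(R) = not R(R) = not not R(R) = ... -- no consistent answer exists,
# and the evaluation regresses forever (until Python's recursion limit).
try:
    R(R)
    outcome = "terminated"
except RecursionError:
    outcome = "no consistent truth value (infinite regress)"
print(outcome)
```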

Another example that jumps to mind is counting up: Peano arithmetic was axiomatized in the nineteenth century (and has been considerably revised since). Or algorithms.

Which raises the point, I guess, that we're still looking for the best foundation for mathematics itself.

    I'd in fact argue that we still do not have a satisfactory axiomatization of "collection". ZFC sets don't do well, since there are 'classes'. MK set theory doesn't work, for the same reason. I prefer a type theory with a universal type, and it is possible, but something still has to be given up to avoid Russell's paradox. And you are right about 'algorithms'; I also mentioned that in my comment on the question, before I came to your answer. =) – user21820 Feb 06 '18 at 17:17
  • I think the point of Russell's paradox is that there _is_ no satisfactory axiomatization of "collection" because the idea is self-contradictory, unless you severely limit what a collection is allowed to be. – Akiva Weinberger Nov 27 '18 at 21:24

Continuity is an example of a concept that was unclear for some time and also defined differently from what we now consider its "correct" definition. See this source (Israel Kleiner, Excursions in the History of Mathematics, pp. 142-3) for example.

In the eighteenth century, Euler did define a notion of "continuity" to distinguish between functions as analytic expressions and the new types of functions which emerged from the vibrating-string debate. Thus a continuous function was one given by a single analytic expression, while functions given by several analytic expressions or freely drawn curves were considered discontinuous. For example, to Euler the function

$$ f(x) = \left\{\begin{array}{ll} x^2 & x > 0 \\ x & x \leq 0 \end{array}\right. $$

was discontinuous, while the function comprising the two branches of a hyperbola was considered continuous (!) since it was given by the single analytic expression $f(x) = 1/x$.
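
Incidentally, by the modern definition Euler's "discontinuous" function above is continuous everywhere, including at $0$: given $\varepsilon > 0$, taking $\delta = \min(\varepsilon, 1)$ works, since $|f(x)| \le |x|$ whenever $|x| \le 1$. A quick numerical sanity check:

```python
# Euler's "discontinuous" function: x^2 for x > 0, and x for x <= 0.
def f(x):
    return x * x if x > 0 else x

# Modern continuity at 0: |f(x) - f(0)| <= |x| whenever |x| <= 1,
# so delta = min(epsilon, 1) witnesses the epsilon-delta definition.
checks = [abs(f(x) - f(0)) <= abs(x) for x in (0.1, -0.1, 1e-6, -1e-6)]
print(all(checks))  # → True
```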


In his important Cours d'Analyse of 1821 Cauchy initiated a reappraisal and reorganization of the foundations of eighteenth-century calculus. In this work he defined continuity essentially as we understand it, although he used the then-prevailing language of infinitesimals rather than the now-accepted $\varepsilon - \delta$ formulation given by Weierstrass in the 1850s. [...]

(Note that it's useful to make explicit on what domain you're saying something is continuous; here, the author of the book is actually slipping up, since $1/x$ is in fact continuous on its natural domain $\mathbb R \setminus \{0\}$. Thanks @jkabrg)

    Related to this, the notion of a function itself ([see](https://en.wikipedia.org/wiki/History_of_the_function_concept) ). Basically since the 17th century, there was an intuitive understanding of what a function is, but it took at least until the 19th century before the appearance of any definition that was resilient enough not to immediately crumble under close scrutiny. – mlk Feb 03 '18 at 09:58
    $1/x$ is continuous – wlad Feb 03 '18 at 17:47
  • @jkabrg The function defined by $f(x) = 1/x$ is only continuous on the subset $(-\infty, 0) \cup (0, \infty)$, not on the whole real line. By "the two branches of a hyperbola", the author refers precisely to the two branches of the "standard" hyperbola $1/x$, which are the two continuous components, one on $(-\infty,0)$ and one on $(0,\infty)$. – tomsmeding Feb 03 '18 at 17:51
    @tomsmeding The domain of the function $1/x$ is *not* the whole real line – wlad Feb 03 '18 at 17:53
  • @jkabrg Fair point. You may write the author of that book if you'd like to have the book corrected. :) – tomsmeding Feb 03 '18 at 17:54
  • :) That's not to detract from the rest of your answer – wlad Feb 03 '18 at 17:55
  • Cauchy did *not* define continuity via epsilon-delta. Those historians who claim he did are involved in a [vain quest for the ghosts of departed *quantifiers*](http://matstud.org.ua/texts/2017/47_2/115-144.pdf). – Mikhail Katz Feb 03 '18 at 20:17
  • @Mikhail I am basically as far from a historian as one can get, so I'll just assume that you're right. If you have a better source/quote, please improve the answer. :) – tomsmeding Feb 04 '18 at 09:09

At the risk of having it called an obvious example, I submit Euclid's parallel postulate. It was formulated in his Elements ca. 300 B.C., then rationalized (including challenges and proof attempts) for many centuries before Saccheri laid down the $\,3\,$ alternatives of one/none/multiple parallels to a line through a given point in the $\,18^{th}\,$ century, and then it took another $100$ years for the non-Euclidean geometries to be formalized by Lobachevsky and Bolyai.

    Wish the downvoter had left a comment why. – dxiv Feb 06 '18 at 16:34
  • Well, it's not exactly a mathematical idea that took time to define rigorously. It's a mathematical proposition that took time to prove (rather, took time to disprove). The question explicitly distinguishes between ideas and propositions and is asking only about the former. – 6005 Feb 09 '18 at 20:50
  • @6005 We'll have to agree to disagree. The OP specifically asked about "*instances where an abstract mathematical "idea" floats around for many years before even a rigorous definition or interpretation can be developed to describe the idea*". The notion that geometry must be based on a set of axioms was a novel and "*abstract*" idea back then. The particular significance of the parallel postulate was noted early, but it "*floated around*" for more than two millennia before a "*rigorous definition or interpretation*" was developed. So, in my reading at least, the answer above is fully on-topic. – dxiv Feb 09 '18 at 21:25
    Your reading is absolutely as valid as my own. To my reading, it didn't feel like an example. I didn't downvote, but I hope perhaps it may provide some explanation why someone might. – 6005 Feb 10 '18 at 00:16

The delta "function" showed up in a paper by Fourier - "Théorie analytique de la chaleur" in 1822.

It wasn't until ~1945 that Schwartz formally defined the delta functional as a distribution.


The notion of the real numbers themselves - as cuts of rationals, as equivalence classes of rational Cauchy sequences, or as elements of the unique model of the theory of complete ordered fields - would only appear in the 19th century, despite the centrality of calculus in mathematics and the other sciences.

  • Euclid (V, def. 5) already had the construction of real numbers as rational cuts, except he was calling them ratios of magnitudes. – Anton Tykhyy Feb 04 '18 at 11:22
    @AntonTykhyy: I don't see it. If the magnitudes are rational, then their ratios are rational as well; so I don't see how that definition is a construction of real numbers at all. Furthermore, the only similarity I see between that definition and Dedekind cuts is that both involve comparison of rational numbers. What am I missing? – ruakh Feb 06 '18 at 05:14
  • Euclid didn't apply the notion of rationality/irrationality to geometric magnitudes (lengths, areas and volumes) but to their ratios. It was known since Pythagoras that some ratios are irrational, and Euclid loc. cit. provides the definition of what it means for two ratios to be equal, less than or greater than one another, regardless of whether these ratios are rational or not. A Dedekind cut's upper and lower halves are the sets of rational numbers the numerators and denominators of which are those integers for which Euclid's equimultiples "alike exceed" or "alike fall short of" one another. – Anton Tykhyy Feb 06 '18 at 10:29
  • I'd accept this as a definition of real numbers, equivalent to Dedekind's cuts, if Euclid had remarked that, given (say) a fixed line segment $x$, any line segment $y$ is determined (up to congruence, so really we're talking about lengths as a kind of geometric magnitude) by the pairs of natural numbers $(m,n)$ such that $my$ is shorter than $nx$, or something like that. – Toby Bartels Feb 09 '18 at 21:11
  • (Preferably he would also have characterized which collections of pairs of natural numbers could arise from such a ratio of magnitudes, namely that there exists at least one pair $(m,n)$ in the collection, and for each such pair, every pair $(m',n')$ such that $mn' < m'n$ is in, and also so is some pair $(m',n')$ such that $m'n < mn'$. Only then could one *define* a positive real number as an appropriate collection of pairs of natural numbers, equivalent to Dedekind's definition of a real number as an appropriate collection of rational numbers. But perhaps this is asking too much.) – Toby Bartels Feb 09 '18 at 21:14
  • @TobyBartels : (Only) Some of your "$my$ is shorter then $nx$" thought is captured in Euclid's algorithm (for computing gcds). He argued in terms of using one quantity as a unit to partition another quantity, leaving a remainder quantity, used to partition the first quantity, et c. It's not quite what you're looking for, but the ingredients are all on the countertop, waiting to be mixed in the specific way you're discussing. – Eric Towers Feb 10 '18 at 17:06
  • I also neglected that there must exist at least one pair $(m,n)$ that is *not* in the collection (else we allow $y = 0$, which Euclid would not allow as a line segment; and if we did allow that, then we'd have to explicitly require $x \ne 0$, so I left *something* out in any case). And I should have put @AntonTykhyy in my first comment! – Toby Bartels Feb 12 '18 at 11:04
  • @EricTowers : Yes, the necessary preliminaries are there, but not (as far as I understand) any motivation to put them together in this way. Euclid would have easily understood what Riemann was doing, but he would not have understood so easily why! – Toby Bartels Feb 12 '18 at 11:08

Differentiable manifolds are an example. A rigorous definition only appeared about 100 years ago, in the works of Hermann Weyl and Hassler Whitney, although they were studied long before that time. Gauss's Theorema Egregium can already be seen as a theorem about this kind of concept, although stated long before it was formally defined.

José Carlos Santos
    Do you think it might be worth mentioning the strong Whitney embedding theorem explicitly, as this was an important milestone uniting two different notions of "manifold" floating around at the time. – Selene Routley Feb 09 '18 at 05:14
    @WetSavannaAnimalakaRodVance That's an interesting complement, yes. – José Carlos Santos Feb 09 '18 at 06:31

Complex numbers are an example of an
"..abstract mathematical "idea" [that] floats around for many years before even a rigorous definition or interpretation can be developed to describe the idea".

It was quite an embarrassing idea in the times of Cardano and Bombelli (16th century), one that took a lot of imagination (sic) and mental stress to settle.

G Cab

"Computation" (or effective calculability) is still an abstract mathematical idea that floats around awaiting a rigorous definition.

There are various candidates for defining what the term could mean -- e.g. the set of strings that can be generated by a certain type of grammar, or the set of strings that can be accepted by a certain type of machine, or the set of functions that can be defined given a certain set of function-construction rules. And there are rigorous proofs of the equivalences between many of those definitions.

But we are still left with an intuition about whether those definitions are adequate. That intuition is called Church's Thesis or the Church-Turing Thesis, but it remains (merely) a thesis. We might still come up with a broader definition of what constitutes a "computation" that cannot be subsumed under the existing candidates.

  • I like this example a great deal, but do you really think that this is purely mathematics? Is there not a certain experimental aspect to this: that we won't come up with a definition of computability that isn't equivalent to the Church-Turing-Gödel triad of notions: the "candidates" you refer to are already rigorous. My impression is that although many quantum computing physicists conjecture that quantum computing will not come up with anything that isn't in principle computable in the classical sense, the idea that it might is not ruled out of the question. – Selene Routley Feb 09 '18 at 05:25
    I would say that the results of Church and Turing and all the equivalences then have *solved* the problem of defining computation, or effective calculability. It certainly can be said that most mathematicians take Turing-computability to be the definition of what an algorithm or effective computation is. However, this is a philosophical point and I like your answer. – 6005 Feb 09 '18 at 20:53

Weil's conception of a cohomology theory for varieties adequate enough to solve the Weil conjectures (the "Riemann hypothesis" for varieties over finite fields) is an example. The idea of a Weil cohomology was formulated in 1949 by Weil, and then Grothendieck came along in the '60s with étale and $\ell$-adic cohomology, which fit Weil's criteria and allowed Deligne to prove the conjectures in 1974.

25 years may not be the longest time for something mentioned here, but exploring this idea and trying to rigorously realize it definitely helped create a decent portion of 20th century math.


Structure-preserving function.
It seems that this concept doesn't have a general definition yet. Category theory defines the rules for calculating with morphisms, but doesn't provide a general, formal rule for what a structure-preserving function is when the objects of the category are sets with additional structure.

For the kinds of structures appearing in universal algebra it's clear enough, but, for example, from an algebraic perspective, what makes continuity the natural concept in topology?

This may provide a clue:

There is a subcategory of the category whose objects are relations and whose morphisms are relations between relations, consisting of all relations as objects and, as morphisms, those relations between relations that can be expressed by a pair of relations.

Given two relations $R\subseteq A\times B$ and $R'\subseteq A'\times B'$, some relations $r\subseteq R\times R'$ can be characterized by two relations $\alpha\subseteq A\times A'$ and $\beta\subseteq B\times B'$ so that

$((a,b),(a',b'))\in r \iff \Big((a,a')\in\alpha\wedge (b,b')\in\beta\wedge (a,b)\in R\implies (a',b')\in R'\Big)$

and if $R''\subseteq A''\times B''$ and $r'\subseteq R'\times R''$, where $r'$ is characterized by $\alpha'\subseteq A'\times A''$ and $\beta'\subseteq B'\times B''$, then the composition $r'\circ r$ is characterized by the relations $\alpha'\circ\alpha\subseteq A\times A''$ and $\beta'\circ\beta\subseteq B\times B''$ (where $\circ$ denotes the composition of relations).

Suppose $A=B\times B$ and that $R\subseteq A\times B$ is the composition in a magma. Then the functions among the morphisms between two such objects define magma morphisms $B\to B'$.

Suppose $B=\mathcal P(A)$ and that $R\subseteq A\times B$ is the relation $(a,S)\in R\iff a\in\overline{S}$ for some topology on $A$. Then the functions among the morphisms between two such objects define continuous functions $A\to A'$.

    We have a general definition of a structure-preserving *bijection*, that is an isomorphism, and thus of the *groupoid* of sets equipped with a given structure, and this goes back at least to Bourbaki I (first published slightly before Eilenberg & MacLane). But that's not good enough for what you're talking about, I agree. – Toby Bartels Feb 09 '18 at 21:29

You might want to check out Imre Lakatos' "Proofs and Refutations", which depicts in a fictional dialog the evolution of the idea of a "polyhedron" over the centuries. His goal is to illuminate the dialectical process of definition and redefinition in mathematics, and perhaps in cognition generally.


Computation seems to fall into this category - for a long time, there had been an informal notion of something like "information processing". There had, of course, been the idea of a function for a long time. There were also prototypical algorithms, even as far back as Euclid. But the general idea of a well-defined process that implements a function based on small steps did not appear until Turing defined it in his 1936 paper.


Fractals ("I know a fractal when I see it"). What is the mathematical definition of this concept?

To this day, the notion of a fractal does not yet have a proper mathematical definition.

Basically, a fractal is a figure or shape that has a self-similarity property; the geometry of a fractal differs from one shape to another.

To this end I would like to quote a speaker at a conference in Leipzig last year (2017). He was asked by an attendee: "Sir, what is a fractal?"

His answer: "I know a fractal when I see it."

Patently, once someone has shown you a fractal for the first time, the next time you will certainly recognise a fractal just by looking at its shape, no matter how different it may be from the previous one: this is a fact.

There are different kinds of well-known fractals: the Sierpiński gasket (the triangle below), the Mandelbrot set (see the second figure), Julia sets, ...
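
One rigorous handle that does exist, at least for strictly self-similar examples, is dimension. For instance, the Sierpiński gasket consists of $3$ copies of itself, each scaled by a factor of $1/2$, so its similarity dimension (which agrees with the Hausdorff dimension here) is non-integer:

```python
import math

# Similarity dimension: N copies at scale r give d = log(N) / log(1/r).
# The Sierpinski gasket is 3 copies of itself at scale 1/2.
d = math.log(3) / math.log(2)
print(d)  # ≈ 1.585 -- strictly between a curve (1) and a surface (2)
```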



Guy Fsone

The calculus of variations was actively applied from the 17th century onward, and was put on a firm theoretical foundation with the introduction of Banach spaces in 1920.


The Egyptians and the Babylonians knew a lot of mathematical facts (e.g. the Pythagorean theorem, the quadratic formula, the volumes of prisms and pyramids, etc.) which were later proved by the Greeks.

Also, Pascal and Fermat used mathematical induction and the well ordering of the naturals (in the form of infinite descent) 250 years before Peano formalized the axioms defining the natural numbers.