My dad likes to tell of a quotation he once read in a book on philosophy of mathematics. He does not remember which book it was, and I have never tried to track it down; this is really hearsay to the fourth degree, so it may not even be true. But I think it is pretty apt. The quote describes a castle on a cliff where, after each storm finally dies down, the spiders come out and run around frantically rebuilding their spiderwebs, afraid that if they don't get them up quickly enough the castle will fall down.

The interesting thing about the quote was that it was attributed to a book on the logical foundations of mathematics.

First, note that you are looking at the problem from the perspective of someone who "grew up" with sets that were, in some way, *carefully* "built off each other in a nice way." This was not always the case. Mathematicians were not always very careful with their foundations, and when they started working with infinite sets/collections they were *not* being particularly careful. Dedekind does not start from the Axiom of Infinity to construct the naturals and eventually get to the reals; moreover, when he gives his construction it is precisely to try to answer the question of just what a real number *is*!

In some ways, Russell's paradox was a storm that sent the spiders running around to reconstruct the spiderwebs. Mathematicians hadn't been working with infinite collections/sets for very long, at least not as "completed infinities". The work of Dedekind on the reals, and even on algebraic number theory with the definitions of ideals and modules, was not without its critics.

*Some* mathematicians had become interested in the issues of foundations; one such mathematician was Hilbert, both through his work on the Dirichlet Principle (justifying the work of Riemann), and his work on Geometry (with the problems that had become so glaring in the "unspoken assumptions" of Euclid). Hilbert was such a towering figure at the time that his interest was itself interesting, of course, but there weren't that many mathematicians working on the foundations of mathematics.

I would think, like Sebastian, that most "working mathematicians" didn't worry too much about Russell's paradox, much as they didn't worry too much about the fact that Calculus was not, originally, on a solid logical foundation. Mathematics clearly *worked*, and the occasional antinomy or paradox was likely not a matter of interest or concern.

On the other hand, the 19th century had highlighted a *lot* of issues with mathematics. During that century, all sorts of tacit assumptions that mathematicians had been making were exploded. It turns out that functions can be *very* discontinuous, not just at a few isolated points; they can be continuous everywhere but differentiable nowhere; you can have a curve that fills up a square; the Dirichlet Principle need not hold; there are geometries where there are no parallels, and geometries where there are infinitely many parallels to a given line through a point outside of it; etc. While it was clear that mathematics *worked*, there was a general "feeling" that it would be a good idea to clear up these issues.

So some people began to study foundations specifically, and to try to build a solid foundation (perhaps as Weierstrass had given a solid foundation to calculus). Frege was one such person.

And to people who were very interested in logic and foundations, like Frege, Russell's paradox *was* a big deal, because it pinpointed that one particular tool which was very widely used led to serious problems. That tool was unrestricted comprehension: the assumption that any "collection" you could name was an object that could be manipulated and played with.
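(To make that concrete: unrestricted comprehension lets you form the collection $R$ of all collections that are not members of themselves, $R = \{x \mid x \notin x\}$. Asking whether $R$ is a member of itself then yields a contradiction either way:

$$R \in R \iff R \notin R.$$

This is just the standard modern statement of the paradox, not how Frege or Russell would have written it, but it shows how little machinery is needed: merely naming the collection is already enough to get into trouble.)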

You might say, "well, but Russell's paradox arises in a very artificial context; it would never show up with a 'real' mathematical collection." But then, one might say that functions that are continuous everywhere and nowhere differentiable are "very artificial, and would never show up in a 'real' mathematical problem". True: but it means that certain results that had been taken for granted can no longer be taken for granted, and need to be restricted, checked, or justified anew if you want to claim that an argument is valid.

In context, Russell's paradox showed an entirely new thing: there can be collections that are not sets, that are not objects which can be dealt with mathematically. This is a very big deal if you don't even have that concept to begin with! Think about finding out that a "function" doesn't have to be "essentially continuous" and can be an utter mess: an entirely new concept or idea, an entirely new possibility that has to be taken into account when thinking about functions. So it was with Russell's paradox: an entirely new idea that needs to be taken into account when thinking about collections and sets. All the work that had been done before, which tacitly assumed that just because you could name a collection it was an object that could be mathematically manipulated, was now, in some sense, "up in the air" (as much as those castle walls are "up in the air" until the spiders rebuild their webs, perhaps, or perhaps more so).

If nothing else, Russell's paradox created an entirely new category of things that did not exist before: **not-sets**. Now you may think, "oh, piffle; I could have told them that", but that's because you grew up in a mathematical world where the notion that there are such things as "not-sets" is taken for granted. At the time, it was the exact opposite that was taken for granted, and Russell's paradox essentially told everyone that something they all thought was true just isn't true. Today we are so used to the idea that it seems like an observation that is not worth much, but that's because we grew up in a world that already knew it.

I would say that Russell's paradox both was and wasn't a big deal. It was a big deal for anyone who was concerned with foundations, because it said "you need to go further back: you need to figure out what *is* and what is *not* a collection you can work with." It undermined Frege's entire attempt at setting up a foundation for mathematics (which is why Frege found it so important: he had invested a lot of himself into efforts that were not only cast into doubt, but essentially demolished before they got off the ground). It was such a big deal that it completely infuses our worldview today, when we simply *take for granted* that some things are not sets.

On the other hand, it did not have a major impact on things like the calculus of variations, differential equations, etc., because those fields do not really rely very heavily on their foundations, only on the properties of the objects they are working with. It's much the way most people don't care about the Kuratowski definition of an ordered pair: it's kind of nice to know it's there, but most will treat the ordered pair as a black box. I would expect most of them to have thought, "Oh, okay; get back to me when you sort that out." Perhaps like the servants living in the castle not worrying too much about whether the spiders are done rebuilding their webs or not. Similarly, when Weierstrass introduced $\epsilon$-$\delta$ definitions and limits into calculus, he re-established what everyone was using *anyway*, so it had little impact in terms of applications of calculus.
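(If you haven't seen it, the Kuratowski definition encodes the ordered pair purely in terms of sets:

$$(a, b) = \{\{a\}, \{a, b\}\},$$

and from it one proves the only property anyone actually uses, namely that $(a,b) = (c,d)$ if and only if $a = c$ and $b = d$. The encoding changes nothing about how pairs are used day to day, which is rather the point of the black-box remark above.)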

That rambles a bit, perhaps. And I'm not a very learned student of history, so my impressions may be off anyway.