In general, the push for rigor is usually a response to a failure to demonstrate the kinds of results one wishes to. It's usually relatively easy to demonstrate that objects with certain properties exist, but you need precise definitions to prove that no such object exists. The classic example of this is non-computable problems and Turing machines. Until you sit down and say "this, precisely, and nothing else is what it means to be solved by computation," it's impossible to prove that something isn't a computation. So when people started asking "is there an algorithm that does $\ldots$?" for questions where the answer "should be" no, a precise definition suddenly became necessary. Similar things happened in real analysis.

In real analysis, as mentioned in an excellent comment, there was a shift in people's conception of what a function is. This broadened conception suddenly allowed a number of famous "counterexample" functions to be constructed, and these often require a reasonably rigorous understanding of the topic to construct or to analyze. The most famous is the everywhere continuous, nowhere differentiable Weierstrass function. Without very precise definitions of continuity and differentiability, demonstrating that this function is one and not the other is extremely hard. The quest for weird functions with unexpected properties, and combinations of properties, was one of the driving forces behind developing precise conceptions of those properties.

Another topic that people were very interested in was infinite series. There are lots of weird results that can crop up if you're not careful with infinite series, as shown by the now famously cautionary theorem:

**Theorem (Riemann Rearrangement Theorem):** Let $a_n$ be a sequence such that $\sum a_n$ converges conditionally. Then for every $x \in \mathbb{R}$ there is a reordering $b_n$ of $a_n$ such that $\sum b_n = x$.

This theorem means you have to be very careful when dealing with infinite sums; for a long time people weren't, and so they started deriving results that made no sense. Suddenly the usual free-wheeling algebraic manipulation used to evaluate infinite sums was no longer acceptable, because sometimes it changed the value of the sum. Instead, a more rigorous theory of summation manipulation, along with concepts such as uniform and absolute convergence, had to be developed.
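The rearrangement phenomenon is easy to see numerically. A minimal sketch (the function name `rearrange` and the choice of the alternating harmonic series are mine for illustration): the standard greedy construction used in the theorem's proof, applied to $1 - \frac12 + \frac13 - \cdots$, whose ordinary sum is $\ln 2$, can be steered to any target.

```python
import math

def rearrange(target, n_terms=100000):
    """Greedily rearrange the alternating harmonic series 1 - 1/2 + 1/3 - ...
    (conditionally convergent, usual sum = ln 2) so that its partial sums
    approach `target`: add unused positive terms while below the target,
    unused negative terms while above it."""
    pos = 1    # next odd denominator: supplies terms +1/1, +1/3, +1/5, ...
    neg = 2    # next even denominator: supplies terms -1/2, -1/4, -1/6, ...
    total = 0.0
    for _ in range(n_terms):
        if total <= target:
            total += 1.0 / pos
            pos += 2
        else:
            total -= 1.0 / neg
            neg += 2
    return total

# The same terms, summed in different orders, converge to different limits:
print(rearrange(math.pi))   # close to pi, not ln 2
print(rearrange(-1.0))      # close to -1, not ln 2
```

Every term of the original series is eventually used exactly once, so this really is a reordering; the limit depends entirely on the order chosen.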

Here's an example of a problem surrounding an infinite product created by Euler:

Consider the following formula:
$$x\prod_{n=1}^\infty \left(1-\frac{x^2}{n^2\pi^2}\right)$$
Does this expression even make sense? Assuming it does, does it equal $\sin(x)$ or $\sin(x)e^x$? How can you tell (notice that both functions have the same zeros as this product, and the same relationship to their derivative)? If it doesn't equal $\sin(x)e^x$ (and it doesn't; it really does equal $\sin(x)$), how could we modify it so that it does?
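A quick numerical check makes the answer plausible, even though it proves nothing on its own (the helper name `euler_product` and the cutoff are mine for illustration): truncating the product at $N$ factors and comparing against both candidates.

```python
import math

def euler_product(x, n_factors=100000):
    """Partial product x * prod_{n=1}^{N} (1 - x^2 / (n^2 pi^2))."""
    p = x
    for n in range(1, n_factors + 1):
        p *= 1.0 - x * x / (n * n * math.pi * math.pi)
    return p

# Compare the truncated product with the two candidate functions:
for x in (0.5, 1.0, 2.0):
    print(x, euler_product(x), math.sin(x), math.sin(x) * math.exp(x))
```

The partial products track $\sin(x)$ and stay far from $\sin(x)e^x$; the factors decay like $1 - x^2/(n^2\pi^2)$, so convergence is slow but unambiguous. The rigorous statement, that an entire function is pinned down by its zeros only up to a factor $e^{g(x)}$, is exactly the kind of result (the Weierstrass factorization theorem) that required the more sophisticated analysis of the 1800s.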

Questions like this were very popular in the 1800s, when mathematicians were notably obsessed with infinite products and summations. However, most questions of this form require a very sophisticated understanding of analysis to handle (and weren't handled particularly well by the tools of the previous century).