There are a couple of reasons why people compute $\pi$ and $e$ to so many digits.

One is simply that it is a way to test hardware. The values are known (or can be checked against other computations), so you can see whether your hardware handles these computations quickly and accurately.

Another is that there are actually a lot of questions about the decimal expansions of $\pi$ and $e$ (and other numbers) to which we simply don't know the answer yet. For example: is $\pi$ *normal in base 10*? That is, does every sequence of digits occur in its decimal expansion with about the frequency you would expect? More precisely, given a specific sequence of $n$ digits, will
$$\frac{\text{number of times that the specific sequence occurs in the first $m$ digits of $\pi$}}{m}$$
approach $1/10^n$ as $m\to\infty$? (There are $10^n$ different sequences of $n$ digits, so this is what you would expect if the digits were completely random.) While the answer cannot be settled simply by computing digits of $\pi$, knowing more and more digits at least lets us check whether normality seems plausible so far. We can also run other tests of randomness to see whether the digits of $\pi$ (or $e$) seem to pass them.
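As a small illustration of this kind of check for $n = 1$, here is a sketch in Python that computes digits of $\pi$ with Machin's formula $\pi = 16\arctan(1/5) - 4\arctan(1/239)$ (using scaled integer arithmetic, since Python's integers are arbitrary-precision) and then tallies how often each single digit appears; the function names are mine, and this is a toy version of what the serious digit-hunters do at vastly larger scale.

```python
from collections import Counter

def arctan_inv(x, one):
    # arctan(1/x) scaled by `one`, summed as an alternating integer series:
    # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ...
    power = one // x          # (1/x)^(2k+1), scaled by `one`
    total = power
    x2 = x * x
    k, sign = 1, -1
    while True:
        power //= x2
        term = power // (2 * k + 1)
        if term == 0:
            break
        total += sign * term
        sign = -sign
        k += 1
    return total

def pi_digits(n):
    # First n decimal digits of pi, as the string "31415...",
    # via Machin's formula with 10 guard digits to absorb rounding.
    one = 10 ** (n + 10)
    pi_scaled = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return str(pi_scaled)[:n]

digits = pi_digits(1000)
freq = Counter(digits)
for d in "0123456789":
    print(d, freq[d])   # each count should hover around 100
```

For 1,000 digits every count is indeed near 100, but of course no finite computation can prove normality; it can only fail to refute it.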

As to your question about $e$: yes, in principle we can do that. But the process gets more and more complex as $n$ gets larger. Normally, a computer only knows how to represent numbers up to a certain size (depending on the number of bytes it uses to represent them), so you need to "explain" to your computer how to handle big numbers like $n!$ for large $n$. The storage space needed to perform the computations also keeps growing. And that idea only works if you start from a known point. So while theoretically we could compute $e$ to as many digits as we want, in practice, if we want the computations to finish sometime before the Sun runs out of hydrogen, we can't really go that far. There are other known algorithms for computing decimals of $\pi$ or $e$ that are faster, or that don't require you to know the previous digits to figure out the next one.
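To make the factorial-series idea concrete, here is a minimal sketch (the function name is mine) that computes digits of $e$ from $e = \sum_{k\ge 0} 1/k!$, leaning on Python's built-in arbitrary-precision integers to sidestep the "numbers too big for the hardware" problem described above:

```python
def e_digits(n):
    # First n decimal digits of e, as the string "27182...",
    # from e = sum of 1/k!, using scaled integer arithmetic.
    one = 10 ** (n + 10)   # scale factor, with 10 guard digits
    total = term = one     # k = 0 term: 1/0! = 1, scaled
    k = 1
    while term:
        term //= k         # term is now roughly one // k!
        total += term
        k += 1
    return str(total)[:n]

print(e_digits(50))
```

Note that each new digit of precision still requires redoing the whole sum at a larger scale, which is exactly the "start from a known point" limitation mentioned above; the faster algorithms avoid this kind of restart.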

And that leads to yet another thing that people computing so many digits of $\pi$ and $e$ may be doing: testing algorithms for large-number computations, for floating-point accuracy, or for parallel computing. (We are not very good at parallel computing, and figuring out how to do it effectively is a Very Big Deal; a task such as "compute $\pi$ to $n$ million digits" is a convenient way to test ideas for doing computations in parallel.)

That leads to one more interest in these computations: coming up with mathematical ideas that can zero in on desired digits of a decimal expansion quickly; not necessarily because we are particularly interested in the digits of $\pi$, but because we often *are* interested in particular digits of *other* numbers. $\pi$ and $e$ are just very convenient benchmarks to test things with.
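The best-known example of such an idea is the Bailey–Borwein–Plouffe (BBP) formula, which lets you extract a single hexadecimal digit of $\pi$ without computing any of the digits before it. A rough sketch (function names are mine; floating-point error limits it to modest positions):

```python
def bbp_series(j, d):
    # Fractional part of sum over k >= 0 of 16^(d-k) / (8k + j).
    total = 0.0
    for k in range(d + 1):
        # three-argument pow does modular exponentiation, so the
        # huge power 16^(d-k) never has to be formed in full
        total += pow(16, d - k, 8 * k + j) / (8 * k + j)
        total %= 1.0
    k = d + 1
    while True:              # tail terms, where 16^(d-k) < 1
        term = 16.0 ** (d - k) / (8 * k + j)
        if term < 1e-17:
            break
        total += term
        k += 1
    return total % 1.0

def pi_hex_digit(d):
    # Hexadecimal digit of pi at position d after the point (0-indexed),
    # computed directly via the BBP formula:
    # pi = sum 16^-k [4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)]
    x = (4 * bbp_series(1, d) - 2 * bbp_series(4, d)
         - bbp_series(5, d) - bbp_series(6, d)) % 1.0
    return "%x" % int(16 * x)

print("".join(pi_hex_digit(i) for i in range(8)))
```

No analogous base-10 digit-extraction formula for $\pi$ is known, which is part of why such formulas are an active subject in their own right.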

Why did Euler compute the numbers? Because he wanted to show off some of the uses of Taylor series; up to that time, the methods for approximating the value of $\pi$ were much more onerous. Ludolph van Ceulen famously used the method of Archimedes (inscribing polygons in, and circumscribing them about, a circle) to find the value of $\pi$ to 20 decimals (and, after many more years of effort, to 35 places); this was around the turn of the 17th century, before it was known that $\pi$ was transcendental (indeed, before the notion of transcendence was even clear), so trying to pin down $\pi$ exactly still seemed a sensible goal. Van Ceulen was so proud of the accomplishment (given all the hard work it had entailed) that he had the value he computed engraved on his tombstone. Euler showed that Taylor series could be used to obtain the same results with far less effort and greater precision.