I have read it a thousand times: "you only need local information to compute derivatives." To be more precise: when you take a derivative at, say, a point $a$, what you are essentially doing is taking a limit, so you only need to look at the open interval $(a-\delta, a+\delta)$.

Taylor's theorem seems to contradict this: from the derivatives at just one point, you can reconstruct the whole function within its radius of convergence (which can be infinite).

For example, consider the function: $f: \mathbb{R} \rightarrow \mathbb{R}: x\mapsto \left\{ \begin{array}{lr} x+3\pi/2 &: x \leq -3\pi/2 \\ \cos(x) &: -3\pi/2 \leq x \leq 3\pi/2 \\ x-3\pi/2 &: x \geq 3\pi/2 \end{array} \right.$

Wolfram Alpha tells me that $D^{100}f(0)=\cos(0)=1$... This should give us more than enough information to get a Taylor expansion that converges beyond the region where $f$ equals $\cos$ ($R=\infty$ for $\cos$, so eventually the series has to reach past it)...
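Since $f$ agrees with $\cos$ on an open neighbourhood of $0$, its derivatives at $0$ are exactly those of $\cos$, so Wolfram Alpha's value can be checked symbolically. A quick sketch (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')

# Near 0, f(x) = cos(x), so D^100 f(0) = D^100 cos(0).
# The derivatives of cos cycle with period 4, and 100 mod 4 == 0,
# so the 100th derivative of cos is cos itself.
d100 = sp.diff(sp.cos(x), x, 100)
print(d100)            # cos(x)
print(d100.subs(x, 0)) # 1
```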

Let me put it this way: look at the limiting case. All you need for a Taylor expansion that converges over all the reals is all the derivatives at $0$. These would give you the exact same Taylor expansion as you'd get for the cosine function, while the function from which we took the derivatives is clearly not the cosine function over all the reals.
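The apparent paradox can be made concrete numerically: the partial sums of the Maclaurin series built from $f$'s derivatives at $0$ converge to $\cos$, not to $f$, outside $[-3\pi/2, 3\pi/2]$. A minimal sketch (assuming the third branch of $f$ is $x - 3\pi/2$ for $x \geq 3\pi/2$, so that $f$ is continuous):

```python
import math

def f(x):
    # The piecewise function from the question: linear outside
    # [-3*pi/2, 3*pi/2], cosine inside.
    if x <= -1.5 * math.pi:
        return x + 1.5 * math.pi
    elif x <= 1.5 * math.pi:
        return math.cos(x)
    else:
        return x - 1.5 * math.pi  # assumed third branch

def taylor_at_zero(x, n_terms):
    # Partial sum of the series built from f's derivatives at 0,
    # which are the derivatives of cos: sum (-1)^k x^(2k) / (2k)!
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(n_terms))

x = 2 * math.pi  # a point outside the cosine region
print(taylor_at_zero(x, 30))  # converges to cos(2*pi) = 1
print(f(x))                   # pi/2, about 1.5708 -- not 1
```

The series converges everywhere, but to $\cos(x)$, which disagrees with $f(x)$ as soon as $|x| > 3\pi/2$.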

So my question is: Is Wolfram Alpha wrong? If it is right, why does this seem to violate Taylor's theorem? If it's wrong, is that because the local region of the domain you need to compute the $n$th derivative grows with $n$?

*Edit 1*:
en.m.wikipedia.org/wiki/Taylor%27s_theorem. The most basic version of Taylor's theorem for one variable does not mention analyticity, and it's easy to prove that the "remainder" goes to zero as you take more and more derivatives, so that $f(x)$ is determined at any $x$ by the derivatives of $f$ at $0$.