The big difference between $\mathbb{R}^2$ and $\mathbb{C}$: differentiability.

In general, a function $\mathbf{f} : \mathbb{R}^n \to \mathbb{R}^n$ is differentiable at $\mathbf{x}$ if there is a linear transformation $\mathbf{J}$ such that

$$\lim_{\mathbf{h} \to \mathbf{0}} \frac{\|\mathbf{f}(\mathbf{x}+\mathbf{h})-\mathbf{f}(\mathbf{x})-\mathbf{J}\mathbf{h}\|}{\|\mathbf{h}\|} = 0,$$

where $\mathbf{f}$, $\mathbf{x}$, and $\mathbf{h}$ are vector quantities.
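To make the definition concrete, here is a minimal numerical sketch (assuming NumPy) for the map $(x, y) \mapsto (x^2 - y^2, 2xy)$, with its Jacobian written out by hand; the ratio in the limit shrinks like $\|\mathbf{h}\|$:

```python
import numpy as np

def f(p):
    # the map (x, y) -> (x^2 - y^2, 2xy)
    x, y = p
    return np.array([x**2 - y**2, 2 * x * y])

def jacobian(p):
    # hand-computed Jacobian of f at p
    x, y = p
    return np.array([[2 * x, -2 * y],
                     [2 * y,  2 * x]])

def frechet_ratio(p, h):
    # ||f(p + h) - f(p) - J h|| / ||h||, which should tend to 0 as h -> 0
    return np.linalg.norm(f(p + h) - f(p) - jacobian(p) @ h) / np.linalg.norm(h)

p = np.array([1.0, 2.0])
direction = np.array([0.6, 0.8])  # a unit vector
ratios = [frechet_ratio(p, t * direction) for t in (1e-1, 1e-3, 1e-5)]
print(ratios)  # roughly [1e-1, 1e-3, 1e-5]: the ratio vanishes with ||h||
```

Because $f$ is quadratic, the remainder here is exactly the quadratic term, so the ratio equals $\|\mathbf{h}\|$ on the nose.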

In $\mathbb{C}$, we have a stronger notion of differentiability given by the Cauchy-Riemann equations:

$$\begin{align*}
f(x+iy) &\stackrel{\textrm{def}}{=} u(x,y)+iv(x,y) \\
u_x &= v_y, \\
u_y &= -v_x.
\end{align*}
$$
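As a quick sanity check, the equations can be verified numerically with central differences. A sketch in plain Python, using (my choice of example) $f(z) = e^z$, whose real and imaginary parts are $u = e^x \cos y$ and $v = e^x \sin y$:

```python
import math

def u(x, y):
    return math.exp(x) * math.cos(y)  # Re e^z

def v(x, y):
    return math.exp(x) * math.sin(y)  # Im e^z

def partial(g, x, y, wrt, eps=1e-6):
    # central-difference approximation to a partial derivative of g at (x, y)
    if wrt == 'x':
        return (g(x + eps, y) - g(x - eps, y)) / (2 * eps)
    return (g(x, y + eps) - g(x, y - eps)) / (2 * eps)

x, y = 0.7, -1.3
u_x, u_y = partial(u, x, y, 'x'), partial(u, x, y, 'y')
v_x, v_y = partial(v, x, y, 'x'), partial(v, x, y, 'y')
print(u_x - v_y, u_y + v_x)  # both near zero: the equations hold
```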

If these equations are satisfied, they certainly give rise to a linear transformation of the required form; moreover, the definitions of complex multiplication and division require that these equations hold in order for the limit

$$\lim_{h \to 0} \frac{f(z+h)-f(z)-Jh}{h} = 0$$

to exist. Note the difference here: we divide by $h$, not by its modulus.
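The path dependence is easy to exhibit numerically. A sketch in plain Python: for $f(z) = z^2$ the difference quotient agrees whether $h$ approaches $0$ along the real or the imaginary axis, while for $f(z) = \bar{z}$ the two approaches disagree:

```python
def quotient(f, z, h):
    # complex difference quotient (f(z + h) - f(z)) / h
    return (f(z + h) - f(z)) / h

z = 1 + 2j
t = 1e-6

# f(z) = z^2 is complex differentiable: both approaches give 2z = 2 + 4j
along_real = quotient(lambda w: w * w, z, t)       # h -> 0 along the real axis
along_imag = quotient(lambda w: w * w, z, t * 1j)  # h -> 0 along the imaginary axis

# f(z) = conj(z) is not: the quotient is conj(h)/h, which depends on direction
conj_real = quotient(lambda w: w.conjugate(), z, t)       # gives 1
conj_imag = quotient(lambda w: w.conjugate(), z, t * 1j)  # gives -1
print(along_real, along_imag, conj_real, conj_imag)
```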

In essence, multiplication between elements of $\mathbb{R}^2$ is not generally defined (although we could define one if we wanted to), nor is division (which we could also attempt, given a choice of multiplication). Not having these operations means that differentiability in $\mathbb{R}^2$ is a little more "topological" -- we're not overly concerned with where $\mathbf{h}$ is, just that it gets small, and that the required linear transformation exists at the point of differentiation (non-singularity entering only when we invoke the inverse function theorem). This all stems from the generalization of the inverse function theorem, which can be approached almost completely topologically.

In $\mathbb{C}$, since we have a rigorous notion of multiplication and division, we can divide by $h$, and we want the derivative to exist independent of the path $h$ takes. If there is some trickery due to the path $h$ takes, we can't wash it away with topology quite so easily.

In $\mathbb{R}^2$, the question of path independence is less obvious, and less severe. Functions that are complex differentiable on an open set are *analytic*, and over the reals we can have differentiable (even infinitely differentiable) functions that are not analytic. In $\mathbb{C}$, differentiability on an open set implies analyticity.
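The classic real example is $f(x) = e^{-1/x^2}$ (with $f(0) = 0$): it is infinitely differentiable, but every derivative at $0$ vanishes, so its Taylor series at $0$ is identically zero and cannot represent the function. A quick illustration in plain Python:

```python
import math

def f(x):
    # smooth everywhere, but not analytic at 0: all derivatives at 0 vanish
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# the Taylor series of f at 0 predicts 0 everywhere, yet:
print(f(0.5))  # e^{-4}, about 0.0183, visibly nonzero
```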

Example:

Consider $f(x+iy) = x^2-y^2+2ixy$. We have $u(x,y) = x^2-y^2$, and $v(x,y) = 2xy$. It is trivial to show that
$$u_x = 2x = v_y, \\
u_y = -2y = -v_x,$$
so this function is analytic. Viewing it as a map on the reals, with $f_1 = x^2-y^2$ and $f_2 = 2xy$, we get
$$J = \begin{pmatrix} 2x & -2y \\ 2y & 2x \end{pmatrix}.$$
Taking the determinant, we find $\det J = 4x^2+4y^2$, which is non-zero except at the origin.
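The Cauchy-Riemann structure of $J$ has a nice interpretation: applying $J$ to a vector $\mathbf{h}$ is the same as multiplying the complex number $h$ by $f'(z) = 2z$. A quick check (assuming NumPy):

```python
import numpy as np

x, y = 1.0, 2.0
J = np.array([[2 * x, -2 * y],
              [2 * y,  2 * x]])  # Jacobian of (x^2 - y^2, 2xy)

h = np.array([0.3, -0.7])        # the vector h, i.e. the complex number 0.3 - 0.7i
Jh = J @ h

# multiplication by f'(z) = 2z in C
z, hc = complex(x, y), complex(h[0], h[1])
prod = 2 * z * hc
print(Jh, prod)  # Jh = (3.4, -0.2) matches prod = 3.4 - 0.2j
```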

By contrast, consider
$f(x+iy) = x^2+y^2-2ixy$. Then,

$$u_x = 2x \neq -2x = v_y \quad (x \neq 0), \\
u_y = 2y = -v_x,$$

so the first Cauchy-Riemann equation fails wherever $x \neq 0$: the function is not complex differentiable off the line $x = 0$, and is nowhere analytic.

However, $$J = \begin{pmatrix} 2x & 2y \\ -2y & -2x \end{pmatrix},$$ with $\det J = 4(y^2 - x^2)$, which is singular only on the lines $y = \pm x$; so we can certainly obtain a real derivative of the function as a map on $\mathbb{R}^2$.
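A quick numerical confirmation of this contrast, in plain Python: the complex difference quotient of this $f$ depends on the direction from which $h$ approaches $0$, so no complex derivative exists, even though the real Jacobian is perfectly well defined:

```python
def f(z):
    # f(x + iy) = x^2 + y^2 - 2ixy, defined through its real and imaginary parts
    x, y = z.real, z.imag
    return (x**2 + y**2) - 2j * x * y

z = 1 + 1j
t = 1e-6
q_real = (f(z + t) - f(z)) / t              # h -> 0 along the real axis
q_imag = (f(z + 1j * t) - f(z)) / (1j * t)  # h -> 0 along the imaginary axis
print(q_real, q_imag)  # roughly 2-2j versus -2-2j: the limits disagree
```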