2

I saw a fake proof using the power rule to show that $1 = 2$, thus disproving the power rule. It is obviously wrong but I can't spot the error. It goes something like this:

$$\frac{d}{dx}x^2 = 2x \\ x^2 = x\times x = \sum_{n = 1}^{x} x \\ \frac{d}{dx}x^2 = \frac{d}{dx}\sum_{n = 1}^{x} x = \sum_{n = 1}^{x} 1 = x \\ \therefore x = 2x \implies 1 = 2 $$

Where's the issue here? It seems to be something on the $3^{\text{rd}}$ line (which I know also packs in more steps than any other line; I'm not well versed in stating proofs), but I can't see the contradiction.

Blue
  • In line 2, you seem to be assuming that $x$ is an integer. – Angina Seng Aug 15 '20 at 03:10
  • In line two you are treating $x$ as an integer (which is bad but not $1=2$ bad). But in the third line you are treating the $x$ that is the upper index of the sum as a constant rather than a function of $x$. And that *is* $1=2$ bad. – fleablood Aug 15 '20 at 03:55

2 Answers

5

The issue is with $\frac{d}{dx}\sum_{n=1}^xx$. The upper limit of the sum is also a function of $x$, and in particular differentiating that sum with respect to $x$ does not give you $\sum_{n=1}^x1$. It doesn't even make sense to begin with - the sum is only defined for $x\in\mathbb{N}$, but you can't meaningfully differentiate a function whose domain is the integers.
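As a quick sanity check (my own illustration, not part of the answer), the discrete analogue of the derivative, the forward difference, shows what term-by-term "differentiation" misses: for $f(x)=\sum_{n=1}^{x}x=x^2$ with integer $x$, the difference $f(x+1)-f(x)$ is $2x+1$, not the $x$ that the fake proof predicts.

```python
def f(x):
    # sum_{n=1}^{x} x, defined only for natural x; equals x^2
    return sum(x for _ in range(x))

# The forward difference f(x+1) - f(x) = (x+1)^2 - x^2 = 2x + 1,
# mirroring the true derivative 2x, not the fake proof's x.
for x in range(1, 6):
    print(x, f(x + 1) - f(x), 2 * x + 1)
```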

csch2
0

Suppose the proof went like this.

As we know if $f(x) = \color{blue}a\cdot x$ then $f'(x)= \color{blue}a$.

So that means if $f(x) = x^2 = \color{blue}x\cdot x$, then $f'(x)$ ought to be $\color{blue}x$.

Suppose we even took it further and figured $f'(x) = \lim_{h\to 0}\frac {f(x+h)-f(x)}h =\lim_{h\to 0} \frac {x\cdot (x+h)-x\cdot x}{h} = \lim_{h\to 0} \frac {x^2 + h\cdot x - x^2}h=\lim_{h\to 0}\frac {h\cdot x}h = x$.

We are treating the squaring of $x$ (multiplying $x$ by $x$) as though the $x$ we are multiplying by were a constant, not a variable changing in the same manner as the other $x$. Clearly $f(x+h) \ne x\cdot (x+h)$; it is $(x+h)(x+h)$, which is entirely different.
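For contrast (a standard computation, added here for illustration), carrying the limit through with *both* factors varying recovers the power rule:

$$f'(x) = \lim_{h\to 0}\frac{(x+h)(x+h) - x\cdot x}{h} = \lim_{h\to 0}\frac{2xh + h^2}{h} = \lim_{h\to 0}\,(2x + h) = 2x.$$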

It's true that $f(x) = x^2 =x\cdot x = \sum_{k=1}^x x$ assumes $x \in \mathbb N$, and it is true that $f: \mathbb N \to \mathbb R$ cannot be differentiated. But more importantly, the upper index in $\sum_{k=1}^{\color{blue} x} x$ is a variable and not a constant. Even if we could (we can't, but suppose we could) define $\lim_{h\to 0}\frac {f(x+h) - f(x)}h$ where $f(x) = \sum_{k=1}^{\color{blue} x} x$, then the derivative would be $\lim_{h\to 0} \frac {\sum_{k=1}^{\color{blue}{x+h}} (x + h) - \sum_{k=1}^{\color{blue} x} x}h$, and it WOULDN'T be $\lim_{h\to 0} \frac {\sum_{k=1}^{\color{blue}{x}} (x + h) - \sum_{k=1}^{\color{blue} x} x}h$; the index $\color{blue}{x+h}$ would make all the difference.

Of course that argument is somewhat fetid dingo kidneys, as $\sum_{k=1}^{\color{blue}{x+h}} (x + h)$ doesn't actually make any sense: $x+h$ is not a natural number and cannot serve as an upper index. But if it did make sense (as it does if we instead define $f(x)= x\cdot x$, so that $f(x+h) = (x+h)(x+h) \ne x\cdot (x+h)$), that argument would be ... healthy and fresh dingo kidneys.

fleablood