14

I really like to use paradoxes in my math classes to awaken my students' interest. Concretely, over the last few weeks I have been proposing fake proofs that reach the conclusion that 2=1. After one week I explain the solution on the blackboard and propose a new one. For example, I posted the following one some months ago: What is wrong with the sum of these two series? I would like to enlarge my repertoire of fake proofs, and I would be glad to read your proposals and discuss them! My students are 18 years old, so don't be too cruel :) Here is my own contribution:

\begin{equation} y(x) = \tan x \end{equation} \begin{equation} y^{\prime} = \frac{1}{\cos^{2} x} \end{equation} \begin{equation} y^{\prime \prime} = \frac{2 \sin x}{\cos^{3} x} \end{equation} This can be rewritten as: \begin{equation} y^{\prime \prime} = \frac{2 \sin x}{\cos^{3} x} = \frac{2 \sin x}{\cos x \cdot \cos^{2} x} = 2 \tan x \cdot \frac{1}{\cos^{2} x} = 2yy^{\prime} = \left( y^{2} \right)^{\prime} \end{equation} Integrating both sides of the equation $y^{\prime \prime} = \left( y^{2} \right)^{\prime}$: \begin{equation} y^{\prime} = y^{2} \end{equation} And therefore \begin{equation} \frac{1}{\cos^{2} x} = \tan^{2} x \end{equation} Now, evaluating this equation at $x = \pi / 4$: \begin{equation} \frac{1}{(\sqrt{2}/2)^{2}} = 1^{2} \end{equation} \begin{equation} 2 = 1 \end{equation}
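
For students who like to double-check on a computer, here is a small SymPy sketch (assuming SymPy is available; purely an illustration) that exposes the forgotten constant of integration:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.tan(x)

# The identity y'' = (y^2)' really does hold:
print(sp.simplify(sp.diff(y, x, 2) - sp.diff(y**2, x)))   # 0

# But integrating only determines each side up to a constant, and here the
# constant is 1: y' = y^2 + 1 (i.e. sec^2 x = tan^2 x + 1), not y' = y^2.
print(sp.simplify(sp.diff(y, x) - y**2))                   # 1
```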

Kikolo
  • 563
  • 2
  • 17
  • 13
    Cute. $2=1+C$ would be much less interesting. – André Nicolas Oct 14 '15 at 22:10
  • 1
    Yes, when integrating we should add an integration constant $C$. So we actually get \begin{equation} \frac{1}{\cos^{2} x} = \tan^{2} x +C \end{equation} In fact $C=1$, and we correctly get $\frac{1}{\cos^{2} x} = 1+ \tan^{2} x $. – Ramiro Oct 14 '15 at 22:22
  • 1
    I think this is very cool: http://math.stackexchange.com/questions/417280/continued-fraction-fallacy-1-2/417322#417322 – Ant Oct 14 '15 at 22:32
  • 1
    You may like this one: Proposition: There is no such number $i=\sqrt{-1}$. Proof: Let us work by contradiction. Suppose there is $i=\sqrt{-1}$. Then $$-i= (-1)\cdot i= i\cdot i\cdot i=\sqrt{-1}\cdot\sqrt{-1}\cdot\sqrt{-1}=\sqrt{(-1)\cdot(-1)\cdot(-1)}=\sqrt{-1}=i $$ So $i=0$. But clearly $0^2 =0\neq -1$, so $0$ cannot be equal to $\sqrt{-1}$. Contradiction. – Ramiro Oct 14 '15 at 22:40
  • @Ant There are many ways to present paradoxes. One of them is to present a proof of something we know to be false. We know that there is a complex number $i$. So the "proposition" in my comment above is false. However, the proof (at a naive first sight) may look correct. – Ramiro Oct 15 '15 at 00:01
  • @Ramiro To make it officially fit the challenge, multiply both sides of $i=0$ by $i$ and add $2$. – PyRulez Oct 15 '15 at 01:16
  • @AndréNicolas while $\exists_C\, 2 = 1 + C$, the error is already earlier, in $2yy' \neq 2(y^2)'$. Taking $y = x$: $2(x^2)' = 4x \neq 2x = 2\cdot x\cdot 1$. – Maciej Piechotka Oct 15 '15 at 01:17
  • @PyRulez You are right, just to force the conclusion 2=1. However, in my comment, I was considering a slightly broader approach of "paradoxes" that might be useful for teaching. In the way I presented, some students MAY find it more intriguing and not so obviously wrong. – Ramiro Oct 15 '15 at 02:58
  • @Ramiro actually, I misread! Apologies ;-) – Ant Oct 15 '15 at 07:07

15 Answers

12

One of my favorites, and very simple to understand for most algebra students:

$2 = 1+1$

$2 = 1+\sqrt{1}$

$2 = 1+\sqrt{(-1)(-1)}$

$2 =^* 1+\sqrt{-1}\sqrt{-1}$

$2 = 1+i\cdot i$

$2 = 1+i^2$

$2 = 1+(-1)$

$2 = 0$

$^*$ The wrong step

Divide both sides by $2$ and add $1$, and you get $2=1$, as desired.


To be thorough, the mistake occurs in the fourth line where the square root is split. In reality, the rule is:

$\sqrt{ab}=\sqrt{a}\sqrt{b}$ when either $a\geq0$ or $b\geq0$

$\sqrt{ab}=-\sqrt{a}\sqrt{b}$ when $a<0$ and $b<0$

So you would have an equality if you follow that rule, but many students aren't going to catch the error.
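
A two-line check with Python's `cmath` (just an illustration) makes the sign flip at the starred step visible:

```python
import cmath

print(cmath.sqrt((-1) * (-1)))           # sqrt(1)  = (1+0j)
print(cmath.sqrt(-1) * cmath.sqrt(-1))   # i * i    = (-1+0j), the sign lost at the starred step
```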

Poisson Fish
  • 443
  • 5
  • 11
  • Why the $1+$ in each step? Seems like it would be cleaner if you "prove" $1=-1$ in the main part and then use linearity to turn it into $2=1$. – Mario Carneiro Oct 22 '15 at 13:58
  • @MarioCarneiro, that's certainly true. I just relayed it here the way I first saw it. I imagine the appeal at first was "two is nothing?!" You could, of course, use this to "prove" any number equals zero. – Poisson Fish Oct 22 '15 at 15:48
10

$$0 = (1-1)+(1-1)+ … = 1 -(1-1)-(1-1)-… = 1 \implies 2=1$$
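
A tiny Python sketch (illustration only) of why the regrouping is illegal: the ungrouped partial sums never settle, and the two groupings simply pick out different subsequences:

```python
s, partial_sums = 0, []
for n in range(10):
    s += (-1) ** n          # terms 1, -1, 1, -1, ...
    partial_sums.append(s)

print(partial_sums)         # [1, 0, 1, 0, ...]: the ungrouped series has no sum
print(partial_sums[1::2])   # the grouping (1-1)+(1-1)+... picks out 0, 0, 0, ...
print(partial_sums[0::2])   # the grouping 1-(1-1)-(1-1)-... picks out 1, 1, 1, ...
```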

Calvin Khor
  • 31,342
  • 5
  • 40
  • 81
6

Why not show all numbers are equal to 1:

For any $z\in\mathbb R$, $$ \sum_{n=-\infty}^{\infty}z^{n}=z\sum_{n=-\infty}^{\infty}z^{n-1}=z\sum_{n=-\infty}^{\infty}z^{n}. $$ So $$ \sum_{n=-\infty}^{\infty}z^{n}=z\sum_{n=-\infty}^{\infty}z^{n}\Rightarrow 1=z. $$
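
A quick numerical sketch (plain Python, illustration only): for any positive $z\neq 1$ one of the two tails of the two-sided series blows up, so there was never a finite sum to cancel:

```python
def two_sided_sum(z, N):
    """Symmetric partial sum of z**n for n = -N, ..., N."""
    return sum(z ** n for n in range(-N, N + 1))

for z in (0.5, 2.0):
    print(z, [round(two_sided_sum(z, N)) for N in (5, 10, 20)])
# Both rows grow without bound as N grows: the series diverges for these z.
```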

DirkGently
  • 1,598
  • 7
  • 8
5

Another enjoyable "paradox": We first denote $$S:=\sum_{n\in\mathbb N}\dfrac{(-1)^{n+1}}{n}$$ The fact that $0\neq S\in\mathbb R$ can be established using elementary tools.
We then write:
$S=\frac{1}{1}-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\dots$
$2S = 2\left(\frac{1}{1}-\frac{1}{2}+\frac{1}{3}-\dots\right)=\frac{2}{1}-\frac{2}{2}+\frac{2}{3}-\frac{2}{4}+\frac{2}{5}-\frac{2}{6}+\frac{2}{7}-\dots$
$\phantom{2S} =\color{red}{\frac{2}{1}}\color{red}{-\frac{2}{2}}\color{green}{+\frac{2}{3}}\color{blue}{-\frac{2}{4}}+\frac{2}{5}\color{green}{-\frac{2}{6}}+\frac{2}{7}-\dots=\color{red}{\frac{1}{1}}\color{blue}{-\frac{1}{2}}\color{green}{+\frac{1}{3}}-\dots=S$
And at last: $$2S = S \Longrightarrow 2=1$$
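
A short Python sketch (illustration only) that adds the terms of $2S$ in exactly the regrouped order shown above; the reordered sum creeps toward $S=\log 2$ rather than $2\log 2$:

```python
import math

def rearranged(K):
    """Partial sum of 2S with the terms taken in the regrouped order:
    +2/1, -2/2, -2/4, +2/3, -2/6, -2/8, +2/5, -2/10, -2/12, ..."""
    total = 0.0
    for k in range(1, K + 1):
        total += 2 / (2 * k - 1)        # the positive term +2/(2k-1)
        total -= 2 / (2 * (2 * k - 1))  # its colour partner -2/(2(2k-1))
        total -= 2 / (4 * k)            # the remaining even-indexed term -2/(4k)
    return total

print(rearranged(100000))   # ~ 0.69315 = log 2 = S
print(2 * math.log(2))      # ~ 1.38629 = 2S: the reordering halved the sum
```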

Ranc
  • 1,937
  • 8
  • 20
  • This is that rearrangement theorem, right? The one that says any conditionally convergent series can be rearranged to converge to any value? – Faraz Masroor Oct 14 '15 at 23:59
    @FarazMasroor Yes, this is an abuse of the Riemann series theorem (rearrangement theorem). – Ranc Oct 15 '15 at 06:21
5

Here is a simple one:

$$ x=y\\ x^2=xy\\ 2x^2=x^2+xy\\ 2x^2-2xy=x^2-xy\\ 2(x^2-xy)=1(x^2-xy)\\ 2=1 $$

The error is quite obviously division by zero (from the 5th to 6th step).
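
A two-line SymPy check (assuming SymPy; illustration only) of the factor that gets cancelled:

```python
import sympy as sp

x, y = sp.symbols('x y')
print(sp.factor(x**2 - x*y))     # x*(x - y): the factor being cancelled
print((x**2 - x*y).subs(y, x))   # 0, since the proof starts from x = y
```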

AMACB
  • 247
  • 2
  • 8
5

$x = \underbrace{1 + 1 + 1 + \ldots + 1}_{x \textrm{ times}} = \underbrace{\frac{\mathrm{d}}{\mathrm{d}x}\left(x\right) + \frac{\mathrm{d}}{\mathrm{d}x}\left(x\right) + \frac{\mathrm{d}}{\mathrm{d}x}\left(x\right) + \ldots + \frac{\mathrm{d}}{\mathrm{d}x}\left(x\right)}_{x \textrm{ times}} = \frac{\mathrm{d}}{\mathrm{d}x}\underbrace{\left(x + x + x + \ldots + x\right)}_{x \textrm{ times}} = \frac{\mathrm{d}}{\mathrm{d}x}\left(x^2\right) = 2x$
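
A small Python sketch (illustration only): the sum "$x$ added $x$ times" only makes sense for whole numbers, and there the correct discrete derivative of $n\cdot n$ is $2n+1$, because the "$x$ times" changes too:

```python
def f(n):
    return sum(n for _ in range(n))   # "n added to itself n times", i.e. n*n, only defined for integers

for n in range(1, 6):
    print(n, f(n + 1) - f(n))         # the forward difference is 2n + 1, never n
```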

Michael Biro
  • 13,362
  • 2
  • 23
  • 38
  • 4
    Beat me to it! Of course if you use the "chain rule" you get $$\frac{d}{dx}\underbrace{x+x+\cdots+x}_{x\ \mathrm{times}} = \underbrace{1+1+\cdots+1}_{x\ \mathrm{times}} + \underbrace{x+x+\cdots+x}_{1\ \mathrm{times}}$$ which works! – user7530 Oct 15 '15 at 22:43
4

Does this count or is it too obvious where things go wrong, considering there is only one step?

Define, on $[0,1]$, $f_n(x) = n \cdot 1_{0 < x \le \frac 1n}$

Clearly for every $x$, $$\lim_{n \to \infty} f_n(x) = 0$$

Therefore

$$\lim_{n \to \infty} \int_0^1 f_n(x)\ dx = \int_0^1 0 \ dx = 0$$

But $\int_0^1 f_n(x) \ dx = 1$ for every $n$; hence it is proved that

$$1 = \lim_{n \to \infty} 1 = 0$$
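
A quick numerical sketch (plain Python, illustration only) of the failed interchange of limit and integral:

```python
def f(n, x):
    return n if 0 < x <= 1 / n else 0

def riemann(n, grid=10**5):
    dx = 1 / grid
    return sum(f(n, (k + 0.5) * dx) for k in range(grid)) * dx   # midpoint rule on [0,1]

for n in (10, 100, 1000):
    print(n, round(riemann(n), 3), f(n, 0.37))   # integrals stay at 1, pointwise values are already 0
```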

Ant
  • 20,466
  • 5
  • 41
  • 97
  • Could you please explain where the flaw is? I think it's because $f_n(x)$ doesn't converge uniformly to $0$, but I can't see why it doesn't. –  Apr 27 '18 at 04:03
  • 1
    @Alnitak The problem is that you cannot bring the limit under the integral sign. In fact, the Lebesgue dominated convergence theorem doesn't apply; and, as you say, $f_n$ does not converge uniformly. In fact, $$\sup_x |f_n(x) - f(x)| = n \not\to 0$$ – Ant Apr 27 '18 at 08:00
4

Here's one I just made up. $\log_{b} b^x = x$. And $\log_{b} 1 = 0$.

Let $b = 1, x = 1$, and $b^x = 1$. Then $0 = \log_b 1 = \log_b b^x = x = 1$
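
A standard-library sketch (illustration only): in CPython the two-argument `math.log` works out to $\log x/\log b$, so base $b=1$ amounts to dividing by zero:

```python
import math

try:
    print(math.log(1, 1))   # computed as log(1)/log(1) = 0/0
except (ZeroDivisionError, ValueError) as err:
    print('log base 1 is undefined:', err)
```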

fleablood
  • 1
  • 5
  • 39
  • 125
4

In the same vein as your example, let's integrate $\frac1x$ by parts.

Let $I = \int\frac1x\ \textrm dx$, and set $u = \frac1x, \textrm dv = \textrm dx$. Then:

$$ \begin{align} I = \int u\ \textrm dv &= uv - \int v\ \textrm du \\ &= \frac1x\cdot x - \int x\left(\frac{-1}{x^2}\right) \textrm dx \\ &= 1 + \int\frac1x\ \textrm dx \\ &= 1 + I \end{align} $$

Therefore $0 = 1$, so clearly $1 = 2$.
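
A short SymPy sketch (assuming SymPy; illustration only) showing that the two antiderivatives differ by exactly the constant $1$ that the indefinite integral quietly absorbs:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
u = 1 / x                                              # with dv = dx, so v = x
I_direct = sp.integrate(1 / x, x)                      # log(x)
I_parts = u * x - sp.integrate(x * sp.diff(u, x), x)   # uv - ∫ v du = 1 + log(x)
print(sp.simplify(I_parts - I_direct))                 # 1: the constant the argument discards
```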

Théophile
  • 25,974
  • 4
  • 36
  • 53
4

Here is one of my favorites.

Proof that $1=0$

Let's consider for real $x$ the function $f(x)=xe^{-x^2}$. Note, the following integral representation of $f$ is valid (substitute: $u=x^2/y$). \begin{align*} \int_{0}^{1}\frac{x^3}{y^2}e^{-x^2/y}\,dy =\left[xe^{-x^2/y}\right]_0^1 =xe^{-x^2} \end{align*}

We obtain for all $x$ the following relationship

\begin{align*} e^{-x^2}(1-2x^2)&=\frac{d}{dx}\left(xe^{-x^2}\right)\\ &=\frac{d}{dx}\int_0^1\frac{x^3}{y^2}e^{-x^2/y}\,dy\\ &=\int_0^1\frac{\partial}{\partial x}\left(\frac{x^3}{y^2}e^{-x^2/y}\right)\,dy\\ &=\int_0^1e^{-x^2/y}\left(\frac{3x^2}{y^2}-\frac{2x^4}{y^3}\right)\,dy \end{align*}

and observe by setting $x=0$ the left-hand side is one while the right-hand side is zero. \begin{align*} \text{LHS: }\qquad e^0(1-0)&=1\\ \text{RHS: }\qquad \int_0^1 0\,dy&=0 \end{align*}

Note: This example can be found in Counterexamples in Analysis by B.R. Gelbaum and J.M.H. Olmsted.
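
A numerical cross-check with `scipy.integrate.quad` (assuming SciPy is installed; just an illustration): differentiating under the integral sign is fine for $x\neq 0$ and breaks only at $x=0$:

```python
import math
from scipy.integrate import quad

def direct(x):
    return math.exp(-x * x) * (1 - 2 * x * x)   # d/dx of x*exp(-x^2), computed honestly

def swapped(x):
    # the right-hand side: integrate the x-partial-derivative over y in (0, 1)
    g = lambda y: math.exp(-x * x / y) * (3 * x**2 / y**2 - 2 * x**4 / y**3)
    return quad(g, 0, 1)[0]

for x in (1.0, 0.5, 0.0):
    print(x, round(direct(x), 6), round(swapped(x), 6))
# The two columns agree for x != 0 but give 1 and 0 at x = 0.
```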

epi163sqrt
  • 94,265
  • 6
  • 88
  • 219
3

$(1 - x)(1 + x + x^2 + \dots ) =$

$(1 + x + x^2 + \dots ) + (-x - x^2 - x^3 - \dots) = 1 + (x - x) + (x^2 - x^2) + \dots = 1$ so

$1 + x + x^2 + \dots = \frac{1}{1 - x}$

Let $x = -1$:

$ 1 - 1 + 1 - 1 + 1 - 1 + \dots = \frac{1}{1 -(-1)} = \frac 12$

but clearly $1 - 1 + 1 - 1 + \dots = (1-1) + (1-1) + \dots = 0$.

So $0 = \frac 12$ (and also $1$, and $-1$).
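
A small Python sketch (illustration only): at $x=-1$ the partial sums oscillate forever, and only their Cesàro averages approach $\frac12$:

```python
partials, s = [], 0
for n in range(20):
    s += (-1) ** n              # partial sums of 1 - 1 + 1 - 1 + ...
    partials.append(s)

cesaro = [sum(partials[:k + 1]) / (k + 1) for k in range(len(partials))]
print(partials[-4:])                        # [1, 0, 1, 0]: no limit, so no value to plug into 1/(1-x)
print([round(c, 3) for c in cesaro[-4:]])   # ~ 0.5: only the *averaged* sums approach 1/2
```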

fleablood
  • 1
  • 5
  • 39
  • 125
3

Although it's not quite what you're looking for, the Banach-Tarski paradox shows that, in a certain sense, $1$ does equal $2$:

Given a solid ball in $\mathbb R^3$, there is a way to decompose the ball into $5$ disjoint sets, move them by rigid motions, and obtain two solid balls of the same radius.

The catch is that these are non-measurable sets (and, of course, you need the Axiom of Choice).

Robert Israel
  • 416,382
  • 24
  • 302
  • 603
  • You can make this a "proper" $1=2$ proof by measuring the final sets (which are measurable). We can find sets $A_1,A_2,B_1,B_2,B_3$ such that $A_1\cup A_2=B_1\cup B_2\cup B_3=S$ and $A_1'\cup A_2'\cup B_1'\cup B_2'\cup B_3'=S$ (where $S$ is a sphere of measure $1$ and all the unions are disjoint, and the primes indicate euclidean transformations); then $$1=\mu(S)=(\mu(A_1)+\mu(A_2))+(\mu(B_1)+\mu(B_2)+\mu(B_3))=\mu(S)+\mu(S)=2.$$ (Of course the stuff involving $\mu(A_1)$ is nonsense, but it looks plausible.) – Mario Carneiro Oct 22 '15 at 14:11
2

Let $x=(x_{ij})$ be the infinite matrix (where omitted entries are $0$), $$x = \begin{pmatrix}1 \\ -1 & 1 \\ &-1 & 1 \\ &&-1 & 1\\ &&&\ddots& \end{pmatrix}$$ i.e. $x_{ij} = \Bbb 1_{i=j} - \Bbb 1_{i=j+1}$. Here $i$ is the row, $j$ is the column. Then $$∑_{ij} x_{ij}=∑_j\left(∑_ix_{ij}\right) = ∑_j 0 = 0$$ While also $$∑_{ij} x_{ij}=∑_i\left(∑_jx_{ij}\right) = ∑_i\Bbb 1_{i=1} = 1$$

So $1=2$.


For the interested, this is a failure of Fubini's theorem for sums: the double series is not absolutely convergent, so the two iterated sums need not agree.
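
A NumPy sketch (assuming NumPy is available; illustration only) with a finite truncation of the matrix shows where the stray $1$ goes:

```python
import numpy as np

N = 6
x = np.eye(N) - np.eye(N, k=-1)   # 1 on the diagonal, -1 just below it

print(x.sum(axis=1))   # row sums:    [1. 0. 0. 0. 0. 0.] -> summing rows first gives 1
print(x.sum(axis=0))   # column sums: [0. 0. 0. 0. 0. 1.] -> every *complete* column is 0;
                       # in the infinite matrix all columns are complete, so columns first gives 0
```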

Calvin Khor
  • 31,342
  • 5
  • 40
  • 81
2

I always liked the one that "proves" $1+2+3+4+\dots=-\frac{1}{12}$


https://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B%AF

Albert Renshaw
  • 1,644
  • 2
  • 17
  • 34
1

Let $U_n$ be the probability measure on $[0,1]$ that is uniform on the finite set $\{0,\frac{1}{n},…,\frac{n-1}{n},1\}$, i.e. $$ U_n\left( A \right) := \frac{1}{n+1}\left| \left\{ k∈\{0,…,n\} : \frac{k}{n} ∈ A\right\}\right|$$

Of course, as you send $n→∞$, $U_n$ tends to the (continuous) uniform measure $U_{[0,1]}$ on $[0,1]$: $$U_n→ U_{[0,1]}$$ Note that if $Q=\Bbb Q∩ [0,1]$, then $Q$ is measurable and $U_{[0,1]}$-null, so $$1 = U_n(Q) → U_{[0,1]}(Q) = 0$$ Hence $2=1$.
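
A small Python sketch (illustration only) of what the weak convergence does and does not give: averages of a continuous function over the grid converge to its integral, but $\Bbb 1_{\Bbb Q}$ is not continuous and equals $1$ at every grid point $k/n$:

```python
def expectation(f, n):
    """Integral of f against U_n: the average of f over the grid {0, 1/n, ..., 1}."""
    return sum(f(k / n) for k in range(n + 1)) / (n + 1)

for n in (10, 100, 1000):
    print(n,
          round(expectation(lambda t: t ** 2, n), 4),   # -> 1/3, the U_[0,1] integral of t^2
          expectation(lambda t: 1, n))                  # 1_Q is 1 at every (rational) grid point, so always 1
```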

Calvin Khor
  • 31,342
  • 5
  • 40
  • 81