51

Using mathematical induction, I have proved that

$$1+ 5+ 9 + \cdots +(4n-3) = 2n^{2} - n$$

for every integer $n > 0$.

I would like to know whether there is another way of proving this result without using PMI. Is there a geometric way to prove it? Also, are there examples of problems where PMI is our only option?

Here is the way I have solved this using PMI.

Base Case: since $1 = 2 · 1^2 − 1$, the formula holds for $n = 1$.

Assuming that the formula holds for some integer $k ≥ 1$, that is,

$$1 + 5 + 9 + \dots + (4k − 3) = 2k^2 − k$$

I show that

$$1 + 5 + 9 + \dots + [4(k + 1) − 3] = 2(k + 1)^2 − (k + 1).$$

Now, using the induction hypothesis, I observe:

$$ \begin{align} 1 + 5 + 9 + \dots + [4(k + 1) − 3] & = [1 + 5 + 9 + \dots + (4k − 3)] + 4(k + 1) −3 \\ & = (2k^2 − k) + (4k + 1) \\ & = 2k^2 + 3k + 1 \\ & = 2(k + 1)^2 − (k + 1) \end{align} $$

$\diamond$
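(As a quick numerical sanity check of the identity itself, before worrying about what counts as a proof, one can test it for small $n$; the short Python sketch below is only an illustration, not an argument.)

```python
# Spot check of 1 + 5 + 9 + ... + (4n-3) = 2n^2 - n for small n.
# This illustrates the claim numerically; it is not a proof.
for n in range(1, 101):
    lhs = sum(4*k - 3 for k in range(1, n + 1))
    assert lhs == 2*n**2 - n, (n, lhs)
print("verified for n = 1..100")
```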

Martin Sleziak
Chan C
  • For everybody who wonders what it means, like I did: Google told me PMI stands for "Principle of Mathematical Induction". – JiK May 28 '16 at 17:36
  • Not really what the OP wants, probably, but it might be possible to determine whether this result *requires* induction by figuring out whether it can be proved in [Robinson arithmetic](https://en.wikipedia.org/wiki/Robinson_arithmetic). – Nate Eldredge May 28 '16 at 17:43
  • @Nate: It surely doesn't; e.g. it should be straightforward to axiomatically define a summation operator in fairly weak contexts for which most of these answers are a simple manipulation of identities that prove the theorem for anything satisfying the definition. Induction, if it even comes into play, would be relegated to merely proving one particular interpretation is a model of the axioms. –  May 29 '16 at 01:41
  • That seems delicate. You can develop, for $n$ up to any constant bound, a theory of finite sums of $x\choose n$, in a finitely axiomatized extension of Robinson arithmetic. Or an equivalent theory of solutions of finite difference equations $\Delta^k = 0$. But we are usually presented with the polynomials written in the basis of powers $x^n$ and the equivalence between that and the basis that works for finite-difference problems seems to use principles of uniqueness of summation that are a form of induction. @Hurkyl – zyx May 29 '16 at 03:49
  • It seems to me that the LHS cannot be defined without assuming something equivalent to the well-ordering principle (that < is a well-order on N). – DanielWainfleet May 30 '16 at 03:13
  • @ChanC, you may want to weigh in on the comments here. The discussion going on by Nate, Hurkyl and zyx is "correct," as in it is the discussion of how to answer the exact question you asked. Induction is a very fundamental part of our understanding of mathematics, and there is a great deal of work done on trying to work without it. You may want to weigh in on whether this is actually the direction you wanted the discussion to go. – Cort Ammon May 30 '16 at 16:39
  • The principle of math induction is true. Why are you worried about using induction? – Jacob Wakem Jun 11 '16 at 21:44
  • See also [Proof for formula for sum of sequence $1+2+3+\ldots+n$?](http://math.stackexchange.com/questions/2260/proof-for-formula-for-sum-of-sequence-123-ldotsn) and other questions [linked there](http://math.stackexchange.com/questions/linked/2260). – Martin Sleziak Jun 30 '16 at 08:36

15 Answers

206

Follow the rainbow!

$$ \Large \color{PaleVioletRed}1 + \color{DarkViolet}5 + \color{DodgerBlue}9 + \dots + \color{LightCoral}{(4n-3)} = n(2n-1) = 2n^2-n$$

[Figure: an $n\times(2n-1)$ rectangle of unit squares in which the $k$-th arch-shaped layer of $4k-3$ squares is drawn in its own colour, matching the colours in the formula above.]
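One way to read the picture rigorously (in the spirit of Peter LeFanu Lumsdaine's comment below) is as a partition of an $n\times(2n-1)$ grid into arch-shaped layers. A small Python sketch of that bookkeeping, assuming rows $i=1,\dots,n$ and columns $j=-(n-1),\dots,n-1$ with cell $(i,j)$ assigned to layer $\max(i,|j|+1)$ (the helper name `layer_sizes` is just for illustration):

```python
# Count the cells in each "arch" layer of the n x (2n-1) rectangle.
# Rows are i = 1..n, columns are centred at 0: j = -(n-1)..(n-1).
# Layer k should contain 4k - 3 cells, and the layers should tile the rectangle.
def layer_sizes(n):
    counts = [0] * (n + 1)                  # counts[k] = number of cells in layer k
    for i in range(1, n + 1):
        for j in range(-(n - 1), n):
            counts[max(i, abs(j) + 1)] += 1
    return counts[1:]

for n in range(1, 30):
    sizes = layer_sizes(n)
    assert sizes == [4*k - 3 for k in range(1, n + 1)]
    assert sum(sizes) == n * (2*n - 1)
```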

Lynn
  • Quite the most unexpectedly beautiful answer I’ve seen on any Stackexchange site, ever! Whimsical, pretty, and mathematically lucid, all at the same time. – Peter LeFanu Lumsdaine May 28 '16 at 22:30
  • A fantastic demonstration that geometric and pictorial methods are still a very concise and complete way of proving results. – Nij May 29 '16 at 08:07
  • I agree that this is a neat, concise answer. However, for it to be an actual proof I'm sort of missing an argument as to how we know that the geometric interpretation is even correct (for arbitrary `n`). – Ingo Bürk May 29 '16 at 14:56
  • @IngoBürk I sketch how to mechanically derive such proofs in my answer, using a 2-dimensional form of telescopy. This allows us to view the rectangle (or its area) as the sum of the successive differences of the preceding rectangles, so we can view each rectangle as inductively built layer-by-layer - like a 2D printer building it by applying each differential (border) layer to increment rectangle $R_n$ to $R_{n+1}$. The incremental (border) layers are the arch-shaped regions formed by the same-colored squares above (of area $4n-3$). – Bill Dubuque May 29 '16 at 16:12
  • @IngoBürk: I would read it as describing a partition of the set $[1…n] \times [1…2n-1]$, along with a numbering of the partition classes, such that the $i$th class contains $4i-3$ elements. (Here $[a…b]$ denotes the “integer interval” $\{ x \in \mathbb{Z}\ |\ a \leq x \leq b \}$.) Filling in the details of this sort of argument is an interesting and non-trivial exercise if you haven’t before; once you’ve done it a couple of times, it quickly becomes routine. – Peter LeFanu Lumsdaine May 30 '16 at 10:16
  • @Lynn Out of curiosity, how did you produce the image? – ColinK May 30 '16 at 12:09
  • @ColinK: Google Spreadsheets! http://puu.sh/pagwd/0fd85b246c.png – Lynn May 30 '16 at 13:15
  • @PeterLeFanuLumsdaine Could you please provide details of the alternative interpretation that you have in mind. I cannot (uniquely) infer them from what you wrote. – Bill Dubuque May 30 '16 at 18:29
  • @BillDubuque: ok! Actually I’ll deviate slightly from my previous comment — instead of $[1…n] \times [1…2n-1]$, set $m = n-1$ and work with $X = [0…m] \times [-m…m]$. Take the partition on $X$ induced by the function $f : X \to [0…m]$ sending $(i,j)$ to $\max(|i|,|j|)$. Now $|f^{-1}(k)| = 4k+1$ (one can enumerate it explicitly), so $n(2n-1) = (m+1)(2m+1) = |X| = \sum_{k=0}^m |f^{-1}k| = \sum_{k=0}^{m} 4k+1 = \sum_{k=1}^n 4k - 3$. $\qquad \square$ – Peter LeFanu Lumsdaine May 30 '16 at 19:17
  • I've been waiting for a day where I can actually understand one of the answers given on mathematics. That day has finally arrived. – Pharap May 31 '16 at 01:25
  • @IngoBürk +1 I decided it was simplest to say the top is $2n-1$ and the remaining sides are $n-1$, leading to $2n-1+2(n-1)=4n-3$. – Mark Hurd May 31 '16 at 03:33
31

Below we show how to give a rigorous interpretation of some of the other "proof by picture" answers. Namely, we describe how a $2$-dimensional form of telescopy allows us to view a sequence of rectangles as being built-up layer-by-layer from successive differences of prior rectangles - as if built by a $2$-D printer. Further, applying a simple product rule for differences allows us to simplify these differences into a form more amenable to geometric visualization. This yields a simple and purely mechanical way to generate such pictorial proofs. Moreover, it provides a rigorous interpretation of such proofs using standard techniques for telescopic sums. Finally we mention an analogy with calculus: such telescopic summation is a discrete analog of the Fundamental Theorem of Calculus (but you don't need to know any calculus to understand the much simpler discrete analog that we describe below).

Let $\,f_n,\, g_n\,$ be sequences of naturals and $\bar R_n$ a sequence of $f_n\times g_n$ rectangles of area $ R_n = f_n g_n.$ Below we recall the product rule for the difference operator $\,\Delta h_n := h_{n+1} - h_n$ then we apply the rule to the special case $\,f_n = n,\ g_n = 2n\!-\!1\,$ in Lynn's picture below.

$$ \large \color{PaleVioletRed}1 + \color{DarkViolet}5 + \color{DodgerBlue}9 + \dots + \color{LightCoral}{(4n-3)}\, =\, n(2n-1) = 2n^2-n$$

[Figure: Lynn's coloured rectangle from the answer above, reproduced here for reference.]

Note: below we give related objects the same color (not related to the coloring above).

$ \begin{eqnarray}{\bf Product\ Rule}\qquad\ \: &&\Delta(f_n g_n ) &=&\ f_{n+1}\ \ \ \Delta g_n &+&\qquad\ \Delta f_n\cdot g_n\\[.3em] {\bf Proof}\quad\ \ \,f_{n+1}\ g_{n+1}\ \ &-&\ \ f_n\ g_n &=&\ f_{n+1}(g_{n+1}\!-g_n) &+&\, (f_{n+1}\!-f_n)\, g_n \\[.5em] {\rm e.g.}\quad\, \smash[b]{\underbrace{(n\!+\!1)(2n\!+\!1)}_{\large R_{\,\Large{n+1}}}}&-&\smash[b]{\underbrace{n(2n\!-\!1)}_{\large R_{\,\Large{n}}}} &=&\ \smash[b]{\underbrace{(\color{#c00}{n\!+\!1})\cdot 2}_{\large \rm arch\ \color{#c00}{sides}}} &+&\quad\ \smash[b]{\underbrace{1\,\cdot\, (\color{#0a0}{2n\!-\!1})}_{\large\rm arch\ \color{#0a0}{top}}}\ \ \ {\rm in\ picture}\\[.2em] \phantom{} \end{eqnarray}$

So to increment an $\,n\!\times\! (2n\!-\!1)$ rectangle $\bar R_n$ to its successor $\bar R_{n+1}$ of size $\,(n\!+\!1)\!\times\!(2n\!+\!1)$ we can add $\,\color{#c00}{n\!+\!1}$ squares on each $\rm\color{#c00}{side}$ of $\bar R_n$ and $\,\color{#0a0}{2n\!-\!1}\,$ on $\rm\color{#0a0}{top}$ of $\bar R_n.\,$ For example, let $\,n=3.\,$ In the picture, to increment the $3\times 5$ blue $\bar R_3$ to the $4\times7$ green $\bar R_4$ rectangle, the rule says we can append $\,\color{#c00}{n\!+\!1} = 4$ green squares to each side of blue $\bar R_3,\,$ plus $\color{#0a0}{2n\!-\!1} = 5$ green squares on top of blue $\bar R_3$ yielding the $\,7\times 4$ green arch - which constitutes the (area) difference between the green $4\times7\ (\bar R_4)$ and blue $3\times 5\ (\bar R_3)$ rectangles. Thus the $\rm\color{#c00}{sides}$ and $\rm\color{#0a0}{top}$ of the arch are precisely the summands in the product rule, so the rule shows how to construct successive differences of $2$-D rectangles $f_n\times g_n$ via differences $\,\Delta f_n,\, \Delta g_n$ of their $1$-D sides.

Thus we can view each rectangle as being built up layer-by-layer from the differences (= arches) of successive prior rectangles (the separately colored arches in the picture). Dynamically, we can think of a $2$-D printer building the rectangle arch-layer-by-layer (similar to a $3$-D printer).

The sought equality follows by computing the $n(2n\!-\!1)$ area of rectangle $\bar R_n$ a second way, using said layer-by-layer inductive construction of $\bar R_n.$ By the rule, the difference arches have area $\,R_{k+1}\!-R_k = 2(\color{#c00}{k\!+\!1)} + \color{#0a0}{2k\!-\!1} = 4k\!+\!1.\,$ Adding these to initial area $= 1_{\phantom{\frac{i}i}}\!\!\!$ of $\,1\!\times\! 1$ $\,\bar R_1$

$\qquad\qquad\qquad\begin{eqnarray} {\underbrace{1}_{\color{#c0f}{\large R_{\Large1}}} +\!\!\! \underbrace{5}_{\large{\color{#48f}{R_{\Large 2}}-\color{#c0f}{R_{\Large 1}}}}\!\! +\! \underbrace{9}_{\large{R_{\Large 3}-\color{#48f}{R_{\Large 2}}}}\!\! +\, \cdots + \!\!\underbrace{4n\!-\!3}_{\large{R_{\Large n}-R_{\Large{n-1}}}} \!=\, \smash{\sum_{\large k\,=\,0}^{\large{{n-1}}}\,(4k\!+\!1)}}\\[-.2em] \end{eqnarray}$

is an equal value for the area of $\,n\!\times\!(2n\!-\!1)\,$ rectangle $\,\bar R_n$. This is the sought equality.

The above LHS sums to $ R_n$ since it is a telescoping sum so $\rm\color{#c0f}{successive}\ \color{#48f}{terms}$ cancel out, which becomes clearer if we reorder the sum by rewriting $\,R_{k+1}-R_k\,$ as $\ {-}R_k\! + R_{k+1}$ yielding

$\qquad \begin{eqnarray}{\underbrace{\color{#c0f}{R_1}\! + (\color{#c0f}{-R_1}}_{\large =\ 0}\! + \underbrace{\color{#48f}{R_2}) + (\color{#48f}{-R_2}}_{\large =\ 0}\! +\underbrace{ R_3) + (-R_3}_{\large =\ 0}\! +\cdots +\underbrace{R_{n-1}) +(-R_{n-1}}_{\large =\ 0}\! + R_n)\, =\, R_n}\\[-.1em] \end{eqnarray} $

Here lies the hidden use of induction. Though such telescopic cancellation may seem "obvious", it does require rigorous proof. But the inductive step is easy: adding $\, -R_n\! + R_{n+1}$ to both sides of the above statement $P(n)\,$ yields $P(n\!+\!1)\ $ so $\:P(n)\,\Rightarrow\,P(n\!+\!1)\,$ [to be completely rigorous we should also eliminate the ellipses, replacing them by a more precise summation operator].

Many inductive proofs have a very natural telescopic form, and expressing them in this format often leads to great simplification. You can find many examples of telescopy in my prior answers (both in additive and multiplicative form).

Finally we briefly mention that analogy with calculus. The above remarks on telescopic sums show that $\,f(n)\,$ is the sum of its differences. This is the difference analog of the Fundamental Theorem of Differential Calculus, i.e. we have the analogous theorems

$$\begin{eqnarray} \sum_{k=0}^{n-1}\ \Delta_k f(k)\ \, &=&\ f(n)-f(0)\\[.2em] \int_0^x\! D_t f(t)\, dt &=&\ f(x) - f(0) \end{eqnarray}$$

This is but one small part of the calculus of finite differences, a discrete analog of differential calculus. Many things familiar from calculus have discrete difference analogs, and their proofs are often much simpler (such as the difference product rule above). For example, to verify that $F(n) = \sum_{k=0}^{n-1} f(k)$ it suffices to show that they have the same difference and same initial condition, i.e. $\,F(n\!+\!1)-F(n) = f(n)\,$ and $F(0) = 0$. When, as in the OP, both $F$ and $f$ are polynomials this reduces to simply checking the equality of two polynomials, which can be done purely mechanically (so no intuition or visual analogies are needed). E.g. for the OP we have $\,F(n) = n(2n\!-\!1)$ so the proof reduces to verifying $\,F(n\!+\!1)-F(n) = 4n\!+\!1,\,$ and $\,F(0)= 0,\,$ which is trivial polynomial arithmetic - so trivial we can program calculators to perform all such proofs. In fact there are general summation algorithms due to Karr, Gosper and others that are discrete analogs of the Risch-Bronstein integration algorithms implemented in computer algebra systems such as Macsyma, Maple and Mathematica.
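For instance, the mechanical difference check just described can be carried out in a computer algebra system; a minimal SymPy sketch (one possible choice of calls, not the only one):

```python
# Verify F(n+1) - F(n) = 4n + 1 and F(0) = 0 for F(n) = n(2n-1),
# i.e. the purely mechanical difference-calculus check described above.
from sympy import symbols, simplify, summation

n, k = symbols('n k', integer=True)
F = n*(2*n - 1)

assert simplify(F.subs(n, n + 1) - F - (4*n + 1)) == 0
assert F.subs(n, 0) == 0

# A CAS can also produce the closed form directly (Gosper-style summation):
assert simplify(summation(4*k - 3, (k, 1, n)) - (2*n**2 - n)) == 0
```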

Therefore such difference calculus proofs remain completely mechanical even for higher degree polynomials, but generalizing the geometrical picture-based proofs to higher dimensions will prove much more difficult because we typically lack (real-world) intuition for high dimensional spaces. So difference calculus serves to algebraicize these geometrical proofs in a "calculus" that is so mechanical that it can be performed by a computer - freeing our intuition to work on less trivial matters.

Bill Dubuque
  • I didn't downvote, but I don't understand what's going on here. What are $f$ and $g$? What is $R$? Why is $n+1$ red? Are you proving the product rule, or something else? Those are some things that confused me, if you're interested. – David May 29 '16 at 04:54
  • I didn't downvote either, but this post doesn't appear to provide an answer to the OP's question - OP asked "How can one prove these two functions are equal, besides induction", and the post appears to give one proof, which is explicitly inductive. – psmears May 29 '16 at 08:12
  • @psmears Many students make the mistake of thinking that such "proofs by picture" are *not* inductive. The point of the answer is to (1) show how to formalize the proof, (2) show that the resulting formal proof is in fact inductive, (3) show how to *mechanically* derive such proofs using 2-D telescopy - by way of the product rule for differences. – Bill Dubuque May 29 '16 at 16:01
  • @David I greatly expanded the answer. Hopefully it is now more clear. Thanks very much for your feedback. I welcome further feedback to help make these old, beautiful ideas as accessible as possible. – Bill Dubuque May 29 '16 at 20:44
  • @BillDubuque: I'm not disputing that there's a lot of useful information in the answer (and indeed, as I said, I didn't downvote!) - I'm just pointing out that other people may do so as there is no direct answer to the question asked. – psmears May 30 '16 at 15:01
  • @psmears But there is an answer, namely the proof by (explicit) telescopy described in detail. As is clear by the (likely hot-list) votes on the other answers, many readers don't consider proofs like that to be (explicit) induction, even if cognoscenti do. – Bill Dubuque May 30 '16 at 17:07
  • @BillDubuque I've found your answers about telescopy to be quite intriguing. I am wondering, however, if there is an easy, algorithmic way to compute the function whose forwards difference yields 4k+1 (here $2k^2-k$), the same way we do with integration. I notice it is very close to the "integral" from 0 to n with respect to k which yields $2n^2+n$. However, this doesn't work very well for more complicated formulas, although I noticed it is often semi-close to the real solution. In other words, can telescopy be used without knowing the closed form in the first place ? – Evariste Jun 01 '16 at 21:58
  • @Evariste Yes, there are known algorithms, e.g. search on: "summation in finite terms Karr" to find links to the literature. – Bill Dubuque Jun 01 '16 at 22:03
30

$$\sum\limits_{k = 1}^n {\left( {4k - 3} \right)} = 4\sum\limits_{k = 1}^n k - 3 \cdot \sum\limits_{k = 1}^n 1 = 4\frac{{n\left( {n + 1} \right)}}{2} - 3 \cdot n = 2n^2 - n$$

Mohammad W. Alomari
23

Hint:

Let $$\begin{alignat}{10}S_n&=&1+&&5+&&9+&\cdots+&(4n-3)\\&=&(4n-3)+&&(4n-7)+&&(4n-11)+&\cdots+&1.\end{alignat}$$ Thus $$2S_n=(4n-2)+(4n-2)+(4n-2)+\cdots+(4n-2).$$


Additional:

As pointed out in the comments, the proof above relies on mathematical induction. So I want to explain what role induction plays in the proof.

For a given number $n$, say $10000$, the proof is valid, even though it involves ellipses, since one could write down all of the omitted numbers.
Now we claim that the proposition is true for all natural numbers. This would be difficult to prove without using mathematical induction, although I have no idea whether this is possible. Intuitively, we can see that this proof is valid for all natural numbers, but how do we prove that $$\mathbb{Z}_+=\{n\in\mathbb{N}:S_n=2n^2-n\}?$$

One can start from the Peano axioms or the axiom of infinity, or other equivalent assumptions, and easily prove that the set on the right-hand side is $\mathbb{Z}_+$.

I don't think there is any proposition about every natural number that we can prove without mathematical induction, since whenever we talk about the natural numbers, we are already using mathematical induction.

Eclipse Sun
  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/40440/discussion-on-answer-by-eclipse-sun-besides-mathematical-induction-how-else-can). – Daniel Fischer May 28 '16 at 20:01
21

A geometrical proof. A rectangle formed by a sum of gnomons.

Intelligenti pauca
19

You can use the “trick” attributed to Gauss for finding the sum of the first $n$ integers. Call your sum $S_n$, and consider this.

$$\begin{aligned} S_n&= 1&&+ 5&&+\cdots&&+(4n-7)&&+(4n-3)\\ S_n&= (4n-3)&&+(4n-7)&&+\cdots&&+5&&+1 \end{aligned}$$ Adding the two copies term by term gives $$2S_n=\underbrace{(4n-2)}_{1^{\text{st}}}+\underbrace{(4n-2)}_{2^{\text{nd}}}+\underbrace{\cdots}_{3^{\text{rd}}\text{ to }(n-2)^{\text{th}}}+\underbrace{(4n-2)}_{(n-1)^{\text{th}}}+\underbrace{(4n-2)}_{n^{\text{th}}},$$ with $n$ terms in all. Thus $2S_n$ equals $n(4n-2)$, and $S_n=2n^2-n$.
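For a concrete instance, take $n=4$, so $S_4=1+5+9+13$; the pairing then reads

$$2S_4=(1+13)+(5+9)+(9+5)+(13+1)=4\cdot 14=56,\qquad\text{so } S_4=28=2\cdot 4^2-4.$$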

Steve Kass
  • The ellipses amount to using induction. – Bill Dubuque May 28 '16 at 02:06
  • @Bill: No, they really don't. The places where you might be able to play that game are the relevant form of associativity and the conversion between repeated addition and multiplication. –  May 28 '16 at 05:16
  • @Hurkyl it's been a while, and it's not quite fresh in my mind, but doesn't the commutativity of addition (which is used here, I believe) for an arbitrary number of summands rely on induction? – Clement C. May 28 '16 at 08:47
  • @ClementC. I don't know whether it is avoidable in some way, but the usual proof of commutativity of addition (for two summands) on $\mathbb{N}$ is by induction. So avoiding all vestiges of induction may be impossible, and if it is possible, it's very cumbersome. – Daniel Fischer May 28 '16 at 09:28
  • After all, the construction of $\mathbb{N}$ is inductive! (at least the Peano one) So it's not surprising that we get to induction somewhere – marmistrz May 28 '16 at 12:30
  • Remark that even textbooks aimed at computer scientists don't accept the Gauss-trick argument in this answer as *rigorous,* e.g. from p.7 of the famous textbook [Concrete Mathematics:](http://goo.gl/Qdy2HM) "As experts, we might be satisfied with this derivation and consider it a proof, even though we waved our hands a bit when doing the unfolding and reflecting. But **students of mathematics** should be able to meet **stricter** standards; so it’s a good idea to construct a rigorous proof by induction. The key induction step is..." (my emph.) – Bill Dubuque May 28 '16 at 22:45
  • @Clement: The usual formulation of summation and proof of that property does depend on induction... but then it wasn't *trying* to avoid induction. This doesn't preclude other formulations from avoiding it; e.g. one definition of summation for cardinal numbers is just the (cardinal class of the) union of an (unordered) collection of disjoint sets; you get commutativity for free! –  May 28 '16 at 23:57
  • @Hurkyl fair point. Thanks! – Clement C. May 29 '16 at 09:42
  • @Hurkyl, how do you show the sum in reverse order is the same without induction? If the theory cannot prove basic properties of whatever summation formalism you want to axiomatize, it is not doing the summation without induction, but only describing a computational routine for the summation (of fixed degree polynomials). That is something that can probably be done in very weak systems, much like describing the solution set of Fermat's Last Theorem, which is quite different from the system being able to derive FLT. – zyx May 31 '16 at 04:49
  • @zyx: With the definition of summing cardinals I mentioned? There's nothing to prove, since the sum is defined for a set of terms, so there *isn't* an order. Extending this to a sequence of terms just means distinguishing terms forgetting the ordering; e.g. $$ \sum_{k=1}^n S_k = \sum\{ \{k\} \times S_k \mid k \in [1..n] \} =\{ (\{k\} \times S_k, x) \mid k \in [1..n], x \in \{k\} \times S_k \}$$ where the $S_k$ are sets representing the cardinals to be summed. The reordering itself induces an explicit bijection between the sets representing the sum of a sequence and its reordering. –  May 31 '16 at 17:54
10

Here is a method based upon formal power series. We thereby encode sequences $\left(a_n\right)_{n\geq 0}$ by formal power series $\sum_{n=0}^\infty a_nx^n$ and show equality of sequences by showing equality of the corresponding power series.

Using the summation symbol $\sum$ we claim

The following is valid \begin{align*} \sum_{j=1}^{n}\left(4j-3\right)=2n^2-n\qquad\qquad n\geq 1 \end{align*}

We encode the sequence
\begin{align*} \left(\sum_{j=1}^{n}\left(4j-3\right)\right)_{n\geq 1}=\left(1,6,15,\cdots\right)\quad\rightarrow\quad \sum_{n=1}^\infty\left(\sum_{j=1}^n\left(4j-3\right)\right)x^n=\color{blue}{1}x+\color{blue}{6}x^2+\color{blue}{15}x^3+\cdots \end{align*} by the power series and in the same way we encode \begin{align*} \left(2n^2-n\right)_{n\geq 1}=\left(1,6,15,\cdots\right) \quad\rightarrow\quad\sum_{n=1}^{\infty}\left(2n^2-n\right)x^n=\color{blue}{1}x+\color{blue}{6}x^2+\color{blue}{15}x^3+\cdots \end{align*} and show the equality of the corresponding power series.

We obtain \begin{align*} \sum_{n=1}^\infty\left(\sum_{j=1}^n\left(4j-3\right)\right)x^n &=\sum_{n=1}^\infty\left(\sum_{j=0}^{n-1}\left(4j+1\right)\right)x^n\tag{1} \\ &=\sum_{n=0}^\infty\left(\sum_{j=0}^n\left(4j+1\right)\right)x^{n+1} \tag{2}\\ &=\frac{x}{1-x}\sum_{n=0}^\infty\left(4n+1\right)x^n\tag{3}\\ &=\frac{x}{1-x}\left(4\sum_{n=0}^\infty nx^n+\sum_{n=0}^\infty x^n\right)\tag{4}\\ &=\frac{x}{1-x}\left(4(xD_x)\sum_{n=0}^\infty x^n+\frac{1}{1-x}\right)\tag{5}\\ &=\frac{4x^2}{1-x}D_x\left(\frac{1}{1-x}\right)+\frac{x}{(1-x)^2}\tag{6}\\ &=\frac{x(3x+1)}{(1-x)^3}\tag{7}\\ \end{align*}

Comment:

  • In (1) we shift the index $j$ by one to start with $j=0$

  • In (2) we shift the index $n$ by one to start with $n=0$

  • In (3) we use the nice fact that summing up elements is encoded in formal power series by multiplication with $\frac{1}{1-x}$. This is due to the Cauchy product formula.

  • In (4) we do some rearrangement to be prepared for further steps

  • In (5) we use the formula for the geometric power series and also that differentiating a power series and multiplication with $x$ results in \begin{align*} xD_x \sum_{n=0}^\infty a_n x^n = \sum_{n=0}^\infty na_nx^n \end{align*} Here we denote with $D_x:=\frac{d}{dx}$ the differential operator.

  • In (6) we use the formula for the geometric power series again and multiply out

  • In (7) we perform the differentiation and simplify the expression

on the other hand we obtain with similar reasoning \begin{align*} \sum_{n=1}^\infty\left(2n^2-n\right)x^n &=2\sum_{n=1}^\infty n^2x^n-\sum_{n=1}^\infty nx^n\\ &=2(xD_x)^2\sum_{n=1}^\infty x^n-(xD_x)\sum_{n=1}^\infty x^n\\ &=2(xD_x)^2\frac{x}{1-x}-(xD_x)\frac{x}{1-x}\\ &=\frac{2x(1+x)}{(1-x)^3}-\frac{x}{(1-x)^2}\\ &=\frac{x(3x+1)}{(1-x)^3}\\ \end{align*} and the claim follows.


Note: Of course, applying this method here is taking a sledgehammer to crack a nut. But these techniques are also useful in more difficult contexts. See e.g. this answer.
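As a quick cross-check of the final rational function, one can expand it with a computer algebra system and compare coefficients against $2n^2-n$; a small SymPy sketch (illustrative only):

```python
# Expand x(3x+1)/(1-x)^3 and compare its coefficients with 2n^2 - n.
from sympy import symbols, series, Poly

x = symbols('x')
N = 10
expansion = series(x*(3*x + 1)/(1 - x)**3, x, 0, N).removeO()
coeffs = Poly(expansion, x).all_coeffs()[::-1]   # coefficient of x^n sits at index n

for n in range(1, N):
    assert coeffs[n] == 2*n**2 - n
```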

Pang
epi163sqrt
  • Is this really easier than using induction? – Marc van Leeuwen May 28 '16 at 17:57
  • @MarcvanLeeuwen: Of course not - see my note at the end of the answer! :-) But OP was asking for alternatives, not for easier variants. This technique opens up doors to attack more challenging problems which could be helpful for OP. Regards, – epi163sqrt May 28 '16 at 18:17
9

My favorite proof makes use of a telescoping sum. Let's recall how that works: given a sequence $a_k$, we have $$ \sum_{k=1}^n (a_{k+1}-a_k)=a_{n+1}-a_1, $$ because intermediate terms cancel out pairwise.

Just take now $a_k=k^2$ and you have: $$ \sum_{k=1}^n [(k+1)^2-k^2]=(n+1)^2-1=n^2+2n. $$ But, on the other hand: $$ \sum_{k=1}^n [(k+1)^2-k^2]=\sum_{k=1}^n (2k+1)= \sum_{k=1}^n 2k+\sum_{k=1}^n 1=\sum_{k=1}^n 2k+n. $$ By equating you get then $$ \sum_{k=1}^n 2k+n=n^2+2n, \quad\hbox{that is}\quad \sum_{k=1}^n 2k=n^2+n \quad\hbox{and}\quad \sum_{k=1}^n 4k=2n^2+2n, $$ so that: $$ \sum_{k=1}^n (4k-3)=2n^2+2n-3n=2n^2-n $$
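A quick numerical spot check of the two evaluations of $\sum_{k=1}^n\big[(k+1)^2-k^2\big]$ (an illustration, not a proof):

```python
# Check both evaluations of sum_{k=1..n} [(k+1)^2 - k^2] for small n.
for n in range(1, 50):
    telescoped = (n + 1)**2 - 1                       # collapse of the telescoping sum
    expanded = sum(2*k + 1 for k in range(1, n + 1))  # term-by-term expansion
    assert telescoped == expanded == n**2 + 2*n
```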

Intelligenti pauca
  • This is a valid method to avoid induction, but at the end, it uses the same idea. – Emre May 29 '16 at 22:04
  • @E.Girgin Yes, telescopy is a special (but ubiquitous) form of induction. Presumably this is the same method used in the pictorial proofs (as I explain at length in my answer), though we cannot be sure since "picture proofs" are not *uniquely* readable. – Bill Dubuque May 30 '16 at 18:35
7

If you know the expression for the sum of an arithmetic sequence (or can prove it, it's not too hard, see below) then that's another way. For an arithmetic series with first term $a$ and common difference $d$, the sum to $n$ terms, $S_n$ is given by: $$S_n = \frac{n}{2}\left(2a+(n-1)d\right)$$ or alternatively, if the last term is $l$ then $$S_n = \frac{n(a+l)}{2}$$

For your question, we have $a=1$ and $d=4$ so $$S_n = \frac{n}{2}\left(2+4(n-1)\right) = n(2n-1) = 2n^2-n$$


To get the first expression for $S_n$, write the sum out twice, in opposite orders, and then take their sum:

$$S_n = a + (a+d) + (a+2d) + ... + [a+(n-2)d] + [a+(n-1)d]$$ $$S_n = [a+(n-1)d] + [a+(n-2)d] + ... + (a+2d) + (a+d) + a$$

$\implies$ $$2S_n = [2a+(n-1)d] + [2a+(n-1)d] + ...+[2a+(n-1)d] + [2a+(n-1)d]$$ where there are $n$ terms, so that

$$2S_n = n [2a+(n-1)d] $$

i.e.

$$S_n = \frac{n}{2}\left(2a+(n-1)d\right)$$
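If it helps, the general formula can be spot-checked against a brute-force sum; a minimal Python sketch (the helper name `arith_sum` is just for illustration):

```python
# Arithmetic series: S_n = n/2 * (2a + (n-1)d), checked by brute force.
def arith_sum(a, d, n):
    # n*(2a + (n-1)d) is always even for integers a, d, so // 2 is exact.
    return n * (2*a + (n - 1)*d) // 2

for n in range(1, 50):
    brute = sum(1 + 4*(k - 1) for k in range(1, n + 1))   # 1 + 5 + 9 + ... + (4n-3)
    assert arith_sum(1, 4, n) == brute == 2*n**2 - n
```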

danimal
  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/40441/discussion-on-answer-by-danimal-besides-mathematical-induction-how-else-can-you). – Daniel Fischer May 28 '16 at 20:03
5

Let

$$a_k := 4 k - 3 \qquad \qquad \qquad b_k := k - 1$$

Using summation by parts,

$$\begin{array}{rl} \displaystyle\sum_{k=1}^n a_k &= \displaystyle\sum_{k=1}^n a_k (b_{k+1} - b_k)\\ &= \left(a_k \, b_k \,\bigg|_1^{n+1}\right) - \displaystyle\sum_{k=1}^n (a_{k+1} - a_k) \, b_{k+1}\\ &= a_{n+1} \, b_{n+1} - a_1 \, b_1 - \displaystyle\sum_{k=1}^n 4 k\\ &= a_{n+1} \, b_{n+1} - \left(\displaystyle\sum_{k=1}^n (4 k - 3)\right) - 3n\\ &= (4 n + 1) n - 3 n - \displaystyle\sum_{k=1}^n a_k\\ &= 4 n^2 - 2n - \displaystyle\sum_{k=1}^n a_k\end{array}$$

Hence,

$$\displaystyle\sum_{k=1}^n a_k = 2 n^2 - n$$
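A brute-force spot check of the summation-by-parts rearrangement, with $a_k=4k-3$ and $b_k=k-1$ as above (an illustration only):

```python
# Check sum a_k (b_{k+1}-b_k) = a_{n+1} b_{n+1} - a_1 b_1 - sum (a_{k+1}-a_k) b_{k+1}
# for a_k = 4k - 3, b_k = k - 1, and that both sides equal 2n^2 - n.
a = lambda k: 4*k - 3
b = lambda k: k - 1

for n in range(1, 50):
    lhs = sum(a(k) * (b(k + 1) - b(k)) for k in range(1, n + 1))
    rhs = a(n + 1)*b(n + 1) - a(1)*b(1) - sum((a(k + 1) - a(k)) * b(k + 1) for k in range(1, n + 1))
    assert lhs == rhs == 2*n**2 - n
```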

Rodrigo de Azevedo
4

The most straightforward way is of course the sum formula for an arithmetic series, with first term $a=1$ and last term $\ell=4n-3$:

$$\frac n2(a+\ell)=\frac n2 \big(1+(4n-3)\big)=\color{red}{2n^2-n}\qquad\blacksquare$$


Another method:

The well-known result of the sum of the first $n$ integers: $$1+2+3+4+\cdots+n=\frac{n(n+1)}2$$ Subtract $1$ from each term ($n$ in total): $$0+1+2+3+\cdots+(n-1)=\frac{n(n-1)}2$$ Multiply by $4$: $$0+4+8+12+\cdots+4(n-1)=2n(n-1)$$ Add $1$ to each term ($n$ in total): $$1+5+9+13+\cdots+(4n-3)=\color{red}{2n^2-n}\quad\blacksquare$$
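A quick numerical walk-through of the four steps for one value of $n$ (a spot check only; the value $n=7$ is arbitrary):

```python
# Follow the four transformation steps numerically for a single n.
n = 7
step1 = sum(range(1, n + 1))                 # 1 + 2 + ... + n = n(n+1)/2
step2 = sum(k - 1 for k in range(1, n + 1))  # subtract 1 from each of the n terms
step3 = 4 * step2                            # multiply every term by 4
step4 = step3 + n                            # add 1 back to each of the n terms
assert step4 == 2*n**2 - n
```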

Hypergeometricx
1

By counting the dots of a hexagonal number in two ways.

[Figure: the $n$-th hexagonal number drawn as nested hexagons (left) and the same dots rearranged into $2n-1$ rows of $n$ dots (right).]

On the left: each hexagon $j = 2, \dots, n$ adds four "outer" sides of $j$ dots each, but $3$ of those dots (the corners shared by consecutive new sides) are counted twice, so it contributes $4j-3$ new dots; in addition, the trivial hexagon $j=1$ is a single dot. Therefore, in total there are $\sum_{j=1}^n(4j-3)$ dots.

On the right: there are $2n-1$ rows (since a hexagon has both top and bottom sides, except the trivial case), and each row adds $n$ dots. Hence $n(2n-1) = 2n^2-n$ dots.

*Although the argument is geometric, induction may well be needed to justify the counting, as Bill Dubuque noted.

Weaam
0

There are some very clever answers here, but what if you don't spot the clever trick? It would be nice to be able to mechanically grind out the answer, and in order to do that, you can use the discrete calculus.

So here the problem is to evaluate $ \sum_{1\le i\le n}{4i-3} $. First, expand all powers of the summation variable (here, $i$) into sums of falling powers (by inspection in simple cases, or using Stirling subset numbers, i.e. Stirling numbers of the second kind). This is trivial here, since $i = i^{\underline 1}$, so $$ \sum_{1\le i\le n}{4i-3} = \sum_{1\le i\le n}{4i^{\underline 1}-3}. $$ Now, summation of falling powers is just like integration of ordinary powers, except for the upper evaluation limit: $$ \sum_{1\le i\le n}{4i^{\underline 1}-3} = \bigg({4\over2}i^{\underline 2}-3i^{\underline 1}\bigg)\bigg\rvert^{n+1}_1. $$ Expand the falling powers and apply the evaluation limits, getting $$ \bigg({4 \over 2}(n+1)(n) - 3(n+1)\bigg) - \bigg({4 \over 2}(1)(0) - 3(1)\bigg) $$ which can be simplified to $ 2n^2 - n $.
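The same mechanical steps can be checked with a computer algebra system; a SymPy sketch using its falling factorial `ff` (one possible set of calls):

```python
# Falling-power check: F(i) = 2*i^(2 falling) - 3*i^(1 falling) is an antidifference
# of 4i - 3, and F(n+1) - F(1) = 2n^2 - n.
from sympy import symbols, ff, simplify

i, n = symbols('i n', integer=True)
F = 2*ff(i, 2) - 3*ff(i, 1)

assert simplify(F.subs(i, i + 1) - F - (4*i - 3)) == 0            # Delta F = 4i - 3
assert simplify(F.subs(i, n + 1) - F.subs(i, 1) - (2*n**2 - n)) == 0
```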

No insight required, just turn the handle and compute!

Gareth Rees
0

Induction is fundamental to the structure of mathematics. The $``\cdots"$ in the expression $1+ 5+ 9 + \cdots +(4n-3)$ cannot be explained rigorously without using some form of induction. All of the picture proofs here have an implied $``\cdots"$ in them. Similarly, $\text{$\Sigma$-notation}$ and telescoping sums cannot be defined without some form of induction.

Putting that aside, I have always been curious about the process of converting sums to integrals, so I offer the following flawed solution.

Note that $\int_k^{k+1}(4x-5)\,dx = \left. (2x^2-5x) \right|_k^{k+1} = (2k^2+4k+2-5k-5)-(2k^2-5k) = 4k-3$.

So \begin{align} \sum\limits_{k = 1}^n (4k - 3) &= \sum\limits_{k = 1}^n \left( \int_k^{k+1}(4x-5)\,dx \right) \\ &= \int_1^{n+1}(4x-5)\,dx &\text{(This step requires induction.)} \\ &= \left. (2x^2-5x) \right|_1^{n+1} \\ &= (2n^2 + 4n+2 -5n - 5) - (2 - 5) \\ &= 2n^2-n \end{align}
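Both integral evaluations above can be spot-checked symbolically; a small SymPy sketch (only a verification of the computations, the induction issue is unaffected):

```python
# Check the per-term integral and the full integral used above.
from sympy import symbols, integrate, simplify

x, k, n = symbols('x k n')

per_term = integrate(4*x - 5, (x, k, k + 1))      # should equal 4k - 3
assert simplify(per_term - (4*k - 3)) == 0

full = integrate(4*x - 5, (x, 1, n + 1))          # should equal 2n^2 - n
assert simplify(full - (2*n**2 - n)) == 0
```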

Addendum

I got the integrand by solving $4k - 3 = \int_k^{k+1}(2ax+b)\,dx = 2ak + (a+b)$ for $a$ and $b$.

The implication is that

\begin{align} 1 &= u_2 - u_1 \\ 5 &= u_3 - u_2 \\ 9 &= u_4 - u_3 \\ &\vdots \\ 4k-3 &= u_{k+1} - u_k \\ &\vdots \end{align}

where $u_k = 2k^2 - 5k$.

This converts the sum into a collapsing (telescoping) series.

This parallels Lynn's graphical proof shown above and, though it still requires induction, I thought it showed an interesting way to compute sums.

Steven Alexis Gregory
  • If you've to ask whether a step requires induction, it means you haven't even proven it... Yes it does, and all the answers that rely on properties of summation also do. Worse still, your answer involves integration, and you won't be able to prove most properties of integrals without induction!! – user21820 Jun 07 '16 at 02:40
  • @user21820 - OK then, define $1+ 5+ 9 + \cdots +(4n-3)$ without using induction. – Steven Alexis Gregory Jun 09 '16 at 02:23
  • You can't. That's my point, namely that it is pointless to use something that is **strictly harder to prove** than the trivial proof by induction, unless that something also gives some deeper insight. Otherwise, hiding the induction somewhere not only obscures the fact that it is necessary but also misleads people into thinking they have avoided induction even though it's intrinsic. What I say can be made precise; compare first-order PA (with its induction schema) and the discrete ordered semiring, and note that integer polynomials with positive leading coefficient satisfy the latter. – user21820 Jun 09 '16 at 14:15
  • The alternative way I know of that gives an insight into an intrinsic structure of summations is via the forward difference $Δ$, which act $R$-linearly on sequences from an $R$-module (usually we use $R = \mathbb{R}$), and hence we can factor $R$-polynomials of $Δ$, which gives the general solution to the recurrence relation. One can use this to solve summations. We can also observe that $Δ$ is a left-shift on the sequence of binomial coefficients $k \mapsto ( n \mapsto C(n,k) )$, which gives Newton's formula and hence an $O(d^2)$ algorithm to compute the summation of degree-$d$ polynomials. – user21820 Jun 09 '16 at 14:39
  • In your case, your method gives no insight whatsoever because: (1) the very proof of the integral requires much more than induction. The fastest way I know that avoids Riemann sums is via anti-derivatives and yet it has to first prove the product rule. Then after that you use the property that the integral of a finite sum is the sum of the separate integrals, about which you asked whether it uses induction (indeed it does, and the essential case of two integrals is highly non-trivial if you use Riemann integrals!). [continued] – user21820 Jun 09 '16 at 14:45
  • [continued] So even if we ignore all the induction used in proving the properties of integrals, the final induction that your method hides inside this property of integrals is really no different from the induction used in the conventional proof of the original summation. The technique you used gives little insight too, being inferior to finding the anti-difference (which is the 'right' notion here rather than the anti-derivative). – user21820 Jun 09 '16 at 14:48
  • You're missing my point. I am merely suggesting that you step out of the box for a while and prove it using a more powerful tool. THEN you look at what you have and see if you can use it to find an acceptable proof. In this case, it implies Lynn's pictorial proof. – Steven Alexis Gregory Jul 16 '17 at 16:11
  • Honestly, I think I am a reasonably capable logician, so if I can miss your point then the majority of students who read your post will also. Whatever it is, my initial point stands; your answer does not address the question because induction is necessary at some point. It's no use to invoke more powerful tools that cannot be proven without induction if the question explicitly asks for whether it can be done without. Neither is Lynn's proof an answer to the question. Blame the other SE users for upvoting all the non-answers due to lack of grasp of fundamental logic. I didn't even downvote you. – user21820 Jul 16 '17 at 16:52
  • I am generously but not passionately indifferent to upvotes and downvotes. I consider all of the "pictorial" proofs used here, the use of "$\dots$", and the use of $\Sigma$ to be vacuous without recourse to the notion of induction. So I consider every answer here, including mine, to be flawed. The fact that you threw first order PA and "the intrinsic structure of summations" at me suggests that you understand that too. Personally, I wondered if the sum could be converted to an integral and I found that it could. I also saw that it "explained" another answer. So I posted it. – Steven Alexis Gregory Jul 16 '17 at 17:30
  • Okay that's fine by me. I will upvote your answer if you clearly state at the top that induction is necessary in any reasonable foundations, and change your "Does this step require induction?" to "Proving this step still requires induction.". As it is now it is a bit too vague to me. – user21820 Jul 16 '17 at 18:26
-1

One way using distributive laws and multiplication:

$\sum_{k=1}^n(4k-3)$

$=\sum_{k=1}^n(4k)+\sum_{k=1}^n(-3)$

$\sum_{k=1}^n(-3)=-3n$

$\sum_{k=1}^n(4k)=4\cdot\sum_{k=1}^n(k)$

Assuming n is even:

$\sum_{k=1}^n(k)=1+2+3+4...+n=(1+n)+(2+n-1)...+(\frac{n}{2}+\frac{n}{2}+1)=\frac{n(1+n)}{2}$

Assuming n is odd: $\sum_{k=1}^n(k)=n+\sum_{k=1}^{n-1}k=\frac{2n}{2}+\frac{(n-1)(n)}{2}=\frac{n(1+n)}{2}$

Therefore it works for all n

Putting all of these together: $\sum_{k=1}^n(4k-3)=4\cdot\sum_{k=1}^n(k)+\sum_{k=1}^n(-3)=\frac{4n(1+n)}{2}-3n=2n^2+2n-3n=2n^2-n$. Q.E.D. Or you could just know the summation identities and easily prove it.

Ariana