165

What are your favorite applications of integration by parts?

(The answers can be as lowbrow or highbrow as you wish. I'd just like to get a bunch of these in one place!)

Thanks for your contributions, in advance!

Jon Bannon
  • It can also be a good career move. A (likely apocryphal) story goes: when Peter Lax was awarded the National Medal of Science, the other recipients (presumably non-mathematicians) asked him what he did to deserve the Medal. Lax responded: "I integrated by parts." – Willie Wong Apr 24 '11 at 23:42
  • Great story, Willie. – Jon Bannon Apr 28 '11 at 17:40
  • Two more stories: 1. Supposedly when Laurent Schwartz received the Fields Medal (for his work on distributions, of course), someone present remarked, "So now they're giving the Fields Medal for integration by parts." 2. I believe I remember reading -- but have no idea where -- that someone once said that a really good analyst can do marvelous things using only the Cauchy-Schwarz inequality and integration by parts. I do think there's some truth to that. – Carl Offner Oct 11 '11 at 02:15
  • More physics, but it's useful in the derivation of the [Euler-Lagrange equation](http://en.wikipedia.org/wiki/Euler%E2%80%93Lagrange_equation#Statement), which itself is very nice. – Meow Jan 12 '13 at 12:31
  • @WillieWong Your comment is quoted in the book "Physics from Symmetry" https://books.google.de/books?id=_vLLCQAAQBAJ&pg=PA256&lpg=PA256&dq=A+%28likely+apocryphal%29+story+goes:+when+Peter+Lax+was+awarded+the&source=bl&ots=TbPF0vhSni&sig=tnQ4yvjLZ2yso8ch_TnJ9NayylY&hl=de&sa=X&ei=QE2FVeyLAsWWsAHgwIXIBw&ved=0CCoQ6AEwAQ#v=onepage&q=A%20%28likely%20apocryphal%29%20story%20goes%3A%20when%20Peter%20Lax%20was%20awarded%20the&f=false – jak Jun 20 '15 at 11:25
  • @WillieWong: I can't understand your comment clearly. What's the good career move? –  Apr 05 '16 at 01:18
  • @ArkaKarmakar: It == "Integrating by parts". – Willie Wong Apr 08 '16 at 16:04
  • @WillieWong: I am more stupid than most of the users here; I still can't understand how integrating by parts helped Peter Lax. –  Apr 08 '16 at 16:19
  • @ArkaKarmarkar: Peter Lax intimated that the work he did that led to his being awarded the National Medal of Science was essentially integration by parts. If you win a National Medal of Science then this helps your career as a mathematician. – Jon Bannon Apr 08 '16 at 16:35

20 Answers

146

I always liked the derivation of Taylor's formula with error term:

$$\begin{array}{rl} f(x) &= f(0) + \int_0^x f'(x-t) \,dt\\ &= f(0) + xf'(0) + \int_0^x tf''(x-t)\,dt\\ &= f(0) + xf'(0) + \frac{x^2}2f''(0) + \int_0^x \frac{t^2}2 f'''(x-t)\,dt \end{array}$$

and so on. Using the mean value theorem on the final term readily gives the Cauchy form for the remainder.
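As a quick sanity check, here is a small Python sketch of the three-term identity with integral remainder, taking $f = \exp$ and $x = 1$ (so every derivative is again $\exp$); the `simpson` helper is an ad hoc composite Simpson rule, not a library routine:

```python
import math

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# For f = exp the identity above reads
#   e^x = 1 + x + x^2/2 + int_0^x (t^2/2) e^(x-t) dt.
x = 1.0
remainder = simpson(lambda t: t**2 / 2 * math.exp(x - t), 0.0, x)
approx = 1.0 + x + x**2 / 2 + remainder
exact = math.exp(x)
```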

Greg Graviton
109

Let $f$ be a differentiable one-to-one function, and let $f^{-1}$ be its inverse. Then,

$$\int f(x) dx = x f(x) - \int x f'(x)dx = x f(x) - \int f^{-1}(f(x))f'(x)dx = x f(x) - \int f^{-1}(u) du \,.$$

Thus, if we know the integral of $f^{-1}$, we get the integral of $f$ for free.

BTW: This is the reason why integrals such as $\int \ln(x)\, dx,\ \int \arctan(x)\, dx,\ \ldots$ are always calculated using integration by parts.
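In definite form the identity reads $\int_a^b f(x)\,dx = b f(b) - a f(a) - \int_{f(a)}^{f(b)} f^{-1}(u)\,du$. Here is a quick numerical sketch of it for $f = \arctan$, $f^{-1} = \tan$ on $[0,1]$; the `simpson` helper is an ad hoc quadrature routine:

```python
import math

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# int_0^1 arctan(x) dx = 1*arctan(1) - 0 - int_0^{arctan 1} tan(u) du
lhs = simpson(math.atan, 0.0, 1.0)
rhs = 1.0 * math.atan(1.0) - simpson(math.tan, 0.0, math.atan(1.0))
```

Both sides equal $\pi/4 - \tfrac12\ln 2 \approx 0.43882$.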

Will Byrne
N. S.
106

My favorite this week, since I learned it just yesterday: $n$ integrations by parts produce $$ \int_0^1 \frac{(-x\log x)^n}{n!}dx = (n+1)^{-(n+1)}.$$ Then summing on $n$ yields $$\int_0^1 x^{-x}\,dx = \sum_{n=1}^\infty n^{-n}.$$
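A quick numerical check of this "sophomore's dream" identity, with an ad hoc Simpson-rule helper (note $x^{-x} \to 1$ as $x \to 0^+$, and the series terms decay so fast that 19 of them suffice):

```python
import math

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

integrand = lambda x: x ** (-x) if x > 0 else 1.0  # x^{-x} -> 1 as x -> 0+
integral = simpson(integrand, 0.0, 1.0)
series = sum(n ** (-n) for n in range(1, 20))  # remainder ~ 20^{-20}
```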

Bob Pego
94

Repeated integration by parts gives $$\int_0^\infty x^n e^{-x} dx=n!$$
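As a sanity check, here is a small Python sketch verifying this for a few small $n$, truncating the integral at $x = 60$ (where the integrand is negligible); `simpson` is an ad hoc quadrature helper:

```python
import math

def simpson(g, a, b, n=20000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# int_0^inf x^n e^{-x} dx, truncated at 60, vs n! for n = 0..5
results = [simpson(lambda x: x ** n_ * math.exp(-x), 0.0, 60.0)
           for n_ in range(6)]
factorials = [math.factorial(n_) for n_ in range(6)]
```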

  • ...which is dual to $\sum_{n\ge0} x^n/n! = e^x$. – Mitch Feb 04 '11 at 18:12
  • @Mitch, is this duality a consequence of any deeper facts? How does it generalize, if at all? – Skatche Apr 25 '11 at 07:13
  • @Skatche: Excellent point, especially since [I asked exactly that question immediately after I posted that comment](http://math.stackexchange.com/questions/20441/factorial-and-exponential-dual-identities). So see that link for discussion. – Mitch Apr 25 '11 at 13:49
54

High brow: Let $f(\theta)$ be a smooth function from the circle to $\mathbb{R}$. The Fourier coefficients of $f$ are given by $a_n = 1/(2 \pi) \int f(\theta) e^{-i n \theta} d \theta$.

Integrating by parts: $$a_n = \frac{1}{n} \frac{-i}{2 \pi} \int f'(\theta) e^{- i n \theta} d \theta = \frac{1}{n^2} \frac{-1}{2 \pi} \int f''(\theta) e^{- i n \theta} d \theta = \cdots$$ $$\cdots = \frac{1}{n^k} \frac{(-i)^k}{2 \pi} \int f^{(k)}(\theta) e^{- i n \theta} d \theta = O(1/n^k)$$ for any $k$.

Thus, if $f$ is smooth, its Fourier coefficients die off faster than $1/n^k$ for any $k$. More generally, if $f$ has $k$ continuous derivatives, then $a_n = O(1/n^k)$.
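A small numerical illustration of the decay, using the smooth periodic function $f(\theta) = e^{\cos\theta}$ (an arbitrary choice) and trapezoidal sums for the coefficients:

```python
import cmath, math

N = 512  # grid points; the trapezoidal rule is very accurate for periodic f
theta = [2 * math.pi * k / N for k in range(N)]
f = [math.exp(math.cos(t)) for t in theta]  # a smooth periodic function

def coeff(n):
    # a_n = (1/2pi) int_0^{2pi} f(theta) e^{-in theta} dtheta, discretized
    return sum(fk * cmath.exp(-1j * n * tk) for fk, tk in zip(f, theta)) / N

# Smoothness forces super-polynomial decay of |a_n|.
decay = [abs(coeff(n)) for n in (1, 5, 10, 15)]
```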

David E Speyer
32

As with Taylor's Theorem, the Euler-Maclaurin summation formula (with remainder) can be derived using repeated application of integration by parts.

Tom Apostol's paper "An Elementary View of Euler's Summation Formula" (American Mathematical Monthly 106 (5): 409–418, 1999) has a more in-depth discussion of this. See also Vito Lampret's "The Euler-Maclaurin and Taylor Formulas: Twin, Elementary Derivations" (Mathematics Magazine 74 (2): 109-122, 2001).

Mike Spivey
28

Perhaps not really an application, but the definition of the derivative of a distribution is based on partial integration:

if $u\in C^1(X)$ and $\phi\in C^\infty_c(X)$ is a test function, then

$\left<\partial_i u,\phi\right>=\int\phi\partial_i u=-\int u\partial_i\phi=-\left<u,\partial_i\phi\right>$ by partial integration.

Extending this, for a distribution $u$ we then define its derivative $\partial_i u$ by this formula.

wildildildlife
28

Highbrow: Derivation of the Euler-Lagrange equations describing how a physical system evolves through time from Hamilton's Least Action Principle.

Here's a very brief summary. Consider a very simple physical system consisting of a point mass moving under the force of gravity, and suppose you know the position $q$ of the point at two times $t_0$ and $t_f$. Possible trajectories of the particle as it moved from its starting to ending point correspond to curves $q(t)$ in $\mathbb{R}^3$.

One of these curves describes the physically-correct motion, wherein the particle moves in a parabolic arc from one point to the other. Many curves completely defy the laws of physics, e.g. the point zigs and zags like a UFO as it moves from one point to the other.

Hamilton's Principle gives a criterion for determining which curve is the physically correct trajectory: it is the curve $q(t)$ satisfying the variational principle

$$\min_q \int_{t_0}^{t_f} L(q, \dot{q}) dt$$ subject to the constraints $q(t_0) = q_0, q(t_f) = q_f$, where $L$ is a scalar-valued function known as the Lagrangian that measures the difference between the kinetic and potential energy of the system at a given moment of time. (Pedantry alert: despite being historically called the "least" action principle, really instead of minimizing we should be extremizing; i.e., all critical points of the above functional are physical trajectories, even those that are maxima or saddle points.)

It turns out that a curve $q$ satisfies the variational principle if and only if it is a solution to the ODE $$ \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0,$$ roughly equivalent to the usual Newton's Second Law $ma-F=0$, and the key step in the proof of this equivalence is integration by parts. What is remarkable here is that we started with a boundary-value problem -- given two positions, how did we get from one to the other? -- and ended with an ODE, an initial-value problem -- given an initial position and velocity, how does the point move as we advance through time?
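A discrete sketch of the principle for free fall (with arbitrary constants $g = 9.8$, $T = 1$, unit mass): the true parabolic arc between two rest points has smaller discretized action than a wiggly path or the straight path with the same endpoints.

```python
import math

g, T, N = 9.8, 1.0, 2000
h = T / N
ts = [k * h for k in range(N + 1)]

def action(q):
    # Discretized S = int_0^T (1/2 qdot^2 - g q) dt for L = kinetic - potential
    S = 0.0
    for i in range(N):
        qdot = (q[i + 1] - q[i]) / h
        qmid = 0.5 * (q[i] + q[i + 1])
        S += (0.5 * qdot ** 2 - g * qmid) * h
    return S

true_path = [0.5 * g * t * (T - t) for t in ts]        # solves qddot = -g
wiggle = [qt + 0.3 * math.sin(3 * math.pi * t / T)      # same endpoints
          for qt, t in zip(true_path, ts)]
S_true, S_wiggle = action(true_path), action(wiggle)
```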

user7530
24

My favorite example is getting an asymptotic expansion: for example, suppose we want to compute $\int_x^\infty e^{-t^2}\cos(\beta t)dt$ for large values of $x$. Integrating by parts multiple times, we end up with $$ \int_x^\infty e^{-t^2}\cos(\beta t)dt \sim e^{-x^2}\sum_{k=1}^\infty(-1)^n\frac{H_{k-1}(x)}{\beta^k} \begin{cases} \cos(\beta x) & k=2n \\ \sin(\beta x) & k=2n+1 \end{cases}$$ where $n = \lfloor k/2 \rfloor$ and the Hermite polynomials are given by $H_n(x) = (-1)^ne^{x^2}\frac{d^n}{dx^n}e^{-x^2}$.

This expansion follows by mechanically applying integration by parts multiple times, and it gives a useful asymptotic expansion (which is divergent as a power series).

Greg Graviton
Apollo
23

A lowbrow favorite of mine:

$$\int \frac{1}{x} dx = \frac{1}{x} \cdot x - \int x \cdot\left(-\frac{1}{x^2}\right) dx = 1 + \int \frac{1}{x} dx$$

Therefore, $1=0$.

A bit more highbrow, I like the use of partial integration to establish recursive formulas for integrals.

Raskolnikov
  • Hm, this example does not depend on integration by parts so much as it depends on not keeping track of the limits of integration. – Greg Graviton Feb 04 '11 at 16:05
  • It's true that the crux of the problem is not so much in the integration by parts, but if you integrate in a different way (what way, by the way?) you won't have that problem. – Raskolnikov Feb 04 '11 at 19:40
  • @Greg: Actually, it's not the limits of integration that matter here, but the *constant* of integration. $\int \frac{1}{x}\,dx$ is the entire family of antiderivatives, which is exactly the same as the family you get if you add $1$ to every member of the family. – Arturo Magidin Feb 04 '11 at 22:08
  • Perhaps a more direct proof, using the same idea: $1 = \sin^2 x + \cos^2 x = \int \frac{d}{dx} (\sin^2 x + \cos^2 x)dx = \int (2 \sin x \cdot \cos x - 2 \cos x \cdot \sin x)dx = \int 0 dx = 0$. – Mike F Apr 16 '11 at 06:47
  • @Raskolnikov, "*but if you integrate in a different way (what way, by the way?) you won't have that problem*" – what way, by the way? – HeWhoMustBeNamed Oct 17 '17 at 14:32
  • @Mr Reality: I've written that comment so long ago. I think I just meant with the antiderivative $\ln x$. That's all. Which is just knowing the relationship between $\ln x$ and $1/x$. – Raskolnikov Oct 17 '17 at 16:38
23

Highbrow: Integration by parts can be used to compute (or verify) formal adjoints of differential operators. For instance, one can verify, and this was indeed the proof I saw, that the formal adjoint of the Dolbeault operator $\bar{\partial}$ on complex manifolds is $$\bar{\partial}^* = -* \bar{\partial} \,\,\, *, $$ where $*$ is the Hodge star operator, using integration by parts.

Raeder
21

My favorite example of integration by parts (there are other nice tricks as well in this example but integration by parts starts it off) is this:

Let $I_n = \displaystyle \int_{0}^{\frac{\pi}{2}} \sin^n(x) dx$.

$I_n = \displaystyle \int_{0}^{\frac{\pi}{2}} \sin^{n-1}(x) d(-\cos(x)) = -\sin^{n-1}(x) \cos(x) |_{0}^{\frac{\pi}{2}} + \int_{0}^{\frac{\pi}{2}} (n-1) \sin^{n-2}(x) \cos^2(x) dx$

The first expression on the right hand side is zero since $\sin(0) = 0$ and $\cos(\frac{\pi}{2}) = 0$.

Now rewrite $\cos^2(x) = 1 - \sin^2(x)$ to get

$I_n = (n-1) (\displaystyle \int_{0}^{\frac{\pi}{2}} \sin^{n-2}(x) dx - \int_{0}^{\frac{\pi}{2}} \sin^{n}(x) dx) = (n-1) I_{n-2} - (n-1) I_n$.

Rearranging we get $n I_n = (n-1) I_{n-2}$, $I_n = \frac{n-1}{n}I_{n-2}$.

Using this recurrence we get $$I_{2k+1} = \frac{2k}{2k+1}\frac{2k-2}{2k-1} \cdots \frac{2}{3} I_1$$

$$I_{2k} = \frac{2k-1}{2k}\frac{2k-3}{2k-2} \cdots \frac{1}{2} I_0$$

$I_1$ and $I_0$ can be directly evaluated to be $1$ and $\frac{\pi}{2}$ respectively and hence,

$$I_{2k+1} = \frac{2k}{2k+1}\frac{2k-2}{2k-1} \cdots \frac{2}{3}$$

$$I_{2k} = \frac{2k-1}{2k}\frac{2k-3}{2k-2} \cdots \frac{1}{2} \frac{\pi}{2}$$
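As a numerical sanity check of the recurrence $nI_n = (n-1)I_{n-2}$ and of the closed form for $I_4 = \frac34\cdot\frac12\cdot\frac\pi2$, here is a short Python sketch (the `simpson` helper is an ad hoc quadrature routine):

```python
import math

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def I(n):
    return simpson(lambda x: math.sin(x) ** n, 0.0, math.pi / 2)

# Recurrence n I_n = (n-1) I_{n-2} for several n, plus the product formula
checks = [abs(n * I(n) - (n - 1) * I(n - 2)) for n in range(2, 8)]
closed_I4 = (3 / 4) * (1 / 2) * (math.pi / 2)  # I_4 from the product formula
```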

  • This is what is usually called a [reduction formula](http://en.wikipedia.org/wiki/Integration_by_reduction_formulae). – Abel Feb 05 '11 at 00:12
  • This is also called Wallis formula/product I believe. – Aryabhata Feb 05 '11 at 18:04
  • @Aryabhata Yes. This would've been more interesting if he showed how to get it. It's not too hard. – Pedro Feb 23 '12 at 16:58
  • @PeterT.off: Are you talking about the infinite version? He did show the finite version. – Aryabhata Feb 23 '12 at 17:00
  • @Aryabhata I've never seen the Wallis finite product; I've always seen Wallis's *infinite* product. I guess it'd be better to at least hint what $\dfrac{I_{2k+1}}{I_{2k}}$ is, and that it tends to 1. – Pedro Feb 23 '12 at 17:05
  • @PeterT.off: Even the finite one is called Wallis product. Not just the infinite. For the question as asked, this answer is sufficient I guess and comments should be enough for anyone curious enough. – Aryabhata Feb 23 '12 at 17:07
  • @Aryabhata I'm not saying the answer is not enough, just saying that when infinity comes into play, things get interesting. – Pedro Feb 23 '12 at 17:21
  • @PeterT.off: I agree. It is one of my favourites! A proof is here: http://crypto.stanford.edu/pbc/notes/pi/wallis.xhtml – Aryabhata Feb 23 '12 at 17:27
  • @Aryabhata If you're interested, I have proofs for the Poisson integral and Stirling's Formula via the Wallis product. – Pedro Feb 23 '12 at 17:40
  • @PeterT.off: (And apologies to Sivaram), those I believe have already appeared on this site: http://math.stackexchange.com/questions/23814/how-best-to-explain-the-sqrt2-pi-n-term-in-stirlings – Aryabhata Feb 23 '12 at 17:43
15

\begin{align}
\int_{-\infty}^{\infty}\frac{\sin^{2}(x)}{x^{2}}\,dx
&= \left.-\,\frac{\sin^{2}(x)}{x}\right\vert_{-\infty}^{\infty}
+ \int_{-\infty}^{\infty}\frac{2\sin(x)\cos(x)}{x}\,dx
= \int_{-\infty}^{\infty}\frac{\sin(2x)}{x}\,dx \\[3mm]
&= \int_{-\infty}^{\infty}\frac{\sin(x)}{x}\,dx
\end{align}
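Both integrals in this chain equal $\pi$ (the Dirichlet integral). Here is a rough numerical check of the left-hand side, truncating the absolutely convergent integral at $|x| = 2000$ (each tail contributes roughly $1/4000$); `simpson` is an ad hoc quadrature helper:

```python
import math

def simpson(g, a, b, n):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# sin^2(x)/x^2 extends continuously by 1 at x = 0
integrand = lambda x: (math.sin(x) / x) ** 2 if x != 0 else 1.0
val = simpson(integrand, -2000.0, 2000.0, 400000)
```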

Felix Marin
  • I am wondering about the last step in the integral. How can we change the 2x to x in the argument of the sine function? Does this have to do with the infinite limits? – Saud Molaib Oct 07 '18 at 09:51
  • @Saudman97 It's equivalent to the change $\displaystyle t \equiv 2x$ such that $\displaystyle{\sin\left(2x\right) \over x}\,\mathrm{d}x$ goes over $\displaystyle{\sin\left(t\right) \over t}\,\mathrm{d}t$ and $\displaystyle x \to \pm\infty \implies t \to \pm\infty$, respectively. As $\displaystyle x$ and $\displaystyle t$ are "mute variables" you can still use $\displaystyle x$ instead of $\displaystyle t$ in the last integral. – Felix Marin Oct 07 '18 at 20:30
10

Integrating by parts is how one discovers the adjoint of a differential operator, and it thus becomes the foundation for the marvelous spectral theory of differential operators. This has always seemed to me to be both elementary and profound at the same time.

Carl Offner
9

This is one of many integration-by-parts applications/derivations that I like:

The Gamma Distribution

A random variable is said to have a gamma distribution with parameters $(\alpha,\lambda)$, $\lambda\gt 0$, $\alpha\gt 0$, if its density function is given by

$$ f(x)= \begin{cases} \dfrac{\lambda e^{-\lambda x}(\lambda x)^{\alpha-1}}{\Gamma(\alpha)} & \text{for }~x\ge 0 \\ 0 & \text{for }~x\lt 0 \end{cases} $$

where $\Gamma(\alpha)$, called the gamma function, is defined as

$$ \Gamma(\alpha) = \int_{0}^{\infty} \! e^{-y} y^{\alpha-1}\, \mathrm{d}y $$

Integrating $\Gamma(\alpha)$ by parts yields the following:

$$ \begin{array}{ll} \Gamma(\alpha) &=\; -e^{-y} y^{\alpha-1} \Bigg|_{0}^{\infty}~+~\int_{0}^{\infty} \! e^{-y} (\alpha-1)y^{\alpha-2}\,\mathrm{d}y \\ \\ \;&=\; (\alpha-1) \int_{0}^{\infty} \! e^{-y} y^{\alpha-2}\,\mathrm{d}y ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(1) \\ \\ \;&=\; (\alpha-1) \Gamma(\alpha-1) \end{array} $$

For integral values of $\alpha$, say $\alpha=n$, we obtain, by applying Equation ($1$) repeatedly,

\[ \begin{array}{llll} \Gamma(n)&=(n-1)\Gamma(n-1) \\ &=(n-1)(n-2)\Gamma(n-2) \\ &=\ldots \\ &=(n-1)(n-2)\ldots3~\cdot~2\Gamma(1) \end{array} \]

Since $\Gamma(1)=\int_{0}^{\infty} \! e^{-x}~\mathrm{d}x=1,$ it follows that, for integral values of $n$,

\[ \Gamma(n)=(n-1)! \]
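As a quick numerical sanity check of the recursion $(1)$ and of $\Gamma(n)=(n-1)!$, here is a short sketch (the truncation point $60$ and the `simpson` helper are ad hoc choices):

```python
import math

def simpson(g, a, b, n=20000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def gamma_num(alpha, upper=60.0):
    # Gamma(alpha) = int_0^inf e^{-y} y^{alpha-1} dy, truncated at `upper`
    return simpson(lambda y: math.exp(-y) * y ** (alpha - 1), 0.0, upper)

# (1): Gamma(alpha) = (alpha - 1) Gamma(alpha - 1), checked at alpha = 4.5
recursion_gap = abs(gamma_num(4.5) - 3.5 * gamma_num(3.5))
ints = [gamma_num(n) for n in range(2, 7)]
facts = [math.factorial(n - 1) for n in range(2, 7)]
```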

Hope you enjoy reading $\ldots$ :)

Francesco
night owl
8

Lowbrow: $\int\sin(x)\cos(x)dx=\sin^2x-\int\sin(x)\cos(x)dx+C$.

Finding the unknown integral again after integrating by parts is an interesting case: solving the resulting equation immediately gives $\int\sin(x)\cos(x)dx=\dfrac12\sin^2x + C$.
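A tiny numerical confirmation that $\frac12\sin^2 x$ really is an antiderivative, on an arbitrarily chosen interval $[0.3, 1.7]$ (with an ad hoc Simpson-rule helper):

```python
import math

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

a, b = 0.3, 1.7
numeric = simpson(lambda x: math.sin(x) * math.cos(x), a, b)
antideriv = 0.5 * math.sin(b) ** 2 - 0.5 * math.sin(a) ** 2
```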

Dalker
7

There are a couple of applications in PDEs that I am quite fond of. As well as verifying that the Laplace operator $-\Delta$ is positive on $L^2$, I like the application of integration by parts in the energy method to prove uniqueness.

Suppose $U$ is an open, bounded and connected subset of $\mathbb{R}^n$ with smooth boundary. Introduce the BVP \begin{equation*} -\Delta u=f~\text{in}~U \end{equation*} with prescribed boundary values $u=g$ on $\partial U$. Suppose $u,v\in C^2(\overline{U})$ both solve this problem and set $w:=u-v$, so that $w$ solves the homogeneous problem $-\Delta w=0$ in $U$ with $w=0$ on $\partial U$. Then an application of integration by parts gives us \begin{equation*} 0=-\int_U w\Delta wdx=\int_U \nabla w\cdot \nabla wdx-\int_{\partial U}w\frac{\partial w}{\partial\nu}dS=\int_U|\nabla w|^2dx \end{equation*} with outward normal $\nu$ of the set $U$, where the boundary term vanishes because $w=0$ on $\partial U$. Hence $\nabla w=0$ in $U$; since $U$ is connected and $w=0$ on $\partial U$, we conclude $w\equiv 0$, i.e. uniqueness of the solution in $U$.

7

Lowbrow: $\int e^x\sin x\ dx$ and its ilk.

msh210
  • This can be done more efficiently using integration of complex functions: integrate $e^{x(1+i)}$, which is trivial, then take the imaginary part. – Alex B. Oct 11 '11 at 00:48
6

Really simple but nice:

$\int \log (x) dx = \int 1 \cdot \log(x)dx = x \log(x) - \int x d(\log(x))=x (\log(x)-1) $

also:

$ \int \frac{\log^k(x)}{x}dx = \int \log^k(x)d \log(x)=\frac{\log^{k+1}(x)}{k+1} $
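A quick check of the second formula: over $[1, e]$ it predicts $\int_1^e \log^k(x)/x\, dx = 1/(k+1)$. Sketched with an ad hoc Simpson-rule helper:

```python
import math

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# int_1^e log^k(x)/x dx = log^{k+1}(e)/(k+1) = 1/(k+1)
vals = [simpson(lambda x: math.log(x) ** k_ / x, 1.0, math.e)
        for k_ in range(1, 5)]
expected = [1 / (k_ + 1) for k_ in range(1, 5)]
```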

sigma.z.1980
1

Integration by parts shows that (modulo a constant) the Fourier transform interchanges differentiation and multiplication by the variable:

\begin{align*} f'(x) \rightarrow \widehat{f'}(\xi) & = \int_{\mathbb{R}} f'(x)e^{-2 \pi i x \xi}dx\\ & = f(x)e^{-2 \pi i x \xi}|_{-\infty}^{\infty} - \int_{\mathbb{R}} f(x) e^{-2 \pi i x \xi} (-2 \pi i \xi) dx \\ & = (2\pi i \xi) \widehat{f}(\xi) \end{align*}

where $f(x)e^{-2 \pi i x \xi}|_{-\infty}^{\infty}$ vanishes provided $f$ decays sufficiently fast at infinity.
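A numerical check using the Gaussian $f(x) = e^{-\pi x^2}$ (whose transform is $e^{-\pi\xi^2}$, and whose decay makes the boundary term vanish); the grid limits and the choice $\xi = 0.7$ are arbitrary:

```python
import cmath, math

def ft(g, xi, a=-8.0, b=8.0, n=4000):
    # hat g(xi) = int g(x) e^{-2 pi i x xi} dx via the trapezoidal rule
    h = (b - a) / n
    total = 0.5 * (g(a) * cmath.exp(-2j * math.pi * a * xi)
                   + g(b) * cmath.exp(-2j * math.pi * b * xi))
    for i in range(1, n):
        x = a + i * h
        total += g(x) * cmath.exp(-2j * math.pi * x * xi)
    return total * h

f = lambda x: math.exp(-math.pi * x * x)   # Gaussian, hat f(xi) = e^{-pi xi^2}
fp = lambda x: -2 * math.pi * x * math.exp(-math.pi * x * x)  # its derivative

xi = 0.7
lhs = ft(fp, xi)                                      # hat{f'}(xi)
rhs = 2j * math.pi * xi * cmath.exp(-math.pi * xi * xi)  # 2 pi i xi hat f(xi)
```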

Dan L