What are your favorite applications of integration by parts?
(The answers can be as lowbrow or highbrow as you wish. I'd just like to get a bunch of these in one place!)
Thanks for your contributions, in advance!
I always liked the derivation of Taylor's formula with error term:
$$\begin{array}{rl} f(x) &= f(0) + \int_0^x f'(x-t) \,dt\\ &= f(0) + xf'(0) + \int_0^x tf''(x-t)\,dt\\ &= f(0) + xf'(0) + \frac{x^2}2f''(0) + \int_0^x \frac{t^2}2 f'''(x-t)\,dt \end{array}$$
and so on. Using the mean value theorem on the final term readily gives the Cauchy form for the remainder.
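A quick numerical sanity check of the second-order identity, using `scipy` and my (arbitrary) choice of test function $f = \sin$ at $x = 1$:

```python
import math
from scipy.integrate import quad

# Check f(x) = f(0) + x f'(0) + ∫_0^x t f''(x-t) dt  with f = sin:
# f(0) = 0, f'(0) = 1, f''(u) = -sin(u).
x = 1.0
remainder, _ = quad(lambda t: t * (-math.sin(x - t)), 0, x)
assert abs(math.sin(x) - (x + remainder)) < 1e-10
```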
Let $f$ be a differentiable one-to-one function, and let $f^{-1}$ be its inverse. Then,
$$\int f(x) dx = x f(x) - \int x f'(x)dx = x f(x) - \int f^{-1}(f(x))f'(x)dx = x f(x) - \int f^{-1}(u) du \,.$$
Thus, if we know the integral of $f^{-1}$, we get the integral of $f$ for free.
BTW: This is the reason why integrals such as $\int \ln(x)\, dx$ and $\int \arctan(x)\, dx$ are always calculated using integration by parts.
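As a concrete instance (my choice of example): with $f = \arctan$ and $f^{-1} = \tan$, the trick gives $\int \arctan(x)\,dx = x\arctan(x) - \int \tan(u)\,du = x\arctan(x) - \tfrac12\ln(1+x^2) + C$. A numerical check:

```python
import math
from scipy.integrate import quad

# Antiderivative produced by the inverse-function trick
F = lambda x: x * math.atan(x) - 0.5 * math.log(1 + x * x)

val, _ = quad(math.atan, 0, 1)
assert abs(val - (F(1) - F(0))) < 1e-10
```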
My favorite this week, since I learned it just yesterday: $n$ integrations by parts produces $$ \int_0^1 \frac{(-x\log x)^n}{n!}dx = (n+1)^{-(n+1)}.$$ Then summing on $n$ yields $$\int_0^1 x^{-x}\,dx = \sum_{n=1}^\infty n^{-n}.$$
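Both the per-term identity and the summed "sophomore's dream" are easy to check numerically; here is a sketch with `scipy.integrate.quad` (the term $n = 3$ is my arbitrary choice):

```python
import math
from scipy.integrate import quad

# One term: ∫_0^1 (-x log x)^3 / 3! dx = 4^(-4)
term, _ = quad(lambda x: (-x * math.log(x)) ** 3 / 6, 0, 1)
assert abs(term - 4.0 ** -4) < 1e-10

# The summed identity: ∫_0^1 x^(-x) dx = Σ n^(-n)
integral, _ = quad(lambda x: x ** (-x), 0, 1)
series = sum(float(n) ** -n for n in range(1, 25))
assert abs(integral - series) < 1e-7
```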
Repeated integration by parts gives $$\int_0^\infty x^n e^{-x} dx=n!$$
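A short numerical sketch of this identity for small $n$ (using `scipy`'s adaptive quadrature over the half-line):

```python
import math
from scipy.integrate import quad

# ∫_0^∞ x^n e^(-x) dx = n!  for the first few n
for n in range(6):
    val, _ = quad(lambda x, n=n: x ** n * math.exp(-x), 0, math.inf)
    assert abs(val - math.factorial(n)) < 1e-6
```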
Highbrow: Let $f(\theta)$ be a smooth function from the circle to $\mathbb{R}$. The Fourier coefficients of $f$ are given by $a_n = 1/(2 \pi) \int f(\theta) e^{-i n \theta} d \theta$.
Integrating by parts: $$a_n = \frac{1}{n} \frac{i}{2 \pi} \int f'(\theta) e^{- i n \theta} d \theta = \frac{1}{n^2} \frac{-1}{2 \pi} \int f''(\theta) e^{- i n \theta} d \theta = \cdots$$ $$\cdots = \frac{1}{n^k} \frac{i^k}{2 \pi} \int f^{(k)}(\theta) e^{- i n \theta} d \theta = O(1/n^k)$$ for any $k$.
Thus, if $f$ is smooth, its Fourier coefficients die off faster than $1/n^k$ for any $k$. More generally, if $f$ has $k$ continuous derivatives, then $a_n = O(1/n^k)$.
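This decay is easy to see numerically. A sketch with `numpy`'s FFT, using the smooth periodic function $f(\theta) = e^{\cos\theta}$ (my choice of test function):

```python
import numpy as np

# Sample the smooth periodic function f(θ) = exp(cos θ)
N = 256
theta = 2 * np.pi * np.arange(N) / N
a = np.fft.fft(np.exp(np.cos(theta))) / N   # a[n] ≈ (1/2π) ∫ f(θ) e^{-inθ} dθ

# Smoothness forces superpolynomial decay of |a_n|
assert abs(a[1]) > 0.5
assert abs(a[10]) < 1e-8
assert abs(a[20]) < 1e-12
```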
As with Taylor's Theorem, the Euler-Maclaurin summation formula (with remainder) can be derived using repeated application of integration by parts.
Tom Apostol's paper "An Elementary View of Euler's Summation Formula" (American Mathematical Monthly 106 (5): 409–418, 1999) has a more in-depth discussion of this. See also Vito Lampret's "The Euler-Maclaurin and Taylor Formulas: Twin, Elementary Derivations" (Mathematics Magazine 74 (2): 109-122, 2001).
Perhaps not really an application, but the definition of the derivative of a distribution is based on partial integration:
if $u\in C^1(X)$ and $\phi\in C^\infty_c(X)$ is a test function, then
$\left<\partial_i u,\phi\right>=\int\phi\partial_i u=-\int u\partial_i\phi=-\left<u,\partial_i\phi\right>$ by partial integration.
Extending this, for a distribution $u$ we then define its derivative $\partial_i u$ by this formula.
Highbrow: Derivation of the Euler-Lagrange equations describing how a physical system evolves through time from Hamilton's Least Action Principle.
Here's a very brief summary. Consider a very simple physical system consisting of a point mass moving under the force of gravity, and suppose you know the position $q$ of the point at two times $t_0$ and $t_f$. Possible trajectories of the particle as it moves from its starting point to its ending point correspond to curves $q(t)$ in $\mathbb{R}^3$.
One of these curves describes the physically-correct motion, wherein the particle moves in a parabolic arc from one point to the other. Many curves completely defy the laws of physics, e.g. the point zigs and zags like a UFO as it moves from one point to the other.
Hamilton's Principle gives a criterion for determining which curve is the physically correct trajectory: it is the curve $q(t)$ satisfying the variational principle
$$\min_q \int_{t_0}^{t_f} L(q, \dot{q}) dt$$ subject to the constraints $q(t_0) = q_0, q(t_f) = q_f$, where $L$ is a scalar-valued function known as the Lagrangian that measures the difference between the kinetic and potential energy of the system at a given moment of time. (Pedantry alert: despite being historically called the "least" action principle, really instead of minimizing we should be extremizing; i.e. all critical points of the above functional are physical trajectories, even those that are maxima or saddle points.)
It turns out that a curve $q$ satisfies the variational principle if and only if it is a solution to the ODE $$ \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0,$$ roughly equivalent to the usual Newton's Second Law $ma-F=0$, and the key step in the proof of this equivalence is integration by parts. What is remarkable here is that we started with a boundary-value problem -- given two positions, how did we get from one to the other? -- and ended with an ODE, an initial-value problem -- given an initial position and velocity, how does the point move as we advance through time?
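The Euler-Lagrange computation can be sketched symbolically with `sympy`. Here I use the free-fall Lagrangian $L = \tfrac12 m\dot q^2 - mgq$ (my choice of example) and recover Newton's law $m\ddot q = -mg$:

```python
import sympy as sp

t, m, g = sp.symbols('t m g', positive=True)
q = sp.Function('q')(t)
qdot = sp.Derivative(q, t)

# Free-fall Lagrangian: kinetic minus potential energy
L = sp.Rational(1, 2) * m * qdot ** 2 - m * g * q

# Euler-Lagrange expression: d/dt (dL/dqdot) - dL/dq
el = sp.diff(L, qdot).diff(t) - sp.diff(L, q)

# This is exactly m*q'' + m*g, i.e. Newton's second law m*q'' = -m*g
assert sp.simplify(el - m * (sp.Derivative(q, (t, 2)) + g)) == 0
```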
My favorite example is getting an asymptotic expansion: for example, suppose we want to compute $\int_x^\infty e^{-t^2}\cos(\beta t)dt$ for large values of $x$. Integrating by parts multiple times we end up with $$ \int_x^\infty e^{-t^2}\cos(\beta t)dt \sim e^{-x^2}\sum_{k=1}^\infty(-1)^n\frac{H_{k-1}(x)}{\beta^k} \begin{cases} \cos(\beta x) & k=2n \\ \sin(\beta x) & k=2n+1 \end{cases}$$ where the Hermite polynomials are given by $H_n(x) = (-1)^ne^{x^2}\frac{d^n}{dx^n}e^{-x^2}$.
This expansion follows mechanically by applying integration by parts repeatedly, and it gives a nice asymptotic expansion (which is divergent as a power series).
A lowbrow favorite of mine:
$$\int \frac{1}{x} dx = \frac{1}{x} \cdot x - \int x \cdot\left(-\frac{1}{x^2}\right) dx = 1 + \int \frac{1}{x} dx$$
Therefore, $1=0$. (The paradox dissolves once you remember that indefinite integrals are only determined up to an additive constant.)
A bit more highbrow, I like the use of partial integration to establish recursive formulas for integrals.
Highbrow: Integration by parts can be used to compute (or verify) formal adjoints of differential operators. For instance, one can verify, and this was indeed the proof I saw, that the formal adjoint of the Dolbeault operator $\bar{\partial}$ on complex manifolds is $$\bar{\partial}^* = -* \bar{\partial} \,\,\, *, $$ where $*$ is the Hodge star operator, using integration by parts.
My favorite example of integration by parts (there are other nice tricks as well in this example but integration by parts starts it off) is this:
Let $I_n = \displaystyle \int_{0}^{\frac{\pi}{2}} \sin^n(x) dx$.
$I_n = \displaystyle \int_{0}^{\frac{\pi}{2}} \sin^{n-1}(x) d(-\cos(x)) = -\sin^{n-1}(x) \cos(x) |_{0}^{\frac{\pi}{2}} + \int_{0}^{\frac{\pi}{2}} (n-1) \sin^{n-2}(x) \cos^2(x) dx$
The first expression on the right hand side is zero since $\sin(0) = 0$ and $\cos(\frac{\pi}{2}) = 0$.
Now rewrite $\cos^2(x) = 1 - \sin^2(x)$ to get
$I_n = (n-1) (\displaystyle \int_{0}^{\frac{\pi}{2}} \sin^{n-2}(x) dx - \int_{0}^{\frac{\pi}{2}} \sin^{n}(x) dx) = (n-1) I_{n-2} - (n-1) I_n$.
Rearranging, we get $n I_n = (n-1) I_{n-2}$, i.e. $I_n = \frac{n-1}{n}I_{n-2}$.
Using this recurrence we get $$I_{2k+1} = \frac{2k}{2k+1}\frac{2k-2}{2k-1} \cdots \frac{2}{3} I_1$$
$$I_{2k} = \frac{2k-1}{2k}\frac{2k-3}{2k-2} \cdots \frac{1}{2} I_0$$
$I_1$ and $I_0$ can be directly evaluated to be $1$ and $\frac{\pi}{2}$ respectively and hence,
$$I_{2k+1} = \frac{2k}{2k+1}\frac{2k-2}{2k-1} \cdots \frac{2}{3}$$
$$I_{2k} = \frac{2k-1}{2k}\frac{2k-3}{2k-2} \cdots \frac{1}{2} \frac{\pi}{2}$$
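A quick numerical check of both the recurrence and the closed forms (the particular indices $n=4,5,6$ are my arbitrary choices):

```python
import math
from scipy.integrate import quad

def I(n):
    # I_n = ∫_0^{π/2} sin^n(x) dx
    return quad(lambda x: math.sin(x) ** n, 0, math.pi / 2)[0]

# The recurrence I_n = (n-1)/n * I_{n-2}
assert abs(I(6) - 5 / 6 * I(4)) < 1e-10

# Closed forms: I_5 = (4/5)(2/3)·1 = 8/15,  I_4 = (3/4)(1/2)·(π/2) = 3π/16
assert abs(I(5) - 8 / 15) < 1e-10
assert abs(I(4) - 3 * math.pi / 16) < 1e-10
```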
\begin{align} \int_{-\infty}^{\infty}\frac{\sin^{2}(x)}{x^{2}}\,dx &= \left.-\,\frac{\sin^{2}(x)}{x}\right\vert_{-\infty}^{\infty} + \int_{-\infty}^{\infty}\frac{2\sin(x)\cos(x)}{x}\,dx = \int_{-\infty}^{\infty}\frac{\sin(2x)}{x}\,dx \\[3mm] &= \int_{-\infty}^{\infty}\frac{\sin(x)}{x}\,dx \end{align}
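Both sides equal $\pi$ (the Dirichlet integral). A numerical sketch, truncating the domain since the integrand decays like $1/x^2$ and integrating period by period for robustness:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.sinc(x / np.pi) ** 2   # (sin x / x)^2, finite at x = 0

# Integrate period by period out to 128π, then double by symmetry
val = 2 * sum(quad(f, k * np.pi, (k + 1) * np.pi)[0] for k in range(128))

# Truncation tail beyond ±128π is O(1/400)
assert abs(val - np.pi) < 0.01
```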
Integrating by parts is how one discovers the adjoint of a differential operator, and it thus becomes the foundation for the marvelous spectral theory of differential operators. This has always seemed to me to be both elementary and profound at the same time.
This is one of many integration-by-parts applications/derivations I like. Here it is:
A random variable is said to have a gamma distribution with parameters $(\alpha,\lambda)$, $\lambda\gt 0$, $\alpha\gt 0$, if its density function is given by the following
$$ f(x)= \begin{cases} \dfrac{\lambda e^{-\lambda x}(\lambda x)^{\alpha-1}}{\Gamma(\alpha)} & \text{for }~x\ge 0 \\[2mm] 0 & \text{for }~x\lt 0 \end{cases} $$
where $\Gamma(\alpha),$ called the gamma function is defined as
$$ \Gamma(\alpha) = \int_{0}^{\infty} \! e^{-y} y^{\alpha-1}\, \mathrm{d}y $$
Integrating $\Gamma(\alpha)$ by parts yields the following
$$ \begin{array}{ll} \Gamma(\alpha) &=\; -e^{-y} y^{\alpha-1} \Bigg|_{0}^{\infty}~+~\int_{0}^{\infty} \! e^{-y} (\alpha-1)y^{\alpha-2}\,\mathrm{d}y \\ \\ \;&=\; (\alpha-1) \int_{0}^{\infty} \! e^{-y} y^{\alpha-2}\,\mathrm{d}y \qquad (1) \\ \\ \;&=\; (\alpha-1) \Gamma(\alpha-1) \end{array} $$
For integral values of $\alpha$, say $\alpha=n$, we obtain, by applying Equation $(1)$ repeatedly,
\[ \begin{array}{llll} \Gamma(n)&=(n-1)\Gamma(n-1) \\ &=(n-1)(n-2)\Gamma(n-2) \\ &=\ldots \\ &=(n-1)(n-2)\ldots3~\cdot~2\Gamma(1) \end{array} \]
Since $\Gamma(1)=\int_{0}^{\infty} \! e^{-x}~\mathrm{d}x=1,$ it follows that, for integral values of $n$,
\[ \Gamma(n)=(n-1)! \]
Hope you enjoy reading $\ldots$ :)
Lowbrow: $\int\sin(x)\cos(x)dx=\sin^2x-\int\sin(x)\cos(x)dx+C$.
Finding the unknown integral again after integrating by parts is an interesting case: solving the resulting equation immediately gives $\int\sin(x)\cos(x)\,dx=\dfrac12\sin^2 x+C$.
There are a couple of applications in PDEs that I am quite fond of. As well as verifying that the Laplace operator $-\Delta$ is positive on $L^2$, I like the application of integration by parts in the energy method to prove uniqueness.
Suppose $U$ is an open, bounded and connected subset of $\mathbb{R}^n$. Introduce the BVP \begin{equation*} -\Delta u=f~\text{in}~U \end{equation*} with prescribed boundary data on $\partial U$. Suppose $v\in C^2(\overline{U})$ is another solution, and set $w:=u-v$, so that $w$ satisfies the homogeneous problem $-\Delta w=0$ in $U$ with $w=0$ on $\partial U$. Then an application of integration by parts gives us \begin{equation*} 0=-\int_U w\Delta w\,dx=\int_U \nabla w\cdot \nabla w\,dx-\int_{\partial U}w\frac{\partial w}{\partial\nu}\,dS=\int_U|\nabla w|^2\,dx \end{equation*} with outward normal $\nu$ of the set $U$, since $w$ vanishes on $\partial U$. Hence $\nabla w=0$ in $U$, so $w$ is constant; as $w=0$ on the boundary and $U$ is connected, $w\equiv 0$, and we conclude uniqueness of the solution in $U$.
Lowbrow: $\int e^x\sin x\ dx$ and its ilk.
Really simple but nice:
$\int \log (x) dx = \int 1 \cdot \log(x)dx = x \log(x) - \int x \, d(\log(x))=x (\log(x)-1) + C $
also:
$ \int \frac{\log^k(x)}{x}dx = \int \log^k(x)d \log(x)=\frac{\log^{k+1}(x)}{k+1} $
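A quick numerical check of both antiderivatives on $[1, e]$ (my choice of interval, with $k=2$ in the second formula):

```python
import math
from scipy.integrate import quad

# ∫_1^e log(x) dx = [x(log x - 1)]_1^e = 1
val, _ = quad(math.log, 1, math.e)
assert abs(val - 1.0) < 1e-10

# ∫_1^e log(x)^2 / x dx = [log(x)^3 / 3]_1^e = 1/3
val2, _ = quad(lambda x: math.log(x) ** 2 / x, 1, math.e)
assert abs(val2 - 1 / 3) < 1e-10
```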
Integration by parts shows that (modulo a constant) the Fourier transform interchanges differentiation and multiplication by the variable:
$\begin{align*} f'(x) \rightarrow \widehat{f'}(\xi) & = \int_{\mathbb{R}} f'(x)e^{-2 \pi i x \xi}dx\\ & = f(x)e^{-2 \pi i x \xi}|_{-\infty}^{\infty} - \int_{\mathbb{R}} f(x) e^{-2 \pi i x \xi} (-2 \pi i \xi) dx \\ & = (2\pi i \xi) \widehat{f}(\xi) \end{align*}$
where $f(x)e^{-2 \pi i x \xi}|_{-\infty}^{\infty}$ vanishes provided $f$ decays sufficiently fast at infinity.
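A numerical sketch with the Gaussian $f(x) = e^{-\pi x^2}$ (my choice; its transform is $\hat f(\xi) = e^{-\pi\xi^2}$), checking $\widehat{f'}(\xi) = 2\pi i\xi\,\hat f(\xi)$ at the sample point $\xi = 1$:

```python
import math
from scipy.integrate import quad

fp = lambda x: -2 * math.pi * x * math.exp(-math.pi * x * x)   # f'(x)

xi = 1.0
# Real and imaginary parts of ∫ f'(x) e^{-2πixξ} dx (tails beyond ±10 negligible)
re, _ = quad(lambda x: fp(x) * math.cos(2 * math.pi * x * xi), -10, 10)
im, _ = quad(lambda x: -fp(x) * math.sin(2 * math.pi * x * xi), -10, 10)

# Expect (2πiξ) f̂(ξ) = i·2π·e^{-π} at ξ = 1
assert abs(re) < 1e-9
assert abs(im - 2 * math.pi * math.exp(-math.pi)) < 1e-9
```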