
Okay, so everyone knows the usual methods of solving integrals, namely u-substitution, integration by parts, partial fractions, trig substitutions, and reduction formulas. But what else is there? Every time I search for "Advanced Techniques of Symbolic Integration" or "Super Advanced Integration Techniques", I get the same results which end up only talking about the methods mentioned above. Are there any super obscure and interesting techniques for solving integrals?

As an example of something that might be obscure, the formula for "general integration by parts" for $n$ functions $f_j, \ j = 1,\cdots,n$ is given by $$ \int{f_1'(x)\prod_{j=2}^n{f_j(x)}dx} = \prod_{i=1}^n{f_i(x)} - \sum_{i=2}^n{\int{f_i'(x)\prod_{\substack{j=1 \\ j \neq i}}^n{f_j(x)}dx}} $$ which is not necessarily useful nor difficult to derive, but is interesting nonetheless.

So out of curiosity, are there any crazy unknown symbolic integration techniques?

Harry Peter
user3002473

  • This seems a bit relevant: http://web.williams.edu/Mathematics/lg5/Feynman.pdf – skrub Sep 23 '14 at 00:41
  • Sometimes converting integrals into Laplace-transform-type problems is useful. – ClassicStyle Sep 23 '14 at 04:59
  • Using power series expansions of the integrand can be fun. – Avitus Sep 23 '14 at 11:04
  • Related (duplicate?): http://math.stackexchange.com/questions/70974/lesser-known-integration-tricks – Hans Lundmark Sep 23 '14 at 20:32
  • _Irresistible Integrals_ by Boros and Moll has already been cited, but here's a great book hot off the presses: _Inside Interesting Integrals_ by Paul J. Nahin (Springer, 2015). Nahin is a retired electrical/computer engineer who has written some great books that acknowledge and show concern for a mathematician's desire for rigor. – PolyaPal Sep 23 '14 at 21:32
  • Feynman's method of integration is also good. – Aditya Kumar Dec 03 '15 at 04:47

21 Answers


Here are a few. The first one is included because it's not very well known and is not general, though the ones that follow are very general and very useful.


  • A great but not very well known way to find the primitive of $f^{-1}$ in terms of $F$, the primitive of $f$, is the following (very easy to prove: just differentiate both sides and use the chain rule): $$ \int f^{-1}(x)\, dx = x \cdot f^{-1}(x)-(F \circ f^{-1})(x)+C. $$

Examples:

$$ \begin{aligned} \displaystyle \int \arcsin(x)\, dx &= x \cdot \arcsin(x)- (-\cos\circ \arcsin)(x)+C \\ &=x \cdot \arcsin(x)+\sqrt{1-x^2}+C. \end{aligned} $$

$$ \begin{aligned} \int \log(x)\, dx &= x \cdot \log(x)-(\exp \circ \log)(x) + C \\ &= x \cdot \left( \log(x)-1 \right) + C. \end{aligned} $$
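Both examples can be sanity-checked numerically: a central difference of the claimed antiderivative should recover $f^{-1}$. A minimal sketch in plain Python (the helper name `arcsin_antiderivative` is just for illustration):

```python
import math

def arcsin_antiderivative(x):
    # x·arcsin(x) - (-cos ∘ arcsin)(x) = x·arcsin(x) + sqrt(1 - x²)
    return x * math.asin(x) + math.sqrt(1.0 - x * x)

# A central difference of the antiderivative should recover arcsin.
x, h = 0.3, 1e-6
numeric_derivative = (arcsin_antiderivative(x + h) - arcsin_antiderivative(x - h)) / (2 * h)
assert abs(numeric_derivative - math.asin(x)) < 1e-6
```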


  • This one is better known and extremely powerful: it's called differentiation under the integral sign. Most of the time it takes ingenuity to know when and how to apply it, but that only makes it more interesting. The technique uses the simple fact that $$ \frac{\mathrm d}{\mathrm d x} \int_a^b f \left({x, y}\right) \mathrm d y = \int_a^b \frac{\partial f}{\partial x} \left({x, y}\right) \mathrm d y. $$

Example:

We want to calculate the integral $\int_{0}^{\infty} \frac{\sin(x)}{x} dx$. To do that, we unintuitively consider the more complicated integral $\int_{0}^{\infty} e^{-tx} \frac{\sin(x)}{x} dx$ instead.

Let $$ I(t)=\int_{0}^{\infty} e^{-tx} \frac{\sin(x)}{x} dx,$$ then $$ I'(t)=-\int_{0}^{\infty} e^{-tx} \sin(x) dx=\frac{e^{-t x} (t \sin (x)+\cos (x))}{t^2+1}\bigg|_0^{\infty}=\frac{-1}{1+t^2}.$$

Since both $I(t)$ and $-\arctan(t)$ are primitives of $\frac{-1}{1+t^2}$, they must differ only by a constant, so that $I(t)+\arctan(t)=C$. Let $t\to \infty$, then $I(t) \to 0$ and $-\arctan(t) \to -\pi/2$, and hence $C=\pi/2$, and $I(t)=\frac{\pi}{2}-\arctan(t)$.

Finally, $$ \int_{0}^{\infty} \frac{\sin(x)}{x} dx = I(0) = \frac{\pi}{2}-\arctan(0) = \boxed{\frac{\pi}{2}}. $$
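If you want to see the intermediate function $I(t)=\frac{\pi}{2}-\arctan(t)$ confirmed numerically before trusting the differentiation step, here is a rough stdlib-Python check at $t=1$ (the grid size and cutoff are arbitrary choices, not part of the argument):

```python
import math

def damped_sinc(x, t=1.0):
    # integrand e^{-t x}·sin(x)/x, with the removable singularity at 0 patched
    return math.exp(-t * x) * (math.sin(x) / x if x != 0.0 else 1.0)

# Composite trapezoid rule on [0, 40]; the e^{-x} factor makes the tail negligible.
N, X = 200_000, 40.0
h = X / N
total = 0.5 * (damped_sinc(0.0) + damped_sinc(X))
for k in range(1, N):
    total += damped_sinc(k * h)
I1 = h * total

# I(1) = π/2 - arctan(1) = π/4
assert abs(I1 - math.pi / 4) < 1e-6
```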


  • This one is probably the most commonly used "advanced integration technique", and for good reason. It's referred to as the "residue theorem", and it states that if $f$ is holomorphic on and inside a counterclockwise simple closed curve $\gamma$, except at finitely many points $a_1,\dots,a_n$ inside $\gamma$, then $\displaystyle \int_\gamma f(z) dz = 2\pi i \sum_{k=1}^n \operatorname{Res} ( f, a_k )$. It will be difficult for you to understand this one without knowledge of complex analysis, but you can get the gist of it from the wiki article. Example:

    We want to compute $\int_{-\infty}^{\infty} \frac{x^2}{1+x^4} dx$. The poles of our function $f(z)=\frac{z^2}{1+z^4}$ in the upper half plane are $a_1=e^{i \frac{\pi}{4}}$ and $a_2=e^{i \frac{3\pi}{4}}$. The residues of our function at those points are $$\operatorname{Res}(f,a_1)=\lim_{z\to a_1} (z-a_1)f(z)=\frac{e^{-i \frac{\pi}{4}}}{4},$$ and $$\operatorname{Res}(f,a_2)=\lim_{z\to a_2} (z-a_2)f(z)=\frac{e^{-i \frac{3\pi}{4}}}{4}.$$ Let $\gamma$ be the closed path around the boundary of the semicircle of radius $R>1$ in the upper half plane, traversed in the counterclockwise direction. Then the residue theorem gives us ${1 \over 2\pi i} \int_\gamma f(z)\,dz=\operatorname{Res}(f,a_1)+\operatorname{Res}(f,a_2)={1 \over 4}\left({1-i \over \sqrt{2}}+{-1-i \over \sqrt{2}}\right)={-i \over 2 \sqrt{2}}$, so $ \int_\gamma f(z)\,dz= {\pi \over \sqrt{2}}$. Now, writing $z=Re^{it}$ on the circular arc, we have: $$\int_\gamma f(z)\,dz = \int_{-R}^R \frac{x^2}{1+x^4} dx + \int_0^\pi {i (R e^{it})^3 \over 1+(R e^{it})^4} dt = {\pi \over \sqrt{2}}.$$ For the integral on the semicircle $$ \int_0^\pi {i (R e^{it})^3 \over 1+(R e^{it})^4} dt, $$ we have $$ \begin{aligned} \left| \int_0^\pi {i (R e^{it})^3 \over 1+(R e^{it})^4} dt \right| &\leq \int_0^\pi \left| {i (R e^{it})^3 \over 1+(R e^{it})^4} \right| dt \\ &\leq \int_0^\pi {R^3 \over R^4-1} dt={\pi R^3 \over R^4-1}. \end{aligned} $$ Hence, as $R\to \infty$, we have ${\pi R^3 \over R^4-1} \to 0$, and hence $\int_0^\pi {i (R e^{it})^3 \over 1+(R e^{it})^4} dt \to 0$. Finally, $$ \begin{aligned} \int_{-\infty}^\infty \frac{x^2}{1+x^4} dx &= \lim_{R\to \infty} \int_{-R}^R \frac{x^2}{1+x^4} dx \\ &= \lim_{R\to \infty} {\pi \over \sqrt{2}}-\int_0^\pi {i (R e^{it})^3 \over 1+(R e^{it})^4} dt =\boxed{{\pi \over \sqrt{2}}}. \end{aligned} $$
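The boxed value is easy to corroborate numerically; a quick sketch in plain Python (truncation radius and grid are arbitrary; the $2/R$ term is the analytic tail estimate, since the integrand behaves like $1/x^2$ far out):

```python
import math

def f(x):
    return x * x / (1.0 + x ** 4)

# Trapezoid rule on [-R, R], plus the ≈ 2/R tail (f ~ 1/x² far out).
R, N = 200.0, 200_000
h = 2 * R / N
total = 0.5 * (f(-R) + f(R)) + sum(f(-R + k * h) for k in range(1, N))
integral = h * total + 2.0 / R

assert abs(integral - math.pi / math.sqrt(2)) < 1e-4
```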


  • My final "technique" is the use of the mean value property for complex analytic functions, in other words Cauchy's integral formula (here $\gamma$ is a circle around $a$; the second form takes it to be the circle of radius $1$ centered at $a$): $$ \begin{aligned} f(a) &= \frac{1}{2\pi i} \int_\gamma \frac{f(z)}{z-a}\, dz \\ &= \frac{1}{2\pi} \int_{0}^{2\pi} f\left(a+e^{ix}\right) dx. \end{aligned} $$

Example:

We want to compute the very messy looking integral $\int_0^{2\pi} \cos (\cos (x)+1) \cosh (\sin (x)) dx$. We first notice that $$ \begin{aligned} &\hphantom{=} \cos [\cos (x)+1] \cosh [\sin (x)] \\ &=\Re\left\{ \cos [\cos (x)+1] \cosh [\sin (x)] -i\sin [\cos (x)+1] \sinh [\sin (x)] \right\} \\ &= \Re \left[ \cos \left( 1+e^{i x} \right) \right]. \end{aligned} $$ Then, we have $$ \begin{aligned} \int_0^{2\pi} \cos [\cos (x)+1] \cosh [\sin (x)] dx &= \int_0^{2\pi} \Re \left[ \cos \left( 1+e^{i x} \right) \right] dx \\ &= \Re \left[ \int_0^{2\pi} \cos \left( 1+e^{i x} \right) dx \right] \\ &= \Re \left( \cos(1) \cdot 2 \pi \right)= \boxed{2 \pi \cos(1)}. \end{aligned} $$
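Since the integrand is smooth and $2\pi$-periodic, the plain trapezoid rule converges extremely fast, which makes this result cheap to verify numerically (a sketch; $N=64$ nodes is already overkill):

```python
import math

def integrand(x):
    return math.cos(math.cos(x) + 1.0) * math.cosh(math.sin(x))

# For smooth periodic functions the trapezoid rule converges spectrally fast.
N = 64
h = 2 * math.pi / N
total = h * sum(integrand(k * h) for k in range(N))

assert abs(total - 2 * math.pi * math.cos(1.0)) < 1e-12
```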

GDumphart
Fujoyaki
  • +1, how about Monte Carlo integration or something along these lines as well? It could go a long way and provide results not easily derived by other methods (if derived at all). Plus, how about some integration tricks used in quantum mechanics (e.g. Feynman had a couple of tricks)? – Nikos M. Sep 23 '14 at 22:37
  • @NikosM. Monte Carlo does numerical integration, not symbolic. – Jack M Sep 25 '14 at 08:37
  • @JackM, of course; however, it relates a (definite) integral to a random variable, which I think counts as an advanced technique that can derive results not easily derived otherwise. Moreover, the comment was also about techniques used mostly in physics (e.g. differentiation by a parameter, some of Feynman's tricks, etc.). – Nikos M. Sep 25 '14 at 23:55
  • @NikosM. I assumed that OP wanted analytic solutions to his integrals, not numerical approximations. Although, feel free to make a post about it in this thread; numerical analysis is of course a very interesting subject, just not sure if it's what the OP wants :-). – Fujoyaki Sep 25 '14 at 23:59

You can do integration by inverting the matrix representation of the differentiation operator with respect to a clever choice of basis, and then applying the inverse of the operator to the function you wish to integrate.

For example, consider the basis $\mathcal{B} = \{e^{ax}\cos bx, e^{ax}\sin bx \}$. Differentiating with respect to $x$ gives \begin{align*} \frac{d}{dx}e^{ax} \cos bx &= ae^{ax} \cos bx - be^{ax} \sin bx\\ \frac{d}{dx} e^{ax} \sin bx &= ae^{ax} \sin bx + be^{ax} \cos bx \end{align*}

and the matrix representation of the linear operator is

$$T = \begin{bmatrix} a & b\\ -b & a \end{bmatrix}$$

To then solve something like $\int e^{ax}\cos bx\operatorname{d}\!x$, note that this is equivalent to calculating

$$T^{-1}\begin{bmatrix} 1\\ 0 \end{bmatrix}_{\mathcal{B}} = \frac{1}{a^{2} + b^{2}}\begin{bmatrix} a\\ b \end{bmatrix}_{\mathcal{B}}.$$

That is,

$$\int e^{ax}\cos bx\operatorname{d}\!x = \frac{a}{a^{2}+b^{2}}e^{ax}\cos bx + \frac{b}{a^{2} + b^{2}}e^{ax}\sin bx$$
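The $2\times 2$ inverse can be carried out explicitly, and the resulting antiderivative checked by differentiating numerically. A small sketch in plain Python (the function names are mine, not standard):

```python
import math

def antiderivative_coeffs(a, b):
    # T = [[a, b], [-b, a]]  =>  T^{-1} = (1/(a²+b²)) [[a, -b], [b, a]]
    det = a * a + b * b
    # applied to the coordinate vector [1, 0] of e^{ax}·cos(bx):
    return a / det, b / det

a, b = 1.3, 2.7
ca, cb = antiderivative_coeffs(a, b)

def F(x):
    return ca * math.exp(a * x) * math.cos(b * x) + cb * math.exp(a * x) * math.sin(b * x)

# F'(x) should equal e^{ax}·cos(bx); check with a central difference.
x, h = 0.4, 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)
assert abs(numeric - math.exp(a * x) * math.cos(b * x)) < 1e-6
```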

JessicaK
  • This is actually really cool! I didn't quite understand it when you first posted it, but now I see just how awesome it truly is! – user3002473 Dec 17 '14 at 02:41
  • @JessicaK Out of interest, under what inner product are the basis vectors perpendicular? I guess the fact that the basis is linearly independent (required by definition) has something to do with the fact that we can use matrix inversion in place of integration. – gone Jul 04 '21 at 14:01
  • We should teach it in high school. https://math.stackexchange.com/questions/1003358/how-do-you-write-a-differential-operator-as-a-matrix – Brian Cannard Mar 25 '22 at 11:12

Another option is converting the integrand into a summation. For example,

$$ \int{\frac{1}{1 + x^2}dx} = \int\sum_{i = 0}^\infty{(-1)^ix^{2i}}dx = \sum_{i = 0}^\infty(-1)^i\int{x^{2i}}dx = \sum_{i = 0}^\infty \frac{(-1)^ix^{2i+1}}{2i + 1}, \qquad |x|<1.$$

You might then make use of the fact that,

$$\sum_{i = 0}^\infty \frac{(-1)^ix^{2i+1}}{2i + 1} = \tan^{-1}{x}.$$

Of course, you need to be familiar with many different series, which comes with practise. In fact, most derivations of $\arctan(x)$ as a series actually use the method I just used. However, it still serves as an example of the technique.

Another example of this comes through the Riemann zeta function:

Let $u=kx$,

$$\begin{align}\int_0^\infty\frac{x^s}{e^x-1}\ dx&=\int_0^\infty x^se^{-x}\left(\frac1{1-e^{-x}}\right)\ dx\\&=\int_0^\infty x^se^{-x}\sum_{k=0}^\infty e^{-kx}\ dx\\&=\sum_{k=1}^\infty\int_0^\infty x^se^{-kx}\ dx\\&=\sum_{k=1}^\infty\frac1{k^{s+1}}\int_0^\infty u^se^{-u}\ du\\&=\zeta(s+1)\Gamma(s+1)\end{align}$$

A beautiful, non-trivial example of solving an integral through series expansion.
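The $s=1$ case, $\int_0^\infty x/(e^x-1)\,dx=\zeta(2)\Gamma(2)=\pi^2/6$, makes a convenient numerical spot check (a sketch in plain Python; the cutoff and grid are arbitrary):

```python
import math

def integrand(x):
    # x/(e^x - 1), with the removable singularity at 0 patched (limit is 1)
    return x / math.expm1(x) if x != 0.0 else 1.0

# Trapezoid rule on [0, 60]; the tail beyond 60 is of order 60·e^{-60}.
N, X = 120_000, 60.0
h = X / N
total = 0.5 * (integrand(0.0) + integrand(X)) + sum(integrand(k * h) for k in range(1, N))
value = h * total

assert abs(value - math.pi ** 2 / 6) < 1e-6
```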

Simply Beautiful Art
  • What's the name of the theorem that allows you to interchange sum and integral like that? – zerosofthezeta Sep 24 '14 at 04:16
  • See [here](http://math.stackexchange.com/questions/83721/when-can-a-sum-and-integral-be-interchanged). Of course, there are always other ways of showing it. – Sep 24 '14 at 04:49
  • @ChantryCargill I've given you a much less trivial example of how this can be useful ;) In general, you can try looking at the following integral: $$\int_0^\infty\frac{x^s}{e^x+a}\ dx$$ – Simply Beautiful Art Feb 20 '17 at 14:57
  • @SimplyBeautifulArt Thank you for your addition. Looking back, I probably should have provided a non-trivial example, so I am glad you have done so. – Feb 21 '17 at 14:49
  • Yeah, they aren't the easiest things to think of :D – Simply Beautiful Art Feb 21 '17 at 14:51

For a really advanced technique, you may want to read about Risch's algorithm for indefinite integration, which is implemented in the major symbolic mathematics programs.

lhf
  • I assume this is not something to do by hand? – Simply Beautiful Art Feb 17 '17 at 21:11
  • @SimplyBeautifulArt Define "by hand". There are also multiple levels of complexity within the Risch algorithm... We can get more basic results pretty easily, but even most major mathematical software packages don't implement the whole Risch algorithm. – Brevan Ellefsen Feb 19 '17 at 18:46

If $f(x)$ is a continuous function on $(-\infty, +\infty)$ and $\displaystyle\int_{-\infty}^{\infty} f(x)\,dx$ exists, then we can use the following property (a special case of Glasser's master theorem)

\begin{align} \int_{-\infty}^\infty f\left(x\right)\,dx=\int_{-\infty}^\infty f\left(x-\frac{a}{x}\right)\,dx\qquad,\qquad\text{for }\, a>0. \end{align}

Example:

Using the property above for $a=1$, we can solve the following integral \begin{align} \int_{-\infty}^\infty \exp\left(-\frac{(x^2-qx-1)^2}{px^2}\right)\ dx &=\int_{-\infty}^\infty \exp\left(-\frac{(x-x^{-1}-q)^2}{p}\right)\ dx\\ &=\int_{-\infty}^\infty \exp\left(-\frac{(x-q)^2}{p}\right)\ dx\\ &=\int_{-\infty}^\infty \exp\left(-\frac{y^2}{p}\right)\ dy\\ &=\sqrt{p}\int_{-\infty}^\infty \exp\left(-z^2\right)\ dz\\ &=\sqrt{p\pi} \end{align} where $p>0$ and $q\in\mathbb{R}$.
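A numerical spot check of the property with $p=1$, $q=0$, where the answer should be $\sqrt{\pi}$ (a sketch; truncation and grid are arbitrary):

```python
import math

def glasser_integrand(x):
    # exp(-(x - 1/x)²); the value at 0 is the limit 0 (exp(-1/x²)-type decay)
    if x == 0.0:
        return 0.0
    u = x - 1.0 / x
    return math.exp(-u * u)

# Trapezoid rule on [-30, 30]; the Gaussian tails beyond that are negligible.
N, R = 300_000, 30.0
h = 2 * R / N
total = 0.5 * (glasser_integrand(-R) + glasser_integrand(R))
total += sum(glasser_integrand(-R + k * h) for k in range(1, N))
value = h * total

assert abs(value - math.sqrt(math.pi)) < 1e-5
```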

For another technique, see the Dirichlet integral, especially the double-improper-integral method.

Anastasiya-Romanova 秀

Here is a book covering many of the advanced techniques:

Advanced integration techniques

Here are some of the methods included in the book.

Laplace Integration

$$\int^{\infty}_0 \frac{f(t)}{t}\,dt=\, \int^{\infty}_0 \, \mathcal{L}(f (t))\, ds$$

Can be used to show that

$$\int^{\infty}_0 \frac{\sin(t)}{t}\,dt = \frac{\pi}{2}$$

proof

$$\int^{\infty}_0 \frac{\sin(t)}{t}\,dt= \int^{\infty}_{0}\mathcal{L} (\sin(t))\,ds$$

We know that

$$\mathcal{L}(\sin(t)) = \frac{1}{s^2+1}$$

Just substitute in our integral

$$\int^{\infty}_0 \frac{ds}{1+s^2}= \tan^{-1}(s)\big|_{s=\infty}-\tan^{-1} (s)\big|_{s=0}=\frac{\pi}{2}$$

Convolution

$$\mathcal{L}\left((f * g)(t)\right)= \mathcal{L}(f(t)) \mathcal{L}(g (t)) $$

to show that

$$\beta(x+1,y+1)=\int^{1}_{0}t^{x}\, (1-t)^{y}\,dt= \frac{\Gamma(x+1)\Gamma {(y+1)}}{\Gamma{(x+y+2)}}$$

proof

Let us choose some functions $f(t) = t^{x} \,\, , \, g(t) = t^y$

Hence we get

$$(t^x*t^y)= \int^{t}_0 s^{x}(t-s)^{y}\,ds $$

So by definition we have

$$\mathcal{L}\left(t^x*t^y\right)= \mathcal{L}(t^x) \mathcal{L}(t^y ) $$

We can now use the Laplace transform of a power:

$$\mathcal{L}\left(t^x*t^y\right)= \frac{x!\cdot y!}{s^{x+y+2}}$$

Notice that we now need the inverse Laplace transform $\mathcal{L}^{-1}$

$$\mathcal{L}^{-1}\left(\mathcal{L}(t^x*t^y)\right)=\mathcal{L}^{-1}\left( \frac{x!\cdot y!}{s^{x+y+2}}\right)=t^{x+y+1}\frac{x!\cdot y!}{(x+y+1)!}$$

So we have the following

$$(t^x*t^y) =t^{x+y+1}\frac{x!\cdot y!}{(x+y+1)!}$$

By definition we have

$$t^{x+y+1}\frac{x!\cdot y!}{(x+y+1)!} = \int^{t}_0 s^{x}(t-s)^{y}\,ds $$

This looks good; putting $t=1$, we get

$$\frac{x!\cdot y!}{(x+y+1)!} = \int^{1}_0 s^{x}(1-s)^{y}\,ds$$

By using that $n! = \Gamma{(n+1)}$

We arrive happily at our formula

$$ \int^{1}_0 s^{x}(1-s)^{y}\,ds= \frac{\Gamma(x+1)\Gamma(y+1)}{\Gamma(x+y+2)} $$

which can be written as

$$ \int^{1}_0 s^{x-1}(1-s)^{y-1}\,ds= \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)} $$
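The Beta formula is easy to spot-check numerically with `math.gamma` (a sketch; the midpoint rule avoids evaluating the endpoints):

```python
import math

def beta_numeric(x, y, n=200_000):
    # midpoint rule for ∫₀¹ t^x (1-t)^y dt
    h = 1.0 / n
    return h * sum(((k + 0.5) * h) ** x * (1.0 - (k + 0.5) * h) ** y for k in range(n))

x, y = 2.5, 1.5
approx = beta_numeric(x, y)
expected = math.gamma(x + 1) * math.gamma(y + 1) / math.gamma(x + y + 2)
assert abs(approx - expected) < 1e-6
```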

Series expansion

$$\int^z_0 f(x)\, dx = \sum_{k\geq 0} a_k \frac{z^{k+1}}{k+1}, \qquad \text{where } f(x)=\sum_{k\geq 0}a_k x^k$$

Can be used to show that

$$\int^\infty_0\frac{t^{s-1}}{e^t-1}dt = \Gamma(s) \zeta(s)$$

proof

Using the power expansion

$$\frac{1}{1-e^{-t}} = \sum_{n=0}^\infty e^{-nt}$$

Hence we have

$$\int^\infty_0\,e^{-t}t^{s-1}\left(\sum_{n=0}^\infty e^{-nt}\right)\,dt$$

By swapping the series and integral

$$\sum_{n=0}^\infty\int^\infty_0\,t^{s-1}e^{-(n+1)t}\,dt = \Gamma(s) \sum_{n=0}^\infty \frac{1}{(n+1)^s}=\Gamma(s)\zeta(s)$$

Taking Limits

$$\lim_{s \to 0} \int^z_0 t^{s-1} f(t) \,dt = \int^z_0 \frac{f(t)}{t}dt$$

Can be used to prove

$$ \psi(a) = \int^{\infty}_0 \frac{e^{-z}-(1+z)^{-a}}{z}\,dz $$

proof

Introduce the parameter $s$:

$$\lim_{s \to 0} \int^{\infty}_0 (z^{s-1}e^{-z}-z^{s-1}(1+z)^{-a})\,dz $$

The first piece is $\Gamma(s)$ and the second is the Beta integral $\Gamma(s)\Gamma(a-s)/\Gamma(a)$, so

$$\Gamma(s)-\frac{\Gamma(s)\Gamma(a-s)}{\Gamma(a)} = \frac{\Gamma(s+1)}{\Gamma(a)}\left\{\frac{\Gamma(a)-\Gamma(a-s)}{s}\right\}$$

By taking the limit

$$\frac{1}{\Gamma(a)}\lim_{s \to 0}\frac{\Gamma(a)-\Gamma(a-s)}{s} =\frac{1}{\Gamma(a)}\lim_{s \to 0}\frac{\Gamma(a+s)-\Gamma(a)}{s} = \frac{\Gamma'(a)}{\Gamma(a)} = \psi(a)$$
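Since the standard library has `lgamma` but no digamma, $\psi(a)$ can itself be approximated by a central difference of $\log\Gamma$, which gives a self-contained numerical check of the integral representation (a sketch; cutoffs are arbitrary):

```python
import math

A = 3.0

def integrand(z, a=A):
    if z == 0.0:
        return a - 1.0  # limit of (e^{-z} - (1+z)^{-a})/z as z -> 0
    return (math.exp(-z) - (1.0 + z) ** (-a)) / z

# Trapezoid rule on [0, 400]; the tail is of order 1/(3·400³) for a = 3.
N, Z = 200_000, 400.0
h = Z / N
total = 0.5 * (integrand(0.0) + integrand(Z)) + sum(integrand(k * h) for k in range(1, N))
value = h * total

# ψ(A) via a central difference of log Γ
eps = 1e-5
psi = (math.lgamma(A + eps) - math.lgamma(A - eps)) / (2 * eps)
assert abs(value - psi) < 1e-4
```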

Zaid Alyafeai

Odd/even function properties, symmetry of the function about a certain line.

$$\int^a_0f(x) dx=\int^a_0f(a-x) dx$$.

There are probably a couple of others that I have forgotten.

Edit: Never mind, I didn't notice that this asked for 'indefinite' integration; my apologies.

A neat trick is to sometimes multiply the integrand by a factor of one (as is done when integrating the secant and cosecant functions).

qwr
Sherlock Holmes
  • I wish he hadn't specified indefinite, to be honest. – Sep 23 '14 at 01:24
  • @ChantryCargill Yeah, you know what, indefinite is a little restricting, isn't it? I'll edit the question to make it more general. – user3002473 Sep 23 '14 at 01:36

There are many integration techniques, ranging from exact analytical methods like contour integration, change of variables, convolution techniques, and stochastic integration, to approximate analytic methods using asymptotic expansions, continued fractions, Laplace's method, and more. A good detailed coverage of this material can be found in Daniel Zwillinger's _The Handbook of Integration_. If you find the latter too technical (which may be the case if you didn't do any complex analysis), then you may try _Irresistible Integrals_ by George Boros and Victor Moll, which would be a much friendlier read.

Hakim

According to Wikipedia, the "tangent half-angle substitution" (a.k.a. the Weierstrass substitution) is the "world's sneakiest substitution". It consists of substituting $t=\tan(x/2)$, i.e. $x=2\arctan(t)$, which allows you to write $\sin(x) = 2t/(1+t^2)$, $\cos(x) = (1-t^2)/(1+t^2)$, and $dx = \frac{2}{1+t^2}\, dt$. An example (poached from Wikipedia) is the following: $$ \begin{eqnarray} \int \csc(x) dx & = & \int \frac{dx}{\sin(x)} \\ & = & \int \left(\frac{2t}{1+t^2}\right)^{-1} \frac{2}{1+t^2} dt \\ & = & \int \frac{dt}{t} \\ & = & \ln(t) +C\\ & = & \ln(\tan(x/2)) + C \end{eqnarray} $$
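The half-angle identities behind the substitution, and the resulting antiderivative, can both be checked numerically in a few lines (a sketch in plain Python):

```python
import math

# Half-angle identities used by the substitution, checked at x = 1:
x = 1.0
t = math.tan(x / 2.0)
assert abs(math.sin(x) - 2 * t / (1 + t * t)) < 1e-12
assert abs(math.cos(x) - (1 - t * t) / (1 + t * t)) < 1e-12

# The resulting antiderivative: d/dx ln(tan(x/2)) should give csc(x) = 1/sin(x).
def F(y):
    return math.log(math.tan(y / 2.0))

h = 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)
assert abs(numeric - 1.0 / math.sin(x)) < 1e-6
```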


Also, it's worth mentioning that for integrands which possess a symmetry, one can often evaluate the integral using ideas from representation theory. A simple example of this is the fact that the integral of an odd function over a domain symmetric about the origin vanishes. The symmetry here is parity.

A much more sophisticated example is when the integrand consists of functions with symmetry under a representation of the rotation group--e.g. spherical harmonics. The analysis of such integrals goes under the name "Wigner-Eckart Theorem", and is extremely important in physics.

The most fruitful application of this technique is usually to show that an integral is in fact $0$. In cases where an integral is non-zero, symmetry may tell you some information but not a complete answer. E.g. for an integral like $\int_{-1}^1 f(x) dx$ where $f(x)$ is even, you can't say what the exact value of the integral is, but you can say that it equals $2\int_0^1 f(x) dx$.

Yly

The use of residues to evaluate real integrals. There are many theorems that can be applied to various cases. If I get to it, I'll post some examples from this website. Later.

OK, here is an example. First a theorem.

If $p(x)$ and $q(x)$ are real polynomials such that the degree of $q(x)$ is at least 2 more than the degree of $p(x)$, and if $q(x)=0$ has no real roots, then

$$ \int_{- \infty}^{\infty} \frac{p(x)}{q(x)}\,dx = 2 \pi i \sum \left\{ \text{residues of } p(z)/q(z) \text{ at its poles in the upper half plane} \right\}.$$

Now apply this to find $$\int_{- \infty}^{\infty} \frac{x^2}{\left(x^2+a^2 \right) \left(x^2+b^2 \right) } dx$$

The only poles of $$\frac{z^2}{\left(z^2+a^2 \right) \left(z^2+b^2 \right) } $$

are at $z= \pm ai, z= \pm bi.$ Only $ai$ and $bi$ are in the upper half plane. For $z=ai$ the residue is $$ \lim\limits_{z \to ai} (z-ai) \frac{z^2}{\left(z-ai \right)\left(z+ai \right) \left(z^2+b^2 \right)}=\frac{-a^2}{2ai \left(-a^2+b^2 \right) }=\frac{a}{2i \left(a^2-b^2 \right)}$$

Similarly, the other residue is $$ \frac{b}{2i \left(b^2-a^2 \right)}$$

Therefore, the value of the integral is

$$ 2 \pi i \left[ \frac{a}{2i \left(a^2-b^2 \right) } + \frac{b}{2i \left(b^2-a^2 \right)} \right] =\frac{\pi}{a+b}$$
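With $a=1$, $b=2$ the answer should be $\pi/3$, which a crude quadrature confirms (a sketch; the $2/R$ term estimates the $1/x^2$ tail):

```python
import math

A, B = 1.0, 2.0

def f(x):
    return x * x / ((x * x + A * A) * (x * x + B * B))

# Trapezoid rule on [-R, R], plus the analytic ≈ 2/R tail correction.
R, N = 200.0, 200_000
h = 2 * R / N
total = 0.5 * (f(-R) + f(R)) + sum(f(-R + k * h) for k in range(1, N))
value = h * total + 2.0 / R

assert abs(value - math.pi / (A + B)) < 1e-4
```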


Reference: _Advanced Engineering Mathematics_, Wylie and Barrett.

soakley

I personally like the tactic of differentiation under the integral sign. Allow me to provide an example.

$$\frac{d}{du}\int_a^b f(u,x)\text{ dx}=\int_a^b \frac{\partial}{\partial u}f(u,x)\text{ dx}$$

This can be used to solve some extremely strange integrals, one of the easiest might be:

Prove $$I=\int_0^1 x\log x\text{ dx}=-\frac14$$

One solves this by first noticing that, for $a>0$,

$$I(a):=\int_0^1 x^{a-1}\text{ dx}=a^{-1}.$$

We can note from this that $$I'(a)=\int_0^1 x^{a-1}\log x \text{ dx}=\left[a^{-1}\right]'=-a^{-2}$$

Evaluating $I'(a)$ at $a=2$ yields the result above.
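A two-line numerical check of the target integral (a sketch; the midpoint rule sidesteps the logarithm's singularity at $0$):

```python
import math

# Midpoint rule for ∫₀¹ x·ln(x) dx; the integrand extends continuously by 0 at x = 0.
n = 100_000
h = 1.0 / n
value = h * sum(((k + 0.5) * h) * math.log((k + 0.5) * h) for k in range(n))

assert abs(value - (-0.25)) < 1e-6
```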


How does one integrate $f(x) = \frac 1x \sin( \frac 1{x^3})$? It certainly isn't Lebesgue integrable. For effect, here is a graph of $f(x)$ on $[-0.6,0.6]$:

(graph of $f(x)$ on $[-0.6,0.6]$ omitted)

Integrating $f(x)$ requires knowledge of the Henstock-Kurzweil integral, AKA the gauge integral. It is more general (though less well-behaved) than the Lebesgue integral. You can read about it here and here. The first source states it best, so I will quote it:

Traditional Riemann integration, while powerful, leaves us with much to be desired. The class of functions that can be evaluated using Riemann's technique, for example, is very small. Another problem is that a convergent sequence of Riemann integrable functions (we will denote this class of functions as R-integrable) does not necessarily converge to an $R$-integrable function, and furthermore the fundamental theorem of calculus is not general; that is to say, when integrating a function $f$ we often find a function $F$ such that $F' = f$ and $\int_a^b f = F(b) - F(a)$ to evaluate.

The problem with the Riemann technique is that $f$ may have a primitive $F$ but that does not guarantee that it is $R$-integrable which prevents us from applying the above equation. There have been many steps to cover and fix the holes left by Riemann. Lebesgue, Perron, and Denjoy all made major advancements in the theory of integration; the later two generalized the fundamental theorem of calculus, fixing the latter problem. The techniques they used, however, were inaccessible and complicated.

In the 1960’s Kurzweil and Henstock came up with a new integration technique that is so powerful it includes every function the others can integrate. The technique had the added advantage of being simple, requiring only slightly more effort to learn than the Riemann integral. There was in fact a (failed) movement to replace the teaching of the Riemann integral with that of the Kurzweil-Henstock integral (also called generalized Riemann integral and gauge integral).


Differentiating under the integral. Albeit not technically an "indefinite integral" technique, you can still use it for those purposes.

Here is a link of how it is used:

http://ocw.mit.edu/courses/mathematics/18-304-undergraduate-seminar-in-discrete-mathematics-spring-2006/projects/integratnfeynman.pdf

Enigma

This help page lists the methods that the computer algebra system Maple uses: http://www.maplesoft.com/support/help/Maple/view.aspx?path=int/methods (full disclosure: I work for them). One interesting method that I, for one, didn't learn in university, is the Meijer G method. The idea here is that essentially all elementary and special functions can be expressed as so-called Meijer G functions, by using appropriate parameters. And there are relatively straightforward rules for finding the antiderivative of a Meijer G function. So that gives you your symbolic result in terms of Meijer G functions. The tricky part, then, is to express this in terms of the familiar standard functions and special functions.

Erik P.

I'm not an expert in integration, but I have learnt a few tricks which work quite well in a few situations.

$1.$

If $f(x)$ is a continuous function satisfying $f(x+\pi)=f(x-\pi)=f(x)$ in the interval $0\le x\lt \infty$, then

$$\int_0^\infty \frac{\sin^2x}{x^2}f(x)\mathrm dx=\int_0^\infty \frac{\sin x}{x}f(x)\mathrm dx=\int_0^{\pi/2} f(x)\mathrm dx$$, whose proof can be found here.

$2.$

$$\beta(s)\Gamma(s)=\int_0^\infty \frac{x^{s-1}e^{-x}}{1+e^{-2x}}\mathrm dx$$, which I proved in this answer. $$\eta(s)\Gamma(s)=\int_0^\infty \frac{x^{s-1}}{e^x+1}\mathrm dx$$ $$\zeta(s)\Gamma(s)=\int_0^\infty \frac{x^{s-1}}{e^x-1}\mathrm dx$$

An example :

\begin{align}\int_0^\infty \frac{\sin x\cos^2 x}{x}\mathrm dx&=\int_0^{\pi/2} \cos^2 x\mathrm dx\\&=\frac{\pi}{4}\end{align}


A nice telescoping technique with an auxiliary function, which you can use for reducing integration to differentiation.

The technique lets you chip away at evaluating an integral when you have a good estimate of its major asymptotic part, i.e. something substantial to start from.

For known and chosen $g(x)$ (typically it has to be a good guess, so that the successive derivatives start diminishing):

$$ \left ( g(x)\sum_{n=0}^{+\infty}{f_n(x)} \right )' = g(x)\sum_{n=0}^{+\infty}{f_n'(x)} + g'(x)\sum_{n=0}^{+\infty}{f_n(x)} $$

Now take:

$$f_{n+1}(x) = - f_n'(x)\frac{g(x)}{g'(x)}=- \frac{f_n'(x)}{(\ln g(x))'}, \qquad g'(x)f_0(x)=f(x)$$

and you would telescopically cancel it all:

$$g(x)\sum_{n=0}^{+\infty}{f_n'(x)} + g'(x)\sum_{n=0}^{+\infty}{f_n(x)} = g(x)\sum_{n=0}^{+\infty}{f_n'(x)} + g'(x)\sum_{n=1}^{+\infty}{f_n(x)} + g'(x)f_0(x) = $$ $$g(x)\sum_{n=0}^{+\infty}{f_n'(x)} - g(x)\sum_{n=0}^{+\infty}{f_n'(x)} + g'(x)f_0(x) = f(x)$$

So it makes

$$ \left ( g(x)\sum_{n=0}^{+\infty}{f_n(x)} \right )' = f(x) $$

or

$$ \int f(x) \, dx = g(x) \sum_{n=0}^{+\infty}{f_n(x)}$$

repeating the recursion:

$$f_0(x)=\frac{f(x)}{g'(x)}, \qquad f_{n+1}(x) = - \frac{f_n'(x)}{(\ln g(x))'}$$

Here the choice of $g(x)$ is essential. You can adjust this schema into something more suitable for some specific functions as the steps and choice of auxiliary function are quite arbitrary.

In order not to repeat myself here are two examples (by the same author):

https://math.stackexchange.com/a/2625348/944732 https://math.stackexchange.com/a/4183421/944732

Applied to $\int e^{x^2} dx$ it gives essentially the asymptotic expansion of Dawson integral.

For asymptotic purposes you are trying your best to get

$$f_{n+1}(x)=o(f_{n}(x)), \qquad \lim_{x \to \infty} f_n(x) = 0, \qquad \left( g(x) \sum_{k=0}^{N}{f_k(x)} \right)' = O(f(x)) \quad \text{for } n > \text{some small } N.$$

Alex Peter

There is one more useful integral relation from Fourier analysis; it's Plancherel's Theorem.

Let $f,g\in L^2$ and let $\mathcal{F}_2$ be the extension of the Fourier transform $\mathcal{F}$ to $L^2$ ($\mathcal{F}$ is usually first defined only on $L^1$ or the Schwartz space). Then Plancherel's identity

$\langle \mathcal{F}_2[f],\mathcal{F}_2[g]\rangle = \langle f, g\rangle$

holds.

Example

It is

$$\mathcal{F}[1/(\lambda+i\kappa)](x) = \sqrt{2\pi}\Theta(x)e^{-\kappa x},\\ \mathcal{F}[e^{-\lambda^2/2}](x) = e^{-x^2/2}.$$

Therefore we find

$$\int_{-\infty}^\infty \frac{e^{-\lambda^2/2}}{\lambda + i \kappa}\,d\lambda = \sqrt{2\pi} \int_0^\infty e^{-x^2/2 - \kappa x}\,dx.$$

The latter integral is readily solved by completing the square.

In general, Plancherel's theorem is useful when integrating the product of two seemingly unrelated functions (e.g. a rational and an exponential function) which have simpler representations in Fourier space.

manthano

For instance, horseshoe integration. I am not familiar with the technique, but it allows one to derive that

$$\int_0^\infty\frac{x-\sinh x}{x^2 \sinh x}\,dx=-\ln 2$$

Here is the link to the derivation: https://qr.ae/TSkU0u

Anixx

If we have $$u(x)=\prod_{i=1}^nf_i(x)$$ Then, through the product rule: $$u'(x)=\sum_{i=1}^{n}\prod_{j=1}^{n}f_j^{(\delta_{ij})}(x)$$ Where $\delta_{ij}$ is the Kronecker delta, $f^{(0)}(x)=f(x)$, and $f^{(1)}(x)=f'(x)$.

Thus, $$\int\sum_{i=1}^{n}\prod_{j=1}^{n}f_j^{(\delta_{ij})}(x)dx=\prod_{i=1}^nf_i(x)+C$$ It's really specific, and I haven't used it yet, but it's still a pretty neat formula (especially with that product notation).

Here's another. Assume $I=\{1,2,\dots,n\}$ for any $n\in\Bbb N$, and that the $\alpha_i$, $i\in I$, are pairwise distinct. $$G=\int\prod_{i\in I}\frac{1}{x-\alpha_i}dx$$ Using partial fraction decomposition, we get $$G=\sum_{i\in I}\beta_{i}\int\frac{dx}{x-\alpha_i}$$ Where $$\beta_{i}=\prod_{\substack{j\in I\\ j\neq i}}\frac{1}{\alpha_i-\alpha_j}$$ Then of course, $$G=\sum_{i\in I}\beta_{i}\ln|x-\alpha_i|+C$$ Which is also pretty fun to look at (but is also a little more useful).
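The coefficients $\beta_i=\prod_{j\neq i}(\alpha_i-\alpha_j)^{-1}$ are straightforward to compute, and differentiating the closed form numerically recovers the original product. A sketch in plain Python (the roots are an arbitrary example):

```python
import math

alphas = [1.0, 2.0, 4.0]  # pairwise distinct, as required

def beta(i):
    # β_i = Π_{j≠i} 1/(α_i - α_j)
    p = 1.0
    for j, aj in enumerate(alphas):
        if j != i:
            p /= alphas[i] - aj
    return p

def G(x):
    # Σ β_i · ln|x - α_i|
    return sum(beta(i) * math.log(abs(x - a)) for i, a in enumerate(alphas))

# G'(x) should equal Π 1/(x - α_i); check with a central difference at x = 3.
x, h = 3.0, 1e-6
numeric = (G(x + h) - G(x - h)) / (2 * h)
direct = 1.0
for a in alphas:
    direct /= x - a
assert abs(numeric - direct) < 1e-6
```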

clathratus

Taylor

Never forget the power of Taylor

$$ \int f(x) \, dx = \int f(a)+\sum_{n=1}^{+\infty}\frac{f^{(n)}(a)}{n!}(x-a)^n \, dx = c + xf(a)+\sum_{n=1}^{+\infty}\frac{f^{(n)}(a)}{(n+1)!}(x-a)^{n+1}$$

Since $c$ is arbitrary take $c=-af(a)$ and have:

$$ \int f(x) \, dx = \sum_{n=1}^{+\infty}\frac{f^{(n-1)}(a)}{n!}(x-a)^{n}$$

A fully symmetrical form.

Useful in more than one situation, especially for numerical calculation.

Particularly for symbolic evaluation, if $P_n$ is some polynomial of degree $n$ then this is suitable for Taylor:

$$\int \frac{P_n(x)}{(x − a)^{n+1}} \, dx$$
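For $f=\exp$ and $a=0$, the symmetric form collapses to $\sum_{n\ge 1}x^n/n! = e^x-1$, which is a quick way to see the formula in action (a sketch; 30 terms is far more than needed at this $x$):

```python
import math

def antiderivative_series(x, terms=30):
    # Σ_{n≥1} f^{(n-1)}(0)/n! · x^n with f = exp: every derivative is 1 at 0.
    return sum(x ** n / math.factorial(n) for n in range(1, terms + 1))

x = 1.3
assert abs(antiderivative_series(x) - (math.exp(x) - 1.0)) < 1e-12
```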

Chebyshev

For rational exponents $m,n,p$, the integral

$$ \int x^{m}(a+bx^n)^p \, dx $$

can be reduced to the integral of a rational function in these cases only:

  • $p$ is an integer: set $x = z^K$, where $K$ is the common denominator of the fractions $m$ and $n$.

  • $\frac{m+1}{n}$ is an integer: set $a+bx^n = z^K$, where $K$ is the denominator of the fraction $p$.

  • $\frac{m+1}{n}+p$ is an integer: set $ax^{-n}+b = z^K$, where $K$ is the denominator of the fraction $p$.

Alex Peter
  • It seems like you answered after an answer was accepted. – Tyma Gaidash Jul 04 '21 at 13:55
  • @TymaGaidash I did not understand this question as the one with an accepted answer. There are too many interesting techniques to have all of them in just one answer. – Alex Peter Jul 04 '21 at 13:59

One approach suited to definite integration is to plug an equidistributed sequence into the general Riemann-sum definition. Although this method resembles Monte Carlo, the difference is that you do not need a random number generator. To simplify things, I will assume that we ask for a definite integral between two integers; it is a trivial task to make it completely generic by substitution. First we split the integral into integer segments:

$$\int_{0}^{N} f(x)\, dx = \sum_{k=0}^{N-1} \int_{0}^{1} f(k+x)\, dx $$

And now, since all we need is an equidistributed sequence take any irrational number $\alpha$ and plug it in, since we know that $\{n\alpha\}$ is equidistributed modulo $1$.

$$\int_{0}^{N} f(x)\, dx = \lim_{M \to \infty}\frac1{M}\sum_{k=0}^{N-1}\sum_{n=1}^{M} f(k+\{n\alpha\}) $$

Example:

$$\int_{0}^{10} x^x\, dx \approx \frac1{100}\sum_{k=0}^{9}\sum_{n=1}^{100} (k+\{n\sqrt{2}\})^{k+\{n\sqrt{2}\}} $$

The integral is about $3.05 \cdot 10^9$ while the sum is about $3.02 \cdot 10^9$
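The scheme above can be packed into a few lines; here is a sketch on a tamer integrand, $\int_0^2 x^2\,dx = 8/3$ (the value of $M$ and the tolerance are arbitrary choices; convergence is only about as fast as the discrepancy of $\{n\sqrt 2\}$):

```python
import math

def weyl_integral(f, N, M, alpha=math.sqrt(2)):
    # (1/M) Σ_k Σ_n f(k + {n·alpha}): a Riemann sum driven by an
    # equidistributed sequence instead of a random or uniform grid.
    total = 0.0
    for n in range(1, M + 1):
        frac = (n * alpha) % 1.0
        for k in range(N):
            total += f(k + frac)
    return total / M

approx = weyl_integral(lambda x: x * x, N=2, M=100_000)
assert abs(approx - 8.0 / 3.0) < 5e-3
```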