
If a univariate function $f(x)$ is differentiable, we denote its derivative by $\frac{\mathrm{d}}{\mathrm{d}x}f(x)$ and its integral by $\int f(x)\,\mathrm{d}x$. If the function is multivariate, we denote its partial derivative by $\frac{\partial}{\partial x_i}f(x_1,\cdots,x_i,\cdots,x_n)$ and its total derivative by $\frac{\mathrm{d}}{\mathrm{d}x_i}f(x_1,\cdots,x_i,\cdots,x_n)$.

Now this is my question:

What are the inverse operations of the partial derivative and the total derivative of a multivariate function? Do we have such things as a "partial integral" or a "total integral" of a multivariate function? If so, what do we call these operations, and what are the agreed-upon notations for them?

Hamed Begloo
  • Related: https://math.stackexchange.com/questions/606679/is-there-such-a-thing-as-partial-integration and https://en.wikipedia.org/wiki/Partial_derivative#Antiderivative_analogue – Biggs May 26 '17 at 20:56
  • @Programmer2134: The crux of the issue, in my opinion, is a very fundamental one, along the lines of "what do $x$ and $y$ actually mean?", which doesn't get a satisfactory treatment (or, really, much notice at all) in the context of the language learned in introductory calculus. Derivatives and integrals, particularly in multi-variable calculus, just happen to be, for most people, the first time that this deficiency starts to cause big problems. –  May 27 '17 at 05:25

3 Answers


Assume for the moment that $x$, $y$, $z$ are independent variables such that $(x,y,z)$ ranges over an open convex set of ${\mathbb R}^3$.

${\bf 1.\ }$ If you are given a function $(x,y,z)\mapsto f(x,y,z)$ then the indefinite integral $$\int f(x,y,z)\>dx\tag{1}$$ denotes the set of all functions $(x,y,z)\mapsto F(x,y,z)$ satisfying the condition (or PDE) $${\partial F\over\partial x}(x,y,z)=f(x,y,z)\ .\tag{2}$$ If $F_0$ is a solution of $(2)$, found by guessing or by formally integrating $f$ with respect to $x$, then the general solution of $(2)$ is given by $$F(x,y,z)=F_0(x,y,z)+G(y,z)\ ,$$ where $G$ is an arbitrary (sufficiently smooth) function of the variables $y$ and $z$.
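For readers who want to experiment, here is a minimal sympy sketch of point ${\bf 1}$ (the integrand is a hypothetical example): `integrate` produces one particular solution $F_0$ of $(2)$ and, like the usual notation, leaves the $G(y,z)$ term implicit.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = 2*x*y + z                # a hypothetical integrand f(x, y, z)

# integrate(f, x) holds y and z fixed and returns one particular
# solution F0 of the PDE dF/dx = f.
F0 = sp.integrate(f, x)      # x**2*y + x*z
assert sp.diff(F0, x) == f

# The general solution is F0 + G(y, z) for an arbitrary (smooth) G;
# sympy, like the indefinite-integral notation, leaves it implicit.
```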

${\bf 2.\ }$ If $(x,y,z)\mapsto F(x,y,z)$ is a scalar function then ${d\over dx}F(x,y,z)$ makes no sense. There is the derivative or differential $dF(x,y,z)$ of $F$, which, for each ${\bf p}=(x,y,z)$ in the domain of $F$, is a linear functional on the tangent space $T_{\bf p}$. The geometric representation of this functional is the gradient vector $$\nabla F({\bf p})=\left({\partial F\over\partial x},{\partial F\over\partial y},{\partial F\over\partial z}\right)_{\bf p}\ .$$ Note that $(x,y,z)\mapsto \nabla F(x,y,z)$ constitutes a vector field on the domain in question.

${\bf 3.\ }$ Reversing the operation of taking the gradient means finding, for a given vector field $${\bf f}(x,y,z)=\bigl(u(x,y,z),v(x,y,z),w(x,y,z)\bigr)\ ,$$ a scalar function $F$ such that $$\nabla F(x,y,z)={\bf f}(x,y,z)\ .\tag{3}$$ This is not always possible. A necessary condition is that ${\rm curl}({\bf f})\equiv{\bf 0}$. If this condition is fulfilled then you can find an $F$ satisfying $(3)$ either by a recursive scheme ("nested integrals") involving the problem described in ${\bf 1}$, or by computing line integrals as follows: $$F({\bf p})=F({\bf 0})+\int_{\bf 0}^{\bf p}{\bf f}({\bf x})\cdot d{\bf x}\ ,$$ where $F({\bf 0})$ is arbitrary.
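As an illustration of the "nested integrals" scheme, here is a sympy sketch with a hypothetical curl-free field whose potential turns out to be $xyz$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
u, v, w = y*z, x*z, x*y      # a hypothetical vector field f = (u, v, w)

# Necessary condition: curl(f) = 0, checked componentwise.
curl = (sp.diff(w, y) - sp.diff(v, z),
        sp.diff(u, z) - sp.diff(w, x),
        sp.diff(v, x) - sp.diff(u, y))
assert all(sp.simplify(c) == 0 for c in curl)

# Nested integrals: integrate u in x, then add whatever parts of v and w
# are not yet accounted for (here both corrections vanish).
F = sp.integrate(u, x)                      # x*y*z, plus an implicit G(y, z)
F += sp.integrate(v - sp.diff(F, y), y)
F += sp.integrate(w - sp.diff(F, z), z)
assert (sp.diff(F, x), sp.diff(F, y), sp.diff(F, z)) == (u, v, w)
```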

${\bf 4.\ }$ A "function $f\bigl(x,y(x)\bigr)$", where $x\mapsto y(x)$ is in principle given, is a function $$\phi(x):=f\bigl(x,y(x)\bigr)$$ of the single variable $x$, and the usual rules of Calculus 101 plus the multivariable chain rule apply. E.g., $$\phi'(x)=f_{.1}\bigl(x,y(x)\bigr)\cdot 1+f_{.2}\bigl(x,y(x)\bigr)\cdot y'(x)\ ,$$ and $$\int f\bigl(x,y(x)\bigr)\>dx=\Phi(x)+C\ ,$$ where $\Phi'(x)=\phi(x)$.
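A quick sympy check of this, with the hypothetical choices $f(x,y) = xy^2$ and $y(x) = \sin x$:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.sin(x)                 # a hypothetical given curve y(x)
phi = x*y**2                  # phi(x) = f(x, y(x)) with f(x, y) = x*y**2

# Differentiating phi as a univariate function reproduces the
# chain-rule expansion phi'(x) = f_{.1} + f_{.2} * y'(x).
lhs = sp.diff(phi, x)
rhs = sp.sin(x)**2 + 2*x*sp.sin(x)*sp.cos(x)
assert sp.simplify(lhs - rhs) == 0
```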

Christian Blatter

Let $f: \mathbb{R}^n \longrightarrow \mathbb{R}^p$. You can obviously form integrals involving $f$ in which you integrate with respect to only some of the variables $(x_1, \cdots, x_n)$; in that case, the $x_i$ not being integrated over are treated as constants.

For instance, let $f(x, y, z) = xyz$. I may want to integrate $f$ with respect to $x$, with $x$ running from $0$ to $a$, while keeping $y$ and $z$ constant at, say, $1$. Then I want to compute

$$\int_{0}^{a} xyz\ dx = yz\int_{0}^a x\ dx = \frac{a^2}{2}yz = \frac{a^2}{2}\ ,$$ where the last equality uses $y = z = 1$.

You can also try to find a function $F$ such that $F' = f$, or a function $F$ such that $\frac{\partial F}{\partial x_i} = f$. Given the function $f$, that would mean finding a "global" primitive of $f$ or a partial primitive of $f$ (with respect to $x_i$), so those operations are well-defined.

For instance, finding $F$ such that $\frac{\partial F}{\partial x} = xyz$ means finding a primitive with respect to $x$; in this case $F = \frac{x^2yz}{2}$, up to an additive function of $y$ and $z$.
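Both computations can be checked with a computer algebra system; here is a minimal sympy sketch (the choice $y = z = 1$ follows the first example):

```python
import sympy as sp

x, y, z, a = sp.symbols('x y z a')

# Definite partial integral from the first example, then y = z = 1.
assert sp.integrate(x*y*z, (x, 0, a)).subs({y: 1, z: 1}) == a**2/2

# Partial primitive with respect to x: sympy holds y and z constant.
F = sp.integrate(x*y*z, x)    # x**2*y*z/2
assert sp.diff(F, x) == x*y*z
```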

If you want to find $F$ such that $F' = f$, you take the coordinate functions of $f$ to be the partial derivatives of $F$, integrate each one of them, and try to "glue" the results together.

Say $f(x, y) = (x^2, y^2)$. Then this means $\frac{\partial F}{\partial x} = x^2$ and $\frac{\partial F}{\partial y} = y^2$. Integrating each with respect to the appropriate variable, you get

$$\frac{\partial F}{\partial x} = x^2 \iff F = \frac{x^3}{3} + \omega_1(y)$$ for some function $\omega_1$ that depends only on $y$.

$$\frac{\partial F}{\partial y} = y^2 \iff F = \frac{y^3}{3} + \omega_2(x)$$ for some function $\omega_2$ that depends only on $x$.

Gluing everything together you get

$$\frac{y^3}{3} + \omega_2(x) = \frac{x^3}{3} + \omega_1(y) \implies \omega_1(y) = \frac{y^3}{3} + C \,\wedge\, \omega_2(x) = \frac{x^3}{3} + C \implies F(x, y) = \frac{x^3}{3} + \frac{y^3}{3} + C\ ,$$ since $\omega_1(y) - \frac{y^3}{3} = \omega_2(x) - \frac{x^3}{3}$ must equal a constant $C$,

and hence

$$\nabla F = f\ .$$ (Please note that I am just saying you can perform such operations; I am not giving you any nomenclature for them.)
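The gluing can also be carried out mechanically: integrate the first coordinate, then add whatever part of the second is still missing. A sympy sketch for the same $f = (x^2, y^2)$:

```python
import sympy as sp

x, y = sp.symbols('x y')
u, v = x**2, y**2             # the field f = (x**2, y**2) from above

F = sp.integrate(u, x)                    # x**3/3, with omega_1(y) still unknown
F += sp.integrate(v - sp.diff(F, y), y)   # supplies omega_1(y) = y**3/3
assert (sp.diff(F, x), sp.diff(F, y)) == (u, v)   # nabla F = f
```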

RGS

You can interpret the inverse of an operation as solving an equation (as in, the inverse of $x \mapsto x^2$ is solving $x^2 = a$), and apply this to differentiation of functions in $\mathbb{R}^n \to \mathbb{R}$ (or to broader generalizations, e.g. $\mathbb{R}^n \to \mathbb{R}^m$, at the cost of having to deal with more details).

Inverse of partial differentiation

The inverse of partial differentiation $f \mapsto \frac {\partial f} {\partial x_i}$ of functions in $\mathbb{R^n} \to \mathbb{R}$ is solving the equation

$$\frac {\partial f} {\partial x_i} = f_{x_i} \tag 1$$

for $f$, given $f_{x_i}: \mathbb{R}^n \to \mathbb{R}$. Despite the notation on the LHS, you can treat (1) essentially as a first-order ordinary differential equation: each $x_{j \ne i}$ is treated as a constant, so, for the purpose of solving the equation, $f_{x_i}$ and $f$ can be treated as parameterized univariate functions.

As (any or all) solutions to the univariate case, if any exist, are casually denoted by the indefinite integral $\int f(x) dx$, one may be tempted to (re-/ab-)use the same notation for solutions of (1) as well and write $\int f(\mathbf x) dx_i$, though any such use out of context is questionable. The antiderivatives of a univariate function differ by a plain constant, which happens to be mostly irrelevant where they are used, so keeping them all together under the indefinite integral is convenient and mostly harmless. In contrast, solutions to (1) in general will differ by

$$g(\mathbf x) = h(x_1, ..., x_{i - 1}, x_{i + 1}, ..., x_n)$$

where $h$ is just any function in $\mathbb{R^{n - 1}} \to \mathbb{R}$.
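A sympy sketch of this non-uniqueness, with the hypothetical right-hand side $f_{x_1}(x, y) = 2xy$ in $\mathbb{R}^2$: any two solutions of (1) differ by a function of $y$ alone.

```python
import sympy as sp

x, y = sp.symbols('x y')
f_x = 2*x*y                   # hypothetical right-hand side of (1), with i = 1

F1 = sp.integrate(f_x, x)     # x**2*y, one particular solution
F2 = F1 + sp.exp(y) - 7       # another solution; here h(y) = exp(y) - 7
assert sp.diff(F1, x) == f_x and sp.diff(F2, x) == f_x
```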

Inverse of "total differentiation"

Defining total differentiation as the mapping of a function to its total derivative, i.e.

$$f: \mathbb{R}^n \to \mathbb{R} \quad \mapsto \quad \frac {df} {dx_i} = \sum_j {\frac {\partial f} {\partial x_j} \frac {dx_j} {dx_i}}$$

then its inverse is solving the equation

$$\sum_j {\frac {\partial f} {\partial x_j} \frac {dx_j} {dx_i}} = f_{x_i} \tag 2$$

for $f$, given $f_{x_i}: \mathbb{R}^n \to \mathbb{R}$ and each of $\frac {dx_j} {dx_i} : \mathbb R \to \mathbb R$ for $j \ne i$. Note that when $\frac {dx_j} {dx_i} = 0$ for all $j \ne i$, which can be seen as a (rather ambiguous) way of stating that $x_j$ and $x_i$ are independent, (2) reduces to (1).
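A sympy sketch of the sum defining the total derivative, with a hypothetical $f(x_1, x_2) = x_1 x_2^2$ where $x_2$ depends on $x_1$, including the reduction to (1) when $\frac{dx_2}{dx_1} = 0$:

```python
import sympy as sp

x1 = sp.symbols('x1')
x2 = sp.Function('x2')(x1)    # x2 is a function of x1
f = x1*x2**2

# Total derivative df/dx1 = f_x1 + f_x2 * dx2/dx1, matching the sum in (2).
total = sp.diff(f, x1)
expanded = x2**2 + 2*x1*x2*sp.diff(x2, x1)
assert sp.simplify(total - expanded) == 0

# With dx2/dx1 = 0 (x2 independent of x1), only the partial-derivative
# term survives, i.e. equation (2) collapses to equation (1).
assert total.subs(sp.diff(x2, x1), 0) == x2**2
```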

This is a first-order partial differential equation, and there is no standard notation for its solutions, possibly owing to its solution space being even less orderly than that of (1), and thus even less likely to be of any use when considered as a whole, under a single denominator.[1]

A family of implicit solutions of (2) of a certain form is commonly[2] referred to as its complete integral, for which there is no standard notation, either.


[1] Another plausible explanation may be that PDEs at large are considered a first-class royal zoo, poorly understood except in a handful of special cases, and usually avoided at all costs.

[2] To the extent to which "commonly" applies in PDE-related contexts.

shinobi