What is the Jacobian matrix?
What are its applications?
What is its physical and geometrical meaning?
Can someone please explain with examples?
The Jacobian $df_p$ of a differentiable function $f : \mathbb{R}^n \to \mathbb{R}^m$ at a point $p$ is its best linear approximation at $p$, in the sense that $f(p + h) = f(p) + df_p(h) + o(|h|)$ for small $h$. This is the "correct" generalization of the derivative of a function $f : \mathbb{R} \to \mathbb{R}$, and everything we can do with derivatives we can also do with Jacobians.
In particular, when $n = m$, the determinant of the Jacobian at a point $p$ is the factor by which $f$ locally dilates volumes around $p$ (since $f$ acts locally like the linear transformation $df_p$, which dilates volumes by $\det df_p$). This is the reason that the Jacobian appears in the change of variables formula for multivariate integrals, which is perhaps the basic reason to care about the Jacobian. For example this is how one changes an integral in rectangular coordinates to cylindrical or spherical coordinates.
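To make the volume-dilation point concrete, here is a small stdlib-Python check (the function name is mine): the Jacobian determinant of the polar map $(r,\theta)\mapsto(r\cos\theta,\ r\sin\theta)$, estimated by central differences, comes out to $r$, the familiar factor in $\iint f\, r\,dr\,d\theta$.

```python
import math

# Jacobian determinant of the polar map (r, t) -> (x, y) = (r cos t, r sin t).
# Analytically det J = r: the map dilates areas near (r, t) by a factor of r.
def polar_jacobian_det(r, t, h=1e-6):
    x = lambda r, t: r * math.cos(t)
    y = lambda r, t: r * math.sin(t)
    # partial derivatives by central differences
    xr = (x(r + h, t) - x(r - h, t)) / (2 * h)
    xt = (x(r, t + h) - x(r, t - h)) / (2 * h)
    yr = (y(r + h, t) - y(r - h, t)) / (2 * h)
    yt = (y(r, t + h) - y(r, t - h)) / (2 * h)
    return xr * yt - xt * yr

print(polar_jacobian_det(2.0, 0.7))  # close to r = 2.0
```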
The Jacobian specializes to the most important constructions in multivariable calculus. It immediately specializes to the gradient, for example. When $n = m$ its trace is the divergence. And a more complicated construction gives the curl. The rank of the Jacobian is also an important local invariant of $f$; it roughly measures how "degenerate" or "singular" $f$ is at $p$. This is the reason the Jacobian appears in the statement of the implicit function theorem, which is a fundamental result with applications everywhere.
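As an illustration of the trace-equals-divergence fact, a finite-difference sketch (the vector field and the helper function are made up for the example):

```python
def jacobian(f, p, h=1e-6):
    """Central-difference Jacobian of f: R^n -> R^m at the point p."""
    n, m = len(p), len(f(p))
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        fwd = list(p); fwd[j] += h
        bwd = list(p); bwd[j] -= h
        for i in range(m):
            J[i][j] = (f(fwd)[i] - f(bwd)[i]) / (2 * h)
    return J

# vector field F(x, y) = (x^2 y, x + y^3); its divergence is 2xy + 3y^2
F = lambda p: [p[0] ** 2 * p[1], p[0] + p[1] ** 3]
x, y = 1.0, 2.0
J = jacobian(F, [x, y])
trace = J[0][0] + J[1][1]
print(trace)  # approximately 2*x*y + 3*y**2 = 16
```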
Here is an example. Suppose you have two implicit differentiable functions
$$F(x,y,z,u,v)=0,\qquad G(x,y,z,u,v)=0$$
and the functions, also differentiable, $u=f(x,y,z)$ and $v=g(x,y,z)$ such that
$$F(x,y,z,f(x,y,z),g(x,y,z))=0,\qquad G(x,y,z,f(x,y,z),g(x,y,z))=0.$$
If you differentiate $F$ and $G$ with respect to $x$, you get
\begin{eqnarray*} \frac{\partial F}{\partial x}+\frac{\partial F}{\partial u}\frac{\partial u}{ \partial x}+\frac{\partial F}{\partial v}\frac{\partial v}{\partial x} &=&0\qquad \\ \frac{\partial G}{\partial x}+\frac{\partial G}{\partial u}\frac{\partial u}{ \partial x}+\frac{\partial G}{\partial v}\frac{\partial v}{\partial x} &=&0. \end{eqnarray*}
Solving this system you obtain
$$\frac{\partial u}{\partial x}=-\frac{\det \begin{pmatrix} \frac{\partial F}{\partial x} & \frac{\partial F}{\partial v} \\ \frac{\partial G}{\partial x} & \frac{\partial G}{\partial v} \end{pmatrix}}{\det \begin{pmatrix} \frac{\partial F}{\partial u} & \frac{\partial F}{\partial v} \\ \frac{\partial G}{\partial u} & \frac{\partial G}{\partial v} \end{pmatrix}}$$
and similarly for $\dfrac{\partial u}{\partial y}$, $\dfrac{\partial u}{\partial z}$, $\dfrac{\partial v}{\partial x}$, $\dfrac{\partial v}{\partial y}$, $\dfrac{\partial v}{\partial z}$. The compact notation for the denominator is
$$\frac{\partial (F,G)}{\partial (u,v)}=\det \begin{pmatrix} \frac{\partial F}{\partial u} & \frac{\partial F}{\partial v} \\ \frac{\partial G}{\partial u} & \frac{\partial G}{\partial v} \end{pmatrix}$$
and similarly for the numerator. Then
$$\dfrac{\partial u}{\partial x}=-\dfrac{\dfrac{\partial (F,G)}{\partial (x,v)}}{\dfrac{\partial (F,G)}{\partial (u,v)}}$$
where $\dfrac{\partial (F,G)}{\partial (x,v)}$ and $\dfrac{\partial (F,G)}{\partial (u,v)}$ are Jacobians (after the 19th-century German mathematician Carl Jacobi).
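A quick sanity check of the determinant-ratio formula, using a hypothetical linear pair $F$, $G$ chosen so the system can also be solved by hand:

```python
# Take F = u + v - x - y - z and G = u - v - x + y.  Solving directly gives
# u = x + z/2, so du/dx = 1; the determinant-ratio formula should agree.
F_x, F_u, F_v = -1.0, 1.0, 1.0
G_x, G_u, G_v = -1.0, 1.0, -1.0

def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

du_dx = -det2(F_x, F_v, G_x, G_v) / det2(F_u, F_v, G_u, G_v)
print(du_dx)  # 1.0, matching the direct solution
```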
The absolute value of the Jacobian of a coordinate system transformation is also used to convert a multiple integral from one system into another. In $\mathbb{R}^2$ it measures how much the unit area is distorted by the given transformation, and in $\mathbb{R}^3$ this factor measures the unit volume distortion, etc.
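For example, the area of the unit disk computed in polar coordinates, where the Jacobian factor $r$ converts $dx\,dy$ into $r\,dr\,d\theta$ (a minimal midpoint-rule sketch):

```python
import math

# Change of variables in action: the area of the unit disk in polar
# coordinates.  The angular integral contributes 2*pi; the radial integral
# of the Jacobian factor r is done by the midpoint rule.
def disk_area(n=400):
    dr = 1.0 / n
    radial = sum((i + 0.5) * dr * dr for i in range(n))  # midpoint rule for integral of r dr
    return 2 * math.pi * radial

print(disk_area())  # approximately pi
```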
Another example: the following coordinate transformation (due to Beukers, Calabi and Kolk)
$$x=\frac{\sin u}{\cos v}$$
$$y=\frac{\sin v}{\cos u}$$
transforms the square domain $0<x<1$, $0<y<1$ into the triangular domain $u,v>0$, $u+v<\pi/2$; more precisely, the map $(u,v)\mapsto(x,y)$ sends the triangle onto the square (see Proofs from the BOOK by M. Aigner and G. Ziegler).
For this transformation you get (see Proof 2 in this collection of proofs by Robin Chapman)
$$\dfrac{\partial (x,y)}{\partial (u,v)}=1-x^2y^2.$$
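This identity is easy to spot-check numerically by central differences; the sample point and step size below are arbitrary:

```python
import math

# Finite-difference check that the Jacobian of the Beukers-Calabi-Kolk
# substitution x = sin(u)/cos(v), y = sin(v)/cos(u) equals 1 - x^2 y^2.
u, v, h = 0.3, 0.4, 1e-6
x = lambda u, v: math.sin(u) / math.cos(v)
y = lambda u, v: math.sin(v) / math.cos(u)

xu = (x(u + h, v) - x(u - h, v)) / (2 * h)
xv = (x(u, v + h) - x(u, v - h)) / (2 * h)
yu = (y(u + h, v) - y(u - h, v)) / (2 * h)
yv = (y(u, v + h) - y(u, v - h)) / (2 * h)

numeric = xu * yv - xv * yu
claimed = 1 - x(u, v) ** 2 * y(u, v) ** 2
print(abs(numeric - claimed))  # tiny
```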
Jacobian sign and orientation of closed curves. Assume you have two small closed curves, one around $(x_0,y_0)$ and another around $(u_0,v_0)$, the latter being the image of the first under the mapping $u=f(x,y)$, $v=g(x,y)$. If the sign of $\dfrac{\partial (x,y)}{\partial (u,v)}$ is positive, then both curves are travelled in the same sense; if the sign is negative, they are travelled in opposite senses. (See Oriented Regions and their Orientation.)
In single variable calculus, if $f:\mathbb R \to \mathbb R$, then \begin{equation} f'(x) = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}. \end{equation} A very useful way to think about $f'(x)$ is this: \begin{equation} \tag{$\spadesuit$} f(x + \Delta x) \approx f(x) + f'(x) \Delta x. \end{equation}
One of the advantages of equation $(\spadesuit)$ is that it still makes perfect sense in the case where $f:\mathbb R^n \to \mathbb R^m$:
\begin{equation} f(\underbrace{x}_{n \times 1} + \underbrace{\Delta x}_{n\times 1}) \approx \underbrace{f(x)}_{m \times 1} + \underbrace{f'(x)}_{?} \underbrace{\Delta x}_{n \times 1}. \end{equation} You see, if $f'(x)$ is now an $m \times n$ matrix, then this equation makes perfect sense. So, with this idea, we can extend the idea of the derivative to the case where $f:\mathbb R^n \to \mathbb R^m$. This is the first step towards developing calculus in a multivariable setting. The matrix $f'(x)$ is called the "Jacobian" of $f$ at $x$, but maybe it's more clear to simply call $f'(x)$ the derivative of $f$ at $x$.
The matrix $f'(x)$ allows us to approximate $f$ locally by a linear function (or, technically, an "affine" function). Linear functions are simple enough that we can understand them well (using linear algebra), and often understanding the local linear approximation to $f$ at $x$ allows us to draw conclusions about $f$ itself.
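Here is a small sketch of both points: a finite-difference Jacobian, and a check that $f(x+\Delta x)\approx f(x)+f'(x)\Delta x$ holds up to second-order terms. The function $f$, the helper, and the sample point are made up for the example.

```python
import math

def f(p):
    x, y = p
    return [x * y, math.sin(x) + y ** 2]

def jacobian(f, p, h=1e-6):
    """Central-difference approximation to the m x n matrix f'(p)."""
    f0 = f(p)
    J = []
    for i in range(len(f0)):
        row = []
        for j in range(len(p)):
            fwd = list(p); fwd[j] += h
            bwd = list(p); bwd[j] -= h
            row.append((f(fwd)[i] - f(bwd)[i]) / (2 * h))
        J.append(row)
    return J

p, dx = [1.0, 2.0], [1e-3, -2e-3]
J = jacobian(f, p)
approx = [f(p)[i] + sum(J[i][j] * dx[j] for j in range(2)) for i in range(2)]
exact = f([p[0] + dx[0], p[1] + dx[1]])
err = max(abs(a - e) for a, e in zip(approx, exact))
print(err)  # small: second order in |dx|
```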
(I know this is slightly late, but I think the OP may appreciate this)
As an application, in the field of control engineering the use of Jacobian matrices allows the local (approximate) linearisation of non-linear systems around a given equilibrium point and so allows the use of linear systems techniques, such as the calculation of eigenvalues (and thus allows an indication of the type of the equilibrium point).
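A minimal sketch of this linearisation idea, using a pendulum as the (assumed) nonlinear system; the eigenvalue helper is hand-rolled to stay dependency-free:

```python
import math

# Pendulum: theta'' = -(g/l) sin(theta), state s = (theta, omega).
# The Jacobian of the dynamics at an equilibrium gives the local linear
# system, whose eigenvalues classify the equilibrium point.
g, l = 9.81, 1.0

def eig_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]], returned as complex numbers."""
    tr, det = a + d, a * d - b * c
    disc = complex(tr * tr - 4 * det) ** 0.5
    return (tr + disc) / 2, (tr - disc) / 2

# Jacobian of (omega, -(g/l) sin(theta)) w.r.t. (theta, omega):
#   [[0, 1], [-(g/l) cos(theta*), 0]] at the equilibrium theta*.
down = eig_2x2(0.0, 1.0, -(g / l) * math.cos(0.0), 0.0)    # hanging down
up = eig_2x2(0.0, 1.0, -(g / l) * math.cos(math.pi), 0.0)  # inverted

print(down)  # purely imaginary pair: a centre (oscillation)
print(up)    # one positive real eigenvalue: a saddle (unstable)
```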
Jacobians are also used in the estimation of the internal states of non-linear systems in the construction of the extended Kalman filter, and also if the extended Kalman filter is to be used to provide joint state and parameter estimates for a linear system (since this is a non-linear system analysis due to the products of what are then effectively inputs and outputs of the system).
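A compact, illustrative sketch of one extended-Kalman-filter cycle for a made-up scalar system (the dynamics, measurement model, and noise values are all assumptions for the example, not a real design):

```python
import math

# Model: x_{k+1} = sin(x_k) + w,  measurement z = x^2 + v.
# The Jacobians F = cos(x) and H = 2x stand in for the state and
# measurement matrices of the linear Kalman filter.
def ekf_step(x_est, P, z, Q=0.01, R=0.1):
    # predict, linearising the dynamics at the current estimate
    F = math.cos(x_est)
    x_pred = math.sin(x_est)
    P_pred = F * P * F + Q
    # update, linearising the measurement at the prediction
    H = 2 * x_pred
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - x_pred ** 2)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

x_est, P = ekf_step(x_est=1.0, P=0.5, z=0.7)
print(x_est, P)
```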
I found the most beautiful usage of Jacobian matrices in studying differential geometry, when one abandons the idea that analysis can be done "only on balls of $\mathbb{R}^n$". The definition of the tangent space at a point $p$ of a manifold $M$ can be given via the kernel of the Jacobian of a suitable submersion, or via the image of the differential of a suitable immersion from an open set $U\subseteq\mathbb{R}^{\dim M}$. Quite a simple example, but when I was an undergrad four years ago it gave me the "right" idea of what a linear transformation does in a differential (analytical) framework.
This is not a rigorous explanation, but here is the best intuitive explanation/motivation for the Jacobian matrix that I know. Start with an interval $[x_1,x_2] \subset \mathbb{R}$. What is a common measurement of space for this interval? It is length, and the length of $[x_1,x_2]$ is $x_2-x_1$. Now suppose I define an invertible linear transformation $T:\mathbb{R} \rightarrow \mathbb{R}$ by $$T(x)=\begin{bmatrix}a\end{bmatrix}x,$$ where $\begin{bmatrix}a\end{bmatrix}$ is a $1\times 1$ matrix with a nonzero entry $a$. The image of $[x_1,x_2]$ under $T$ is an interval of length $|a|(x_2-x_1)$ (it is $[ax_1,ax_2]$ when $a>0$). Now we ask ourselves: how does the length of the new interval relate to the length of the old one? The new interval is $|a|$ times as long. But notice that $$|a|=\left |\det\begin{bmatrix}a\end{bmatrix}\right |.$$ Now suppose you are doing a $u$-substitution to evaluate an integral of the form $$\int_{S} f(x)\, dx.$$ We set $x=x(u)$, and the differential $dx$ becomes $\frac{dx}{du}du$. If you view $dx$ and $du$ as vectors in $\mathbb{R}$, you get $$dx=\begin{bmatrix}\frac{dx}{du}\end{bmatrix}du.$$ The determinant of $\begin{bmatrix}\frac{dx}{du}\end{bmatrix}$ plays the same role as $a$: it is a scaling factor between different "infinitesimal" interval lengths.
The higher-dimensional analogue of the interval in $\mathbb{R}$ is a parallelepiped in $\mathbb{R}^n$, and the measurement of space in $\mathbb{R}^n$ is $n$-dimensional volume. If you define an invertible linear transformation $T:\mathbb{R}^n \rightarrow \mathbb{R}^n$ and write $T(x)=Ax$, where $A$ is an $n \times n$ matrix, then the absolute value of $\det A$ scales the volume of a parallelepiped. Similarly, if you are dealing with the multidimensional integral $$\int_{S}f(x_1,\dots,x_n)\,dx_1\cdots dx_n$$ and wish to use the change of variables $$x_i=x_i(u_1,\dots,u_n),\qquad 1 \leq i \leq n,$$ you can regard $dx=(dx_1,\dots,dx_n)$ and $du=(du_1,\dots,du_n)$ as vectors in $\mathbb{R}^n$ and relate them by $$dx=\begin{bmatrix}\frac{\partial x_i}{\partial u_j}\end{bmatrix}_{ij}du.$$ The Jacobian matrix here is $$\begin{bmatrix}\frac{\partial x_i}{\partial u_j}\end{bmatrix}_{ij},$$ where the notation means the entry in the $i$th row and $j$th column is $\frac{\partial x_i}{\partial u_j}$. The absolute value of the determinant of the Jacobian matrix is a scaling factor between different "infinitesimal" parallelepiped volumes. Again, this explanation is merely intuitive; it is not rigorous as one would present it in a real analysis course.
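The scaling claim for a linear map can be checked directly: map the corners of the unit square and measure the image parallelogram with the shoelace formula (the matrix $A$ is an arbitrary example):

```python
# |det A| as a volume-scaling factor: the image of the unit square under a
# linear map is a parallelogram whose area equals |det A|.
A = [[2.0, 1.0],
     [0.5, 3.0]]

def apply_map(A, p):
    return (A[0][0] * p[0] + A[0][1] * p[1], A[1][0] * p[0] + A[1][1] * p[1])

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
img = [apply_map(A, p) for p in square]

# shoelace formula for the area of the image polygon
area = abs(sum(img[i][0] * img[(i + 1) % 4][1] - img[(i + 1) % 4][0] * img[i][1]
               for i in range(4))) / 2
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(area, abs(det_A))  # 5.5 5.5
```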
A very short contribution on the applicability question: the Jacobian is the matrix of partial derivatives, and one of its applications is finding local solutions of a system of nonlinear equations. The $x$'s that solve such a system are not easy to find directly, because a matrix of nonlinear coefficients is difficult to invert. However, you can take the partial derivatives of the equations, form the local linear approximation near some value, and then solve that system. Because the system becomes locally linear, you can solve it using linear algebra.
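This is essentially Newton's method for systems. A minimal sketch for a hypothetical $2\times 2$ system (the equations and starting point are chosen for illustration):

```python
# Newton's method for the system
#   x^2 + y^2 - 4 = 0,   x - y = 0,
# which has a root at x = y = sqrt(2).  Each step solves the local
# linearisation J * (dx, dy) = -(f1, f2) using the Jacobian J.
def newton(x, y, iterations=20):
    for _ in range(iterations):
        f1, f2 = x * x + y * y - 4.0, x - y
        # Jacobian of (f1, f2) w.r.t. (x, y)
        a, b = 2 * x, 2 * y
        c, d = 1.0, -1.0
        det = a * d - b * c
        # solve the 2x2 linear system by Cramer's rule
        dx = (-f1 * d + f2 * b) / det
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
    return x, y

x, y = newton(1.5, 1.0)
print(x, y)  # both converge to sqrt(2) = 1.41421356...
```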
I don't know much about this, but I know the Jacobian is used in robotics programming for transforming between two frames of reference. The equations become very simple: moving from one frame to another, and then to another, is just a product of Jacobian matrices.
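For a flavour of the robotics usage, here is a hypothetical two-link planar arm whose Jacobian maps joint velocities to end-effector velocity; the link lengths and angles are made-up values, and the analytic Jacobian is cross-checked by finite differences:

```python
import math

# Hypothetical two-link planar arm with link lengths l1, l2 and joint
# angles t1, t2; the Jacobian d(x, y)/d(t1, t2) maps joint velocities
# to end-effector velocity.
l1, l2 = 1.0, 0.8

def forward(t1, t2):
    """End-effector position of the arm."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

def arm_jacobian(t1, t2):
    """Analytic Jacobian d(x, y)/d(t1, t2)."""
    return [[-l1 * math.sin(t1) - l2 * math.sin(t1 + t2), -l2 * math.sin(t1 + t2)],
            [ l1 * math.cos(t1) + l2 * math.cos(t1 + t2),  l2 * math.cos(t1 + t2)]]

# cross-check one entry against a central difference
t1, t2, h = 0.5, 0.9, 1e-6
J = arm_jacobian(t1, t2)
num_dx_dt1 = (forward(t1 + h, t2)[0] - forward(t1 - h, t2)[0]) / (2 * h)
print(abs(J[0][0] - num_dx_dt1))  # tiny
```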
The simplest answer I can give is this: the Jacobian matrix is used whenever a change of variables is required in a space of dimension greater than one. One of the explanations above illustrates the same idea in the single-variable setting.