The implicit function theorem gives sufficient conditions to solve a given equation for one or more of the variables as functions of the remaining variables. The basic form of the theorem is that of an existence theorem. However, the contraction mapping proof of the theorem provides an error estimate for a sequence of approximating maps. Sometimes it is also termed the implicit mapping theorem. See http://en.wikipedia.org/wiki/Implicit_function_theorem

The implicit function theorem provides sufficient conditions to solve an equation $G(x,y)=k$ near a point $(a,b)$ for which $G(a,b)=k$ for the $y$-variables as functions of the $x$-variables. In particular, the theorem implies the existence of a function $f$ such that $G(x,f(x))=k$ for $x$ near $a$. The basic idea is simply that if we have $n$ equations in $m+n$ unknowns then we may solve for $n$ of the unknowns as functions of the remaining $m$ variables.

Perhaps the most common application is this: if $F(x_1, x_2, \dots , x_n )=k$ and $p\in \mathbb{R}^n$ is a point with $F(p)=k$, then we can solve for $x_j$ as a function of $x_1, \dots , x_{j-1},x_{j+1}, \dots , x_n$ near $p$ provided $\frac{\partial F}{\partial x_j}(p)$ is nonzero and $F$ is continuously differentiable ($F \in C^1$) near $p$.
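For example, take $F(x,y)=x^2+y^2$ with $k=1$, so the solution set is the unit circle. At a point $p=(a,b)$ on the circle we have $$ \frac{\partial F}{\partial y}(a,b) = 2b, $$ which is nonzero provided $b \neq 0$. Near a point with $b>0$ we may solve $y=\sqrt{1-x^2}$, and near a point with $b<0$ we may solve $y=-\sqrt{1-x^2}$. At the points $(\pm 1, 0)$ the hypothesis fails, and indeed no function of $x$ describes the solution set near those points.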

Let us expand on the general case in more explicit notation. If $G: \mathbb{R}^m \times \mathbb{R}^n \rightarrow \mathbb{R}^n$ is continuously differentiable near $(a,b)$, where $a \in \mathbb{R}^m$, $b \in \mathbb{R}^n$ and $G(a,b)=k$, and if the $n \times n$ submatrix of the Jacobian of $G$ corresponding to the $y$-derivatives of $G$ is invertible at $(a,b)$, then there exists a function $f: dom(f) \subseteq \mathbb{R}^m \rightarrow \mathbb{R}^n$ which is continuously differentiable near $x=a$ such that the solution set of $G(x,y)=k$ near $(a,b)$ is the graph $y=f(x)$.
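As a concrete instance with $m=1$ and $n=2$, consider $G(x,y_1,y_2)=(x+y_1+y_2, \; y_1-y_2)$ with $k=(1,0)$. The submatrix of $y$-derivatives is $$ \frac{\partial G}{\partial y} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, $$ which has determinant $-2$ and is therefore invertible at every point, so the theorem applies. Here the system is linear, so solving directly gives $f(x) = \left( \tfrac{1-x}{2}, \tfrac{1-x}{2} \right)$, and indeed $G(x,f(x))=(1,0)$ for all $x$.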

An improved version of the implicit function theorem provides a constructive method by which the implicit solution is found as the limit of a sequence of functions formed by linearizing the equation $G(x,y)=k$ at the initial point $(a,b)$. As is typical of such proofs, a fixed-point argument is made in concert with the contraction mapping technique. See C. H. Edwards' Advanced Calculus of Several Variables for a reasonably complete account of the constructive version of the theorem. In particular, Theorem 3.4 of Edwards' text states:

Let $G: dom(G) \subseteq \mathbb{R}^m \times \mathbb{R}^n \rightarrow \mathbb{R}^n$ be continuously differentiable in an open ball about the point $(a,b)$ where $G(a,b)=k$ (a constant vector in $\mathbb{R}^n$). If the matrix $\tfrac{ \partial G}{\partial y}(a,b)$ is invertible then there exists an open ball $U$ containing $a$ in $\mathbb{R}^m$ and an open ball $W$ containing $(a,b)$ in $\mathbb{R}^m \times \mathbb{R}^n$ and a continuously differentiable mapping $h: U \rightarrow \mathbb{R}^n$ such that $G(x,y)=k$ iff $y=h(x)$ for all $(x,y) \in W$. Moreover, the mapping $h$ is the limit of the sequence of successive approximations defined inductively below $$ h_0(x)=b, \qquad h_{j+1}(x) = h_j(x)-[\tfrac{ \partial G}{\partial y}(a,b)]^{-1}G(x,h_j(x)) \qquad \text{for all $x \in U$.} $$
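A minimal numerical sketch of this iteration, assuming the scalar example $G(x,y)=x^2+y^2$ with $k=1$ near $(a,b)=(0,1)$ (the function name `implicit_solve` and the iteration count are illustrative choices, not from the text):

```python
def implicit_solve(x, b=1.0, iterations=10):
    """Successive approximations for G(x, y) = x^2 + y^2 = 1 near (a, b) = (0, 1):
    h_{j+1}(x) = h_j(x) - [dG/dy(a, b)]^{-1} * (G(x, h_j(x)) - k)."""
    dG_dy = 2.0 * b              # dG/dy at (a, b) = (0, 1); frozen for all iterations
    h = b                        # h_0(x) = b
    for _ in range(iterations):
        residual = x**2 + h**2 - 1.0   # G(x, h_j(x)) - k
        h = h - residual / dG_dy       # linearized update about (a, b)
    return h
```

For $x$ near $0$ the iterates converge to $h(x)=\sqrt{1-x^2}$, the implicit solution found in the circle example; note that, unlike Newton's method, the derivative matrix is evaluated once at $(a,b)$ and never updated, which is exactly what makes the iteration a contraction mapping argument.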

The implicit function theorem may be used to justify the inverse function theorem, and both can be understood as special cases of the more general constant rank theorem. One may consult the Wikipedia article linked above for further examples and discussion.