Let's take $\mathbb{R}^3$ as an example. Any point in $\mathbb{R}^3$ can be represented as a linear combination of 3 linearly independent vectors, which need not be orthogonal to each other. What is the special quality of an orthogonal basis (extending to orthonormal) that makes us choose it over a nonorthogonal basis?
3 Answers
If $\{v_1, v_2, v_3\}$ is a basis for $\mathbb{R}^3$, we can write any $v \in \mathbb{R}^3$ as a linear combination of $v_1, v_2,$ and $v_3$ in a unique way; that is, $v = x_1v_1 + x_2v_2+x_3v_3$ where $x_1, x_2, x_3 \in \mathbb{R}$. While we know that $x_1, x_2, x_3$ are unique, we don't have a way of finding them without doing some explicit calculations.
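For an arbitrary basis, finding those unique coefficients amounts to solving a $3\times 3$ linear system. A minimal NumPy sketch (the basis vectors and $v$ below are illustrative choices, not taken from the text):

```python
import numpy as np

# An illustrative non-orthogonal basis for R^3.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])
v3 = np.array([1.0, 1.0, 1.0])

v = np.array([2.0, 3.0, 4.0])

# Put the basis vectors in the columns of B and solve B @ x = v
# for the coordinate vector x.
B = np.column_stack([v1, v2, v3])
x = np.linalg.solve(B, v)   # → array([-1., -1.,  4.])

# Check: the linear combination reconstructs v.
assert np.allclose(x[0] * v1 + x[1] * v2 + x[2] * v3, v)
```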
If $\{w_1, w_2, w_3\}$ is an orthonormal basis for $\mathbb{R}^3$, we can write any $v \in \mathbb{R}^3$ as $$v = (v\cdot w_1)w_1 + (v\cdot w_2)w_2 + (v\cdot w_3)w_3.$$ In this case, we have an explicit formula for the unique coefficients in the linear combination.
Furthermore, the above formula is very useful when dealing with projections onto subspaces.
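A small NumPy sketch of this dot-product formula, including a projection onto the plane spanned by two of the basis vectors (the orthonormal basis below is an illustrative choice):

```python
import numpy as np

# An illustrative orthonormal basis for R^3 (a rotated standard basis).
s = 1.0 / np.sqrt(2.0)
w1 = np.array([ s, s, 0.0])
w2 = np.array([-s, s, 0.0])
w3 = np.array([0.0, 0.0, 1.0])

v = np.array([2.0, 3.0, 4.0])

# No linear system needed: each coefficient is a single dot product.
coeffs = [np.dot(v, w) for w in (w1, w2, w3)]
reconstructed = sum(c * w for c, w in zip(coeffs, (w1, w2, w3)))
assert np.allclose(reconstructed, v)

# The same coefficients give the orthogonal projection of v
# onto the subspace spanned by w1 and w2:
proj = coeffs[0] * w1 + coeffs[1] * w2   # → array([2., 3., 0.])
```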
Added Later: Note, if you have an orthogonal basis, you can divide each vector by its length and the basis becomes orthonormal. If you have a basis and you want to turn it into an orthonormal basis, you need to use the Gram-Schmidt process (which follows from the above formula).
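A sketch of the classical Gram-Schmidt process in NumPy (it assumes the input vectors are linearly independent, and the function name `gram_schmidt` is my own; the classical variant shown here is the simplest, not the most numerically stable):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn linearly independent vectors into an orthonormal list."""
    basis = []
    for v in vectors:
        # Subtract the projections onto the vectors built so far...
        w = v - sum(np.dot(v, b) * b for b in basis)
        # ...then normalize, as described above.
        basis.append(w / np.linalg.norm(w))
    return basis

ortho = gram_schmidt([np.array([1.0, 0.0, 0.0]),
                      np.array([1.0, 1.0, 0.0]),
                      np.array([1.0, 1.0, 1.0])])

# Pairwise dot products are 0 and every norm is 1.
for i, bi in enumerate(ortho):
    for j, bj in enumerate(ortho):
        assert np.isclose(np.dot(bi, bj), 1.0 if i == j else 0.0)
```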
By the way, none of this is restricted to $\mathbb{R}^3$, it works for any $\mathbb{R}^n$, you just need to have $n$ vectors in a basis. More generally still, it applies to any inner product space.
Short version: An orthonormal basis is one for which the associated coordinate representations not only faithfully preserve the linear properties of the vectors, but also the metric properties.
Long version:
A basis gives a (linear) coordinate system: if $(v_1,\dotsc,v_n)$ is a basis for $\mathbb{R}^n$ then we can write any $x\in\mathbb{R}^n$ as a linear combination $$ x = \alpha_1v_1 + \dotsb + \alpha_nv_n $$ in exactly one way. The numbers $\alpha_i$ are the coordinates of $x$ wrt the basis. Thus we associate the vector $x$ with a tuple of its coordinates: $$ x \leftrightarrow \left[\begin{matrix} \alpha_1 \\ \vdots \\ \alpha_n\end{matrix}\right] $$
We can perform some operations on vectors by performing the same operation on their coordinate representations. For example, if we know the coordinates of $x$ as above, then the coordinates of a scalar multiple of $x$ can be computed by scaling the coordinates: $$ \lambda x \leftrightarrow \left[\begin{matrix} \lambda\alpha_1 \\ \vdots \\ \lambda\alpha_n\end{matrix}\right] $$ In other words, $$ \lambda x = (\lambda\alpha_1) v_1 + \dotsb + (\lambda\alpha_n) v_n $$ For another example, if we know the coordinates of two vectors, say $x$ as above and $$ y = \beta_1v_1 + \dotsb + \beta_nv_n $$ then the coordinates of their sum $x+y$ can be computed by adding the respective coordinates: $$ x+y \leftrightarrow \left[\begin{matrix} \alpha_1+\beta_1 \\ \vdots \\ \alpha_n+\beta_n\end{matrix}\right] $$ In other words, $$ x+y = (\alpha_1+\beta_1)v_1 + \dotsb + (\alpha_n+\beta_n)v_n $$ So, as far as the basic vector operations (scalar multiplication and vector addition) are concerned, the coordinate representations are perfectly good substitutes for the vectors themselves. We can even identify the vectors with their coordinate representations, in contexts where only these basic vector operations are relevant.
But for other operations, the coordinate representation isn't a substitute for the original vector. For example, you can't necessarily compute the norm of $x$ by computing the norm of its coordinate tuple: $$ \|x\| = \sqrt{\alpha_1^2+\dotsb+\alpha_n^2}\qquad\text{might not hold.} $$ For another example, you can't necessarily compute the dot product of $x$ and $y$ by computing the dot product of their respective coordinate tuples: $$ x\bullet y = \alpha_1\beta_1+\dotsb+\alpha_n\beta_n\qquad\text{might not hold.} $$ So in contexts where these operations are relevant, coordinate representations wrt an arbitrary basis are not perfectly good substitutes for the actual vectors.
The special thing about an orthonormal basis is that it makes those last two equalities hold. With an orthonormal basis, the coordinate representations have the same lengths as the original vectors, and make the same angles with each other.
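This is easy to check numerically. In the sketch below, the columns of $B$ hold an arbitrarily chosen non-orthonormal basis, and an orthonormal basis is extracted from it with a QR factorization (all specific values are illustrative):

```python
import numpy as np

v = np.array([2.0, 3.0, 4.0])

# Columns of B: a non-orthonormal basis (illustrative choice).
B = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
coords = np.linalg.solve(B, v)       # coordinates of v wrt this basis

# The Euclidean norm of the coordinate tuple does NOT match ||v||:
assert not np.isclose(np.linalg.norm(coords), np.linalg.norm(v))

# Columns of Q: an orthonormal basis obtained from B by QR factorization.
Q, _ = np.linalg.qr(B)
coords_q = Q.T @ v                   # each coordinate is a dot product
assert np.isclose(np.linalg.norm(coords_q), np.linalg.norm(v))
```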

I know the answer is old but I have a question. By "the equalities might not hold", do you mean that if we don't have an orthonormal basis, the norm breaks, i.e. it isn't a norm anymore, it doesn't satisfy the properties of a norm? And is this the same for the scalar product too? And about the last paragraph, you say that the coordinate reps have the same length as the original vectors. Which original vectors are you referring to? Is a vector "original" if it's represented in an orthonormal basis? – LearningMath May 24 '18 at 18:15
The important thing about orthogonal vectors is that a set of nonzero orthogonal vectors whose cardinality (number of elements) equals the dimension of the space is guaranteed to be linearly independent, and hence to span the space. If you have not covered this fact in class, you soon will.
As far as your second question goes, there are no prerequisites for linear algebra, apart from elementary mathematics you learn in high school.
Added later: The main thing is that orthogonality guarantees linear independence, which is rather convenient.
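The argument that orthogonality (of nonzero vectors) implies linear independence is one line, filling in the step alluded to above: suppose $c_1v_1 + \dotsb + c_nv_n = 0$ where the $v_i$ are nonzero and pairwise orthogonal. Taking the dot product of both sides with $v_j$ kills every term except the $j$-th: $$ 0 = (c_1v_1 + \dotsb + c_nv_n)\cdot v_j = c_j\,\|v_j\|^2, $$ and since $v_j \neq 0$ this forces $c_j = 0$ for each $j$.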

What you say about orthogonal vectors would be true of any set of linearly independent vectors. The question is about what the additional assumption of orthogonality adds to linear independence. – Jonas Meyer Oct 08 '13 at 06:45