Other answers have provided great responses elaborating on the intuitive meaning of conditional dependence. Here, I won't add to that; instead I want to address your question about "what it does for us," focusing on computational implications.

There are three events/propositions/random variables in play, $A$, $B$, and $C$. They have a joint probability, $P(A,B,C)$. In general, a joint probability for three events can be factored in many different ways:
\begin{align}
P(A,B,C)
&= P(A)P(B,C|A)\\
&= P(A)P(B|A)P(C|A,B) \;=\; P(A)P(C|A)P(B|A,C)\\
&= P(B)P(A,C|B)\\
&= P(B)P(A|B)P(C|A,B) \;=\; P(B)P(C|B)P(A|B,C)\\
&= P(C)P(A,B|C)\\
&= P(C)P(A|C)P(B|A,C) \;=\; P(C)P(B|C)P(A|B,C)
\end{align}
Something to notice here is that *every expression on the RHS includes a factor with three variables*.

Now suppose our information about the problem tells us that $A$ and $B$ are **conditionally independent given $C$**. A conventional notation for this is:
$$
A \perp\!\!\!\perp B \,|\, C,
$$
which means (among other implications),
$$
P(A|B,C) = P(A|C).
$$
This means that the last of the many expressions I displayed for $P(A,B,C)$ above can be written,
$$
P(A,B,C) = P(C)P(B|C)P(A|C).
$$
From a computational perspective, the key thing to note is that conditional independence here means **we can write the 3-variable function $P(A,B,C)$ in terms of 1-variable and 2-variable functions**. In a nutshell, conditional independence means that joint distributions are simpler than they might have been. When there are *lots* of variables, conditional independence can imply *grand* simplifications of joint probabilities. And if (as is often the case) you have to sum or integrate over some of the variables, conditional independence can let you pull some factors through a sum/integral, simplifying the summand/integrand.
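To make this concrete, here is a minimal numeric sketch (binary variables, and all probability tables are made up for illustration) that assembles the 3-variable joint from its 1- and 2-variable factors and then checks the defining property $P(A|B,C) = P(A|C)$:

```python
import numpy as np

# Illustrative conditional-probability tables for binary A, B, C
# (all numbers are invented).
p_c = np.array([0.3, 0.7])                    # P(C)
p_b_given_c = np.array([[0.2, 0.8],           # P(B|C), rows indexed by C
                        [0.6, 0.4]])
p_a_given_c = np.array([[0.9, 0.1],           # P(A|C), rows indexed by C
                        [0.5, 0.5]])

# Build the full joint from the factored form:
# P(A,B,C) = P(C) P(B|C) P(A|C), stored as joint[a, b, c].
joint = np.einsum('c,cb,ca->abc', p_c, p_b_given_c, p_a_given_c)
assert np.isclose(joint.sum(), 1.0)           # a valid joint distribution

# Conditioning on (B, C): normalize over A.
p_a_given_bc = joint / joint.sum(axis=0, keepdims=True)

# P(A|B,C) does not depend on B, and equals P(A|C).
assert np.allclose(p_a_given_bc[:, 0, :], p_a_given_c.T)
assert np.allclose(p_a_given_bc[:, 1, :], p_a_given_c.T)
```

The point of the sketch: the stored factors never involve more than two variables, even though the joint they determine has three.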

This can be very important for computational implementation of Bayesian inference. When you want to quantify how strongly some observed data, $D$, support rival hypotheses $H_i$ (with $i$ a label distinguishing the hypotheses), you are probably used to seeing Bayes's theorem (BT) in its "posterior $\propto$ prior times likelihood" form:
$$
P(H_i|D) = \frac{P(H_i)P(D|H_i)}{P(D)},
$$
where the terms in the numerator are the prior probability for $H_i$ and the sampling (or conditional predictive) probability for $D$ (aka the likelihood for $H_i$), and the term in the denominator is the prior predictive probability for $D$ (aka the marginal likelihood, since it is the marginal of $P(D,H_i)$). But recall that $P(H_i,D) = P(H_i)P(D|H_i)$ (in fact, one typically derives BT by equating this factorization to the alternative one, $P(D)P(H_i|D)$). So BT can be written as
$$
P(H_i|D) = \frac{P(H_i,D)}{P(D)},
$$
or, in words,
$$
\mbox{Posterior} = \frac{\mbox{Joint for everything}}{\mbox{Marginal for observations}}.
$$
In models with complex dependence structures, this turns out to be the easiest way to think about modeling: the modeler expresses the joint probability for the data and all hypotheses (possibly including latent parameters for things you don't know but need to know in order to predict the data). From the joint, you compute the marginal for the data, which normalizes the joint to give you the posterior (you may not even need this step, e.g., if you use MCMC methods that don't depend on normalization constants).
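As a toy numeric sketch of that recipe (the hypotheses, prior, and likelihood values below are invented), the posterior really is just the joint, renormalized by the marginal for the observed data:

```python
import numpy as np

# Three rival hypotheses H_0, H_1, H_2 and one observed dataset D
# (all numbers invented for illustration).
prior = np.array([0.5, 0.3, 0.2])          # P(H_i)
likelihood = np.array([0.10, 0.40, 0.70])  # P(D|H_i) at the observed D

joint = prior * likelihood                 # joint for everything: P(H_i, D)
marginal = joint.sum()                     # marginal for observations: P(D)
posterior = joint / marginal               # P(H_i|D) = P(H_i, D) / P(D)

assert np.isclose(posterior.sum(), 1.0)    # posterior is normalized
```

Note how the hypothesis with the smallest prior ends up with the largest posterior here, because its likelihood dominates.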

Now you can see the value of conditional independence. Since the starting point of computation is the joint for everything, anything you can do to simplify the expression for the joint (and its sums/integrals) can be a great help to computation. Probabilistic programming languages (e.g., BUGS, JAGS, and to some degree Stan) use graphical representations of conditional independence assumptions to organize and simplify computations.
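As a minimal illustration of the kind of simplification such tools exploit (variable sizes and distributions below are arbitrary): with the factored joint, marginalizing $B$ out never requires materializing the full 3-variable table, because the factor $\sum_b P(B|C) = 1$ can be pulled through the sum:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4  # states per variable (arbitrary)

# Random factors of a conditionally independent joint (illustrative).
p_c = rng.dirichlet(np.ones(K))                  # P(C)
p_b_given_c = rng.dirichlet(np.ones(K), size=K)  # P(B|C), rows indexed by C
p_a_given_c = rng.dirichlet(np.ones(K), size=K)  # P(A|C), rows indexed by C

# Naive route: build the K^3 joint, then sum over B -- O(K^3) work.
joint = np.einsum('c,cb,ca->abc', p_c, p_b_given_c, p_a_given_c)
p_ac_naive = joint.sum(axis=1)                   # P(A,C)

# Pulled-through route: sum_b P(B|C) = 1, so
# P(A,C) = P(C) P(A|C) -- only O(K^2) work, no 3-variable table needed.
p_ac_factored = np.einsum('c,ca->ac', p_c, p_a_given_c)

assert np.allclose(p_ac_naive, p_ac_factored)
```

With many variables, systematically choosing which sums to pull through which factors is exactly the variable-elimination bookkeeping that graphical-model software automates.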