You are *already* using calculus when you perform gradient descent in the first place. At some point, you have to stop calculating derivatives and start descending! :-)

In all seriousness, though: what you are describing is *exact line search*. That is, you actually want to find the minimizing value of $\gamma$,
$$\gamma_{\text{best}} = \mathop{\textrm{arg min}}_\gamma F(a+\gamma v), \quad v = -\nabla F(a).$$
It is a very rare, and probably manufactured, case that allows you to efficiently compute $\gamma_{\text{best}}$ analytically. It is far more likely that you will have to perform some sort of gradient or Newton descent on $\gamma$ itself to find $\gamma_{\text{best}}$.
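For concreteness, here is a minimal sketch of what a numerical exact line search looks like (Python with NumPy/SciPy; the objective `f`, its gradient `grad_f`, the use of `minimize_scalar` as the 1-D solver, and the bound $\gamma \in [0,1]$ are all illustrative assumptions, not part of the argument):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    # Illustrative objective (an assumption for this sketch).
    return 0.5 * x @ x + np.sin(x).sum()

def grad_f(x):
    # Gradient of the illustrative objective above.
    return x + np.cos(x)

def exact_line_search_step(a):
    v = -grad_f(a)  # steepest-descent direction
    # Minimize the scalar function gamma -> F(a + gamma v).
    # Every trial gamma costs at least one evaluation of F, and a
    # derivative-based 1-D solver would need grad F there as well.
    res = minimize_scalar(lambda gamma: f(a + gamma * v),
                          bounds=(0.0, 1.0), method="bounded")
    return a + res.x * v

a = np.array([1.0, -2.0])
a = exact_line_search_step(a)
```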

The problem is, if you do the math on this, you will end up *having to compute the gradient $\nabla F$ at every iteration of this line search*. After all:
$$\frac{d}{d\gamma} F(a+\gamma v) = \langle \nabla F(a+\gamma v), v \rangle.$$
Look carefully: the gradient $\nabla F$ has to be evaluated at each value of $\gamma$ you try.
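In code, that inner 1-D derivative looks like this (a sketch reusing the assumed `grad_f` from above; the point is that each call requires a full gradient evaluation):

```python
import numpy as np

def grad_f(x):
    # Assumed gradient of the objective (illustrative placeholder).
    return x + np.cos(x)

def dphi_dgamma(a, v, gamma):
    # phi(gamma) = F(a + gamma * v), so by the chain rule
    # phi'(gamma) = <grad F(a + gamma * v), v>.
    # A *full* gradient evaluation is needed at every gamma tried.
    return grad_f(a + gamma * v) @ v
```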

That's an inefficient use of what is likely to be the most expensive computation in your algorithm! If you're computing the gradient *anyway*, the best thing to do is use it to move in the direction it tells you to move---not stay stuck along a line.

What you want in practice is a *cheap* way to compute an *acceptable* $\gamma$. The common way to do this is a backtracking line search. With this strategy, you start with an initial step size $\gamma$---usually a small increase on the last step size you settled on. Then you check to see if that point $a+\gamma v$ is of good quality. A common test is the Armijo-Goldstein condition
$$F(a+\gamma v) \leq F(a) - c \gamma \|\nabla F(a)\|_2^2$$
for some $0 < c < 1$. If the step passes this test, *go ahead and take it*---don't waste any time trying to tweak your step size further. If the step is too large---for instance, if $F(a+\gamma v)>F(a)$---then this test will fail, and you should cut your step size down (say, in half) and try again.
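A minimal backtracking sketch, using the same illustrative `f` and `grad_f` as before (the constants $\gamma_0 = 1$, $c = 10^{-4}$, and the halving factor are conventional choices, not prescribed by the argument):

```python
import numpy as np

def f(x):
    # Illustrative objective (an assumption for this sketch).
    return 0.5 * x @ x + np.sin(x).sum()

def grad_f(x):
    return x + np.cos(x)

def backtracking_step(a, gamma0=1.0, c=1e-4, shrink=0.5):
    g = grad_f(a)      # the one gradient evaluation per outer step
    v = -g
    gnorm2 = g @ g     # ||grad F(a)||_2^2
    fa = f(a)
    gamma = gamma0
    # Armijo-Goldstein test: accept the first step with sufficient
    # decrease; each retry costs only a function evaluation.
    while f(a + gamma * v) > fa - c * gamma * gnorm2:
        gamma *= shrink
    return a + gamma * v, gamma

a = np.array([1.0, -2.0])
a, gamma = backtracking_step(a)
```

Note that the loop re-evaluates only $F$, never $\nabla F$; that is the whole point of the strategy.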

This is generally a lot cheaper than doing an exact line search.

I have encountered a couple of specific cases where an exact line search could be computed more cheaply than the backtracking approach described above. In each, the trick was to construct a simplified formula for $F(a+\gamma v)$, allowing the derivative $\tfrac{d}{d\gamma}F(a+\gamma v)$ to be computed far more cheaply than the full gradient $\nabla F$. One specific instance is computing the analytic center of a linear matrix inequality. But even in that case, it was generally better overall to just do backtracking.