2

I was going through the answers to a linked question and wanted to calculate the time complexity of the code suggested below. I played with quite a few values, and the number of steps hovers between 23 (even for small values) and around 50 for really big values. How should I go about calculating the time complexity of the code below? Any pointers?

float val, low, high, mid, oldmid, midsqr;
// Set initial search bounds (val is assumed to have been read in already).
low = 0; high = mid = val; oldmid = -1;

// Keep bisecting until the estimate is accurate enough.
while (fabs(oldmid - mid) >= 0.00001)
{
    oldmid = mid;
    // Take the midpoint and see if we need to go lower or higher.
    mid = (high + low) / 2;
    midsqr = mid * mid;
    if (midsqr > val)
    {
        high = mid;
        printf("- too high\n");
    }
    else
    {
        low = mid;
        printf("- too low\n");
    }
}
thedreamer
  • Look at the algorithm. It repeats this procedure: given a range, consider the midpoint of that range, and based on the result reduce the range under consideration to either the top half or the bottom half of the original one. That should be very familiar to anyone studying complexity analysis. – John Bollinger Nov 24 '15 at 16:41
  • 3
    It's a binary search over (0, high), its complexity is log_2(high * 100000) = O(log_2(high) - log_2(precision)). – akappa Nov 24 '15 at 16:42

2 Answers

8

In terms of determining time complexity, think of how many "steps" your algorithm will take to terminate.

In this case, we are essentially binary searching to find the square root. So the number of steps to consider is the number of comparisons the algorithm makes. Because it is a binary search, we know it is in the realm of O(log(n)), since binary search halves the searchable space on each iteration.

So now we need to figure out what n is. We are searching over the range (low, high), which starts as (0, val). But because we are searching over floats, and the precision we care about is 0.00001, we can effectively multiply the range by 100000, which lets us think of the problem over integers.

Then we have a time complexity of O(log(100000 * val)), which is O(log(val)) as long as the precision is a constant.

Clark
  • got it - thanks for the wonderful explanation - it was the precision constant that was stumping me - now I understand it ..thanks – thedreamer Nov 24 '15 at 16:50
1

You should recognize the algorithm you present as a binary search. Inasmuch as you have undertaken a complexity analysis in the first place, I presume you know that the complexity of binary search is O(log N).

The main potential complication is the question of what N is for this problem. You might be tempted to think it's the value whose square root you are trying to determine (i.e. the square), but that would be only approximately correct. Rather, it is the number of distinct points in the search space. That's governed by the bounds of the search space and your criterion for numbers within it to be distinct (in this case, that they differ by more than 0.00001).

Because representable floating-point numbers are not uniformly distributed, it's not as simple as (upper_bound - lower_bound) / 0.00001, but you can take that as a rough approximation. If, furthermore, you use 0 as the lower bound and a fixed multiple (maybe 1) of the square as the upper bound then that approximation gives you O(log square) as the overall complexity.

Consider now that because the algorithm scales logarithmically, doubling the size of the search space produces a fixed increment in the maximum number of steps in the search. Since it's specifically a binary search, a difference of 27 steps between one run and another should correspond to a factor of about 2^27 (roughly 134,000,000) in the size of the search space.

John Bollinger