
I have written a function that computes pow(a, b) in O(log b).

double pow(double a, int b) {
    double res = 1;
    while (b > 0) {
        if (b % 2 == 1) {
            res = res * a;
        }
        b = b >> 1;
        a = a * a;
    }
    return res;
}
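A variant that also handles negative exponents (a sketch; the routine above only covers b > 0, and the name `fast_pow` is just for illustration) could look like:

```cpp
// Exponentiation by squaring: O(log |b|) multiplications.
// Sketch extending the routine above to negative exponents via a^b = (1/a)^(-b).
double fast_pow(double a, int b) {
    long long e = b;                  // widen first so negating INT_MIN is safe
    if (e < 0) { a = 1.0 / a; e = -e; }
    double res = 1.0;
    while (e > 0) {
        if (e % 2 == 1) res *= a;     // fold in the current bit of the exponent
        e >>= 1;
        a *= a;                       // square the base each step
    }
    return res;
}
```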

I have stumbled upon the question of whether it is possible to write a function pow(double a, double b) that runs in O(1) time, but I have not found an answer.

    If it was possible, don't you think standard implementations would use such a method? – DeiDei Sep 29 '18 at 10:44
    Assuming IEEE754, we have 64 bit for double, i. e. 2^64 values, two parameters, 2^65 combinations. As we have a finite set of input, we *always* are O(1)... – Aconcagua Sep 29 '18 at 11:04
    To say anything meaningful about computational complexity, you need to define what are the basic operations you're counting, and in terms of what variable. – aschepler Sep 29 '18 at 11:08
  • As Aconcagua noted you have a finite set of inputs. So you could use a (rather large!) lookup table. – dmuir Sep 29 '18 at 11:14
  • When `a` is known at compile time you can just compute it at compile time. – Cheers and hth. - Alf Sep 29 '18 at 11:18
    You must choose between `double b` and `int b`; these are different problems. – Yves Daoust Sep 29 '18 at 11:33
  • @Aconcagua Well, (2^64)^2 is 2^128 for two parameters, but we have a lot of repeated NaNs and INFs.. . ;) – Bob__ Sep 29 '18 at 14:59
  • @Bob__ Huh, of course it is - seems as if I haven't been fully awake when writing the comment! Luckily, the actual value is not of importance for the statement itself... – Aconcagua Sep 30 '18 at 00:07

3 Answers


If you don't allow yourself the use of the standard pow/exp/log functions nor precomputed tables, but allow floating-point multiplies, then your solution is optimal (constant time is impossible).

Yves Daoust

If you allow a few restrictions on the parameters (a positive; a and b doubles) and on the result (a double), you could use

exp(b * log(a))

This will often not be exact, even when an exact result is possible. In my Borland Delphi days, I coded a similar routine but used base 2 rather than the base e that is often used. That improved both the accuracy and the speed of the code, since the CPU works in base 2. But I have no idea how to do that in C++.
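For what it's worth, C++11's `<cmath>` does provide base-2 functions (`std::log2` and `std::exp2`), so a base-2 sketch of this idea, assuming a > 0, might be:

```cpp
#include <cmath>

// Base-2 sketch of the same idea: a^b = 2^(b * log2(a)), valid for a > 0.
// std::log2 and std::exp2 are available in <cmath> since C++11.
double pow_via_log2(double a, double b) {
    return std::exp2(b * std::log2(a));
}
```

As with exp(b * log(a)), the result is subject to rounding error, so it will often differ slightly from an exactly representable power.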

Rory Daulton
    That is not O(1) though... Is it? – Ash Sep 29 '18 at 10:43
  • @Ash: Since `a` and `b` are doubles, the `exp` and `log` functions are (mostly) handled by the CPU and use a limited number of CPU cycles, and thus are order one. The advantage of your code is that it more often returns an exact answer. – Rory Daulton Sep 29 '18 at 10:44
  • @Ash: I should add that your code also allows `a` to be zero or negative. – Rory Daulton Sep 29 '18 at 10:51
  • I am not OP ;) But I'm very skeptical about your explanation... Can you please add some references? Maybe we can get close to `O(1)` but theoretically, I think that the higher bound would be logarithmic... – Ash Sep 29 '18 at 10:54
  • Okay, this post seems rather convincing: https://stackoverflow.com/questions/7317414/what-is-the-complexity-of-the-log-function , it indicates that it won't be `O(1)` for *arbitrary* precision... But that is to be expected, I guess, and OP's question isn't about that... Leaving it here in case someone has the same question as me. – Ash Sep 29 '18 at 10:55
    @Ash: You are correct that the log function is not O(1) for *arbitrary precision*... but that is why I emphasized that your parameters are *doubles*. (I just added that to my main answer, copied from my previous comment.) For the limited number of bits, the log and exp approximation operations become O(1). – Rory Daulton Sep 29 '18 at 11:02
  • @Ash: from an Intel manual you can know the maximum number of cycles, which should be on the order of 100. – Yves Daoust Sep 30 '18 at 11:52

Yes, you can write one with precomputation: compute all possible powers into a table in advance and look them up when necessary. Indeed, this method works for many problems.
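As a toy illustration of the precompute idea (restricted to one base and small integer exponents; a full table over all pairs of doubles is of course infeasible, and the `PowTable` name is just for this sketch):

```cpp
#include <vector>

// Toy sketch: tabulate a^0 .. a^max_b once (one multiply per entry),
// then answer each query with a single O(1) array lookup.
struct PowTable {
    std::vector<double> table;
    PowTable(double a, int max_b) : table(max_b + 1) {
        table[0] = 1.0;
        for (int b = 1; b <= max_b; ++b)
            table[b] = table[b - 1] * a;  // a^b = a^(b-1) * a
    }
    double pow(int b) const { return table[b]; }  // O(1) lookup
};
```

Building the table costs O(max_b) time and space up front; only the queries are O(1).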

Now, assume that you have found a new O(1) algorithm other than table lookup. Then, as a consequence:

  • Addition would also become O(1): since x^b * x^c = x^(b+c), you could compute b + c by multiplying the two powers and taking the logarithm base x (easiest when x = 2).

So almost every arithmetic operation could be reduced to your new algorithm.

As Yves said, your solution is optimal.

kelalaka