125

I feel like I must just be unable to find it. Is there any reason that the C++ pow function does not implement the "power" function for anything except floats and doubles?

I know the implementation is trivial, I just feel like I'm doing work that should be in a standard library. A robust power function (i.e. one that handles overflow in some consistent, explicit way) is not fun to write.

phuclv
  • 27,258
  • 11
  • 104
  • 360
Dan O
  • 4,000
  • 5
  • 26
  • 43
  • 4
    This is a good question, and I don't think the answers make a lot of sense. Negative exponents don't work? Take unsigned ints as exponents. Most inputs cause it to overflow? The same is true for exp and double pow, I don't see anyone complaining. So why isn't this function standard? – static_rtti Jul 06 '11 at 18:48
  • 2
    @static_rtti: "The same is true for exp and double pow" is totally false. I will elaborate in my answer. – Stephen Canon Jul 06 '11 at 19:13
  • 11
    The standard C++ library has `double pow(int base, int exponent)` since C++11 (§26.8[c.math]/11 bullet point 2) – Cubbi Jun 13 '12 at 02:26
  • You need to make up your mind between 'the implementation is trivial' and 'not fun to write'. – user207421 Sep 05 '19 at 01:56

11 Answers

71

As of C++11, special cases were added to the suite of power functions (and others). C++11 [c.math] /11 states, after listing all the float/double/long double overloads (my emphasis, and paraphrased):

Moreover, there shall be additional overloads sufficient to ensure that, if any argument corresponding to a double parameter has type double or an integer type, then all arguments corresponding to double parameters are effectively cast to double.

So, basically, integer parameters will be upgraded to doubles to perform the operation.
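
A minimal illustration of that rule (my own example, assuming a C++11 compiler; both arguments are ints, so the call goes through the double machinery):

    #include <cmath>
    #include <iostream>

    int main()
    {
        // Both arguments are ints; per [c.math] they are effectively cast to
        // double, so this uses the double overload and returns a double.
        double r = std::pow(2, 10);
        std::cout << r << '\n';   // prints 1024
    }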


Prior to C++11 (which was when your question was asked), no integer overloads existed.

Since I was neither closely associated with the creators of C nor C++ in the days of their creation (though I am rather old), nor part of the ANSI/ISO committees that created the standards, this is necessarily opinion on my part. I'd like to think it's informed opinion but, as my wife will tell you (frequently and without much encouragement needed), I've been wrong before :-)

Supposition, for what it's worth, follows.

I suspect that the reason the original pre-ANSI C didn't have this feature is because it was totally unnecessary. First, there was already a perfectly good way of doing integer powers (with doubles and then simply converting back to an integer, checking for integer overflow and underflow before converting).
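
A rough sketch of that idea (my own illustration, not anything from a standard library; the helper name ipow_via_double and the error convention are made up):

    #include <cmath>
    #include <climits>

    // Hypothetical helper illustrating the approach above: do the power in
    // floating point, range-check, then convert back. Note that pow itself
    // may be off by an ulp on some implementations, so a careful version
    // would need more thought than this sketch.
    bool ipow_via_double(int base, int exp, int* out)
    {
        double r = std::pow(static_cast<double>(base), static_cast<double>(exp));
        if (r > static_cast<double>(INT_MAX) || r < static_cast<double>(INT_MIN))
            return false;                 // would overflow or underflow an int
        *out = static_cast<int>(r);
        return true;
    }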

Second, another thing you have to remember is that the original intent of C was as a systems programming language, and it's questionable whether floating point is desirable in that arena at all.

Since one of its initial use cases was to code up UNIX, floating point would have been next to useless. BCPL, on which C was based, also had no use for powers (it didn't have floating point at all, from memory).

As an aside, an integral power operator would probably have been a binary operator rather than a library call. You don't add two integers with x = add (y, z) but with x = y + z - part of the language proper rather than the library.

Third, since the implementation of integral power is relatively trivial, it's almost certain that the developers of the language would better use their time providing more useful stuff (see below comments on opportunity cost).

That's also relevant for the original C++. Since the original implementation was effectively just a translator which produced C code, it carried over many of the attributes of C. Its original intent was C-with-classes, not C-with-classes-plus-a-little-bit-of-extra-math-stuff.

As to why it was never added to the standards before C++11, you have to remember that the standards-setting bodies have specific guidelines to follow. For example, ANSI C was specifically tasked to codify existing practice, not to create a new language. Otherwise, they could have gone crazy and given us Ada :-)

Later iterations of that standard also have specific guidelines and can be found in the rationale documents (rationale as to why the committee made certain decisions, not rationale for the language itself).

For example the C99 rationale document specifically carries forward two of the C89 guiding principles which limit what can be added:

  • Keep the language small and simple.
  • Provide only one way to do an operation.

Guidelines (not necessarily those specific ones) are laid down for the individual working groups and hence limit the C++ committees (and all other ISO groups) as well.

In addition, the standards-setting bodies realise that there is an opportunity cost (an economic term meaning what you have to forego for a decision made) to every decision they make. For example, the opportunity cost of buying that $10,000 uber-gaming machine is cordial relations (or probably all relations) with your other half for about six months.

Eric Gunnerson explains this well with his "minus 100 points" explanation of why things aren't always added to Microsoft products: basically, a feature starts 100 points in the hole, so it has to add quite a bit of value to even be considered.

In other words, would you rather have an integral power operator (which, honestly, any half-decent coder could whip up in ten minutes) or multi-threading added to the standard? For myself, I'd prefer to have the latter and not have to muck about with the differing implementations under UNIX and Windows.

I would also like to see thousands and thousands of collections in the standard library (hashes, btrees, red-black trees, dictionaries, arbitrary maps and so forth) but, as the rationale states:

A standard is a treaty between implementer and programmer.

And the number of implementers on the standards bodies far outweighs the number of programmers (or at least those programmers that don't understand opportunity cost). If all that stuff were added, the next C++ standard would be C++215x and would probably be fully implemented by compiler developers three hundred years after that.

Anyway, that's my (rather voluminous) thoughts on the matter. If only votes were handed out based on quantity rather than quality, I'd soon blow everyone else out of the water. Thanks for listening :-)

paxdiablo
  • 772,407
  • 210
  • 1,477
  • 1,841
  • 3
    FWIW, I don't think C++ follows "Provide only one way to do an operation" as a constraint. Rightly so, because for example `to_string` and lambdas are both conveniences for things you could do already. I suppose one could interpret "only one way to do an operation" *very loosely* to allow both of those, and at the same time to allow almost any duplication of functionality that one can imagine, by saying "aha! no! because the convenience makes it a subtly different operation from the precisely-equivalent but more long-winded alternative!". Which is certainly true of lambdas. – Steve Jessop Sep 24 '12 at 10:50
  • @Steve, yes, that was badly worded on my part. It's more accurate to say that there are guidelines for each committee rather than all committees follow the same guidelines. Adjusted answer to clarify. – paxdiablo Dec 07 '12 at 22:56
  • 5
    Just one point (out of a few): "any code monkey could whip up in ten minutes". Sure, and if 100 code monkeys (nice insulting term, BTW) do that each year (probably a low estimate), we have 1000 minutes wasted. Very efficient, don't you think? – Jürgen A. Erhard Jan 04 '13 at 08:51
  • 1
    @Jürgen , it wasn't meant to be insulting (since I didn't actually ascribe the label to anyone specific), it was just an indication that `pow` doesn't really require much skill. Certainly I'd rather have the standard provide something which _would_ require a lot of skill, and result in far more wasted minutes if the effort had to be duplicated. – paxdiablo Jan 05 '13 at 00:43
  • Why was the "code monkey" part removed from the answer? When I read @JürgenA.Erhard's comment and the response I went back to read that paragraph but could not see it.. : ( I don't think it is an insulting term...BTW – eharo2 May 20 '19 at 16:30
  • @JürgenA.Erhard: Worse than that, everyone who inspects a call to some custom-written function to do such a thing will need to inspect the function to see if it actually behaves in a fashion that will meet the caller's needs. – supercat Sep 08 '19 at 22:19
  • 2
    @eharo2, just replace the "half decent coder" in the current text with "code monkey". I didn't think it was insulting either but I thought it best to be cautious and, to be honest, the current wording gets across the same idea. – paxdiablo Sep 09 '19 at 03:52
42

For any fixed-width integral type, nearly all of the possible input pairs overflow the type anyway. What's the use of standardizing a function that doesn't give a useful result for the vast majority of its possible inputs?

You pretty much need to have a big integer type in order to make the function useful, and most big integer libraries provide the function.


Edit: In a comment on the question, static_rtti writes "Most inputs cause it to overflow? The same is true for exp and double pow, I don't see anyone complaining." This is incorrect.

Let's leave aside exp, because that's beside the point (though it would actually make my case stronger), and focus on double pow(double x, double y). For what portion of (x,y) pairs does this function do something useful (i.e., not simply overflow or underflow)?

I'm actually going to focus only on a small portion of the input pairs for which pow makes sense, because that will be sufficient to prove my point: if x is positive and |y| <= 1, then pow does not overflow or underflow. This comprises nearly one-quarter of all floating-point pairs (exactly half of non-NaN floating-point numbers are positive, and just less than half of non-NaN floating-point numbers have magnitude less than 1). Obviously, there are a lot of other input pairs for which pow produces useful results, but we've ascertained that it's at least one-quarter of all inputs.

Now let's look at a fixed-width (i.e. non-bignum) integer power function. For what portion of inputs does it not simply overflow? To maximize the number of meaningful input pairs, the base should be signed and the exponent unsigned. Suppose that the base and exponent are both n bits wide. We can easily get a bound on the portion of inputs that are meaningful:

  • If the exponent is 0 or 1, then any base is meaningful.
  • If the exponent is 2 or greater, then no base larger than 2^(n/2) produces a meaningful result.

Thus, of the 2^(2n) input pairs, less than 2^(n+1) + 2^(3n/2) produce meaningful results. If we look at what is likely the most common usage, 32-bit integers, this means that something on the order of 1/1000th of one percent of input pairs do not simply overflow.
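
A quick brute-force check of that estimate for 8-bit types (my own addition, not part of the original analysis):

    #include <iostream>

    // Count (base, exponent) pairs whose result still fits in a signed 8-bit
    // type; compare against the 2^(n+1) + 2^(3n/2) bound above for n = 8.
    int main()
    {
        long long meaningful = 0;
        for (int base = -128; base <= 127; ++base) {
            for (int exp = 0; exp <= 255; ++exp) {
                long long acc = 1;
                bool fits = true;
                for (int i = 0; i < exp && fits; ++i) {
                    acc *= base;
                    if (acc > 127 || acc < -128)
                        fits = false;
                }
                if (fits)
                    ++meaningful;
            }
        }
        std::cout << meaningful << " of " << 256 * 256 << " pairs are meaningful\n";
    }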

Stephen Canon
  • 97,302
  • 18
  • 172
  • 256
  • I'm pretty sure pow underflows (ie. result is too small to be accurately represented as a double) for a whole range of inputs with x positive and |y| <= 1. – static_rtti Jul 06 '11 at 20:13
  • 11
    Anyway all of this is moot. Just because a function isn't valid for some or a lot of inputs doesn't make it less useful. – static_rtti Jul 06 '11 at 20:13
  • 2
    @static_rtti: `pow(x,y)` does not underflow to zero for any x if |y| <= 1. There is a *very* narrow band of inputs (large x, y very nearly -1) for which underflow occurs, but the result is still meaningful in that range. – Stephen Canon Jul 06 '11 at 20:19
  • 2
    Having given it more thought, I agree on the underflow. I still think this isn't relevant to the question, though. – static_rtti Jul 06 '11 at 20:45
  • 2
    @static: It is relevant. If I had to decide whether to include this function or not, this is exactly the reason I would *not* include it. – Yakov Galka Jul 07 '11 at 10:15
  • 7
    @ybungalobill: Why would you chose that as a reason? Personnaly, I'd favor usefulness for a large number of problems and programmers, possibility to make harware optimized versions that are faster than the naive implementation most programmers will probably write, and so on. Your criterion seems completely arbitrary, and, to be frank, quite pointless. – static_rtti Jul 07 '11 at 10:21
  • @static_rtti: to be fair, I don't really need this function. I doubt it's useful at all. The only base I need to exponentiate is 2, which can be done via bit-shift. Others (e.g. 10) occur when you perform some formatting, then you do the exponentiation iteratively anyway. This is strongly connected to the issue discussed here. If you *do* need `pow`, then you almost surely do some numerical computations, so you will be using floating-point anyway. Then why bother with not-so-well defined, hard to verify, integer `pow`? – Yakov Galka Jul 07 '11 at 11:55
  • @static: also efficiency is irrelevant for integer pow because for large exponent you're going to overflow anyway. – Yakov Galka Jul 07 '11 at 11:57
  • 6
    @StephenCanon: On the bright side, your argument shows that the obviously-correct-and-optimal implementation of integer `pow` is simply a tiny lookup table. :-) – R.. GitHub STOP HELPING ICE Dec 16 '13 at 09:00
  • @ybungalobill I found this question because I needed to use the constant 36⁹. I ended up calculating it and using the precomputed version, but the compiler should have been able to do that for me and my code is less readable for saying `101559956668416`. This is one use case, where it admittedly isn’t particularly useful, but it shows it’s “useful at all”. I’ve had other, more useful, uses for it, but they were less recent and I don’t remember the details. Just because you don’t use something doesn’t mean it’s less useful. – Daniel H Feb 03 '17 at 15:35
  • I like the idea of a BigInteger data type... It will help... Now we will have the problem of overflow with most cases of pow (BigInteger, BigInteger)..... – eharo2 May 20 '19 at 16:34
  • "What's the use of standardizing a function that doesn't give a useful result for vast majority of its possible inputs?": this is true for the pow for doubles as well. – Yves Daoust Oct 28 '20 at 09:36
12

Because there's no way to represent all integer powers in an int anyways:

>>> print 2**-4
0.0625
Ignacio Vazquez-Abrams
  • 699,552
  • 132
  • 1,235
  • 1,283
  • 3
    For a finite sized numeric type, there's no way to represent all powers of that type within that type due to overflow. But your point about negative powers is more valid. – Chris Lutz Mar 07 '10 at 23:31
  • 1
    I see negative exponents as something a standard implementation could handle, either by taking an unsigned int as the exponent or returning zero when a negative exponent is provided as input and an int is the expected output. – Dan O Mar 08 '10 at 00:08
  • 3
    or have separate `int pow(int base, unsigned int exponent)` and `float pow(int base, int exponent)` – Ponkadoodle Mar 08 '10 at 00:13
  • 4
    They could just declare it as undefined behavior to pass a negative integer. – Johannes Schaub - litb Mar 08 '10 at 00:34
  • 2
    On all modern implementations, anything beyond `int pow(int base, unsigned char exponent)` is somewhat useless anyway. Either the base is 0 or 1, and the exponent doesn't matter; or it's -1, in which case only the last bit of the exponent matters; or `base > 1 || base < -1`, in which case `exponent < 256` on penalty of overflow. – MSalters Mar 08 '10 at 14:55
  • Besides all other excellent posts and all the philosophical discussions about overflow, etc, this is the best reason why there is no int = pow (int, int). "there's no way to represent all integer powers in an int " – eharo2 May 20 '19 at 16:40
9

That's actually an interesting question. One argument I haven't found in the discussion is the simple lack of obvious return values for some arguments. Let's count the ways the hypothetical int pow_int(int, int) function could fail.

  1. Overflow
  2. Result undefined: pow_int(0,0)
  3. Result can't be represented: pow_int(2,-1)

The function has at least two failure modes. Integers can't represent the results in these cases; the behaviour of the function would need to be defined by the standard, and programmers would need to be aware of exactly how the function handles them.

Overall leaving the function out seems like the only sensible option. The programmer can use the floating point version with all the error reporting available instead.
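
To make that concrete, here is one hedged sketch (purely hypothetical, not something proposed for the standard; it needs C++17 for std::optional) of what a fully specified interface might have to look like once each failure mode is handled explicitly:

    #include <optional>
    #include <limits>

    // Hypothetical interface: nullopt signals "no representable int result",
    // covering overflow, pow_int(0,0) (treated as undefined here) and
    // negative exponents.
    std::optional<int> pow_int_checked(int base, int exp)
    {
        if (exp < 0)
            return std::nullopt;                    // case 3: not representable
        if (base == 0 && exp == 0)
            return std::nullopt;                    // case 2: undefined
        long long acc = 1;
        for (int i = 0; i < exp; ++i) {
            acc *= base;                            // cannot overflow long long: |acc| was still within int range
            if (acc > std::numeric_limits<int>::max() ||
                acc < std::numeric_limits<int>::min())
                return std::nullopt;                // case 1: overflow
        }
        return static_cast<int>(acc);
    }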

phoku
  • 2,034
  • 1
  • 18
  • 16
  • 1
    But wouldn't the first two cases apply to a `pow` between floats as well? Take two large floats, raise one to the power of the other and you have an Overflow. And `pow(0.0, 0.0)` would cause the same problem as your 2nd point. Your 3rd point is the only real difference between implementing a power function for integers vs floats. – numbermaniac Nov 30 '19 at 14:15
7

Short answer:

A specialisation of pow(x, n) to the case where n is a natural number is often useful for time performance. But the standard library's generic pow() still works pretty (surprisingly!) well for this purpose, and it is absolutely critical to include as little as possible in the standard C library so it can be made as portable and as easy to implement as possible. On the other hand, that doesn't stop it at all from being in the C++ standard library or the STL, which I'm pretty sure nobody is planning on using in some kind of embedded platform.

Now, for the long answer.

pow(x, n) can be made much faster in many cases by specialising n to a natural number. I have had to use my own implementation of this function for almost every program I write (but I write a lot of mathematical programs in C). The specialised operation can be done in O(log(n)) time, but when n is small, a simpler linear version can be faster. Here are implementations of both:


    // Computes x^n, where n is a natural number.
    double pown(double x, unsigned n)
    {
        // n = 2*d + r. x^n = (x^2)^d * x^r.
        unsigned d = n >> 1;
        unsigned r = n & 1;
        double x_2_d = d == 0? 1 : pown(x*x, d);
        double x_r = r == 0? 1 : x;
        return x_2_d*x_r;
    }
    // The linear implementation.
    double pown_l(double x, unsigned n)
    {
        double y = 1;
        for (unsigned i = 0; i < n; i++)
            y *= x;
        return y;
    }

(I left x and the return value as doubles because the result of pow(double x, unsigned n) will fit in a double about as often as pow(double, double) will.)

(Yes, pown is recursive, but overflowing the stack is absolutely impossible since the maximum recursion depth will roughly equal log_2(n) and n is an integer. If n is a 64-bit integer, that gives you a maximum depth of about 64. No hardware has such extreme memory limitations, except for some dodgy PICs with hardware stacks that only go 3 to 8 function calls deep.)

As for performance, you'll be surprised by what a garden variety pow(double, double) is capable of. I tested a hundred million iterations on my 5-year-old IBM Thinkpad with x equal to the iteration number and n equal to 10. In this scenario, pown_l won. glibc pow() took 12.0 user seconds, pown took 7.4 user seconds, and pown_l took only 6.5 user seconds. So that's not too surprising. We were more or less expecting this.

Then, I let x be constant (I set it to 2.5), and I looped n from 0 to 19 a hundred million times. This time, quite unexpectedly, glibc pow won, and by a landslide! It took only 2.0 user seconds. My pown took 9.6 seconds, and pown_l took 12.2 seconds. What happened here? I did another test to find out.

I did the same thing as above, only with x equal to a million. This time, pown won at 9.6s. pown_l took 12.2s and glibc pow took 16.3s. Now it's clear! glibc pow performs better than the other two when x is low, but worst when x is high. When x is high, pown_l performs best when n is low, and pown performs best when n is high.

So here are three different algorithms, each capable of performing better than the others under the right circumstances. So, ultimately, which to use most likely depends on how you're planning on using pow, but using the right version is worth it, and having all of the versions is nice. In fact, you could even automate the choice of algorithm with a function like this:

    double pown_auto(double x, unsigned n, double x_expected, unsigned n_expected) {
        if (x_expected < x_threshold)
            return pow(x, n);
        if (n_expected < n_threshold)
            return pown_l(x, n);
        return pown(x, n);
    }

As long as x_expected and n_expected are constants decided at compile time, along with possibly some other caveats, an optimising compiler worth its salt will automatically remove the entire pown_auto function call and replace it with the appropriate choice of the three algorithms. (Now, if you are actually going to attempt to use this, you'll probably have to toy with it a little, because I didn't exactly try compiling what I'd written above. ;))

On the other hand, glibc pow does work and glibc is big enough already. The C standard is supposed to be portable, including to various embedded devices (in fact embedded developers everywhere generally agree that glibc is already too big for them), and it can't be portable if for every simple math function it needs to include every alternative algorithm that might be of use. So, that's why it isn't in the C standard.

footnote: In the time performance testing, I gave my functions relatively generous optimisation flags (-s -O2) that are likely to be comparable to, if not worse than, what was likely used to compile glibc on my system (archlinux), so the results are probably fair. For a more rigorous test, I'd have to compile glibc myself and I reeeally don't feel like doing that. I used to use Gentoo, so I remember how long it takes, even when the task is automated. The results are conclusive (or rather inconclusive) enough for me. You're of course welcome to do this yourself.

Bonus round: A specialisation of pow(x, n) to all integers is instrumental if an exact integer output is required, which does happen. Consider allocating memory for an N-dimensional array with p^N elements. Getting p^N off even by one will result in a possibly randomly occurring segfault.

enigmaticPhysicist
  • 1,059
  • 11
  • 15
  • I guess if you get rid of the recursion, you will save the time required for the stack allocation. And yes, we had a situation where pow was slowing everything down and we had to implement our own pow. – Sambatyon May 26 '14 at 15:31
  • "Nobody has such extreme memory limitations" is false. PIC have often a limited call stack for a maximum of 3 (example is PIC10F200) to 8 (example is 16F722A) calls (PIC use a hardware stack for function calls). – 12431234123412341234123 Feb 27 '17 at 13:27
  • oh, man that's brutal lol. OK, so it won't work on those PICs. – enigmaticPhysicist Feb 28 '17 at 16:47
  • For an integer base as well as power, like the question is asking about, compilers (gcc and clang) will easily produce a branchless loop from an iterative (instead of recursive) implementation. This avoids branch mispredicts from each bit of `n`. https://godbolt.org/z/L9Kb98. gcc and clang fail to optimize your recursive definition into a simple loop, and actually do branch on each bit of `n`. (For `pown_iter(double,unsigned)` they still branch, but a branchless SSE2 or SSE4.1 implementation should be possible in x86 asm or with C intrinsics. But even that's better than recursion) – Peter Cordes Nov 16 '18 at 18:10
  • Crap, now I have to do the benchmarks all over again with a loop-based version just to be sure. I'll think about it. – enigmaticPhysicist Nov 17 '18 at 22:50
6

One reason for C++ to not have additional overloads is to be compatible with C.

C++98 has functions like double pow(double, int), but these have been removed in C++11 with the argument that C99 didn't include them.

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3286.html#550

Getting a slightly more accurate result also means getting a slightly different result.

Bo Persson
  • 86,087
  • 31
  • 138
  • 198
3

The world is constantly evolving and so are the programming languages. The fourth part of the C decimal TR¹ adds some more functions to <math.h>. Two families of these functions may be of interest for this question:

  • The pown functions, which take a floating point number and an intmax_t exponent.
  • The powr functions, which take two floating point numbers (x and y) and compute x to the power y with the formula exp(y*log(x)) (see the sketch just below).
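
For what it's worth, a rough model of powr's defining formula (my own sketch; the real powr specification also pins down special cases like x < 0, zeros, infinities and NaNs, which are ignored here):

    #include <cmath>

    // powr-style power: defined purely via exp(y * log(x)), so it only makes
    // sense for x > 0 in this simplified sketch.
    double powr_model(double x, double y)
    {
        return std::exp(y * std::log(x));
    }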

It seems that the standard guys eventually deemed these features useful enough to be integrated into the standard library. However, the rationale is that these functions are recommended by the ISO/IEC/IEEE 60559:2011 standard for binary and decimal floating point numbers. I can't say for sure what "standard" was followed at the time of C89, but the future evolutions of <math.h> will probably be heavily influenced by the future evolutions of the ISO/IEC/IEEE 60559 standard.

Note that the fourth part of the decimal TR won't be included in C2x (the next major C revision), and will probably be included later as an optional feature. There hasn't been any intent I know of to include this part of the TR in a future C++ revision.


¹ You can find some work-in-progress documentation here.

Morwenn
  • 19,202
  • 10
  • 89
  • 142
  • Are there any plausible implementations in which using `pown` with an exponent greater than `LONG_MAX` should ever yield a value different from using `LONG_MAX`, or where a value less than `LONG_MIN` should yield a value different from `LONG_MIN`? I wonder what benefit is obtained from using `intmax_t` for an exponent? – supercat Sep 16 '15 at 18:31
  • @supercat No idea, sorry. – Morwenn Sep 16 '15 at 18:34
  • It might be worthwhile to mention that, looking at the Standard, it seems to also define an optional "crpown" function which would, if defined, be a correctly-rounded version of "pown"; the Standard otherwise does not specify the required degree of accuracy. Implementing a fast and moderately-precise "pown" is easy, but ensuring correct rounding in all cases is apt to be much more expensive. – supercat Sep 16 '15 at 18:50
2

Perhaps because the processor's ALU didn't implement such a function for integers, but there is such an FPU instruction (as Stephen points out, it's actually a pair). So it was actually faster to cast to double, call pow with doubles, then test for overflow and cast back, than to implement it using integer arithmetic.

(for one thing, logarithms reduce powers to multiplication, but logarithms of integers lose a lot of accuracy for most inputs)

Stephen is right that on modern processors this is no longer true, but the C standard when the math functions were selected (C++ just used the C functions) is now what, 20 years old?

Ben Voigt
  • 260,885
  • 36
  • 380
  • 671
  • 5
    I don't know of any current architecture with a FPU instruction for `pow`. x86 has a `y log2 x` instruction (`fyl2x`) that can be used as the first part of a `pow` function, but a `pow` function written that way takes hundreds of cycles to execute on current hardware; a well written integer exponentiation routine is several times faster. – Stephen Canon Mar 08 '10 at 00:15
  • I don't know that "hundreds" is accurate, seems to be around 150 cycles for fyl2x then f2xm1 on most modern CPUs and that gets pipelined with other instructions. But you're right that a well-tuned integer implementation should be much faster (these days) since IMUL has been sped up a lot more than the floating-point instructions. Back when the C standard was written, though, IMUL was pretty expensive and using it in a loop probably did take longer than using the FPU. – Ben Voigt Mar 08 '10 at 00:43
  • 2
    Changed my vote in light of the correction; still, keep in mind (a) that the C standard underwent a major revision (including a large expansion of the math library) in 1999, and (b) that the C standard isn't written to any specific processor architecture -- the presence or absence of FPU instructions on x86 has essentially nothing to do with what functionality the C committee choses to standardize. – Stephen Canon Mar 08 '10 at 01:06
  • It's not tied to any architecture, true, but the relative cost of a lookup table interpolation (generally used for the floating point implementation) compared to integer multiply has changed pretty much equally for all architectures I would guess. – Ben Voigt Mar 08 '10 at 02:31
1

Here's a really simple O(log(n)) implementation of pow() that works for any numeric types, including integers:

template<typename T>
static constexpr inline T pown(T x, unsigned p) {
    T result = 1;

    while (p) {
        if (p & 0x1) {       // if the lowest bit of p is set,
            result *= x;     // fold the current power of x into the result
        }
        x *= x;              // square x for the next bit of the exponent
        p >>= 1;             // and move on to that bit
    }

    return result;
}
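
For instance, a quick usage check (my addition; the loop inside a constexpr function needs C++14 or later):

static_assert(pown(3, 4) == 81, "integer bases give exact integer results");
static_assert(pown(2.0, 10) == 1024.0, "floating-point bases work too");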

It's better than enigmaticPhysicist's O(log(n)) implementation because it doesn't use recursion.

It's also almost always faster than his linear implementation (as long as p > ~3) because:

  • it doesn't require any extra memory
  • it only does ~1.5x more operations per loop
  • it only does ~1.25x more memory updates per loop
serg06
  • 356
  • 3
  • 12
-3

As a matter of fact, it does.

Since C++11 there is a templated implementation of pow(int, int) --- and even more general cases, see (7) in http://en.cppreference.com/w/cpp/numeric/math/pow


EDIT: purists may argue this is not correct, as there is actually "promoted" typing used. One way or another, one gets a correct int result, or an error, on int parameters.
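
One way to see the promotion concretely (my own illustration, valid since C++11):

    #include <cmath>
    #include <type_traits>

    // Both arguments are ints, but the overload that gets selected returns double:
    static_assert(std::is_same<decltype(std::pow(2, 3)), double>::value,
                  "std::pow(int, int) yields a double, not an int");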

Dima Pasechnik
  • 306
  • 4
  • 16
  • 2
    this is incorrect. The (7) overload is `pow ( Arithmetic1 base, Arithmetic2 exp )` which will be cast to `double` or `long double` if you've read the description: *"7) A set of overloads or a function template for all combinations of arguments of arithmetic type not covered by 1-3). If any argument has integral type, it is cast to double. If any argument is long double, then the return type Promoted is also long double, otherwise the return type is always double."* – phuclv May 21 '19 at 01:25
  • what is incorrect here? I merely said that nowadays (since C++11) a templated pow(_,_) is in the standard library, which was not the case in 2010. – Dima Pasechnik May 21 '19 at 07:18
  • 5
    No it doesn't. Templates promote these types to double or long double. So it works on doubles underneath. – Trismegistos Jul 09 '19 at 08:56
  • 1
    @Trismegistos It still allows int parameters. If this template were not there, passing int parameters causes it to interpret the bits in the int as a float, causing arbitrary unexpected results. The same happens with mixed input values. e.g. `pow(1.5f, 3)` = `1072693280` but `pow(1.5f, float(3))` = `3.375` – Mark Jeronimus Aug 02 '19 at 13:04
  • 2
    The OP asked for `int pow(int, int)`, but C++ 11 only provides `double pow(int, int)`. See @phuclv 's explanation. – xuhdev Sep 04 '19 at 22:50
  • whether or not the implementation does casting to double is more of an implementation detail - as long as a correct answer may be guaranteed. – Dima Pasechnik Sep 06 '19 at 08:43
-4

A very simple reason:

5^-2 = 1/25

Everything in the standard library is based on the most accurate, robust stuff imaginable. Sure, an int version would return zero (from 1/25), but that would be an inaccurate answer.

I agree, it's weird in some cases.

Jason A.
  • 73
  • 1
  • 6