-1

Is it possible to create a function that takes a double and raises it to the power of another double in pure C? Such as 3.5 raised to the 2.7, i.e. pow(3.5, 2.7). I was told this is only possible if assembly is used to write the function.

johnfound
  • 6,751
  • 2
  • 25
  • 53
chillpenguin
  • 2,639
  • 2
  • 12
  • 15
  • possible duplicate of [Where is pow function defined and implemented in C?](http://stackoverflow.com/questions/1699694/where-is-pow-function-defined-and-implemented-in-c) – Pascal Cuoq Aug 03 '13 at 06:49
  • `exp( exp( log(b) + log(log(a)) ) )` – P0W Aug 03 '13 at 06:49
  • 5
    You shouldn't believe everything you hear, and you should look a little bit for the answer before asking on StackOverflow. – Pascal Cuoq Aug 03 '13 at 06:50
  • Well I don't know for sure if you answered my question because I don't know if "log(a)" is a function that uses assembly or not... I imagine it does... – chillpenguin Aug 03 '13 at 06:50
  • @chillpenguin Realistically speaking, *everything* is gonna compile down to assembly. – Mysticial Aug 03 '13 at 06:51
  • I did look and I didn't find any code that doesn't use assembly. So I am starting to think what I heard was right. – chillpenguin Aug 03 '13 at 06:51
  • The performance of assembly code may be better; it may be able to use CPU-specific instructions which are not accessible except to assembly language code. However, it is possible to implement it without using assembly language code. – Jonathan Leffler Aug 03 '13 at 06:53
  • @Mysticial I know but that doesn't change my question. Can you write code using only C without any assembly in your source file that can compute double x to the double y? – chillpenguin Aug 03 '13 at 06:53
  • @JonathanLeffler can you explain how? – chillpenguin Aug 03 '13 at 06:54
  • Yes: Mysticial showed you how. Invoking `exp()`, `log()` and `*` does not involve assembler at the source level. What goes on behind the scenes is immaterial; all C code is compiled to assembler and then object code. – Jonathan Leffler Aug 03 '13 at 06:54
  • I believe `pow()` is part of the C standard anyway... – Mysticial Aug 03 '13 at 06:55
  • @P0W: is there an advantage to your formula with extra `log` and `exp` calls compared to Mysticial's code? Or is your answer a humourous commentary that there's more than one way to do it? – Jonathan Leffler Aug 03 '13 at 06:58
  • Where is the source code for the log function that doesn't use assembly? It's not that I don't believe you I just can't find it. – chillpenguin Aug 03 '13 at 06:59
  • @chillpenguin FWIW, `pow()`, `exp()`, `log()` are all part of the C standard library. If you want to go below those, then dig out those Taylor series formulas. (among a bunch of other methods) – Mysticial Aug 03 '13 at 07:01
  • @JonathanLeffler well both :D – P0W Aug 03 '13 at 07:01
  • @Mysticial yeah I was planning to expand the Taylor's series for `log` and `exp` – P0W Aug 03 '13 at 07:02
  • So the function definitions for exp() and log() don't use assembly? – chillpenguin Aug 03 '13 at 07:28
  • 2
    It doesn't matter how the functions are actually implemented, the point is that it is possible to implement them using regular math, numerical methods, table look-ups, and interpolation. – jxh Aug 03 '13 at 07:32
  • 1
    Fun Fact: Both Assembly and C are Turing Complete. So anything you can do in one, you can do in the other. :) – Mysticial Aug 03 '13 at 07:35
  • 1
    Say 1000 times: "CPUs only execute binary instruction codes." – Martin James Aug 03 '13 at 07:54

3 Answers

5

Whoever told you that the pow() function can only possibly be implemented if assembly is used was wrong. This is clearly wrong because the mathematical concept of exponentiation predates computers by, oh, over 2000 years.1 Logarithms were invented about 400 years ago (as a way to simplify calculations involving exponentiation), and the slide rule immediately followed.2 The slide rule was the dominant tool used for computing arithmetical expressions until the invention of the digital calculator.

You may have had a math lesson that involved using interpolation on logarithm and exponentiation table entries to perform calculations.3 If you remember such a lesson, this is a hint that it is possible to perform these calculations using regular math. In any case, this is also where the properties of logarithms and exponents are taught.

In calculus, there is a lesson on Taylor Series, and how to use a Taylor Polynomial to approximate a function.4 Since the Maclaurin Series for ln(1+x) (a Taylor Series centered at 0) only converges for x in (-1, 1], you can use regular math to scale the argument down, and use math to adjust the computed answer to get the desired answer. For example, to compute ln(2.7), you could compute ln(2.7) - ln(2²) + ln(2²) = ln(2.7/2²) + ln(2²) = ln(0.675) + 2×ln(2). Using the Maclaurin Series on the first term (with x = -0.325), and a table look-up for ln(2), you get to your answer.

Whether the pow() function is implemented in the library with assembly or not, it is only computing what a human being told it to compute. There is no magic being done in the hardware that cannot be accomplished in software or on a piece of paper.


  1. Euclid and Archimedes were both well versed in the concept of exponents.
  2. John Napier published his theory of logarithms in 1614. A few years later, the concept was improved by Henry Briggs, who also published the first common logarithm table. William Oughtred is credited with inventing the slide rule in 1622.
  3. Sad to say, math by tables is probably being removed from modern curriculum since it does not help improve standardized test scores, and using a calculator is so much easier.
  4. Taylor Series were generated for several different functions as early as 300 years before Brook Taylor derived a mathematical method for their creation.
jxh
  • 64,506
  • 7
  • 96
  • 165
  • When using a C compiler which does not support an 80-bit long double, is there any way to compute x^y which is accurate to within 0.51LSB and is as fast as using the x87 to compute the base-2 log of x (to 80-bit precision), multiply by y, and raise 2 to the resulting exponent? To be sure, maybe the right answer isn't "use assembly" but rather "use a C compiler with decent floating-point math", but sometimes that doesn't seem to be an option. – supercat Oct 19 '14 at 19:14
  • @supercat: This answer is not "I am anti-assembly". This answer is "Please do not be a computer programmer that is ignorant of basic mathematical concepts." To your question, in this scenario, it would not be possible to even use basic mathematical operators, let alone exponentiation. But, if your program requires 80-bit precision, then go for it. – jxh Oct 20 '14 at 05:47
  • When adding together a group of `double` values, there are approaches like Kahan summation which will yield results within one LSB without needing any higher-precision intermediate values. Implementing a double-precision exponentiation operation which is accurate even to within 0.75LSB without using higher-precision intermediate results, however, is much harder. The purpose of 80-bit precision on the exponentiation isn't to yield a result that's precise to 80 bits, but rather to ensure that the result is accurate to 64. – supercat Oct 20 '14 at 15:02
  • @supercat: Sounds like a requirement to me. If I had that requirement, I would first find or implement a correct version in C before implementing it in assembly. – jxh Oct 20 '14 at 15:21
  • Implementing a bit-accurate exponentiation operator, or even proving the correctness of an implementation, is *hard*, and performance would be grossly inferior to what could be achieved with less than a dozen lines of assembly code. Of course, the fundamental problem is that compiler floating-point support has slipped backward since the 1980s. – supercat Oct 20 '14 at 15:45
  • @supercat: Since I rarely write assembly, a ready made high precision library would be my preference to prototype, and to compare answers against. If it takes as few lines as you say, it would be worth asking the for an updated compiler. If I was proficient in compilers, I would offer a patch to the compiler itself, if it was open source. Notice none of this is "anti-assembly", and neither you nor I are claiming it can only be done in assembly, as claimed in the question. – jxh Oct 20 '14 at 16:22
0

There are some things that can only be done in assembly, because the instructions are not accessible otherwise. For normal algorithms, like calculations, this is not the case though. What would be so special about a pow function that you could not write it in C? It might be faster if you write it in assembly, but that is not a hindrance to implementing it in C.

Devolus
  • 20,356
  • 11
  • 56
  • 104
-1

You can calculate anything without assembly (the same way you might calculate it by hand), but it will be much slower; that's why no one does these things without using special hardware support.

user541686
  • 189,354
  • 112
  • 476
  • 821
  • 2
    What? Mathematical functions are typically implemented with techniques like look-up tables and polynomial interpolation. Any respectable C compiler does as good a job at this as the typical human. This library is pure C: http://lipforge.ens-lyon.fr/www/crlibm/ . It is almost as fast as any faithful implementation and it only uses “special hardware” in that it uses +, -, *, / and sqrt. – Pascal Cuoq Aug 03 '13 at 06:54
  • @PascalCuoq: If they could be implemented as fast in hardware as in software, then the hardware implementations would be pointless. – user541686 Aug 03 '13 at 06:59
  • 2
    Could you point to a single implementation of a faithful `pow()` function from an existing math library that relies on what you call “special hardware support” (which I take to be something other than the 5 basic IEEE 754 instructions)? I claim that writing these functions in C is the normal way to do it. I have pointed out a pure C implementation and I think I could dig up two or three more. – Pascal Cuoq Aug 03 '13 at 07:04
  • @PascalCuoq: I said it's much slower without hardware support, I didn't say it's impossible... I'm not sure what you're arguing against. – user541686 Aug 03 '13 at 07:07
  • @PascalCuoq Somewhat OT, but does the x87 FPU `F2XM1` ever get used for `pow()` computations? – Mysticial Aug 03 '13 at 07:08
  • @Mysticial I take it that the x87 instructions that pretend to compute mathematical functions are relics from the 1980s. I have never seen anyone use any of them for a libm, but they might still be called directly by a programmer who knows (s)he only needs the accuracy they provide on the range where they work. – Pascal Cuoq Aug 03 '13 at 07:18
  • 2
    Mehrdad, I am arguing against the entire claim of your answer, that “You can calculate anything without assembly (the same way you might calculate it by hand), but it will be much slower; that's why no one does these things without using special hardware support”. It is not just that the fastest libms are written in C; it is that all libms that I have ever seen were written in C (including some where speed was obviously a primary concern). – Pascal Cuoq Aug 03 '13 at 07:20
  • @PascalCuoq Woah. "only needs the accuracy they provide on the range where they work" - that implies that they don't provide fully precise results (not even to 53-bit precision - let alone 64)? If so, that's new to me! – Mysticial Aug 03 '13 at 07:21
  • @Mysticial http://software.intel.com/en-us/forums/topic/289702 (I don't know specifically about F2XM1) – Pascal Cuoq Aug 03 '13 at 07:22
  • 3
    For the record, Microsoft's implementations of `pow` and `powf` ["use SSE2"](http://msdn.microsoft.com/en-us/library/dt5dakze(v=vs.71).aspx). I'm not sure what that means but I'd tend to believe they're written in assembly, or use XMM intrinsics (and it would be cheating to call that C). – zneak Aug 03 '13 at 07:24
  • @PascalCuoq Oh neat link. Thanks! – Mysticial Aug 03 '13 at 07:25
  • @Mysticial: Regarding hardware not giving precise results, [it shouldn't be new to you](http://stackoverflow.com/questions/8733178#comment10933401_8739784)! :) – user541686 Aug 03 '13 at 07:28
  • @PascalCuoq: And *"the fastest libms"* you speak of are ***faster*** than hardware implementations? If so, can you give me an example of one that's faster than hardware? And if not, how does that go against what I wrote? – user541686 Aug 03 '13 at 07:28
  • @zneak SSE2 floating-point instructions essentially comprise scalar and vector versions of +, -, *, / and sqrt. – Pascal Cuoq Aug 03 '13 at 07:29
  • @Mehrdad I was mainly referring to where an instruction gives significantly less precision than the datatype that it works on. [An example would be the SSE single-precision invsqrt which only gives 11 bits.](http://stackoverflow.com/a/1528751/922184) – Mysticial Aug 03 '13 at 07:30
  • @Mysticial: Ahh I see. – user541686 Aug 03 '13 at 07:30
  • Mehrdad Again, what hardware implementation of a faithful function `pow()` are we talking about here? Could you point to a single implementation of a faithful pow() function from an existing math library that relies on what you call “special hardware support”? – Pascal Cuoq Aug 03 '13 at 07:33
  • @PascalCuoq: I'm talking about x86's power instruction. What software math library do you know of whose implementation of the same operation is faster? – user541686 Aug 03 '13 at 07:35
  • I am not familiar with that instruction. What is its exact opcode? Is it one from this list? http://en.wikipedia.org/wiki/X86_instruction_listings#x87_floating-point_instructions – Pascal Cuoq Aug 03 '13 at 07:45
  • @PascalCuoq: My bad, it's a handful of instructions, not a single one. It's FYL2X to calculate y log x, followed by F2XM1 and FSCALE. Which of the math libraries you mentioned handle it faster than the x86 FPU implementation? – user541686 Aug 03 '13 at 07:59
  • The math libraries I cited do not do the same thing as this sequence of instructions. They compute a faithful (i.e. to within one ULP) approximation of `pow(x, y)`. – Pascal Cuoq Aug 03 '13 at 08:07
  • @PascalCuoq: If you're looking for something for which a hardware implementation for them *does not exist*, it obviously *must* be implemented in software, you don't have a choice -- there is no hardware to compare *against*. How does that contradict the fact I mentioned, that hardware implementations are faster than software implementations? – user541686 Aug 03 '13 at 08:09
  • Oh, we agree then. It's just I thought you said that no one implemented pow() without using special hardware support. – Pascal Cuoq Aug 03 '13 at 08:14
  • @PascalCuoq: Well, it depends on what you mean by "`pow`". I meant the one defined in C, which is provided with compilers (and all compilers I've heard of use hardware instructions for it). But you were looking for a *different* kind of `pow`, a more accurate version, which isn't what people call `pow` in C. Obviously, if you're looking for a mathematical operation whose equivalent *doesn't exist* in hardware, there's nothing to compare to, so there's nothing to argue about. For what *does*, however, hardware will be faster, for the simple reason that if it weren't, then it wouldn't be there. – user541686 Aug 03 '13 at 08:20
  • 1
    As a Microsoft intern, I can confirm that the math functions of the MS C runtime are **not** implemented in assembly. Whatever the advantages of hardware are, Microsoft thinks they're not good enough. – zneak Aug 05 '13 at 17:22
  • 1
    I take that back. Some functions wrap math intrinsics (functions that expand into a very short sequence of specialized math instructions), so it would be cheating to not call that assembly. Some do and some don't. – zneak Aug 05 '13 at 17:36