32

How slow (how many cycles) is calculating a square root? This came up in a molecular dynamics course where efficiency is important and taking unnecessary square roots had a noticeable impact on the running time of the algorithms.

Anonymous

3 Answers

25

From Agner Fog's Instruction Tables:

On Core2 65nm, FSQRT takes 9 to 69 cycles (with almost equal reciprocal throughput), depending on the value and precision bits. For comparison, FDIV takes 9 to 38 cycles (with almost equal reciprocal throughput), FMUL takes 5 (reciprocal throughput = 2) and FADD takes 3 (reciprocal throughput = 1). SSE performance is about equal, but looks faster because it can't do 80-bit math. SSE does have a super-fast approximate reciprocal and approximate reciprocal sqrt, though.

On Core2 45nm, division and square root got faster; FSQRT takes 6 to 20 cc's, FDIV takes 6 to 21 cc's, FADD and FMUL haven't changed. Once again SSE performance is about the same.

You can get the documents with this information from his website.

SiegeX
harold
  • Right, but the whole point was not just to compare raw clock cycles but to see how it actually affects performance. However, this is a good answer. – Anonymous Oct 13 '11 at 02:20
  • @DougTreadwell well, it's pretty bad; especially because of the ultra-low throughput, it can completely kill the performance of a loop – harold Oct 13 '11 at 15:18
12

Square root is about 4 times slower than addition using -O2, or about 13 times slower without -O2. Elsewhere on the net I found estimates of 50-100 cycles, which may be true, but an absolute cycle count isn't a very useful measure of relative cost, so I threw together the code below to make a relative measurement. Let me know if you see any problems with the test code.

The code below was run on an Intel Core i3 under Windows 7 operating system and was compiled in DevC++ (which uses GCC). Your mileage may vary.

#include <cstdlib>
#include <iostream>
#include <cmath>
#include <ctime>   // std::clock, CLOCKS_PER_SEC

/*
Output using -O2:

1 billion square roots running time: 14738ms

1 billion additions running time   : 3719ms

Press any key to continue . . .

Output without -O2:

10 million square roots running time: 870ms

10 million additions running time   : 66ms

Press any key to continue . . .

Results:

Square root is about 4 times slower than addition using -O2,
            or about 13 times slower without using -O2
*/

int main(int argc, char *argv[]) {

    const int cycles = 100000;
    const int subcycles = 10000;

    double squares[cycles];  // ~800 KB on the stack; move to the heap if this overflows

    for ( int i = 0; i < cycles; ++i ) {
        squares[i] = rand();
    }

    std::clock_t start = std::clock();

    for ( int i = 0; i < cycles; ++i ) {
        for ( int j = 0; j < subcycles; ++j ) {
            squares[i] = std::sqrt(squares[i]);
        }
    }

    double time_ms = ( ( std::clock() - start ) / (double) CLOCKS_PER_SEC ) * 1000;

    std::cout << "1 billion square roots running time: " << time_ms << "ms" << std::endl;

    start = std::clock();

    for ( int i = 0; i < cycles; ++i ) {
        for ( int j = 0; j < subcycles; ++j ) {
            squares[i] = squares[i] + squares[i];
        }
    }

    time_ms = ( ( std::clock() - start ) / (double) CLOCKS_PER_SEC ) * 1000;

    std::cout << "1 billion additions running time   : " << time_ms << "ms" << std::endl;

    system("PAUSE");
    return EXIT_SUCCESS;
}
phuclv
Anonymous
    @ZacharyKraus: if you look at the edit history you'll see that my comment applied to the original version of the question, which lacked important detail as to CPU, compiler, platform, etc. Douglas was kind enough to subsequently update the answer so that it now includes all the relevant details, which makes the answer a lot more useful. I'll happily delete my comment as it is no longer relevant to the current version of the answer. – Paul R Mar 17 '14 at 23:57
  • Sorry about that, I didn't understand what the comment was for. I will delete mine as well, since I am not sure it adds anything to the post either. – Zachary Kraus Mar 19 '14 at 00:48
    You're not measuring speed of sqrt(), but speed of sqrt() + loop management + memory access. Loop management alone is an addition, a compare and a jump. So, statement that "sqrt is 4 times slower than addition" is a wrong conclusion. – kaalus Feb 01 '17 at 15:17
7

Square root takes several cycles, but it takes orders of magnitude more to access memory if it is not in cache. Therefore, trying to avoid computations by fetching pre-computed results from memory may actually be detrimental to performance.

It's difficult to say in the abstract whether you might gain or not, so if you want to know for sure, try and benchmark both approaches.

Here's a great talk on the matter by Eric Brummer, Compiler Developer on MSVC: http://channel9.msdn.com/Events/Build/2013/4-329

Asik
    Ah, but fetching pre-computed results from memory has saved a hardware demo in at least one case. – Hot Licks Mar 19 '14 at 00:54
  • Can you give an example of fetching a pre-computed result? I don't understand what you mean. But I will definitely check that slide show out later. It looks exciting. – Zachary Kraus Mar 19 '14 at 03:00
    I simply mean avoiding having to compute something (like a square root) on the fly by computing it beforehand, putting the result somewhere in memory (in an array, hashtable, whatever), and accessing that result when you need it in your actual computation. The access might actually be much slower than the actual square root. – Asik Mar 19 '14 at 14:05
    @Asik It really depends on the scenario. First of all, even if you don't store it pre-computed, you need to get the original value from somewhere. There's a memory access involved there, but you could store the sqrt-ed value alongside (or instead of) the original. If memory size is an issue, you can also replace the original value with the sqrt-ed value, since calculating the square is much cheaper than the square root. It all just depends on the scenario. – Aidiakapi Feb 15 '15 at 13:57
  • @ZacharyKraus remember that reading something that is not in cache, and thus must be fetched from memory, can be 50 or 100 times slower. A cache read can take, for example, 2 or 3 cycles, which is basically free, but a read from memory can take 100 or 200 cycles. – Peregring-lk Apr 01 '20 at 15:06