
I am using the time.h library in C to find the time taken to run an algorithm. The code structure is somewhat as follows:

#include <stdio.h>
#include <time.h>

int main(void)
{
  clock_t start, end, diff;  /* clock() returns clock_t, not time_t */

  start = clock();
    //ALGORITHM COMPUTATIONS
  end = clock();
  diff = end - start;
  printf("%ld\n", (long)diff);
  return 0;
}

The values for start and end are always zero. Is it that the clock() function doesn't work? Please help. Thanks in advance.

Jens
AAB
  • see [SO-link](http://stackoverflow.com/questions/588307/c-obtaining-milliseconds-time-on-linux-clock-doesnt-seem-to-work-properly/588377#588377) for gettimeofday example – Fredrik Pihl Sep 16 '11 at 22:43
  • What platform is this? If it's an x86 platform, you can/should use the TSC. – David Schwartz Sep 16 '11 at 22:47
  • Are you trying to find real time (i.e. how many seconds, measured from your stopwatch) or clock cycles (how many cpu operations)? – Foo Bah Sep 16 '11 at 22:51

4 Answers

2

Not that it doesn't work. In fact, it does. But it is not the right way to measure time, because the clock() function returns an approximation of the processor time used by the program.

I am not sure about other platforms, but on Linux you should use clock_gettime() with the CLOCK_MONOTONIC flag - that will give you the real wall time elapsed. Also, you can read the TSC, but be aware that it won't work if you have a multi-processor system and your process is not pinned to a particular core.

If you want to analyze and optimize your algorithm, I'd recommend you use some performance measurement tools. I've been using Intel's VTune for a while and am quite happy. It will show you not only which part uses the most cycles, but also highlight memory problems, possible parallelism issues, etc. You may be very surprised by the results. For example, most of the CPU cycles might be spent waiting for the memory bus.

Hope it helps!

UPDATE: Actually, if you run later versions of Linux, it might provide CLOCK_MONOTONIC_RAW, which is a hardware-based clock that is not subject to NTP adjustments. Here is a small piece of code you can use:

  • Your answer suffers the "All the world's a Linux/i386" (historically "All the world's a VAX"). The poster showed pure C89 code. We should assume he wants a portable solution unless told otherwise. I welcome your remark about the accuracy of clock(); but getting system specific with TSC, multicores et al is likely not helpful to someone apparently making first steps in C (like a forgotten :-) – Jens Sep 17 '11 at 10:07
  • @Jens: You are not right on this one. clock_gettime() is as portable as Linux (and conforms to SUSv2 and POSIX.1-2001), Windows has QueryPerformanceCounter and QueryPerformanceFrequency, and Mac has `mach_absolute_time`. There is no easy-to-use nanosecond-precision clock available on all platforms. And the `clock()` function is pretty much useless. –  Sep 17 '11 at 18:05
  • 1
    Please understand that you make a lot of assumptions. *Everything* you mention except for `clock()` is OS specific: Linux, clock_gettime, TSC, vTune. What if the OP is on Solaris/Sparc? AIX/RS6000? ARM? PowerPC? MacOS? Then large parts of your answer are a waste of time for both of you. It's wise to first ask the OP state his requirements before making assumptions. Or at least state your assumptions clearly. That's all. – Jens Sep 17 '11 at 18:18
1

Note that clock() returns the execution time in clock ticks, as opposed to wall clock time. Divide a difference of two clock_t values by CLOCKS_PER_SEC to convert the difference to seconds. The actual value of CLOCKS_PER_SEC is a quality-of-implementation issue. If it is low (say, 50), your process would have to run for 20ms to cause a nonzero return value from clock(). Make sure your code runs long enough to see clock() increasing.

Jens
  • CLOCKS_PER_SEC is equal to `1000000` and does not depend on CPU frequency, as defined by POSIX. –  Sep 16 '11 at 23:05
  • This is only true if the OS conforms to the POSIX XSI extension. ISO C99 makes no such statement about the value of `CLOCKS_PER_SEC` and the poster did not specify any OS. – Jens Sep 16 '11 at 23:25
0

I usually do it this way:

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();
    clock_t end;

    //algo

    end = clock();
    printf("%f", (double)(end - start));
    return 0;
}
  • Then you're most likely not executing enough instructions to get the execution time high enough. Try placing a long-running loop, e.g. a simple `for (int i = 0; i <= 100000; ++i) { /* would be a good idea to execute something here too */ }` –  Sep 16 '11 at 22:32
  • The gettimeofday function works ..... at least it gives a non-zero value. I am trying to get the time taken to sort an array; the function clock() works fine after increasing the size of the array to be sorted. – AAB Sep 17 '11 at 02:10
0

Consider the code below:

#include <stdio.h>
#include <time.h>

int main()
{
    clock_t t1, t2;
    t1 = t2 = clock();

    // loop until t2 gets a different value
    while(t1 == t2)
        t2 = clock();

    // print resolution of clock()
    printf("%f ms\n", (double)(t2 - t1) / CLOCKS_PER_SEC * 1000);

    return 0;
}

Output:

$ ./a.out 
10.000000 ms

It might be that your algorithm runs for a shorter amount of time than that. Use gettimeofday for a higher-resolution timer.

Fredrik Pihl