124

I have to compute the execution time of a C++ code snippet in seconds. It must work on both Windows and Unix machines.

I use the following code to do this (with the appropriate headers included beforehand):

clock_t startTime = clock();
// some code here
// to compute its execution duration in runtime
cout << double( clock() - startTime ) / (double)CLOCKS_PER_SEC << " seconds." << endl;

However, for small inputs or short statements such as a = a + 1, I get a "0 seconds" result. I think it must be something like 0.0000001 seconds or so.

I remember that System.nanoTime() in Java works pretty well in this case. However, I can't get the same functionality from the clock() function in C++.

Do you have a solution?

Yu Hao
AhmetB - Google
  • 30
    Keep in mind that any time-difference based comparison may well be inaccurate due to the fact that the OS may well not run your thread from start to finish. It may interrupt it and run other threads interlaced with yours, which will have a significant impact on actual time taken to complete your operation. You can run multiple times, and average out the results; you can minimize the number of other processes running. But none of these will eliminate the thread suspension effect entirely. – Mordachai Dec 07 '09 at 17:07
  • 15
Mordachai, why would you want to eliminate it? You want to see how your function performs in a real world environment, not in a magical realm where threads don't get interrupted ever. As long as you run it several times and make an average it will be very accurate. – Thomas Bonini Dec 07 '09 at 17:37
  • Yes I run it a few times and avg out results. – AhmetB - Google Dec 07 '09 at 17:54
  • 14
    Andreas, Mordachai's comment is relevant if the OP would like to compare the performance of his code to a different algorithm. For example, if he runs several clock tests this afternoon and then tests a different algorithm tomorrow morning, his comparison may not be reliable as he may be sharing resources with many more processes in the afternoon than in the morning. Or maybe one set of code will cause the OS to give it less processing time. There are numerous reasons why this type of performance measurement is unreliable if he wants to perform a time-based comparison. – weberc2 Aug 30 '12 at 13:42
Remember to add `#include <ctime>` or a similar import to the file – chessofnerd May 24 '13 at 13:07
  • 4
@Mordachai I know I am replying to an old comment, but for whoever stumbles on this as I did - to time performance of algorithms you want to take the minimum of a few runs, not the average. This is the one that had the least interruptions by the OS and so is timing mostly your code. – Baruch Aug 20 '14 at 07:52
More snippets for benchmarking here: http://stackoverflow.com/questions/22387586/c-measuring-execution-time-of-a-function – user9869932 Sep 02 '15 at 20:16

18 Answers

117

You can use this function I wrote. You call GetTimeMs64(), and it returns the number of milliseconds elapsed since the Unix epoch using the system clock - just like time(NULL), except in milliseconds.

It works on both Windows and Linux; it is thread safe.

Note that the granularity is 15 ms on Windows; on Linux it is implementation dependent, but it is usually 15 ms as well.

#ifdef _WIN32
#include <Windows.h>
#else
#include <sys/time.h>
#include <ctime>
#endif

/* Remove if already defined */
typedef long long int64; typedef unsigned long long uint64;

/* Returns the amount of milliseconds elapsed since the UNIX epoch. Works on both
 * windows and linux. */

uint64 GetTimeMs64()
{
#ifdef _WIN32
 /* Windows */
 FILETIME ft;
 LARGE_INTEGER li;

 /* Get the amount of 100 nano seconds intervals elapsed since January 1, 1601 (UTC) and copy it
  * to a LARGE_INTEGER structure. */
 GetSystemTimeAsFileTime(&ft);
 li.LowPart = ft.dwLowDateTime;
 li.HighPart = ft.dwHighDateTime;

 uint64 ret = li.QuadPart;
 ret -= 116444736000000000LL; /* Convert from file time to UNIX epoch time. */
 ret /= 10000; /* From 100 nano seconds (10^-7) to 1 millisecond (10^-3) intervals */

 return ret;
#else
 /* Linux */
 struct timeval tv;

 gettimeofday(&tv, NULL);

 uint64 ret = tv.tv_usec;
 /* Convert from micro seconds (10^-6) to milliseconds (10^-3) */
 ret /= 1000;

 /* Adds the seconds (10^0) after converting them to milliseconds (10^-3) */
 ret += (uint64)tv.tv_sec * 1000; /* cast avoids 32-bit overflow of the multiplication */

 return ret;
#endif
}
Thomas Bonini
  • 1
    For future reference: I just throw it into a header file and use it. Glad to have it. – Daniel Handojo Feb 24 '16 at 03:29
  • 1
    I believe the method `gettimeofday` can give an unintended result if the system clock is changed. If this would be a problem for you, you might want to look at `clock_gettime` instead. – Azmisov Mar 09 '16 at 19:38
  • Does this method for Windows have any advantages over `GetTickCount`? – MicroVirus Apr 20 '16 at 10:58
  • Does not compile using `gcc -std=c99` – Assimilater Feb 06 '17 at 23:10
  • @MicroVirus: yes, `GetTickCount` is the time elapsed since the system was started, while my function returns the time since the UNIX epoch which means you can use it for dates and times. If you are only interested in time elapsed between two events mine is still a better choice because it's an int64; GetTickCount is an int32 and overflows every 50 days meaning you can get weird results if the two events you registered are in between the overflow. – Thomas Bonini May 03 '17 at 22:59
43

I have another working example that uses microseconds (UNIX, POSIX, etc).

    #include <sys/time.h>
    typedef unsigned long long timestamp_t;

    static timestamp_t
    get_timestamp ()
    {
      struct timeval now;
      gettimeofday (&now, NULL);
      return  now.tv_usec + (timestamp_t)now.tv_sec * 1000000;
    }

    ...
    timestamp_t t0 = get_timestamp();
    // Process
    timestamp_t t1 = get_timestamp();

    double secs = (t1 - t0) / 1000000.0L;

Here's the file where we coded this:

https://github.com/arhuaco/junkcode/blob/master/emqbit-bench/bench.c

arhuaco
41

Here is a simple C++11 solution that gives you satisfactory resolution.

#include <iostream>
#include <chrono>

class Timer
{
public:
    Timer() : beg_(clock_::now()) {}
    void reset() { beg_ = clock_::now(); }
    double elapsed() const { 
        return std::chrono::duration_cast<second_>
            (clock_::now() - beg_).count(); }

private:
    typedef std::chrono::high_resolution_clock clock_;
    typedef std::chrono::duration<double, std::ratio<1> > second_;
    std::chrono::time_point<clock_> beg_;
};

Or, on *nix, for C++03:

#include <iostream>
#include <ctime>

class Timer
{
public:
    Timer() { clock_gettime(CLOCK_REALTIME, &beg_); }

    double elapsed() {
        clock_gettime(CLOCK_REALTIME, &end_);
        return end_.tv_sec - beg_.tv_sec +
            (end_.tv_nsec - beg_.tv_nsec) / 1000000000.;
    }

    void reset() { clock_gettime(CLOCK_REALTIME, &beg_); }

private:
    timespec beg_, end_;
};

Here is the example usage:

int main()
{
    Timer tmr;
    double t = tmr.elapsed();
    std::cout << t << std::endl;

    tmr.reset();
    t = tmr.elapsed();
    std::cout << t << std::endl;

    return 0;
}

From https://gist.github.com/gongzhitaao/7062087

Alex
gongzhitaao
  • I am getting this error with your c++11 solution : `/usr/lib/x86_64-linux-gnu/libstdc++.so.6: version GLIBCXX_3.4.19 not found (required by ../cpu_2d/g500)` – user9869932 Sep 02 '15 at 20:13
  • @julianromera what platform you are using? did you install the libstdc++ library and g++? – gongzhitaao Sep 02 '15 at 20:32
  • Its a Slurm grid of Linux ubuntu 12. I just got it fixed. I added -static-libstdc++ at the end of the linker. Thank-you for asking @gongzhitaao – user9869932 Sep 02 '15 at 20:42
18
#include <boost/progress.hpp>

using namespace boost;

int main (int argc, const char * argv[])
{
  progress_timer timer;

  // do stuff, preferably in a 100x loop to make it take longer.

  return 0;
}

When progress_timer goes out of scope it will print out the time elapsed since its creation.

UPDATE: Here's a version that works without Boost (tested on macOS/iOS):

#include <chrono>
#include <string>
#include <iostream>
#include <math.h>
#include <unistd.h>

class NLTimerScoped {
private:
    const std::chrono::steady_clock::time_point start;
    const std::string name;

public:
    NLTimerScoped( const std::string & name ) : name( name ), start( std::chrono::steady_clock::now() ) {
    }


    ~NLTimerScoped() {
        const auto end(std::chrono::steady_clock::now());
        const auto duration_ms = std::chrono::duration_cast<std::chrono::milliseconds>( end - start ).count();

        std::cout << name << " duration: " << duration_ms << "ms" << std::endl;
    }

};

int main(int argc, const char * argv[]) {

    {
        NLTimerScoped timer( "sin sum" );

        float a = 0.0f;

        for ( int i=0; i < 1000000; i++ ) {
            a += sin( (float) i / 100 );
        }

        std::cout << "sin sum = " << a << std::endl;
    }



    {
        NLTimerScoped timer( "sleep( 4 )" );

        sleep( 4 );
    }



    return 0;
}

Tomas Andrle
  • 2
    This works, but note that progress_timer is deprecated (sometime before boost 1.50) - auto_cpu_timer may be more appropriate. – davidA Sep 21 '12 at 02:58
  • 3
    @meowsqueak hmm, auto_cpu_timer seems to require the Boost system library to be linked, so it's no longer a header-only solution. Too bad... makes the other options more appealing all of a sudden. – Tomas Andrle Sep 28 '12 at 18:08
  • 1
    yes, that's a good point, if you don't already link Boost then it's more trouble than it's worth. But if you already do, it works quite nicely. – davidA Oct 01 '12 at 21:54
  • @meowsqueak Yeah, or for some quick benchmark tests, just get that older version of Boost. – Tomas Andrle Jan 23 '13 at 14:10
  • @TomasAndrle The link does not exist anymore. – Zheng Qu Jan 20 '20 at 23:43
5

Windows provides the QueryPerformanceCounter() function, and Unix has gettimeofday(). Both functions can measure at least a 1-microsecond difference.
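
As an illustration only (the wrapper name and structure below are mine, not part of this answer), here is a minimal sketch of hiding both calls behind one function that returns seconds as a double:

#ifdef _WIN32
#include <windows.h>
#else
#include <sys/time.h>
#include <cstddef>
#endif

/* Hypothetical helper: returns a timestamp in seconds; take the difference
 * of two calls to measure an interval. */
double GetSeconds()
{
#ifdef _WIN32
    LARGE_INTEGER freq, counter;
    QueryPerformanceFrequency(&freq);   /* ticks per second */
    QueryPerformanceCounter(&counter);  /* ticks since some fixed point (e.g. boot) */
    return (double)counter.QuadPart / (double)freq.QuadPart;
#else
    struct timeval tv;
    gettimeofday(&tv, NULL);            /* wall-clock time with microsecond resolution */
    return tv.tv_sec + tv.tv_usec / 1e6;
#endif
}

Something like double t0 = GetSeconds(); /* work */ double elapsed = GetSeconds() - t0; then gives the elapsed interval in seconds.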

Captain Comic
  • But using windows.h is restricted. The same compiled source must run on both Windows and Unix. How to handle this problem? – AhmetB - Google Dec 07 '09 at 17:02
  • 2
    Then look for some wrapper library http://stackoverflow.com/questions/1487695/c-cross-platform-high-resolution-timer – Captain Comic Dec 07 '09 at 17:04
  • 5
    *the same compiled source* sounds like you want to run the same binary on both systems, which doesn't seem to be the case. if you meant *the same source* then an `#ifdef` must be ok (and it is judging from the answer you have accepted), and then I don't see the problem: `#ifdef WIN32 #include ... #else ... #endif`. – just somebody Feb 05 '10 at 14:20
3

In some programs I wrote, I used RDTSC for this purpose. RDTSC is not about time but about the number of cycles since processor start. You have to calibrate it on your system to get a result in seconds, but it's really handy when you want to evaluate performance; it's even better to use the number of cycles directly without trying to convert them back to seconds.

(The link above is to a French Wikipedia page, but it has C++ code samples; the English version is here.)
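
As an illustration (not code from the linked article), here is a minimal sketch using the __rdtsc() intrinsic, available via <intrin.h> on MSVC and <x86intrin.h> on GCC/Clang; it yields raw cycle counts that you have to calibrate yourself to convert to seconds:

#ifdef _MSC_VER
#include <intrin.h>      // __rdtsc on MSVC
#else
#include <x86intrin.h>   // __rdtsc on GCC/Clang
#endif
#include <cstdint>
#include <iostream>

int main()
{
    uint64_t start = __rdtsc();          // cycle counter before the measured code

    volatile int a = 0;                  // volatile so the loop is not optimized away
    for (int i = 0; i < 1000000; ++i)
        a = a + 1;

    uint64_t cycles = __rdtsc() - start; // raw cycles, not seconds
    std::cout << cycles << " cycles\n";
    return 0;
}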

kriss
2

I suggest using the standard library functions for obtaining time information from the system.

If you want finer resolution, perform more execution iterations. Instead of running the program once and obtaining samples, run it 1000 times or more.

Thomas Matthews
2

It is better to run the inner loop several times with the performance timing taken only once, and average by dividing by the number of inner-loop repetitions, than to run the whole thing (loop + performance timing) several times and average. This reduces the overhead of the performance-timing code relative to your actual profiled section.

Wrap your timer calls for the appropriate system. For Windows, QueryPerformanceCounter is pretty fast and "safe" to use.

You can also use "rdtsc" on any modern x86 PC, but there may be issues on some multi-core machines (core hopping may change the timer) or if you have some form of SpeedStep turned on.
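
As a sketch of the inner-loop averaging described above, using std::chrono (the names and the measured statement are only illustrative): the timer is read once around the whole loop and the total is divided by the repetition count:

#include <chrono>
#include <iostream>

int main()
{
    const int reps = 1000000;           // inner-loop repetitions
    volatile int a = 0;                 // volatile so the loop body is not optimized away

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < reps; ++i)
        a = a + 1;                      // the code being measured
    auto stop = std::chrono::steady_clock::now();

    // Only one pair of timer calls for the whole loop; divide for the per-iteration cost.
    double total = std::chrono::duration<double>(stop - start).count();
    std::cout << total / reps << " seconds per iteration\n";
    return 0;
}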

Adisak
2

A complete, unfailing solution to the thread-scheduling problem, one that should yield exactly the same times for each test, is to compile your program to be OS independent and boot your computer so as to run the program in an OS-free environment. However, this is largely impractical and would be difficult at best.

A good substitute for going OS-free is to set the affinity of the current thread to one core and its priority to the highest. This alternative should provide consistent-enough results.

Assuming that you use -Ofast (or, at the very least, -O3) on the final production build and ignoring the issue of "dead" code elimination, -Og performs very few optimizations compared to -Ofast; thus -Og can misrepresent the real speed of the code in the final product.

Further, all speed tests are (to some extent) misleading: in the final production product compiled with -Ofast, each snippet/section/function of code is not isolated; rather, each snippet of code continuously flows into the next, allowing the compiler to potentially join, merge, and optimize together pieces of code from all over the place.

At the same time, if you are benchmarking a snippet of code which makes heavy use of realloc(), then the snippet of code might run slower in a production product with high enough memory fragmentation. Hence, the expression "the whole is more than the sum of its parts" applies to this situation because code in the final production build might run noticeably faster or slower than the individual snippet which you are speed testing.

A partial solution that may lessen the incongruity is using -Ofast for speed testing WITH the addition of asm volatile("" :: "r"(var)) to the variables involved in the test for preventing dead code/loop elimination.

Here is an example of how to benchmark square root functions on a Windows computer.

// set USE_ASM_TO_PREVENT_ELIMINATION  to 0 to prevent `asm volatile("" :: "r"(var))`
// set USE_ASM_TO_PREVENT_ELIMINATION  to 1 to enforce `asm volatile("" :: "r"(var))`
#define USE_ASM_TO_PREVENT_ELIMINATION 1

#include <iostream>
#include <iomanip>
#include <cstdio>
#include <chrono>
#include <cmath>
#include <windows.h>
#include <intrin.h>
#pragma intrinsic(__rdtsc)
#include <cstdint>

class Timer {
public:
    Timer() : beg_(clock_::now()) {}
    void reset() { beg_ = clock_::now(); }
    double elapsed() const { 
        return std::chrono::duration_cast<second_>
            (clock_::now() - beg_).count(); }
private:
    typedef std::chrono::high_resolution_clock clock_;
    typedef std::chrono::duration<double, std::ratio<1> > second_;
    std::chrono::time_point<clock_> beg_;
};

unsigned int guess_sqrt32(register unsigned int n) {
    register unsigned int g = 0x8000;
    if(g*g > n) {
        g ^= 0x8000;
    }
    g |= 0x4000;
    if(g*g > n) {
        g ^= 0x4000;
    }
    g |= 0x2000;
    if(g*g > n) {
        g ^= 0x2000;
    }
    g |= 0x1000;
    if(g*g > n) {
        g ^= 0x1000;
    }
    g |= 0x0800;
    if(g*g > n) {
        g ^= 0x0800;
    }
    g |= 0x0400;
    if(g*g > n) {
        g ^= 0x0400;
    }
    g |= 0x0200;
    if(g*g > n) {
        g ^= 0x0200;
    }
    g |= 0x0100;
    if(g*g > n) {
        g ^= 0x0100;
    }
    g |= 0x0080;
    if(g*g > n) {
        g ^= 0x0080;
    }
    g |= 0x0040;
    if(g*g > n) {
        g ^= 0x0040;
    }
    g |= 0x0020;
    if(g*g > n) {
        g ^= 0x0020;
    }
    g |= 0x0010;
    if(g*g > n) {
        g ^= 0x0010;
    }
    g |= 0x0008;
    if(g*g > n) {
        g ^= 0x0008;
    }
    g |= 0x0004;
    if(g*g > n) {
        g ^= 0x0004;
    }
    g |= 0x0002;
    if(g*g > n) {
        g ^= 0x0002;
    }
    g |= 0x0001;
    if(g*g > n) {
        g ^= 0x0001;
    }
    return g;
}

unsigned int empty_function( unsigned int _input ) {
    return _input;
}

unsigned long long empty_ticks=0;
double empty_seconds=0;
Timer my_time;

template<unsigned int benchmark_repetitions>
void benchmark( char* function_name, auto (*function_to_do)( auto ) ) {
    register unsigned int i=benchmark_repetitions;
    register unsigned long long start=0;
    my_time.reset();
    start=__rdtsc();
    while ( i-- ) {
        auto result = (*function_to_do)( i << 7 );
        #if USE_ASM_TO_PREVENT_ELIMINATION == 1
            asm volatile("" :: "r"(
                // There is no data type in C++ that is smaller than a char, so it will
                //  not throw a segmentation fault error to reinterpret any arbitrary
                //  data type as a char. Although, the compiler might not like it.
                result
            ));
        #endif
    }
    if ( function_name == nullptr ) {
        empty_ticks = (__rdtsc()-start);
        empty_seconds = my_time.elapsed();
        std::cout<< "Empty:\n" << empty_ticks
              << " ticks\n" << benchmark_repetitions << " repetitions\n"
               << std::setprecision(15) << empty_seconds
                << " seconds\n\n";
    } else {
        std::cout<< function_name<<":\n" << (__rdtsc()-start-empty_ticks)
              << " ticks\n" << benchmark_repetitions << " repetitions\n"
               << std::setprecision(15) << (my_time.elapsed()-empty_seconds)
                << " seconds\n\n";
    }
}


int main( void ) {
    void* Cur_Thread=   GetCurrentThread();
    void* Cur_Process=  GetCurrentProcess();
    unsigned long long  Current_Affinity;
    unsigned long long  System_Affinity;
    unsigned long long furthest_affinity;
    unsigned long long nearest_affinity;
    
    if( ! SetThreadPriority(Cur_Thread,THREAD_PRIORITY_TIME_CRITICAL) ) {
        SetThreadPriority( Cur_Thread, THREAD_PRIORITY_HIGHEST );
    }
    if( ! SetPriorityClass(Cur_Process,REALTIME_PRIORITY_CLASS) ) {
        SetPriorityClass( Cur_Process, HIGH_PRIORITY_CLASS );
    }
    GetProcessAffinityMask( Cur_Process, &Current_Affinity, &System_Affinity );
    furthest_affinity = 0x8000000000000000ULL>>__builtin_clzll(Current_Affinity);
    nearest_affinity  = 0x0000000000000001ULL<<__builtin_ctzll(Current_Affinity);
    SetProcessAffinityMask( Cur_Process, furthest_affinity );
    SetThreadAffinityMask( Cur_Thread, furthest_affinity );
    
    const int repetitions=524288;
    
    benchmark<repetitions>( nullptr, empty_function );
    benchmark<repetitions>( "Standard Square Root", standard_sqrt );
    benchmark<repetitions>( "Original Guess Square Root", original_guess_sqrt32 );
    benchmark<repetitions>( "New Guess Square Root", new_guess_sqrt32 );
    
    
    SetThreadPriority( Cur_Thread, THREAD_PRIORITY_IDLE );
    SetPriorityClass( Cur_Process, IDLE_PRIORITY_CLASS );
    SetProcessAffinityMask( Cur_Process, nearest_affinity );
    SetThreadAffinityMask( Cur_Thread, nearest_affinity );
    for (;;) { getchar(); }
    
    return 0;
}

Also, credit to Mike Jarvis for his Timer.

Please note (this is very important) that if you are going to be running bigger code snippets, then you really must turn down the number of iterations to prevent your computer from freezing up.

Jack Giffin
  • 2
    Good answer except for disabling optimization. Benchmarking `-O0` code is a big waste of time because the overhead of `-O0` instead of a normal `-O2` or `-O3 -march=native` varies *wildly* depending on the code and the workload. e.g. extra named tmp vars costs time at `-O0`. There are other ways to avoid having things optimize away, like hiding things from the optimizer with `volatile`, non-inline functions, or empty inline asm statements. **`-O0` is not even close to usable because code has *different* bottlenecks at `-O0`, not the same but worse.** – Peter Cordes Apr 26 '19 at 03:30
  • 1
    Ugh, `-Og` is still not very realistic, depending on the code. At least `-O2`, preferably `-O3` is more realistic. Use `asm volatile("" ::: "+r"(var))` or something to make the compiler materialize a value in a register, and defeat constant propagation through it. – Peter Cordes Apr 27 '19 at 16:37
  • @PeterCordes Thank you again for your insights. I have updated the content with `-O3` and the code snippet with `asm volatile("" ::: "+r"(var))`. – Jack Giffin Apr 28 '19 at 12:47
  • 1
    `asm volatile("" ::: "+r"( i ));` seems unnecessary. In optimized code, there's no reason to force the compiler to materialize `i` as well as `i<<7` inside the loop. You're stopping it from optimizing to `tmp -= 128` instead of shifting every time. Using the result of a function call is good, though, if it's non-`void`. Like `int result = (*function_to_do)( i << 7 );`. You could use an `asm` statement on that result. – Peter Cordes Apr 28 '19 at 14:37
  • @PeterCordes Thank you very much again or your insights. My post now contains the corrections for the return value from `function_to_do` so that `function_to_do` can be inlined without being eliminated. Please let me know if you have any further suggestions. – Jack Giffin Apr 28 '19 at 15:06
  • Does this actually compile? You don't have a definition for `standard_sqrt`, and I don't think `(int) (reinterpret_cast(result))` is an lvalue. (`"+r"` is a read-write register operand; you might only need `asm volatile("" :: "r"(input_var) )` instead of `asm volatile("" : "+r"(compiler_assumes_this_is_modified) )`. And BTW, I had a typo: output operands go after the first colon, clobbers after the third. So `asm("" ::: "memory")` is a memory barrier, but you can't put `"+r"(var)` there. See https://stackoverflow.com/tags/inline-assembly/info – Peter Cordes Apr 28 '19 at 15:41
  • @PeterCordes I have made the edit you suggested, but I do not know whether it compiles or not. I am slightly busy right now, so learning a new programming language like assembly will have to wait for another day. Thank you so much for that link to assembly tutorials: I shall look into it eventually. – Jack Giffin Apr 28 '19 at 15:58
2

(Windows-specific solution) The current (circa 2017) way to get accurate timings under Windows is to use "QueryPerformanceCounter". This approach has the benefit of giving very accurate results and is recommended by Microsoft. Just plop the code blob into a new console app to get a working sample. There is a lengthy discussion here: Acquiring high-resolution time stamps

#include <iostream>
#include <tchar.h>
#include <windows.h>

int main()
{
    constexpr int MAX_ITER{ 10000 };
    constexpr __int64 us_per_hour{ 3600000000ull }; // 3.6e+09
    constexpr __int64 us_per_min{ 60000000ull };
    constexpr __int64 us_per_sec{ 1000000ull };
    constexpr __int64 us_per_ms{ 1000ull };

    // easy to work with
    __int64 startTick, endTick, ticksPerSecond, totalTicks = 0ull;

    QueryPerformanceFrequency((LARGE_INTEGER *)&ticksPerSecond);

    for (int iter = 0; iter < MAX_ITER; ++iter) { // start looping
        QueryPerformanceCounter((LARGE_INTEGER *)&startTick); // Get start tick
        // code to be timed
        std::cout << "cur_tick = " << iter << "\n";
        QueryPerformanceCounter((LARGE_INTEGER *)&endTick); // Get end tick
        totalTicks += endTick - startTick; // accumulate time taken
    }

    // convert to elapsed microseconds
    __int64 totalMicroSeconds = (totalTicks * 1000000ull) / ticksPerSecond;

    __int64 hours = totalMicroSeconds / us_per_hour;
    totalMicroSeconds %= us_per_hour;
    __int64 minutes = totalMicroSeconds / us_per_min;
    totalMicroSeconds %= us_per_min;
    __int64 seconds = totalMicroSeconds / us_per_sec;
    totalMicroSeconds %= us_per_sec;
    __int64 milliseconds = totalMicroSeconds / us_per_ms;
    totalMicroSeconds %= us_per_ms;

    std::cout << "Total time: " << hours << "h ";
    std::cout << minutes << "m " << seconds << "s " << milliseconds << "ms ";
    std::cout << totalMicroSeconds << "us\n";

    return 0;
}
1

For cases where you want to time the same stretch of code every time it gets executed (e.g. for profiling code that you think might be a bottleneck), here is a wrapper around (a slight modification to) Andreas Bonini's function that I find useful:

#ifdef _WIN32
#include <Windows.h>
#else
#include <sys/time.h>
#endif
#include <iostream>
#include <string>

/*
 *  A simple timer class to see how long a piece of code takes. 
 *  Usage:
 *
 *  {
 *      static Timer timer("name");
 *
 *      ...
 *
 *      timer.start()
 *      [ The code you want timed ]
 *      timer.stop()
 *
 *      ...
 *  }
 *
 *  At the end of execution, you will get output:
 *
 *  Time for name: XXX seconds
 */
class Timer
{
public:
    Timer(std::string name, bool start_running=false) : 
        _name(name), _accum(0), _running(false)
    {
        if (start_running) start();
    }

    ~Timer() { stop(); report(); }

    void start() {
        if (!_running) {
            _start_time = GetTimeMicroseconds();
            _running = true;
        }
    }
    void stop() {
        if (_running) {
            unsigned long long stop_time = GetTimeMicroseconds();
            _accum += stop_time - _start_time;
            _running = false;
        }
    }
    void report() { 
        std::cout<<"Time for "<<_name<<": " << _accum / 1.e6 << " seconds\n"; 
    }
private:
    // cf. http://stackoverflow.com/questions/1861294/how-to-calculate-execution-time-of-a-code-snippet-in-c
    unsigned long long GetTimeMicroseconds()
    {
#ifdef _WIN32
        /* Windows */
        FILETIME ft;
        LARGE_INTEGER li;

        /* Get the amount of 100 nano seconds intervals elapsed since January 1, 1601 (UTC) and copy it
         * to a LARGE_INTEGER structure. */
        GetSystemTimeAsFileTime(&ft);
        li.LowPart = ft.dwLowDateTime;
        li.HighPart = ft.dwHighDateTime;

        unsigned long long ret = li.QuadPart;
        ret -= 116444736000000000LL; /* Convert from file time to UNIX epoch time. */
        ret /= 10; /* From 100 nano seconds (10^-7) to 1 microsecond (10^-6) intervals */
#else
        /* Linux */
        struct timeval tv;

        gettimeofday(&tv, NULL);

        unsigned long long ret = tv.tv_usec;
        /* Adds the seconds (10^0) after converting them to microseconds (10^-6) */
        ret += (unsigned long long)tv.tv_sec * 1000000; /* cast avoids 32-bit overflow of the multiplication */
#endif
        return ret;
    }
    std::string _name;
    long long _accum;
    unsigned long long _start_time;
    bool _running;
};
Mike Jarvis
1

Just a simple class that benchmarks the code block:

#include <chrono>
#include <cstdio>

using namespace std::chrono;

class benchmark {
  public:
  time_point<high_resolution_clock>  t0, t1;
  unsigned int *d;
  benchmark(unsigned int *res) : d(res) { 
                 t0 = high_resolution_clock::now();
  }
  ~benchmark() { t1 = high_resolution_clock::now();
                  milliseconds dur = duration_cast<milliseconds>(t1 - t0);
                  *d = dur.count();
  }
};
// simple usage 
// unsigned int t;
// { // put the code in a block
//  benchmark bench(&t);
//  // ...
//  // code to benchmark
// }
// HERE the t contains time in milliseconds

// one way to use it can be :
// (two-step concatenation so that __LINE__ is expanded before token pasting)
#define BENCH_CONCAT_(a,b) a##b
#define BENCH_CONCAT(a,b) BENCH_CONCAT_(a,b)
#define BENCH(TITLE,CODEBLOCK) \
  unsigned int BENCH_CONCAT(bench_time_,__LINE__) = 0;  \
  { benchmark bench(&BENCH_CONCAT(bench_time_,__LINE__)); \
      CODEBLOCK \
  } \
  printf("%s took %u ms\n",(TITLE),BENCH_CONCAT(bench_time_,__LINE__));

int main(void) {
  const int testcount = 1000000; // iteration count for the example
  BENCH("TITLE",{
    for(int n = 0; n < testcount; n++ )
      int a = n % 3;
  });
  return 0;
}
nullqube
0

boost::timer will probably give you as much accuracy as you'll need. It's nowhere near accurate enough to tell you how long a = a+1; will take, but what reason would you have to time something that takes a couple of nanoseconds?
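
For reference, a minimal usage sketch, assuming this refers to the old header-only boost/timer.hpp (the version-1 timer, which is built on std::clock()):

#include <boost/timer.hpp>
#include <iostream>

int main()
{
    boost::timer t;                              // starts timing on construction

    // ... code to measure ...

    std::cout << t.elapsed() << " seconds\n";    // elapsed time in seconds (clock()-based)
    return 0;
}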

Brendan Long
0

I created a lambda that calls your function N times and returns the average.

double c = BENCHMARK_CNT(25, fillVectorDeque(variable));

You can find the C++11 header here.
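
The header itself is not reproduced in the answer, so the following is only a guess at the idea behind it: a macro wrapping an immediately-invoked lambda that runs the expression N times and returns the average seconds per run, matching the usage above:

#include <chrono>

// Hypothetical reconstruction of BENCHMARK_CNT, not the author's actual header.
#define BENCHMARK_CNT(cnt, expr)                                             \
    [&]() -> double {                                                        \
        auto start = std::chrono::high_resolution_clock::now();             \
        for (int i = 0; i < (cnt); ++i) { expr; }                            \
        auto stop = std::chrono::high_resolution_clock::now();              \
        return std::chrono::duration<double>(stop - start).count() / (cnt); \
    }()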

burner
0

I created a simple utility for measuring performance of blocks of code, using the chrono library's high_resolution_clock: https://github.com/nfergu/codetimer.

Timings can be recorded against different keys, and an aggregated view of the timings for each key can be displayed.

Usage is as follows:

#include <chrono>
#include <iostream>
#include "codetimer.h"

int main () {
    auto start = std::chrono::high_resolution_clock::now();
    // some code here
    CodeTimer::record("mykey", start);
    CodeTimer::printStats();
    return 0;
}
Neil
0

You could also look at cxx-rtimers on GitHub, which provides some header-only routines for gathering statistics on the run time of any code block where you can create a local variable. Those timers have versions that use std::chrono on C++11, timers from the Boost library, or standard POSIX timer functions. These timers will report the average, maximum and minimum duration spent within a function, as well as the number of times it is called. They can be used as simply as follows:

#include <rtimers/cxx11.hpp>

void expensiveFunction() {
    static rtimers::cxx11::DefaultTimer timer("expensive");
    auto scopedStartStop = timer.scopedStart();
    // Do something costly...
}
rwp
0

That's how I do it: not much code, easy to understand, and it fits my needs:

#include <chrono>
#include <cstdio>
#include <functional>
#include <string>

void bench(std::function<void()> fnBench, std::string name, size_t iterations)
{
    if (iterations == 0)
        return;
    if (fnBench == nullptr)
        return;
    std::chrono::high_resolution_clock::time_point start, end;
    if (iterations == 1)
    {
        start = std::chrono::high_resolution_clock::now();
        fnBench();
        end = std::chrono::high_resolution_clock::now();
    }
    else
    {
        start = std::chrono::high_resolution_clock::now();
        for (size_t i = 0; i < iterations; ++i)
            fnBench();
        end = std::chrono::high_resolution_clock::now();
    }
    printf
    (
        "bench(*, \"%s\", %zu) = %4.6lfs\r\n",
        name.c_str(),
        iterations,
        std::chrono::duration_cast<std::chrono::duration<double>>(end - start).count()
    );
}

Usage:

bench
(
    []() -> void // function
    {
        // Put your code here
    },
    "the name of this", // name
    1000000 // iterations
);
0
#include <omp.h>
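// omp_get_wtime() returns elapsed wall-clock time in seconds; compile with OpenMP enabled (e.g. -fopenmp with GCC/Clang)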

double start = omp_get_wtime();

// code 

double finish = omp_get_wtime();

double total_time = finish - start;
  • 2
    While this code may solve the question, [including an explanation](//meta.stackexchange.com/q/114762) of how and why this solves the problem would really help to improve the quality of your post, and probably result in more up-votes. Remember that you are answering the question for readers in the future, not just the person asking now. Please [edit] your answer to add explanations and give an indication of what limitations and assumptions apply. – Dharman Jul 22 '20 at 00:25