I have written some C code which I call from MATLAB after compiling it with MEX. Inside the C code, I measure the time of a part of the computation as follows:
#include <time.h>

clock_t begin, end;
double time_elapsed;

begin = clock();
/* do stuff... */
end = clock();

/* clock() returns processor time in ticks; dividing by CLOCKS_PER_SEC
   converts it to seconds. */
time_elapsed = (double) (end - begin) / CLOCKS_PER_SEC;
time_elapsed should then hold the execution time in seconds.
I then output the value of time_elapsed to MATLAB (it is properly exported; I checked). On the MATLAB side I call this MEX function and measure its execution time with tic and toc. The result is absurd: the time measured with tic and toc is 0.0011 s (average over 500 runs, std. dev. 1.4e-4), while the time returned by the C code is 0.037 s (average over 500 runs, std. dev. 0.0016).
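For context, the MEX gateway has roughly this shape (a simplified sketch: the actual computation is elided, and only the timing and the export of time_elapsed are shown):

#include <time.h>
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    clock_t begin, end;
    double time_elapsed;

    begin = clock();
    /* do stuff... (the real computation goes here) */
    end = clock();
    time_elapsed = (double) (end - begin) / CLOCKS_PER_SEC;

    /* Export the measured time to MATLAB as an output argument. */
    plhs[0] = mxCreateDoubleScalar(time_elapsed);
}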
Two very strange facts stand out here:
- The execution time for the whole function is lower than the execution time for a part of it. Hence, either MATLAB's or C's measurement is strongly inaccurate.
- The execution times measured in the C code are widely scattered, with a very high standard deviation (coefficient of variation 44%, versus just 13% for tic and toc).
What is going on with these timers?
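If it helps to narrow this down: clock() measures processor time rather than wall-clock time, so one cross-check would be to time the same region with both kinds of timer and compare. A minimal sketch of such a cross-check, assuming a POSIX system (on Windows, QueryPerformanceCounter would be the analogue):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;
    clock_t c0, c1;

    /* Wrap a wall-clock (monotonic) timer around a CPU-time timer. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    c0 = clock();
    /* do stuff... */
    c1 = clock();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double cpu_s  = (double) (c1 - c0) / CLOCKS_PER_SEC;
    double wall_s = (double) (t1.tv_sec - t0.tv_sec)
                  + (double) (t1.tv_nsec - t0.tv_nsec) / 1e9;

    /* If cpu_s exceeds wall_s, the region runs on more than one thread,
       since clock() accumulates CPU time over all threads of the process. */
    printf("cpu: %.6f s, wall: %.6f s\n", cpu_s, wall_s);
    return 0;
}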