I have a small program that implements three sorting algorithms and measures the execution time of each using the C time library's clock() function. The program reads in 9999, 19999, and 29999 integers before sorting the arrays. Upon completion, the median of the newly sorted data is found to verify that the data was sorted correctly.
The problem is that on my personal Windows 7 machine I could not find the median of the data sorted via the quicksort algorithm, despite the set being properly sorted. However, when run on the Linux machine everything works, but the timing goes awry from what is expected.
Quicksort is O(n log n) in the average case, so we would expect the execution time to increase with larger data sets (which it does on my personal machine). However, when run on the Linux machine I get an execution time of 0 for the smallest data set and equivalent times for the following two.
I could understand getting a 0 from the calculation of the execution time (due to truncation in integer arithmetic), but my personal machine runs much quicker on every other algorithm. The lag observed in this particular case is what drove me to ask about the differences.
If you think the discrepancy might be due to how I implemented the reading/sorting, please comment as such. However, I am more interested in the differences in how system time is handled between the two systems, and in other factors that may be the culprit.
Also, sometimes when the program was run on the Linux machine the reported execution times would be something like 10 0 10, so naturally that has me curious.
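To see what each system actually reports for its clock, I have been thinking of a small diagnostic along these lines (a sketch, not part of the original program; the POSIX clock_getres call only applies on the Linux side):

```c
#include <stdio.h>
#include <time.h>

/* Diagnostic sketch: print the tick rate that clock() uses and, where
   available, the reported resolution of the per-process CPU-time clock. */
int main(void)
{
    printf("CLOCKS_PER_SEC = %ld\n", (long)CLOCKS_PER_SEC);

#ifdef CLOCK_PROCESS_CPUTIME_ID
    struct timespec res;
    if (clock_getres(CLOCK_PROCESS_CPUTIME_ID, &res) == 0)
        printf("CPU-time clock resolution: %ld s %ld ns\n",
               (long)res.tv_sec, res.tv_nsec);
#endif
    return 0;
}
```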