
I have a small program that implements three sorting algorithms and runs a time analysis (using the C time library and calling clock()) of the execution time for each. The program reads in 9999, 19999, and 29999 integers before sorting the arrays. Upon completion, the median of the newly sorted data is found as a check that the data was sorted correctly.
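For reference, here is a minimal sketch of that timing pattern, using the standard library's qsort in place of the hand-written sorts and random data in place of the real input (so this is an illustration of the structure, not the original code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        int n = 29999;
        int *a = malloc(n * sizeof *a);
        if (!a)
            return 1;
        for (int i = 0; i < n; i++)
            a[i] = rand();

        clock_t start = clock();
        qsort(a, n, sizeof *a, cmp_int);   /* stand-in for the hand-written sort */
        clock_t end = clock();

        /* Convert clock ticks to seconds in floating point to avoid truncation. */
        double elapsed = (double)(end - start) / CLOCKS_PER_SEC;
        printf("sorted %d ints in %f s (median = %d)\n", n, elapsed, a[n / 2]);

        free(a);
        return 0;
    }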

The problem was that on my personal Windows 7 machine I could not find the median of data sorted via the quicksort algorithm, despite having a properly sorted set. However, when run on the Linux machine everything works fine, but the timing deviates from what is expected.

The quicksort algorithm is O(n log n) in the average case. So, we would expect the time of execution to increase with larger data sets (which it does on my personal machine). However, when run on the Linux machine I get a 0 execution time for the smallest set of data and equivalent times for the following two.

I could understand a 0 result from the execution-time calculation (due to truncation in integer arithmetic), but my personal machine runs much quicker on every other algorithm. The lag observed in this particular case is what prompted me to ask about the differences.
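To illustrate the truncation mentioned above, here is a small sketch (not the original code) showing how computing the elapsed time entirely in integer arithmetic collapses sub-second runs to 0, while the floating-point form keeps the fractional part:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        clock_t start = clock();
        /* ... work that takes, say, 40 ms ... */
        clock_t end = start + CLOCKS_PER_SEC / 25;  /* simulate ~40 ms worth of ticks */

        /* On typical implementations clock_t is an integer type, so this
           division truncates: anything under one full second becomes 0. */
        long secs_truncated = (long)((end - start) / CLOCKS_PER_SEC);

        /* Floating-point division preserves the sub-second portion. */
        double secs = (double)(end - start) / CLOCKS_PER_SEC;

        printf("truncated: %ld s, exact: %.3f s\n", secs_truncated, secs);
        return 0;
    }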

If you think the discrepancy might be due to how I implemented the reading/sorting, then comment as such. However, I am more interested in the differences in how system time is handled between the two systems, and in anything else that might be the culprit.


Also, sometimes when the program was run on the Linux machine the reported execution times would be something like 10 0 10, which naturally has me curious.

sherrellbc
  • You might look through this to see if you have issues with differences in precision: http://stackoverflow.com/questions/8594277/clock-precision-in-time-h – Dweeberly Nov 25 '13 at 00:05
  • 1
    You haven't said anything about the architecture of the machines, or whether you're using 32-bit or 64-bit implementations, or what else the machines might have been doing when you ran these tests. In short, your data is pretty meaningless. –  Nov 25 '13 at 00:05
  • I was asking a general question. I did not consider how this output might differ across various architectures. I was wondering if there was an immediately obvious difference. I am not asking for a very precise definition of exactly what is happening. – sherrellbc Nov 25 '13 at 00:10

1 Answer


I'm not sure what you're looking for here apart from the obvious, but:

  1. You have a bug in your Windows code, hence the bad median.

  2. On the Unix machine the resolution of the timing information is low, so you only see multiples of 10. Any process that lasts between 0 and 10 (whatever the units are) could display either 0 or 10, depending on exactly when it starts and stops. That is consistent with the times you would expect by scaling the Windows results according to the other runs (see the sketch below).
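If it helps to confirm point 2, one rough way to see the effective granularity of clock() on each machine is to spin until the reported value changes and print the size of one step (a quick probe under that assumption, not a rigorous benchmark):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* Busy-wait until clock() ticks over once, then measure one full step. */
        clock_t t0 = clock(), t1, t2;
        while ((t1 = clock()) == t0)
            ;
        while ((t2 = clock()) == t1)
            ;

        printf("CLOCKS_PER_SEC = %ld\n", (long)CLOCKS_PER_SEC);
        printf("smallest observable step: %ld ticks (%.3f ms)\n",
               (long)(t2 - t1), 1000.0 * (t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }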

andrew cooke
  • This is what I was thinking. All I was looking for was a general possibility. I have little to no knowledge of Linux systems, so I was really unsure. The interesting problem was that the median was successfully calculated on the Linux machine, while on the Windows machine the correct median was exactly one position before (in the array of sorted integers) the returned median. – sherrellbc Nov 25 '13 at 01:54