I am trying to evaluate the performance of two Python methods that sort a list of numbers. Both appear to have O(n^2) time complexity, yet empirical data shows one performing much better than the other. What could explain this?
I wrote two methods: one uses nested for loops, and the other iteratively finds the maximum of the list, prepends it to a new list, and removes it from the old one.
Method 1:
def mysort1(l):
    for i in range(0, len(l) - 1):
        for j in range(i, len(l)):
            if l[i] > l[j]:
                tmp = l[j]
                l[j] = l[i]
                l[i] = tmp
    return l
Method 2:
def mysort2(l):
    nl = []
    for i in range(0, len(l)):
        m = max(l)
        nl.insert(0, m)
        l.remove(m)
    return nl
Both were tested with a list of 10,000 numbers in reverse order. When profiled with the profile module, Method 1 takes approximately 8 seconds (10,000+ function calls) and Method 2 takes only 0.6 seconds (30,000+ function calls). Why does Method 2 perform so much better than Method 1 even though the time complexity of both seems to be the same?
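For reference, here is a minimal, self-contained timing harness using timeit instead of profile (the function definitions are repeated so the snippet runs on its own; the list size of 2,000 is an arbitrary choice to keep the run short):

```python
import timeit

# Definitions repeated from above so this snippet runs standalone.
def mysort1(l):
    for i in range(0, len(l) - 1):
        for j in range(i, len(l)):
            if l[i] > l[j]:
                tmp = l[j]
                l[j] = l[i]
                l[i] = tmp
    return l

def mysort2(l):
    nl = []
    for i in range(0, len(l)):
        m = max(l)       # C-level scan of the list
        nl.insert(0, m)  # C-level shift of the new list
        l.remove(m)      # C-level scan + shift of the old list
    return nl

n = 2000
data = list(range(n, 0, -1))  # reverse-sorted input, as in the test above

# Each call gets a fresh copy, since both methods mutate their argument.
t1 = timeit.timeit(lambda: mysort1(list(data)), number=3)
t2 = timeit.timeit(lambda: mysort2(list(data)), number=3)
print(f"mysort1: {t1:.3f}s  mysort2: {t2:.3f}s")
```

Even at this smaller size the same gap shows up, which suggests the difference is in the per-step constant factor rather than the asymptotic complexity.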