
I've been profiling some code using Python's multiprocessing module (the 'job' function just squares the number).

import multiprocessing
import time

def job(x):
    return x * x  # 'job' just squares the number

data = range(100000000)
n = 4

time1 = time.time()
processes = multiprocessing.Pool(processes=n)
results_list = processes.map(func=job, iterable=data, chunksize=10000)
processes.close()
time2 = time.time()

print(time2 - time1)
print(results_list[0:10])

One thing I found odd is that the optimal chunksize appears to be around 10k elements: that setting took 16 seconds on my computer. If I increase the chunksize to 100k or 200k, it slows to 20 seconds.

Could this difference be due to pickling taking longer for longer lists? A chunksize of 100 elements takes 62 seconds, which I assume is due to the extra time spent passing the many small chunks back and forth between processes.
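For reference, here is a minimal sweep sketch I used to reproduce the pattern. It is not from my original run: it assumes the same squaring job, shrinks the input to 10 million elements so the sweep finishes quickly, and adds the __main__ guard needed on spawn-based platforms, so the absolute timings will differ from the numbers above.

import multiprocessing
import time

def job(x):
    return x * x

if __name__ == '__main__':
    data = range(10000000)  # smaller than the original 100 million to keep the sweep quick
    with multiprocessing.Pool(processes=4) as pool:
        for chunksize in (100, 1000, 10000, 100000):
            start = time.time()
            pool.map(job, data, chunksize=chunksize)
            print(chunksize, round(time.time() - start, 2))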


1 Answer


About optimal chunksize:

  1. Having lots of small chunks lets the 4 workers balance the load more evenly, so smaller chunks are desirable.
  2. On the other hand, process context switches add overhead every time a new chunk has to be dispatched and processed, so fewer context switches, and therefore fewer (larger) chunks, are desirable.

Since the two rules pull in opposite directions, the sweet spot lies somewhere in the middle, much like the equilibrium point on a supply-demand chart; the rough numbers below illustrate the scale of the trade-off.
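As a back-of-the-envelope illustration (my own, not a measurement), counting how many chunks each chunksize produces for 100,000,000 items and 4 workers shows why both extremes hurt:

import math

n_items = 100_000_000
n_workers = 4

for chunksize in (100, 10_000, 100_000):
    n_chunks = math.ceil(n_items / chunksize)
    print(chunksize, n_chunks, n_chunks / n_workers)

# chunksize 100     -> 1,000,000 chunks: dispatch/IPC overhead dominates
# chunksize 10,000  -> 10,000 chunks: plenty for load balancing, modest overhead
# chunksize 100,000 -> 1,000 chunks: only ~250 per worker, coarser load balancing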
