
I am using Python 3.7 on Windows.

What I am trying to do: lock a method of a class instance, so that it blocks whenever another process has acquired that same lock.

Attempts:

I have already done this successfully, but I don't want a global variable for the lock; I want one that is completely internal to the class:

from multiprocessing import Lock, freeze_support, Pool
from time import sleep

def do_work(name):
    print(name + ' waiting for lock to work...', end='')
    sleep(2)
    with lock:
        print('done!')
        print(name + ' doing work...', end='')
        sleep(5)
        print('done!')

def init(olock):
    global lock
    lock = olock

if __name__ == '__main__':
    freeze_support()
    args_list = ['a', 'b', 'c']
    lock = Lock()
    p = Pool(8, initializer=init, initargs=(lock,))
    p.map_async(do_work, args_list)
    p.close()
    p.join()

When this last chunk of code runs, it takes ~17.3 seconds because of the lock (the three 5-second work phases are serialized behind the 2-second wait: 2 + 3×5 = 17 s). Without the lock it takes ~7 seconds (2 + 5 s, with all three workers running in parallel).

I have tried to implement this inside a class, but the lock does nothing and the code always runs in ~7 seconds:

class O():
    def __init__(self):
        self.lock = Lock()
    def __getstate__(self):  # remove the multiprocessing Lock so the instance can be pickled
        self_dict = self.__dict__.copy()
        del self_dict['lock']
        return self_dict
    def __setstate__(self, state):  # restore the remaining attributes after unpickling
        self.__dict__.update(state)
    def _do_work(self, name):
        print(name + ' waiting for lock to work...', end='')
        sleep(2)
        with self.lock:
            print('done!')
            print(name + ' doing work...', end='')
            sleep(5)
            print('done!')

if __name__ == '__main__':
    freeze_support()
    c = O()
    pool = Pool(8)
    pool.apply_async(c._do_work, ('a',))
    pool.apply_async(c._do_work, ('b',))
    pool.apply_async(c._do_work, ('c',))
    pool.close()
    pool.join()

Question: So, what can I do to lock this class instance while one of its methods, called asynchronously through multiprocessing, is interacting with a resource?

Connor

1 Answer


apply_async pickles the function object and sends it to a pool worker process through a queue. Because c._do_work is a bound method, the instance gets pickled along with it, which results in an error. You could wrap it in a plain module-level function:

c = O()
def w(*args):
    return c._do_work(*args)

if __name__ == '__main__':
    pool = Pool(1)
    pool.apply_async(w, ('a',))
    ...

You should also remove __setstate__/__getstate__.
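Putting the pieces together, here is a minimal sketch (not from the original answer) that combines the wrapper function above with the lock-through-initializer pattern from the question's first snippet, storing the lock as a class attribute rather than in a module-level global. The names w and init are just illustrative, and this is only one possible arrangement:

from multiprocessing import Lock, Pool, freeze_support
from time import sleep

class O():
    lock = None  # filled in per worker process by the pool initializer

    def _do_work(self, name):
        print(name + ' waiting for lock to work...', end='')
        sleep(2)
        with O.lock:
            print('done!')
            print(name + ' doing work...', end='')
            sleep(5)
            print('done!')

c = O()

def w(*args):
    # plain module-level wrapper; it is pickled by name, so the instance
    # itself never has to be pickled
    return c._do_work(*args)

def init(olock):
    # attach the shared lock to the class instead of a module-level global
    O.lock = olock

if __name__ == '__main__':
    freeze_support()
    lock = Lock()
    pool = Pool(8, initializer=init, initargs=(lock,))
    for name in ('a', 'b', 'c'):
        pool.apply_async(w, (name,))
    pool.close()
    pool.join()

Note that apply_async silently discards exceptions raised in the worker unless you keep the returned AsyncResult and call .get() on it; doing so is a quick way to see whether such calls are failing rather than locking.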

georgexsh