I'd like to have a set of objects registered with a `BaseManager`/`SyncManager` that can be exchanged between processes easily, without additional effort. For example, let's say that I register a standard `Value` object:
```python
from multiprocessing import Process, Lock
from multiprocessing.managers import BaseManager, Value, ValueProxy
from time import sleep

N = 100

class MyManager(BaseManager):
    pass

def increase(var, lck):
    for i in range(N):
        lck.acquire()
        value = var.get()
        value += 1
        var.set(value)
        lck.release()
        sleep(0.01)

def decrease(var, lck):
    for i in range(N):
        lck.acquire()
        value = var.get()
        value -= 1
        var.set(value)
        lck.release()
        sleep(0.01)

if __name__ == '__main__':
    MyManager.register('MyValue', Value, ValueProxy)
    man = MyManager()
    man.start()
    lock = Lock()
    variable = man.MyValue('i', N)
    p1 = Process(target=increase, args=(variable, lock))
    p2 = Process(target=decrease, args=(variable, lock))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print(variable)  # Value('i', 100)
```
This script prints `Value('i', 100)`. The initial value of the object is `N = 100`. The first process increases this value by 1 every 0.01 seconds, while the second process decreases it by 1 with the same period. Thanks to the `Lock` object, one process waits while the resource is being used by the other. If we comment out the `lck.acquire()` and `lck.release()` lines, we get `Value('i', 98)`, `Value('i', 101)`, etc.

So the `Lock` object is necessary to ensure that we don't miss any step. Nevertheless, the above code has a disadvantage: every function we implement has to carry the lock in its source code. A better solution seems to be embedding the `Lock` into the container directly. For example:
```python
from multiprocessing import Process, Lock
from multiprocessing.managers import BaseManager, Value, ValueProxy
from time import sleep

N = 100

class MyManager(BaseManager):
    pass

class MyValue(Value):
    def __init__(self, *args, **kwargs):
        self._lock = Lock()
        super().__init__(*args, **kwargs)

    def get(self):
        self._lock.acquire()
        res = super().get()
        self._lock.release()
        return res

    def set(self, value):
        self._lock.acquire()
        super().set(value)
        self._lock.release()

def increase(var):
    for i in range(N):
        value = var.get()
        value += 1
        var.set(value)
        sleep(0.01)

def decrease(var):
    for i in range(N):
        value = var.get()
        value -= 1
        var.set(value)
        sleep(0.01)

if __name__ == '__main__':
    MyManager.register('MyValue', MyValue, ValueProxy)
    man = MyManager()
    man.start()
    variable = man.MyValue('i', N)
    p1 = Process(target=increase, args=(variable,))
    p2 = Process(target=decrease, args=(variable,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print(variable)
```
But this doesn't work; it behaves the same as the version without a lock. I know that it's impossible to pickle a `Lock` object, but maybe someone knows how to handle locking inside shared objects/containers.
EDIT:

Here is an example where a `Lock` object is propagated to other processes. It seems that it can be pickled, but in that example the `Lock` is passed via the `target` instead of the `args` parameter.
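For reference, a minimal sketch of one direction this could go (the `Counter` class and its methods are hypothetical, not from the code above): since every method call on a manager proxy executes inside the manager's server process, the lock can be an ordinary `threading.Lock` that lives only in that process and is never pickled, and the whole read-modify-write can be exposed as a single method. Note this also moves the increment/decrement inside the lock, whereas the `MyValue` version above still leaves a gap between its locked `get()` and locked `set()`.

```python
from multiprocessing import Process
from multiprocessing.managers import BaseManager
from threading import Lock  # lives only in the manager's server process

class Counter:
    """Hypothetical container: the lock never leaves the server process."""
    def __init__(self, value=0):
        self._value = value
        self._lock = Lock()

    def add(self, delta):
        # The entire read-modify-write runs server-side under the lock;
        # the manager serves each client connection in its own thread,
        # so the lock is still required.
        with self._lock:
            self._value += delta
            return self._value

    def get(self):
        with self._lock:
            return self._value

class MyManager(BaseManager):
    pass

# No explicit proxy type: the default AutoProxy exposes the public methods.
MyManager.register('Counter', Counter)

def worker(counter, delta, n):
    for _ in range(n):
        counter.add(delta)

if __name__ == '__main__':
    man = MyManager()
    man.start()
    counter = man.Counter(100)
    p1 = Process(target=worker, args=(counter, 1, 100))
    p2 = Process(target=worker, args=(counter, -1, 100))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print(counter.get())  # always 100: each add() call is atomic
```

Only the proxy is pickled when it is sent through `args`, so the unpicklable lock is never an issue.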