Hi Antoine,

As you said, there can be many possible use cases for the proposed feature, since there are many use cases for a lock. Let me describe some cases I am facing.

Say I have a parent process which spawns many worker processes to serve requests. The parent process wants statistics about the kinds of requests being served by these workers: for example, the average time to serve a request across all workers combined, the number of bad requests, the number of stalled requests, the number of rerouted requests, etc. The worker processes update these variables, and the parent reads them to report on them and adjust the workers accordingly. Instead of locking, it would be much easier to use atomic values here. Alternatively, the workers could send their data to the parent, and the parent would then bear the overhead of writing and synchronising, but this wouldn't be very fast. Writing directly to shared memory is much faster, since a Value is stored in shared memory.

Another use case I can think of is reference counting, where the parent allocates a shared resource, say a shared memory segment, and wants to deallocate it when no worker process is using it any more. This can be implemented by storing a reference count alongside the shared resource and cleaning the resource up when the count reaches 0. Each such shared resource is then coupled with a reference count variable, and each reference count variable needs a lock with it to prevent data races. Making the reference count atomic would remove the need to pass locks around with these variables. The GIL prevents this scenario in the multithreading case, but it very much occurs with multiprocessing, although the type of shared resource differs there.
On 10-Sep-2019, at 2:27 PM, Antoine Pitrou wrote:

Hi Vinay,
On Mon, 09 Sep 2019 08:23:48 -0000 Vinay Sharma via Python-ideas wrote:

Also, as far as I know (I might be wrong), Value is stored in shared memory and is therefore also very fast. So, what I am proposing is an object similar to Value on which operations like add, sub, xor, etc. are atomic. Then I wouldn't have to use a lock at all for synchronisation. Updates to these values can be made very easily using calls such as __sync_and_and_fetch(), even when they are stored in a shared memory segment accessible across processes.
I'm curious what your use case is. I have never seen multiprocess Python programs operate at this low level of concurrency (though it's generally possible). The common model when using multiprocessing is to dispatch large tasks and let each worker grind on a task on its own. If you spend your time synchronizing access to shared memory (to the point that you need a separate lock per shared variable), perhaps your model of sharing work between processes is not right.
In any case, providing C++-like atomic types sounds very low-level in a language like Python. I'm skeptical that it would address more than a couple extremely niche use cases, though I'd be happy to be proven wrong.
Regards
Antoine.