Oh, thanks! It is much clearer now!

On Mon, Sep 9, 2019, 05:23 Vinay Sharma <vinay04sharma@icloud.com> wrote:
Hi,
I apologise for my poor explanation of what I mean. I should have been more comprehensive in describing the idea.

First of all, I am only concerned with synchronization across multiple processes, not threads, which is why I never mentioned multi-threading in my original post.
I am familiar with the GIL and understand that atomicity isn't an issue across threads.

Also, I didn't mean that simple variables should be atomic. I only mean that some objects which are frequently shared across processes in multiprocessing could offer atomic operations/methods.

Let's take the following example:
import time
from multiprocessing import Process, Value, Lock

def func(val, lock):
    # Each worker increments the shared counter 50 times,
    # taking the lock around every update to avoid data races.
    for i in range(50):
        time.sleep(0.01)
        with lock:
            val.value += 1

if __name__ == '__main__':
    v = Value('i', 0)
    lock = Lock()
    procs = [Process(target=func, args=(v, lock)) for i in range(10)]

    for p in procs: p.start()
    for p in procs: p.join()

    print(v.value)  # expected: 500

In the above code snippet I have to use a lock to prevent data races, and using a lock is quite straightforward and simple. But in cases where I have to synchronize, say, 5 different values, I will have to use multiple locks, because using a single lock would be very slow: every update would have to wait whenever the lock is held for any one of the 5 values.
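For example, here is a minimal sketch of that per-value locking with five independent counters (the names values and locks are just placeholders for illustration):

from multiprocessing import Process, Value, Lock

def worker(values, locks):
    # Each value has its own lock, so updating one value does not
    # have to wait for updates to any of the other four.
    for val, lock in zip(values, locks):
        with lock:
            val.value += 1

if __name__ == '__main__':
    values = [Value('i', 0) for _ in range(5)]
    locks = [Lock() for _ in range(5)]
    procs = [Process(target=worker, args=(values, locks)) for _ in range(10)]
    for p in procs: p.start()
    for p in procs: p.join()
    print([v.value for v in values])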

Also, as far as I know (I might be wrong), Value is stored in shared memory and is therefore very fast. So, what I am proposing is an object similar to Value for which operations like add, sub, xor, etc. are atomic, so that I wouldn't have to use a lock at all for synchronization. Updates to these values could be made very cheaply with calls such as __sync_and_and_fetch(), even when they are stored in a shared memory segment accessible across processes.
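Today, as far as I can tell, the closest thing is the lock that multiprocessing.Value already carries (accessible via get_lock()), which still has to be taken explicitly around every read-modify-write. Here is a rough sketch of the interface I have in mind, emulated with that lock (AtomicInt is just an illustrative name; a real implementation would use the atomic builtins instead of a lock):

from multiprocessing import Value

class AtomicInt:
    # Sketch only: emulates the proposed atomic object with the RLock
    # that multiprocessing.Value attaches by default. A real
    # implementation would map add() to an atomic CPU instruction.
    def __init__(self, initial=0):
        self._val = Value('i', initial)

    def add(self, n):
        with self._val.get_lock():
            self._val.value += n

    @property
    def value(self):
        return self._val.value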
 

On 9 September 2019 at 9:59 AM, "Joao S. O. Bueno" <jsbueno@python.org.br> wrote:

May I ask how acquainted you are with parallel code, including multi-threading, in Python?

Because, as far as I can perceive, none of the types or operations you mention have
any race conditions in multi-threaded Python code, and for multi-process or async code that
is not a problem by design, except for data intended to be shared, like memory-mapped
stuff (and then locks are the only way to go anyway).

Primitive data types like numbers and strings are immutable to start with, so it
is "very impossible" for multi-threaded code to be an issue when using them.
As for composite native data types like dicts, lists and sets, which could theoretically be left in an
inconsistent state, that is already taken care of by the GIL. And even for user-designed types,
it is a matter of adding a threading lock inside the proper dunder methods, which
also fixes the issue for the users of those types.
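For instance, a small sketch (my own illustration) of a counter that takes its lock inside __iadd__, so callers of the type never touch the lock themselves:

import threading

class LockedCounter:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def __iadd__(self, n):
        # The lock lives inside the dunder method, so every
        # `counter += n` from any thread is serialized here.
        with self._lock:
            self._value += n
        return self

    @property
    def value(self):
        return self._value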


The only time I had to worry about a lock in a data type in Python (i.e., not
program-wide state that is inherent to my problem) was when asking a novice question on
"how to rename a dict key" - and even then, it felt a bit overkill.


In other words, I may be wrong, but I think you are talking about a non-issue here.




On Sun, 8 Sep 2019 at 11:33, Vinay Sharma via Python-ideas <python-ideas@python.org> wrote:
Currently, C++ has support for atomic types, on which operations like add, sub, xor, etc. can be done atomically, thereby avoiding data races.
Having such support would be very helpful in Python.

For instance, users wouldn't have to use Locks for synchronising shared variables in the multiprocessing case; directly passing these atomic objects around would be enough. These objects could have methods such as add, sub, etc. to change the underlying variable.
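Roughly, the usage I have in mind would look like the following (AtomicValue and its add() method are hypothetical names, only meant to show the shape of the API):

from multiprocessing import Process
# AtomicValue is hypothetical; the idea is that add() maps to an
# atomic hardware instruction, so no lock is passed around at all.

def func(val):
    for i in range(50):
        val.add(1)   # atomic increment of the shared integer

if __name__ == '__main__':
    v = AtomicValue('i', 0)   # hypothetical atomic counterpart of Value
    procs = [Process(target=func, args=(v,)) for _ in range(10)]
    for p in procs: p.start()
    for p in procs: p.join()
    print(v.value)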

I believe this would make Python much easier for users in both trivial and complicated cases, without having to use locks.

GCC also provides built-in support for some atomic operations; documentation is available at https://gcc.gnu.org/onlinedocs/gcc-5.2.0/gcc/_005f_005fsync-Builtins.html#g_t_005f_005fsync-Builtins

Moreover, GCC also supports memory-model-aware atomic operations, documented at https://gcc.gnu.org/onlinedocs/gcc-5.2.0/gcc/_005f_005fatomic-Builtins.html#g_t_005f_005fatomic-Builtins