[pypy-dev] Deadlock occurs when using python-daemon and multiprocessing together in PyPy 4.0.1

Armin Rigo arigo at tunes.org
Sat Dec 26 18:42:15 EST 2015


Hi Vincent,

On Sat, Dec 26, 2015 at 1:45 PM, Vincent Legoll
<vincent.legoll at gmail.com> wrote:
> OK, I understand we want behavior as close to CPython's as possible.
>
> So does that mean implementing the same check: "fstat() & check we're
> still on the same inode & device as before"?
> Do we also need to do the "could have been opened in another thread" dance?
> And what about the O_CLOEXEC thing? That does seem sensible to do too,

This is what I started to do, but I stopped because there are
additional issues on PyPy: you need to make sure that the GIL is not
released at places where CPython would not release it.  Otherwise,
you open the code up to race conditions again.
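
Concretely, the check being discussed looks roughly like this at the
Python level.  This is a sketch only: CPython's real logic lives in C,
the _urandom_fd/_urandom_stat names are invented here for illustration,
and os.O_CLOEXEC assumes Python 3.3+ on a POSIX system:

    import os

    _urandom_fd = -1          # hypothetical cached fd, for illustration
    _urandom_stat = None      # (st_dev, st_ino) recorded at open time

    def _get_urandom_fd():
        """Re-verify a cached fd for /dev/urandom with fstat().

        If some other code closed our fd behind our back (and the
        number was possibly reused for another file), fstat() either
        fails or reports a different device/inode pair, and we reopen.
        This narrows the race window; as discussed below, it cannot
        close it entirely.
        """
        global _urandom_fd, _urandom_stat
        if _urandom_fd >= 0:
            try:
                st = os.fstat(_urandom_fd)
            except OSError:
                _urandom_fd = -1        # fd was closed behind our back
            else:
                if (st.st_dev, st.st_ino) == _urandom_stat:
                    return _urandom_fd  # still the same open file
                _urandom_fd = -1        # fd number reused, reopen
        # O_CLOEXEC so the fd does not leak across exec()
        fd = os.open("/dev/urandom", os.O_RDONLY | os.O_CLOEXEC)
        st = os.fstat(fd)
        _urandom_fd, _urandom_stat = fd, (st.st_dev, st.st_ino)
        return fd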

The goal would be to be *at least* as safe as CPython.  There are a
lot of corner cases that are not discussed in the source code of
CPython, and I'm pretty sure that some of them can occur (if rarely).
As an extreme example: if one thread calls os.urandom() while another
thread calls os.close(4) in parallel, where 4 happens to be the file
descriptor just returned by open() in urandom.c, then the open()'s
result can be closed after open() returns but before urandom.c
reacquires the GIL.  urandom.c is then left holding a closed file
descriptor.  If, additionally, the other thread (or a third one)
opens a different file that lands on file descriptor 4, then
urandom.c will think it successfully opened /dev/urandom when the
file descriptor actually refers to some unrelated file.  However,
this is probably an issue that cannot be dealt with at all on POSIX,
even in C: it would mean that it is always wrong to close() unknown
file descriptors from one thread while other threads might be
running.  It wouldn't hurt to clarify that point.
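
To make the shape of that hazard concrete, here is a contrived sketch
of the same pattern.  It is timing-dependent and will almost never
actually misbehave, and it assumes a build where os.urandom() reads
/dev/urandom via an internal open(); the point is only to illustrate
why closing a descriptor you do not own is unsafe:

    import os
    import threading

    def reader():
        # os.urandom() may internally open() /dev/urandom, read from
        # it, and close it; the GIL can be released around those
        # syscalls.
        os.urandom(16)

    def closer():
        # Closing a descriptor this thread does not own.  If the
        # number (say 4) was just returned by the reader's internal
        # open(), the reader is left with a stale fd, and any open()
        # that follows here can silently reuse that number.
        try:
            os.close(4)
        except OSError:
            pass

    t1 = threading.Thread(target=reader)
    t2 = threading.Thread(target=closer)
    t1.start(); t2.start()
    t1.join(); t2.join()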


A bientôt,

Armin.

