I believe that asyncio should have a way to wait for input from a different process without blocking the event loop. 

The asyncio module currently contains a Queue class that allows communication between multiple coroutines running on the same event loop. However, this class is neither thread-safe nor process-safe.

The multiprocessing module contains Queue and Pipe classes that allow inter-process communication, but there is no way to read from these objects directly without blocking the event loop.
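To illustrate the problem, here is a contrived sketch: a coroutine that calls Connection.recv() directly stalls every other task on the loop, not just itself.

```python
import asyncio
from multiprocessing.connection import Connection

async def handle(conn: Connection) -> None:
    # Connection.recv() is an ordinary blocking call: until the other
    # process sends something, the *entire* event loop is frozen, not
    # just this one coroutine.
    request = conn.recv()
    ...
```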

I propose adding a Pipe class to asyncio that is process-safe and can be read from without blocking the event loop. This was discussed briefly here: https://github.com/python/cpython/pull/20882#issuecomment-683463367

This could be implemented using the multiprocessing.Pipe class. multiprocessing.connection.Connection.fileno() returns the file descriptor used by a pipe, so we can pass that descriptor to loop.add_reader() and set an asyncio.Event whenever the other process writes to the pipe. I did all of this manually in a project I was working on, but it required learning a considerable amount about asyncio. It would have saved me a lot of time if there were an easy, documented way to wait for input from another process in a non-blocking way.
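A minimal sketch of that workaround, assuming a selector-based event loop (loop.add_reader() is not supported by the Windows ProactorEventLoop); the pipe_recv() helper name is mine, not an existing asyncio API:

```python
import asyncio
from multiprocessing.connection import Connection

async def pipe_recv(conn: Connection):
    """Wait until `conn` has data, then receive it, without blocking the loop."""
    loop = asyncio.get_running_loop()
    ready = asyncio.Event()
    # Wake this coroutine once the pipe's file descriptor becomes readable.
    loop.add_reader(conn.fileno(), ready.set)
    try:
        await ready.wait()
    finally:
        loop.remove_reader(conn.fileno())
    # Data is waiting on the pipe, so recv() should return promptly
    # (a very large message could still take time to arrive in full).
    return conn.recv()
```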

One compelling use case for this is a server that uses asyncio, which receives inputs from clients, then sends these to another process that runs a neural network. The server then sends the client a result after the neural network finishes. ProcessPoolExecutor does not seem like a good fit for this use case, because the process needs to stay alive and be re-used for subsequent requests. Starting a new process for each request is impractical, because loading the neural network into GPU memory is an expensive operation. See here for an example of such a server (however this one is mostly written in C++ and does not asyncio): https://www.tensorflow.org/tfx/guide/serving
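A hypothetical end-to-end sketch of this pattern, reusing the pipe_recv() helper above; worker(), handle(), and the port number are illustrative, and request.upper() merely stands in for the expensive inference call:

```python
import asyncio
import multiprocessing
from multiprocessing.connection import Connection

def worker(conn: Connection) -> None:
    # Stand-in for loading a model once; the process then stays alive
    # and is reused for every request instead of being restarted.
    while True:
        request = conn.recv()
        conn.send(request.upper())  # placeholder for actual inference

async def handle(reader: asyncio.StreamReader,
                 writer: asyncio.StreamWriter,
                 conn: Connection, lock: asyncio.Lock) -> None:
    data = await reader.readline()
    async with lock:  # one in-flight request per pipe at a time
        conn.send(data.decode())
        result = await pipe_recv(conn)  # helper from the sketch above
    writer.write(result.encode())
    await writer.drain()
    writer.close()

async def main() -> None:
    parent_conn, child_conn = multiprocessing.Pipe()
    multiprocessing.Process(target=worker, args=(child_conn,),
                            daemon=True).start()
    lock = asyncio.Lock()
    server = await asyncio.start_server(
        lambda r, w: handle(r, w, parent_conn, lock), "127.0.0.1", 8888)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```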