On 2015-05-31 8:35 AM, Ben Leslie wrote:
I'm just starting my exploration into using async/await; all my 'real-world' scenarios are currently hypothetical.
One such hypothetical scenario, however: I have a server process running with some set of concurrent connections, each managed by a co-routine. Each co-routine is of arbitrary complexity, e.g. some combination of reading files, reading from a database, and reading from peripherals. If I notice that one of those co-routines appears stuck and is not making progress, I'd very much like to debug it, preferably in a way that doesn't stop the rest of the server (or even the co-routine that appears stuck).
The problem with the "if debug: log(...)" approach is that it requires foreknowledge of the fault occurring; on a busy server you don't want to be logging every 'switch()'. I guess you could do something like "switch_state[outer_coro] = get_current_stack_frames()" on each switch, though double book-keeping state that the interpreter already knows seems somewhat wasteful. Maybe it isn't really too bad.
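For what it's worth, asyncio already keeps those frames around, so the per-switch bookkeeping may not be needed: `asyncio.all_tasks()` and `Task.get_stack()` are real APIs that let you snapshot a suspended task's stack from another coroutine, without stopping it first. A minimal sketch (the `stuck` coroutine and `snapshot_stuck` helper are made-up names for illustration):

```python
import asyncio

async def stuck():
    # Stand-in for a coroutine wedged on a Future that never completes.
    await asyncio.Event().wait()

def snapshot_stuck():
    async def main():
        task = asyncio.create_task(stuck())
        await asyncio.sleep(0)  # let the task run up to its await point
        # The interpreter already holds the suspended frames; just read them.
        names = [f.f_code.co_name for f in task.get_stack()]
        task.cancel()
        return names
    return asyncio.run(main())

print(snapshot_stuck())
```

In a real server you would run the inspection from a signal handler or an admin connection, iterating over `asyncio.all_tasks()` rather than a single task.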
I guess it all depends on how "switching" is organized in your framework of choice. In asyncio, for instance, all the code that knows about coroutines is in tasks.py. The `Task` class is responsible for running coroutines, and it's the single place where you would need to put the "if debug: ..." line for debugging "slow" Futures, which are the only thing coroutines can get "stuck" on (the other failure mode is accidentally calling blocking code, but your proposal wouldn't help with that).
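Note that asyncio's debug mode already covers the blocking-code case: with `loop.set_debug(True)`, any callback or task step that runs longer than `loop.slow_callback_duration` is logged as a warning. A small sketch (the 50 ms threshold and the `time.sleep` stand-in for blocking code are illustrative):

```python
import asyncio
import logging
import time

records = []

class Capture(logging.Handler):
    # Collect asyncio's warnings so we can inspect them.
    def emit(self, record):
        records.append(record.getMessage())

logging.getLogger("asyncio").addHandler(Capture())
logging.getLogger("asyncio").setLevel(logging.WARNING)

async def main():
    loop = asyncio.get_running_loop()
    loop.set_debug(True)
    loop.slow_callback_duration = 0.05  # warn on callbacks slower than 50 ms
    # Accidentally-blocking code: exactly the failure mode debug mode catches.
    loop.call_soon(time.sleep, 0.1)
    await asyncio.sleep(0.2)

asyncio.run(main())
print(records)
```

The logged message names the offending handle and how long it ran, which is usually enough to find the blocking call.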