On Tue, Jun 23, 2015 at 11:11 PM, Eric Snow <ericsnowcurrently@gmail.com> wrote:
> On Mon, Jun 22, 2015 at 5:59 PM, Nathaniel Smith <njs@pobox.com> wrote:
>> On Mon, Jun 22, 2015 at 10:37 AM, Gregory P. Smith <greg@krypto.org> wrote:
>>> We have had to turn people away from subinterpreters in the past when they wanted to use them in a multithreaded C++ server to occasionally run some Python code in embedded interpreters while serving requests. Doing that would suddenly single-thread their application (GIIIIIIL!) across all requests currently executing Python code, despite the multiple subinterpreters.
>> I've also talked to HPC users who discovered this problem the hard way (e.g. http://www-atlas.lbl.gov/, folks working on the Large Hadron Collider) -- they've been using Python as an extension language in some large physics codes but are now porting those bits to C++ because of the GIL issues. (In this context startup overhead should be easily amortized, but switching to an RPC model is not going to happen.)
> Would this proposal make a difference for them?

I'm not sure -- it was just a conversation, so I've never seen their actual code. I'm pretty sure they're still on py2, for one thing :-). But putting that aside, I *think* it could potentially help -- my guess is that at a high level they have an API where they basically want to register a callback once and then call it in parallel from multiple threads. That kind of usage would need some extra machinery, I guess -- spawning a subinterpreter for each thread and importing the relevant libraries so the callback could run -- but I can't see any reason one couldn't build that on top of the mechanisms you're talking about.
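
To make that concrete, here's a very rough sketch of the shape I have in mind, written against today's C API. None of this is their actual code -- the worker function, the toy math.sqrt callback, and the fake request loop are placeholders I made up, and error handling is omitted. The structure is the interesting part: each thread sets up its own subinterpreter once, imports what the callback needs, and then reuses it for every request. As things stand, every PyRun_SimpleString() call below still contends for the single shared GIL, which is exactly the limitation Greg described; the hope would be that under your proposal this same structure could actually run in parallel.

/*
 * Minimal sketch of the "one subinterpreter per worker thread" embedding
 * pattern, using the existing C API.  Names (worker, NTHREADS), the toy
 * math.sqrt callback, and the fake request loop are made up for
 * illustration; error handling is omitted.  Compile and link against
 * libpython (exact flags depend on your Python build).
 */
#include <Python.h>
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static PyInterpreterState *main_interp;   /* the main interpreter, saved in main() */

static void *worker(void *arg)
{
    long id = (long)arg;

    /* Py_NewInterpreter() must be called with the GIL held, so first take
     * the GIL via a temporary thread state tied to the main interpreter. */
    PyThreadState *tmp = PyThreadState_New(main_interp);
    PyEval_RestoreThread(tmp);

    /* Create this thread's own subinterpreter; it becomes current. */
    PyThreadState *sub = Py_NewInterpreter();

    /* "Register the callback once": import what it needs and define it in
     * this subinterpreter's __main__. */
    PyRun_SimpleString(
        "import math\n"
        "def callback(x):\n"
        "    return math.sqrt(x)\n");

    /* Serve a few fake requests by invoking the callback.  Today each of
     * these calls still serializes on the single shared GIL. */
    for (int i = 0; i < 3; i++) {
        char buf[128];
        snprintf(buf, sizeof(buf),
                 "print('thread %ld, request %d ->', callback(%d))", id, i, i);
        PyRun_SimpleString(buf);
    }

    /* Tear down the subinterpreter, then drop the temporary thread state
     * and release the GIL. */
    Py_EndInterpreter(sub);          /* GIL still held, no current thread state */
    PyThreadState_Swap(tmp);
    PyThreadState_Clear(tmp);
    PyThreadState_DeleteCurrent();   /* frees tmp and releases the GIL */
    return NULL;
}

int main(void)
{
    Py_Initialize();
    main_interp = PyThreadState_Get()->interp;

    /* Release the GIL so the worker threads can take it. */
    PyThreadState *main_tstate = PyEval_SaveThread();

    pthread_t threads[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);

    PyEval_RestoreThread(main_tstate);
    Py_Finalize();
    return 0;
}

-n

--
Nathaniel J. Smith -- http://vorpus.org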