On 6/11/2020 6:59 AM, Mark Shannon wrote:
Hi Riccardo,
On 10/06/2020 5:51 pm, Riccardo Ghetta wrote:
Hi, as a user, the "lua use case" is exactly what I need at work. I realize that for Python this is a niche case, and most users don't need any of this, but I hope it will be useful to understand why having multiple independent interpreters in a single process can be an essential feature.

The company I work for develops and sells a big C++ financial system with Python embedded, providing critical flexibility to our customers. Python is used as a scripting language, with C++ typically calling a Python script that itself calls other C++ functions. Most of the time those scripts run in workloads that are I/O bound or where the time spent in Python is negligible.

But some workloads are really CPU bound, and those tend to become GIL-bound, even with massive use of C++ helpers; in some cases GIL contention makes up over 80% of running time, instead of 1-5%. And every time our customers upgrade their servers, they buy machines with more cores, and the contention problem worsens.
Different interpreters need to operate in their own isolated address space, or there will be horrible race conditions. Regardless of whether that separation is done in software or hardware, it has to be done.
I realize this is true now, but why must it always be true? Can't we fix this?

At least one solution has been proposed: passing around a pointer to the current interpreter. I realize there are issues here, like callbacks and signals, that will need to be worked out. But I don't think it's axiomatically true that we'll always have race conditions with multiple interpreters in the same address space.

Eric