From Swapnil.ST at gmail.com  Tue Sep  2 09:54:40 2008
From: Swapnil.ST at gmail.com (Swapnil.ST at gmail.com)
Date: Tue, 02 Sep 2008 00:54:40 -0700
Subject: [capi-sig] Threading in Python
Message-ID: <001485f3be6239ee0d0455e5087a@google.com>

I am working on an application which embeds Python in the application threads, and each thread can further create threads using the Python thread module. A few things I don't understand:

1. Why does the interpreter, i.e. the main thread, keep releasing and reacquiring the GIL every few instructions? Why isn't the GIL simply allowed to be acquired by any thread which requires it? I mean, why is the acquisition and release of the lock periodic and not on an as-needed basis (which is obviously the way it should be)?

2. The purpose of the GIL was not very clear to me from the Python docs. Is it just a single lock to guard every global variable, or is it something more?

3. Suppose I am not making any blocking call; then I would never release the GIL, since that is the only thing guarding my thread state, and I don't want my thread state messed up at any time my thread is running. Which implies unacceptable starvation of other threads.

From gjcarneiro at gmail.com  Tue Sep  2 11:31:53 2008
From: gjcarneiro at gmail.com (Gustavo Carneiro)
Date: Tue, 2 Sep 2008 10:31:53 +0100
Subject: [capi-sig] Threading in Python
In-Reply-To: <001485f3be6239ee0d0455e5087a@google.com>
References: <001485f3be6239ee0d0455e5087a@google.com>
Message-ID: 

2008/9/2

> I am working on an application which embeds Python in the application
> threads, and each thread can further create threads using the Python thread
> module. A few things I don't understand:
> 1. Why does the interpreter, i.e. the main thread, keep releasing and
> reacquiring the GIL every few instructions? Why isn't the GIL simply allowed to
> be acquired by any thread which requires it?
> I mean, why is the
> acquisition and release of the lock periodic and not on an as-needed basis
> (which is obviously the way it should be)?

Because Python _always_ needs the lock as long as there is Python code left to execute. If it wasn't periodically released and re-acquired, other threads that want to execute Python code would block forever.

> 2. The purpose of the GIL was not very clear to me from the Python docs. Is it just
> a single lock to guard every global variable, or is it something more?

Anything that needs to call a Python API needs to acquire this lock first. GIL == Global Interpreter Lock. So it really guards access to the whole Python subsystem.

> 3. Suppose I am not making any blocking call; then I would never release the GIL,
> since that is the only thing guarding my thread state, and I don't want my
> thread state messed up at any time my thread is running. Which implies
> unacceptable starvation of other threads.

You don't release the GIL, but Python will release it periodically, as you mentioned above. If you need more protection, you need to use your own locks (threading.Lock).

-- 
Gustavo J. A. M. Carneiro
INESC Porto, Telecommunications and Multimedia Unit
"The universe is always one step beyond logic." -- Frank Herbert

From Swapnil.ST at gmail.com  Tue Sep  2 12:35:51 2008
From: Swapnil.ST at gmail.com (Swapnil.ST at gmail.com)
Date: Tue, 02 Sep 2008 03:35:51 -0700
Subject: [capi-sig] Threading in Python
Message-ID: <001636459506a2a8080455e7482c@google.com>

Hi Gustavo,

Thanks for the reply. I am trying to test the thread module, which I have put on top of the application threads, with a simple script that creates two threads. I am getting the following error, whose cause I cannot understand.
The same script, when run with bare Python, runs perfectly well.

Traceback (most recent call last):
Unhandled exception in thread started by
Assertion failed: PyTuple_Check(mro), file .\Python\Objects\typeobject.c, line 824
  File "thread_test.py", line 28, in
    mythread = thread.start_new_thread(func1, tuple)
SystemError: error return without exception set
ERROR: Failed to import Python module thread_test!

From hniksic at xemacs.org  Tue Sep  2 13:02:32 2008
From: hniksic at xemacs.org (Hrvoje Niksic)
Date: Tue, 02 Sep 2008 13:02:32 +0200
Subject: [capi-sig] Threading in Python
In-Reply-To: <001485f3be6239ee0d0455e5087a@google.com> (Swapnil ST's message of "Tue\, 02 Sep 2008 00\:54\:40 -0700")
References: <001485f3be6239ee0d0455e5087a@google.com>
Message-ID: <87r68295fb.fsf@mulj.homelinux.net>

Swapnil.ST at gmail.com writes:

> 1. Why does the interpreter, i.e. the main thread, keep releasing and
> reacquiring the GIL every few instructions? Why isn't the GIL simply
> allowed to be acquired by any thread which requires it?

Because a thread requires it to even be able to run. Releasing the GIL == giving other threads a chance to run.

> I mean, why is the acquisition and release of the lock periodic
> and not on an as-needed basis (which is obviously the way it should be)?

Because holding the GIL allows a thread to run, and all threads need to run at (almost) all times.

The GIL is somewhat peculiar in its mode of operation. A typical lock guards a resource and is released by default. The thread that needs the resource acquires the lock, uses the resource, and releases the lock. The GIL, on the other hand, is almost always acquired by some thread, and that thread is allowed to access Python data and run Python code. Py_BEGIN_ALLOW_THREADS actually *releases* the lock (in contrast to "begin foo" synchronization blocks, which tend to acquire locks), giving other threads a chance to run during a blocking syscall or a long-winded calculation that doesn't touch Python objects.
Py_END_ALLOW_THREADS reacquires the lock.

> 3. Suppose I am not making any blocking call; then I would never
> release the GIL, since that is the only thing guarding my thread state, and
> I don't want my thread state messed up at any time my thread is
> running. Which implies unacceptable starvation of other threads.

If you're referring to Python code, the interpreter will occasionally release the GIL, as Gustavo said. But if you're referring to C code, it is your responsibility to release the lock at the places where you want other threads to run. If you have a long calculation, I suppose you could add pairs of Py_BEGIN_ALLOW_THREADS/Py_END_ALLOW_THREADS to avoid starvation of other threads.

From anti00Zero at gmx.de  Wed Sep  3 10:49:00 2008
From: anti00Zero at gmx.de (anti00Zero)
Date: Wed, 03 Sep 2008 10:49:00 +0200
Subject: [capi-sig] PyImport_ImportModule and folders
Message-ID: <20080903084900.65200@gmx.net>

Hello.

I have a problem with importing a script in a folder. The script is in a folder, simple\index.py. This folder is in the root folder of my application. I use Boost 1.33.1 with Python 2.4.

Here is my code:

PyObject *pObject = PyImport_ImportModule(const_cast<char *>(pcSkriptName.c_str()));

If I use pcSkriptName with the value "index", all works right. Now I want to load this script in the folder simple. But pcSkriptName with the value "simple.index" doesn't work.

What shall I do?

Thanks for responding.

From gjcarneiro at gmail.com  Wed Sep  3 17:06:02 2008
From: gjcarneiro at gmail.com (Gustavo Carneiro)
Date: Wed, 3 Sep 2008 16:06:02 +0100
Subject: [capi-sig] PyImport_ImportModule and folders
In-Reply-To: <20080903084900.65200@gmx.net>
References: <20080903084900.65200@gmx.net>
Message-ID: 

2008/9/3 anti00Zero

> Hello.
>
> I have a problem with importing a script in a folder.
> The script is in a folder, simple\index.py.
> This folder is in the root
> folder of my application.
> I use Boost 1.33.1 with Python 2.4.
>
> Here is my code:
>
> PyObject *pObject =
> PyImport_ImportModule(const_cast<char *>(pcSkriptName.c_str()));
>
> If I use pcSkriptName with the value "index", all works right. Now I want to
> load this script in the folder simple.
> But pcSkriptName with the value "simple.index" doesn't work.
>
> What shall I do?

Silly question, but do you have an __init__.py file in the "simple" folder?

> Thanks for responding.
> _______________________________________________
> capi-sig mailing list
> capi-sig at python.org
> http://mail.python.org/mailman/listinfo/capi-sig

-- 
Gustavo J. A. M. Carneiro
INESC Porto, Telecommunications and Multimedia Unit
"The universe is always one step beyond logic." -- Frank Herbert

From ideasman42 at gmail.com  Sun Sep  7 02:19:15 2008
From: ideasman42 at gmail.com (Campbell Barton)
Date: Sun, 7 Sep 2008 10:19:15 +1000
Subject: [capi-sig] Overriding __builtins__ from C to sandbox python scripts
Message-ID: <7c1ab96d0809061719w2839f701qa5204dd69773b99@mail.gmail.com>

This issue has come up on blender3d, since there is work on bringing back the web plugin, and it also came up on the pygame list.

Blender3D currently uses the C API to override __builtin__ functions that could be used to do bad stuff, so you can't do... "import os; os.system('rm -rf ~')". The builtins overridden are import, open, file, execfile, compile and reload.

Keep in mind this is not intended to run arbitrary Python scripts. They are packed into a file; external scripts can't be accessed. So the scripts will just have to cope with running inside a restricted Python environment.

My question is:
- Is there a way to get back the original functions that the C API overwrites when Python is initialized?
- If not, is there any way to run "import os; os.system('rm -rf ~')" from this limited environment?

If this is the case then of course there are other problems: " " * 1000 * 1000 * 1000 * 1000 * 1000 * 1000 would crash the plugin... but for now I'm mainly interested in the worst case, where any Python code can be run that damages the system.

Here is blender3d's sandbox function with some junk removed to read better.

------------------
// Python Sandbox code
// override builtin functions import() and open()

PyObject *KXpy_open(PyObject *self, PyObject *args) {
	PyErr_SetString(PyExc_RuntimeError, "Sandbox: open() function disabled!\nGame Scripts should not use this function.");
	return NULL;
}

PyObject *KXpy_reload(PyObject *self, PyObject *args) {
	PyErr_SetString(PyExc_RuntimeError, "Sandbox: reload() function disabled!\nGame Scripts should not use this function.");
	return NULL;
}

PyObject *KXpy_file(PyObject *self, PyObject *args) {
	PyErr_SetString(PyExc_RuntimeError, "Sandbox: file() function disabled!\nGame Scripts should not use this function.");
	return NULL;
}

PyObject *KXpy_execfile(PyObject *self, PyObject *args) {
	PyErr_SetString(PyExc_RuntimeError, "Sandbox: execfile() function disabled!\nGame Scripts should not use this function.");
	return NULL;
}

PyObject *KXpy_compile(PyObject *self, PyObject *args) {
	PyErr_SetString(PyExc_RuntimeError, "Sandbox: compile() function disabled!\nGame Scripts should not use this function.");
	return NULL;
}

PyObject *KXpy_import(PyObject *self, PyObject *args) {
	char *name;
	PyObject *globals = NULL;
	PyObject *locals = NULL;
	PyObject *fromlist = NULL;
	PyObject *l, *m, *n;

	if (!PyArg_ParseTuple(args, "s|OOO:m_import", &name, &globals, &locals, &fromlist))
		return NULL;

	/* check for builtin modules */
	m = PyImport_AddModule("sys");
	l = PyObject_GetAttrString(m, "builtin_module_names");
	n = PyString_FromString(name);

	if (PySequence_Contains(l, n)) {
		return PyImport_ImportModuleEx(name, globals, locals, fromlist);
	}

	/* quick hack for GamePython modules
	   TODO: register builtin modules properly by ExtendInittab */
	if (!strcmp(name, "GameLogic") || !strcmp(name, "GameKeys") ||
	    !strcmp(name, "PhysicsConstraints") || !strcmp(name, "Rasterizer") ||
	    !strcmp(name, "Mathutils")) {
		return PyImport_ImportModuleEx(name, globals, locals, fromlist);
	}

	PyErr_Format(PyExc_ImportError, "Import of external Module %.20s not allowed.", name);
	return NULL;
}

static PyMethodDef meth_open[] = {{ "open", KXpy_open, METH_VARARGS, "(disabled)"}};
static PyMethodDef meth_reload[] = {{ "reload", KXpy_reload, METH_VARARGS, "(disabled)"}};
static PyMethodDef meth_file[] = {{ "file", KXpy_file, METH_VARARGS, "(disabled)"}};
static PyMethodDef meth_execfile[] = {{ "execfile", KXpy_execfile, METH_VARARGS, "(disabled)"}};
static PyMethodDef meth_compile[] = {{ "compile", KXpy_compile, METH_VARARGS, "(disabled)"}};
static PyMethodDef meth_import[] = {{ "import", KXpy_import, METH_VARARGS, "our own import"}};

void setSandbox(TPythonSecurityLevel level)
{
	PyObject *m = PyImport_AddModule("__builtin__");
	PyObject *d =
PyModule_GetDict(m);

	// functions we can't trust
	PyDict_SetItemString(d, "open", PyCFunction_New(meth_open, NULL));
	PyDict_SetItemString(d, "reload", PyCFunction_New(meth_reload, NULL));
	PyDict_SetItemString(d, "file", PyCFunction_New(meth_file, NULL));
	PyDict_SetItemString(d, "execfile", PyCFunction_New(meth_execfile, NULL));
	PyDict_SetItemString(d, "compile", PyCFunction_New(meth_compile, NULL));

	// our own import
	PyDict_SetItemString(d, "__import__", PyCFunction_New(meth_import, NULL));
}

From eljay at adobe.com  Fri Sep 26 17:16:52 2008
From: eljay at adobe.com (Eljay Love-Jensen)
Date: Fri, 26 Sep 2008 10:16:52 -0500
Subject: [capi-sig] Embedded Python in C application
Message-ID: 

Hi everyone,

First, my apologies if I'm in the wrong forum for my "embedding Python in a C application" questions. Please redirect me if I've wandered into the wrong place.

I have two needs for using Python in my application that I hope have easy answers without rewriting Python's internals.

I need to use Python* in a multi-threaded application, where separate threads may be working on very long-lasting Python scripts, and other threads may be involved in short Python scripts. None of the Python scripts running concurrently have any shared state with any of the other Python scripts running concurrently. The number of threads is in the 100-1000 range.

I need to manage Python's use of the heap by providing a memory pool for Python to use, rather than allowing Python to use malloc/free. This is to prevent memory fragmentation, and to allow easy disposal of the memory pool used for a closed Python interpreter instance.

A quick view of Py_Initialize() indicates that Python does not return some sort of "Py_State" pointer which represents the entire state of a Python interpreter. (Nor some sort of Py_Alloc().) Nor does it accept custom malloc/free function pointers. Hmmm.

Does anyone have experience with using Python in this fashion?

(If relevant, it will be Python 3.x, not Python 2.x.)
Thanks,
--Eljay

* Doesn't HAVE to be Python. Could be JavaScript or Lua or whatnot. My preference these days is a Python solution.

From gjcarneiro at gmail.com  Fri Sep 26 19:48:58 2008
From: gjcarneiro at gmail.com (Gustavo Carneiro)
Date: Fri, 26 Sep 2008 18:48:58 +0100
Subject: [capi-sig] Embedded Python in C application
In-Reply-To: 
References: 
Message-ID: 

2008/9/26 Eljay Love-Jensen

> Hi everyone,
>
> First, my apologies if I'm in the wrong forum for my "embedding Python in a
> C application" questions. Please redirect me if I've wandered into the
> wrong place.
>
> I have two needs for using Python in my application that I hope have easy
> answers without rewriting Python's internals.
>
> I need to use Python* in a multi-threaded application, where separate
> threads may be working on very long-lasting Python scripts, and other
> threads may be involved in short Python scripts. None of the Python scripts
> running concurrently have any shared state with any of the other Python
> scripts running concurrently. The number of threads is in the 100-1000 range.
>
> I need to manage Python's use of the heap by providing a memory pool for
> Python to use, rather than allowing Python to use malloc/free. This is to
> prevent memory fragmentation, and to allow easy disposal of the memory pool
> used for a closed Python interpreter instance.
>
> A quick view of Py_Initialize() indicates that Python does not return some
> sort of "Py_State" pointer which represents the entire state of a Python
> interpreter. (Nor some sort of Py_Alloc().) Nor does it accept custom
> malloc/free function pointers. Hmmm.

Python already has its own highly optimized memory allocator; it does not use malloc/free directly. That's why the configure option --without-pymalloc exists.

So I think your basic premise is wrong. But in any case, maybe you are looking for PyInterpreterState_New().
But beware that going down that path is going to be painful: multiple interpreter states and threading can lead to many hours of debugging. I would think thrice before deciding I really need it. > Does anyone have experience with using Python in this fashion? I remember trying to debug xchat python plugin interface which used multiple interpreter states and multiple threads. I wish I could forget those horrors... > > > (If relevant, it will be Python 3.x not Python 2.x.) > > Thanks, > --Eljay > > * Doesn't HAVE to be Python. Could be JavaScript or Lua or whatnot. My > preferences these days is a Python solution. > > _______________________________________________ > capi-sig mailing list > capi-sig at python.org > http://mail.python.org/mailman/listinfo/capi-sig > -- Gustavo J. A. M. Carneiro INESC Porto, Telecommunications and Multimedia Unit "The universe is always one step beyond logic." -- Frank Herbert From rhamph at gmail.com Fri Sep 26 23:32:15 2008 From: rhamph at gmail.com (Adam Olsen) Date: Fri, 26 Sep 2008 15:32:15 -0600 Subject: [capi-sig] Embedded Python in C application In-Reply-To: References: Message-ID: On Fri, Sep 26, 2008 at 9:16 AM, Eljay Love-Jensen wrote: > Hi everyone, > > First, my apologies if I'm in the wrong forum for my "embedding Python in a > C application" questions. Please redirect me if I've wandered into the > wrong place. > > I have two needs for using Python in my application that I hope has an easy > answer without rewriting Python's internals. > > I need to use Python* in a multi-threaded application, where separate > threads may be working on very long lasting Python scripts, and other > threads may be involved in short Python scripts. None of the Python scripts > running concurrently have any shared state with any of the other Python > scripts running concurrently. Number of threads is in the 100-1000 range. 
> > I need to manage Python's use of the heap by providing a memory pool for > Python to use, rather than allowing Python to use malloc/free. This is to > prevent memory fragmentation, and to allow easy disposal of a memory pool > used for a closed Python interpreter instance. > > A quick view of Py_Initialize() indicates that Python does not return some > sort of "Py_State" pointer which represents the entire state of a Python > interpreter. (Nor some sort of Py_Alloc().) Nor accepts a custom > malloc/free function pointers. Hmmm. > > Does anyone have experience with using Python in this fashion? Don't use multiple interpreters. They're not really separate, they're buggy, they offer *NO* advantage to you over just using multiple threads. Likewise, you can't force memory to be freed, as it'd still be used by python. The only way to force cleanup is to spawn a subprocess. This'd also let you use multiple cores. You can probably mitigate the startup cost by having a given subprocess run several short scripts or one long script. -- Adam Olsen, aka Rhamphoryncus From swapnil.st at gmail.com Sat Sep 27 06:48:35 2008 From: swapnil.st at gmail.com (Swapnil Talekar) Date: Sat, 27 Sep 2008 10:18:35 +0530 Subject: [capi-sig] Embedded Python in C application In-Reply-To: References: Message-ID: On Fri, Sep 26, 2008 at 11:18 PM, Gustavo Carneiro wrote: > 2008/9/26 Eljay Love-Jensen > > > Hi everyone, > > > > First, my apologies if I'm in the wrong forum for my "embedding Python in > a > > C application" questions. Please redirect me if I've wandered into the > > wrong place. > > > > I have two needs for using Python in my application that I hope has an > easy > > answer without rewriting Python's internals. > > > > I need to use Python* in a multi-threaded application, where separate > > threads may be working on very long lasting Python scripts, and other > > threads may be involved in short Python scripts. 
None of the Python > > scripts > > running concurrently have any shared state with any of the other Python > > scripts running concurrently. Number of threads is in the 100-1000 > range. > > > > I need to manage Python's use of the heap by providing a memory pool for > > Python to use, rather than allowing Python to use malloc/free. This is > to > > prevent memory fragmentation, and to allow easy disposal of a memory pool > > used for a closed Python interpreter instance. > > > > A quick view of Py_Initialize() indicates that Python does not return > some > > sort of "Py_State" pointer which represents the entire state of a Python > > interpreter. (Nor some sort of Py_Alloc().) Nor accepts a custom > > malloc/free function pointers. Hmmm. > > > >Python already has its own highly optimized memory allocator, it does not > >use malloc/free directly. That's why the configure option > >--without-pymalloc exists. > > >So I think your basic premise is wrong. But in any case maybe you are > >looking for PyInterpreterState_New(). But beware that going down that > path > >is going to be painful: multiple interpreter states and threading can lead > >to many hours of debugging. I would think thrice before deciding I really > >need it. > >But If Eljay is trying to actually create multiple threads to run the >scripts simultaneously, then I guess he has much more to worry >about than just PyInterpreterState. The API does not take >care of all the Python global variables for sure. --Swapnil Talekar From swapnil.st at gmail.com Sat Sep 27 07:24:10 2008 From: swapnil.st at gmail.com (Swapnil Talekar) Date: Sat, 27 Sep 2008 10:54:10 +0530 Subject: [capi-sig] Embedded Python in C application In-Reply-To: References: Message-ID: On Sat, Sep 27, 2008 at 3:02 AM, Adam Olsen wrote: > On Fri, Sep 26, 2008 at 9:16 AM, Eljay Love-Jensen > wrote: > > Hi everyone, > > > > First, my apologies if I'm in the wrong forum for my "embedding Python in > a > > C application" questions. 
Please redirect me if I've wandered into the
> > wrong place.
> >
> > I have two needs for using Python in my application that I hope have easy
> > answers without rewriting Python's internals.
> >
> > I need to use Python* in a multi-threaded application, where separate
> > threads may be working on very long-lasting Python scripts, and other
> > threads may be involved in short Python scripts. None of the Python scripts
> > running concurrently have any shared state with any of the other Python
> > scripts running concurrently. The number of threads is in the 100-1000 range.
> >
> > I need to manage Python's use of the heap by providing a memory pool for
> > Python to use, rather than allowing Python to use malloc/free. This is to
> > prevent memory fragmentation, and to allow easy disposal of the memory pool
> > used for a closed Python interpreter instance.
> >
> > A quick view of Py_Initialize() indicates that Python does not return some
> > sort of "Py_State" pointer which represents the entire state of a Python
> > interpreter. (Nor some sort of Py_Alloc().) Nor does it accept custom
> > malloc/free function pointers. Hmmm.
> >
> > Does anyone have experience with using Python in this fashion?
>
> Don't use multiple interpreters. They're not really separate, they're
> buggy, they offer *NO* advantage to you over just using multiple
> threads.

They're buggy? Sure. They're not really separate? Well, now if you want to have multiple threads running scripts, I don't see how you can get away without having multiple interpreters (in the same process), and they REALLY have to be separate. That's not an easy task though. As I said, the separation has to be more than just separate PyInterpreterStates.

> Likewise, you can't force memory to be freed, as it'd still be used by
> Python.
>
> The only way to force cleanup is to spawn a subprocess. This'd also
> let you use multiple cores.
You can probably mitigate the startup > cost by having a given subprocess run several short scripts or one > long script. >Well, if you have your own memory manager, i.e. other than >Python's and you are embedding the Interpreter in your application. >I don't see any reason why you should not be able to cleanup at any >appropriate point you think. Python is still using the memory? sure it is. >But not after its done with the script -- Swapnil Talekar From rhamph at gmail.com Sat Sep 27 08:02:34 2008 From: rhamph at gmail.com (Adam Olsen) Date: Sat, 27 Sep 2008 00:02:34 -0600 Subject: [capi-sig] Embedded Python in C application In-Reply-To: References: Message-ID: On Fri, Sep 26, 2008 at 11:24 PM, Swapnil Talekar wrote: > On Sat, Sep 27, 2008 at 3:02 AM, Adam Olsen wrote: > >> On Fri, Sep 26, 2008 at 9:16 AM, Eljay Love-Jensen >> wrote: >> > Hi everyone, >> > >> > First, my apologies if I'm in the wrong forum for my "embedding Python in >> a >> > C application" questions. Please redirect me if I've wandered into the >> > wrong place. >> > >> > I have two needs for using Python in my application that I hope has an >> easy >> > answer without rewriting Python's internals. >> > >> > I need to use Python* in a multi-threaded application, where separate >> > threads may be working on very long lasting Python scripts, and other >> > threads may be involved in short Python scripts. None of the Python >> scripts >> > running concurrently have any shared state with any of the other Python >> > scripts running concurrently. Number of threads is in the 100-1000 >> range. >> > >> > I need to manage Python's use of the heap by providing a memory pool for >> > Python to use, rather than allowing Python to use malloc/free. This is >> to >> > prevent memory fragmentation, and to allow easy disposal of a memory pool >> > used for a closed Python interpreter instance. 
>> >
>> > A quick view of Py_Initialize() indicates that Python does not return some
>> > sort of "Py_State" pointer which represents the entire state of a Python
>> > interpreter. (Nor some sort of Py_Alloc().) Nor does it accept custom
>> > malloc/free function pointers. Hmmm.
>> >
>> > Does anyone have experience with using Python in this fashion?
>>
>> Don't use multiple interpreters. They're not really separate, they're
>> buggy, they offer *NO* advantage to you over just using multiple
>> threads.
>
> They're buggy? Sure. They're not really separate? Well, now if you want
> to have multiple threads running scripts, I don't see how you can get away
> without having multiple interpreters (in the same process), and they
> REALLY have to be separate. That's not an easy task though. As I said,
> the separation has to be more than just separate PyInterpreterStates.

You must not be very familiar with threading. All you need to do is give each script its own *local* state and not modify any globals. No need for multiple interpreters.

All the subinterpreter API does is give each interpreter a separate copy of the modules, so poorly designed APIs that use global state can pretend they've got separate processes, without actually having separate processes. Rather obscure, and not useful for the OP.

>> Likewise, you can't force memory to be freed, as it'd still be used by
>> Python.
>>
>> The only way to force cleanup is to spawn a subprocess. This'd also
>> let you use multiple cores. You can probably mitigate the startup
>> cost by having a given subprocess run several short scripts or one
>> long script.
>
> Well, if you have your own memory manager, i.e. other than
> Python's, and you are embedding the interpreter in your application,
> I don't see any reason why you should not be able to clean up at any
> appropriate point you think. Python is still using the memory? Sure it is.
> But not after it's done with the script.

If you free any memory like that, you'll have hosed the entire Python interpreter. After that there's nothing useful to do but exit the process... so you might as well exit in the first place.

-- 
Adam Olsen, aka Rhamphoryncus

From Jack.Jansen at cwi.nl  Sat Sep 27 23:18:29 2008
From: Jack.Jansen at cwi.nl (Jack Jansen)
Date: Sat, 27 Sep 2008 23:18:29 +0200
Subject: [capi-sig] Embedded Python in C application
In-Reply-To: 
References: 
Message-ID: <1032571E-F2DE-4DCB-A2D9-4547D5CE703D@cwi.nl>

And, to possibly make this a bit clearer for the OP: the basic problem is that a lot of state is kept in global variables. So, the "interpreter state" is spread all over memory, and there's no way you can duplicate or free this without freeing the whole process.

-- 
Jack Jansen, , http://www.cwi.nl/~jack
If I can't dance I don't want to be part of your revolution -- Emma Goldman