From lanyjie at yahoo.com Sat May 1 03:38:57 2010 From: lanyjie at yahoo.com (Yingjie Lan) Date: Fri, 30 Apr 2010 18:38:57 -0700 (PDT) Subject: [capi-sig] ANN: expy 0.6.5 released! Message-ID: <866875.74298.qm@web54203.mail.re2.yahoo.com> EXPY is an express way to extend Python! EXPY provides a way to extend python in an elegant way. For more information and a tutorial, see: http://expy.sourceforge.net/ What's new: 1. Correct treatment of __init__ method. 2. Give warnings of missing Py_INCREF on appropriate special type methods. 3. Documentation update. Cheers, Yingjie From Jack.Jansen at cwi.nl Sat May 1 23:40:13 2010 From: Jack.Jansen at cwi.nl (Jack Jansen) Date: Sat, 1 May 2010 23:40:13 +0200 Subject: [capi-sig] SWIG + expy In-Reply-To: <272522.69975.qm@web54204.mail.re2.yahoo.com> References: <272522.69975.qm@web54204.mail.re2.yahoo.com> Message-ID: <652256E5-5D3A-41DE-A7C5-6BED41F5944E@cwi.nl> On 27-Apr-2010, at 08:30 , Yingjie Lan wrote: > Hi, > > Is it possible to use SWIG to parse C/C++, and provide an interface for me to generate some code? I thought it might be good to have SWIG help generate expy (see http://expy.sourceforge.net) files, then generate the python extension via expy. I would be very interested in a universal intermediate format for all the interface generators. I'm still using a version of Guido's old bgen, now grudgingly extended to handle C++ and do bidirectional bridging between Python and C++, and while I love and cherish the code generator the C++ parser is, uhm... challenging. Parsing C++ with per-line regular expressions is no fun:-) I looked at gccxml at some point, as well as at some of the competing Python interface generators, but it went nowhere. gccxml output is far too detailed, and the output is too much of a simple parse tree to be of any use. The intermediate formats of the other interface generators I looked at were all too inaccessible. Maybe we can come up with something decent in this group? If there is enough interest: I can start by describing bgen's intermediate format, and if other people do the same for theirs we may be able to get to common ground... -- Jack Jansen, , http://www.cwi.nl/~jack If I can't dance I don't want to be part of your revolution -- Emma Goldman From python_capi at behnel.de Sun May 2 07:27:05 2010 From: python_capi at behnel.de (Stefan Behnel) Date: Sun, 02 May 2010 07:27:05 +0200 Subject: [capi-sig] intermediate parsed representation of C/C++ API descriptions for multiple wrapper generators In-Reply-To: <652256E5-5D3A-41DE-A7C5-6BED41F5944E@cwi.nl> References: <272522.69975.qm@web54204.mail.re2.yahoo.com> <652256E5-5D3A-41DE-A7C5-6BED41F5944E@cwi.nl> Message-ID: <4BDD0D29.1050607@behnel.de> [changing subject appropriately] Jack Jansen, 01.05.2010 23:40: > I would be very interested in a universal intermediate format for all > the interface generators. I'm still using a version of Guido's old bgen, > now grudgingly extended to handle C++ and do bidirectional bridging > between Python and C++, and while I love and cherish the code generator > the C++ parser is, uhm... challenging. Parsing C++ with per-line regular > expressions is no fun:-) Certainly not the right tool here. It appears that clang seems to work quite well for both C and C++. > I looked at gccxml at some point, as well as at some of the competing > Python interface generators, but it went nowhere. gccxml output is far > too detailed, and the output is too much of a simple parse tree to be of > any use. 
The intermediate formats of the other interface generators I > looked at were all too inaccessible. That's likely because the parser requirements grow with the tool itself. > Maybe we can come up with something decent in this group? I think it really makes sense to do this. A suitable level of detail for all generators may not be immediately obvious, but should be doable. > If there is enough interest: I can start by describing bgen's > intermediate format, and if other people do the same for theirs we may > be able to get to common ground... Please do. I'll ask over at the Cython-users list to see if others have something to contribute to this discussion. Stefan From Jack.Jansen at cwi.nl Sun May 2 23:47:24 2010 From: Jack.Jansen at cwi.nl (Jack Jansen) Date: Sun, 2 May 2010 23:47:24 +0200 Subject: [capi-sig] intermediate parsed representation of C/C++ API descriptions for multiple wrapper generators In-Reply-To: <4BDD0D29.1050607@behnel.de> References: <272522.69975.qm@web54204.mail.re2.yahoo.com> <652256E5-5D3A-41DE-A7C5-6BED41F5944E@cwi.nl> <4BDD0D29.1050607@behnel.de> Message-ID: On 2-May-2010, at 07:27 , Stefan Behnel wrote: >> If there is enough interest: I can start by describing bgen's >> intermediate format, and if other people do the same for theirs we may >> be able to get to common ground... > > Please do. I'll ask over at the Cython-users list to see if others have something to contribute to this discussion. Ok, here goes. People interested in a (slightly) more complete writeup can read , but here is the basics. The bgen intermediate format is a python file. Each C or C++ definition is transformed into a few lines of Python code that describe the definition. Here is an example (manually entered, so probably incorrect:-): --------- test.h: int increment(int value); void print(const char *string); void clear(int *location); ---------- intermediate code: f = Function(int, 'increment', (int, 'value', InMode)) functions.append(f) f = Function(void, 'print', (char_ptr, 'string', InMode)) functions.append(f) f = Function(void, 'clear', (int, 'location', OutMode)) functions.append(f) ---------- That's the basics. There is a little mangling of names going on, as you can see in the second function, so that the C type is representable as a Python identifier. But, as you can see in the third line, there is a little more to it: patterns are applied before outputting the intermediate format. One of the patterns has turned the expected (int_ptr, 'location', InMode) argument into the (int, 'location', OutMode). The current implementation applies the patterns before creating the intermediate format, but I think that for a future implementation I would be much more in favor of having that be an extra step (so it would read intermediate code and write intermediate code). The pattern substitution engine is really the power of bgen, because it can do much more than the simple transformation shown here. Patterns can trigger on multiple arguments, and they can also be told to look for "C-style" object-oriented code. So, int writestream(streamptr *sp, char *buf, int nbytes); is turned into f = Method(int, 'writestream', (VarInputBufferSize, 'buf', InMode)) methods_streamptr.append(f) This is why I love bgen so much, because it means that the Python interface is the expected sp.writestream("hello") as opposed to the barebones writestream(sp, "hello", 5). 
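To make that concrete: the wrapper C code that such a Method description expands into looks roughly like the sketch below. I'm writing this from memory rather than pasting real bgen output, so take the streamObject struct and its ob_itself field as stand-ins for whatever the generator actually emits:

#include <Python.h>

typedef struct streamptr streamptr;   /* opaque C type being wrapped */
extern int writestream(streamptr *sp, char *buf, int nbytes);

typedef struct {
    PyObject_HEAD
    streamptr *ob_itself;             /* the wrapped C object */
} streamObject;

static PyObject *
stream_writestream(streamObject *self, PyObject *args)
{
    char *buf;
    int nbytes;                       /* int length is the Python 2 default for "s#" */
    int rv;

    if (!PyArg_ParseTuple(args, "s#", &buf, &nbytes))
        return NULL;
    rv = writestream(self->ob_itself, buf, nbytes);
    return Py_BuildValue("i", rv);
}

The point is that the buffer length is filled in by the argument-parsing code from the Python string itself, which is exactly why the Python-level call doesn't need to pass nbytes.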
But that's bgen-evangelism, so I'll stop here:-) -- Jack Jansen, , http://www.cwi.nl/~jack If I can't dance I don't want to be part of your revolution -- Emma Goldman From gjcarneiro at gmail.com Sun May 2 23:57:07 2010 From: gjcarneiro at gmail.com (Gustavo Carneiro) Date: Sun, 2 May 2010 22:57:07 +0100 Subject: [capi-sig] intermediate parsed representation of C/C++ API descriptions for multiple wrapper generators In-Reply-To: References: <272522.69975.qm@web54204.mail.re2.yahoo.com> <652256E5-5D3A-41DE-A7C5-6BED41F5944E@cwi.nl> <4BDD0D29.1050607@behnel.de> Message-ID: On Sun, May 2, 2010 at 22:47, Jack Jansen wrote: > > On 2-May-2010, at 07:27 , Stefan Behnel wrote: > >> If there is enough interest: I can start by describing bgen's > >> intermediate format, and if other people do the same for theirs we may > >> be able to get to common ground... > > > > Please do. I'll ask over at the Cython-users list to see if others have > something to contribute to this discussion. > > > Ok, here goes. People interested in a (slightly) more complete writeup can > read , but > here is the basics. > > The bgen intermediate format is a python file. Each C or C++ definition is > transformed into a few lines of Python code that describe the definition. You know, this sounds a lot like pybingen. It reads gccxml and generates an API description as a python file. Syntax used in pybindgen is only slightly different thant what you propose. The pybindgen parser has some problems, but it is functioning in a very large and complex C++ API (network simulator 3). Just trying to save people from reinventing the wheel... :-) Here's the link: http://code.google.com/p/pybindgen/ Here is an example (manually entered, so probably incorrect:-): > > --------- test.h: > int increment(int value); > void print(const char *string); > void clear(int *location); > > ---------- intermediate code: > f = Function(int, 'increment', (int, 'value', InMode)) > functions.append(f) > > f = Function(void, 'print', (char_ptr, 'string', InMode)) > functions.append(f) > > f = Function(void, 'clear', (int, 'location', OutMode)) > functions.append(f) > ---------- > That's the basics. There is a little mangling of names going on, as you can > see in the second function, so that the C type is representable as a Python > identifier. > > But, as you can see in the third line, there is a little more to it: > patterns are applied before outputting the intermediate format. One of the > patterns has turned the expected (int_ptr, 'location', InMode) argument into > the (int, 'location', OutMode). The current implementation applies the > patterns before creating the intermediate format, but I think that for a > future implementation I would be much more in favor of having that be an > extra step (so it would read intermediate code and write intermediate code). > > The pattern substitution engine is really the power of bgen, because it can > do much more than the simple transformation shown here. Patterns can trigger > on multiple arguments, and they can also be told to look for "C-style" > object-oriented code. So, > > int writestream(streamptr *sp, char *buf, int nbytes); > > is turned into > f = Method(int, 'writestream', (VarInputBufferSize, 'buf', InMode)) > methods_streamptr.append(f) > > This is why I love bgen so much, because it means that the Python interface > is the expected sp.writestream("hello") as opposed to the barebones > writestream(sp, "hello", 5). 
But that's bgen-evangelism, so I'll stop > here:-) > > -- > Jack Jansen, , http://www.cwi.nl/~jack > If I can't dance I don't want to be part of your revolution -- Emma Goldman > > > > _______________________________________________ > capi-sig mailing list > capi-sig at python.org > http://mail.python.org/mailman/listinfo/capi-sig > -- Gustavo J. A. M. Carneiro INESC Porto, UTM, WiN, http://win.inescporto.pt/gjc "The universe is always one step beyond logic." -- Frank Herbert From Jack.Jansen at cwi.nl Mon May 3 00:08:42 2010 From: Jack.Jansen at cwi.nl (Jack Jansen) Date: Mon, 3 May 2010 00:08:42 +0200 Subject: [capi-sig] intermediate parsed representation of C/C++ API descriptions for multiple wrapper generators In-Reply-To: References: <272522.69975.qm@web54204.mail.re2.yahoo.com> <652256E5-5D3A-41DE-A7C5-6BED41F5944E@cwi.nl> <4BDD0D29.1050607@behnel.de> Message-ID: <2D339176-7635-4743-8500-283210AEE559@cwi.nl> On 2-May-2010, at 23:57 , Gustavo Carneiro wrote: > On Sun, May 2, 2010 at 22:47, Jack Jansen wrote: > > On 2-May-2010, at 07:27 , Stefan Behnel wrote: > >> If there is enough interest: I can start by describing bgen's > >> intermediate format, and if other people do the same for theirs we may > >> be able to get to common ground... > > > > Please do. I'll ask over at the Cython-users list to see if others have something to contribute to this discussion. > > > Ok, here goes. People interested in a (slightly) more complete writeup can read , but here is the basics. > > The bgen intermediate format is a python file. Each C or C++ definition is transformed into a few lines of Python code that describe the definition. > > You know, this sounds a lot like pybingen. It reads gccxml and generates an API description as a python file. Syntax used in pybindgen is only slightly different thant what you propose. Can you elaborate (a bit)? Because I'm interested in the places where the wrapper generators have different requirements (because that's going to be the difficulat areas for a common format). An example of where things may be different is that the bgen format has absolutely no knowledge of C types. As you can see from my example they are nothing more than python-identifier representations of the C names. -- Jack Jansen, , http://www.cwi.nl/~jack If I can't dance I don't want to be part of your revolution -- Emma Goldman From lanyjie at yahoo.com Mon May 3 01:24:11 2010 From: lanyjie at yahoo.com (Yingjie Lan) Date: Sun, 2 May 2010 16:24:11 -0700 (PDT) Subject: [capi-sig] ANN: expy 0.6.6 released! Message-ID: <196302.50328.qm@web54202.mail.re2.yahoo.com> EXPY is an express way to extend Python! EXPY provides a way to extend python in an elegant way. For more information and a tutorial, see: http://expy.sourceforge.net/ What's new: 1. Special methods can now take @throws decorators. 2. Added convenience macros _NEW and _CheckExact for extension types. 3. Give warnings of missing Py_INCREF on all methods/functions returning an object. 4. And the responsibility of Py_INCREF is left for the developer. 5. Documentation update. Cheers, Yingjie From lanyjie at yahoo.com Mon May 3 03:40:30 2010 From: lanyjie at yahoo.com (Yingjie Lan) Date: Sun, 2 May 2010 18:40:30 -0700 (PDT) Subject: [capi-sig] ANN: expy 0.6.6 released! In-Reply-To: <196302.50328.qm@web54202.mail.re2.yahoo.com> Message-ID: <74508.23739.qm@web54204.mail.re2.yahoo.com> > Subject: ANN: expy 0.6.6 released! > To: "python list" > Cc: "CAPI Python" > Date: Monday, May 3, 2010, 3:24 AM > EXPY is an express way to extend Python! 
> > EXPY provides a way to extend python in an elegant way. For > more information and a tutorial, see: http://expy.sourceforge.net/ > I'm using expy in a serious project to wrap an old project written in C and deliver it up via www with django. That is why expy is getting improved quickly these days. So far, both the project and expy are making good progress hand in hand. Cheers, Yingjie From lanyjie at yahoo.com Mon May 3 14:43:55 2010 From: lanyjie at yahoo.com (Yingjie Lan) Date: Mon, 3 May 2010 05:43:55 -0700 (PDT) Subject: [capi-sig] ANN: expy 0.6.7 released! In-Reply-To: <74508.23739.qm@web54204.mail.re2.yahoo.com> Message-ID: <741400.49245.qm@web54202.mail.re2.yahoo.com> EXPY is an express way to extend Python! EXPY provides a way to extend python in an elegant way. For more information and a tutorial, see: http://expy.sourceforge.net/ I'm glad to announce a new release again today. ^_^ What's new: Version 0.6.7 1. Now functions can have 'value on failure' via rawtype. 2. Now property getters/setters can have @throws 3. Bug fix: if __init__ wrapper fails, it must return -1. Cheers, Yingjie From savages at mozapps.org Thu May 6 10:09:42 2010 From: savages at mozapps.org (Shaun Savage) Date: Thu, 06 May 2010 16:09:42 +0800 Subject: [capi-sig] PyImport_GetModuleDict: no module dictionary!" Message-ID: <4BE27946.4020404@mozapps.org> I am getting a SIGABRT PyImport_GetModuleDict: no module dictionary!" this is the code. I need a InterpreterState because I need to create new threads using pthread and clone the existing state. Does PyInterpreterState_New, create a fulling initalized IS? Does PyThreadState_New clone the existing state or a clean new one? Py_Initialize(); is = PyInterpreterState_New(); if ( is == NULL ) { fclose ( maa->pFILE); free( maa ); return -2; } maa->is = is; PyEval_InitThreads(); ts = PyThreadState_New(is); PyThreadState_Swap( ts ); ts = PyThreadState_Get(); maa->ts = ts; PyImport_ImportModule( "sys" ); From switch2mathan at gmail.com Wed May 19 09:00:37 2010 From: switch2mathan at gmail.com (MathanK) Date: Tue, 18 May 2010 19:00:37 -1200 Subject: [capi-sig] Getting System error with PyModule_AddIntConstant funtion Message-ID: <128af5d2a08.-27552154284981426.-5868253431233049633@gmail.com> Following is a Python3.1 C api which runs properly without PyModule_AddIntConstant function. But when PyModule_AddIntConstant() function is used, getting the following error when i call c. path("test call"); " SystemError: NULL result without error in PyObject_Call " Python C api- c.c int test(int a){ return (2*a+a); } PyObject* dict = NULL; static PyObject* path(PyObject* self, PyObject* args) { char *cpath; PyUnicodeObject *path; PyUnicodeObject *test; if (!PyArg_ParseTuple(args, "s", &path)) return NULL; cpath = (char *)path; test = (PyUnicodeObject*)(cpath); PyObject *testDict = PyDict_New(); int testVal = PyDict_SetItem(testDict, (PyObject *)PyUnicode_FromString(cpath), (PyObject *)PyUnicode_FromString(cpath)); return Py_BuildValue("s", test); } static PyObject* c(PyObject* self, PyObject* args) { int a; int result; if(!PyArg_ParseTuple(args, "i", &a)) return NULL; result = test(a); return Py_BuildValue("i", result); } static PyMethodDef cmethods[] = { {"c", c, METH_VARARGS, "watch"}, {"path", path, METH_VARARGS, "test"}, {NULL, NULL, 0, NULL}, }; static struct PyModuleDef c_Module = { PyModuleDef_HEAD_INIT, "c", ( "Interface." 
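/* m_doc; the -1 that follows is m_size (module state kept in C globals,
   so the module cannot be re-initialized), and cmethods is the method table */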
), -1, cmethods }; PyMODINIT_FUNC PyInit_c(void) { PyObject* obj = PyModule_Create(&c_Module); if (obj == NULL) return NULL; long lon1 = 0; PyModule_AddIntConstant(obj, "test", lon1); } int main(int argc, wchar_t *argv[]) { PyImport_AppendInittab("c", PyInit_c); Py_SetProgramName(argv[0]); Py_Initialize(); PyImport_ImportModule("c"); return 0; } From daniel at stutzbachenterprises.com Fri May 21 16:34:25 2010 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Fri, 21 May 2010 09:34:25 -0500 Subject: [capi-sig] Unicode compatibility Message-ID: I'm working on http://bugs.python.org/issue8654 and I'd like to get some feedback from extension-writers, since it will impact them. Synopsis of the problem: If you try to load an extension module that: - uses any of Python's Unicode functions, and - was compiled by a Python with the opposite Unicode setting (UCS2 vs UCS4) then you get an ugly "undefined symbol" error from the linker. For Python 3, __repr__ must return a Unicode object which means that almost all extensions will need to call some Unicode functions. It's basically fruitless to upload a binary egg for Python 3 to PyPi, since it will generate link errors for a large fraction of downloaders (as I discovered the hard way). Proposed solution: By default, extensions will compile in a "Unicode-agnostic" mode, where Py_UNICODE is an incomplete type. The extension's code can pass Py_UNICODE pointers back and forth between Python API functions, but it cannot dereference them nor use sizeof(Py_UNICODE). Unicode-agnostic modules will load and run in both UCS2 and UCS4 interpreters. Most extensions fall into this category. If a module needs to dereference Py_UNICODE, it can define PY_REAL_PY_UNICODE before including Python.h to make Py_UNICODE a complete type, .Attempting to load such a module into a mismatched interpreter will cause an ImportError (instead of an ugly linker error). If an extension uses PY_REAL_PY_UNICODE in any .c file, it must also use it in the .c file that calls PyModule_Create to ensure the Unicode width is stored in the module's information. I have two questions for the greater community: 1) Do you have any fundamental concerns with this design? 2) Would you prefer the default be reversed? i.e, that Py_UNICODE be a complete type by default, and an extension must have a #define to compile in Unicode-agnostic mode? -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC From issyk.kol at gmx.com Fri May 21 19:47:39 2010 From: issyk.kol at gmx.com (Issyk Kol) Date: Fri, 21 May 2010 19:47:39 +0200 Subject: [capi-sig] Some type inference for python? Message-ID: <20100521175243.90160@gmx.com> Hello.? I'd like to do some ML-like type inference for Python code. I'm not trying to resurrect the static typing vs. dynamic typing debate here. It's simply that I'd need to get static typing for Python for code-generation reasons. Therefore, I'd like to ask: Does the Python C API support a way to inspect the bytecode or the source code of a given piece of Python code? What approach would be promising enough to type things in ML-way (I'm unfortunately not interested in cartesian-product type inference), and what help may the C API provide to meet this goal? (Maybe I haven't looked far enough, but I could not discern clearly what could be useful...) (Sorry for posting in non-text-mode. Using a webmail because port 25 policies in this bloody country ar sooooo restrictive.) Thanks. Issyk Kol. 
From vojta.rylko at seznam.cz Sun May 23 16:40:24 2010 From: vojta.rylko at seznam.cz (=?ISO-8859-2?Q?Vojt=ECch_Rylko?=) Date: Sun, 23 May 2010 16:40:24 +0200 Subject: [capi-sig] Checking int type Message-ID: <4BF93E58.60408@seznam.cz> Hi, what am I doing wrong? C code: -------------- static PyObject *is_prime(PyObject *self, PyObject *value) { if (!PyInt_Check(value)) { PyErr_SetString(PyExc_TypeError, "only integers are acceptable"); return NULL; } -------------- Test (I'm using integer 3): ------------ >>> primes.is_prime(3) Traceback (most recent call last): File "", line 1, in ValueError: only integers are acceptable ------------- Thanks From jon at indelible.org Mon May 24 08:56:21 2010 From: jon at indelible.org (Jon Parise) Date: Sun, 23 May 2010 23:56:21 -0700 Subject: [capi-sig] Checking int type In-Reply-To: <4BF93E58.60408@seznam.cz> References: <4BF93E58.60408@seznam.cz> Message-ID: On Sun, May 23, 2010 at 7:40 AM, Vojt?ch Rylko wrote: > what am I doing wrong? > > C code: > -------------- > static PyObject *is_prime(PyObject *self, PyObject *value) > { > ? ?if (!PyInt_Check(value)) { > ? ? ? ?PyErr_SetString(PyExc_TypeError, "only integers are acceptable"); > ? ? ? ?return NULL; > ? ?} > -------------- > > Test (I'm using integer 3): > ------------ >>>> primes.is_prime(3) > Traceback (most recent call last): > ?File "", line 1, in > ValueError: only integers are acceptable > ------------- The second argument ('value', in your code) is actually a tuple containing the arguments passed to your is_prime() function. You'll need to unpack the individual values from that tuple. http://docs.python.org/extending/extending.html#extracting-parameters-in-extension-functions -- Jon Parise (jon of indelible.org) :: "Scientia potentia est" From stefan_ml at behnel.de Sun May 23 19:51:09 2010 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 23 May 2010 19:51:09 +0200 Subject: [capi-sig] Unicode compatibility In-Reply-To: References: Message-ID: <4BF96B0D.8010400@behnel.de> Daniel Stutzbach, 21.05.2010 16:34: > If you try to load an extension module that: > - uses any of Python's Unicode functions, and > - was compiled by a Python with the opposite Unicode setting (UCS2 vs UCS4) > then you get an ugly "undefined symbol" error from the linker. Well known problem, yes. > By default, extensions will compile in a "Unicode-agnostic" mode, where > Py_UNICODE is an incomplete type. The extension's code can pass Py_UNICODE > pointers back and forth between Python API functions, but it cannot > dereference them nor use sizeof(Py_UNICODE). Unicode-agnostic modules will > load and run in both UCS2 and UCS4 interpreters. Most extensions fall into > this category. This is a pretty bad default for Cython code. Starting with version 0.13, Cython will try to infer Py_UNICODE for single character unicode strings and use that whenever possible, e.g. when for-looping over unicode strings and during character comparisons. Making Py_UNICODE an incomplete type will render this impossible. > If a module needs to dereference Py_UNICODE, it can define > PY_REAL_PY_UNICODE before including Python.h to make Py_UNICODE a complete > type So that would be an option that all Cython modules (or at least those that use Py_UNICODE and/or single unicode characters somewhere) would use automatically. Not much to win here. > Attempting to load such a module into a mismatched interpreter will > cause an ImportError (instead of an ugly linker error). 
If an extension > uses PY_REAL_PY_UNICODE in any .c file, it must also use it in the .c file > that calls PyModule_Create to ensure the Unicode width is stored in the > module's information. Cython modules should normally be self-contained, but it will not be 100% sure that a module that wraps C code using Py_UNICODE will also use Py_UNICODE somewhere, so that Cython could enable that option automatically. Cython would therefore be forced to enable the option for basically all code that calls into C code. > 2) Would you prefer the default be reversed? i.e, that Py_UNICODE be a > complete type by default, and an extension must have a #define to compile in > Unicode-agnostic mode? Absolutely. IMHO, the only platform that always requires binaries due to incomplete operating system support for source distributions is MS Windows, where Py_UNICODE equals wchar_t anyway. In some cases, MacOS-X is broken enough to require binary releases, too, but the normal target on that platform is the system Python, which has a universal setting for the Py_UNICODE size as well. So the only remaining platforms that suffer from binary incompatibility problems here are Linux und Unix systems, where the Py_UNICODE size differs between installations and distributions. Given that these systems are best targeted with a source distribution, it sounds like a bad default to complicate the usage of Py_UNICODE for everyone, unless users explicitly disable this behaviour. It's much better to provide this as an option for extension writers who really want (or need) to provide portable binary distributions for whatever reason. Personally, I think the drawbacks totally outweigh the single advantage, though, so I could absolutely live without this change. It's easy enough to drop the linkage error message into a web search engine. Stefan From vojta.rylko at seznam.cz Mon May 24 13:31:04 2010 From: vojta.rylko at seznam.cz (=?UTF-8?B?Vm9qdMSbY2ggUnlsa28=?=) Date: Mon, 24 May 2010 13:31:04 +0200 Subject: [capi-sig] Checking int type In-Reply-To: References: <4BF93E58.60408@seznam.cz> Message-ID: <4BFA6378.2070707@seznam.cz> But after unpacking, I still cannot detect no-integer type object. >>> from primes import is_prime >>> is_prime(7) True >>> is_prime(5.5) # should raise exception, but is rounded as 5 __main__:1: DeprecationWarning: integer argument expected, got float True >>> is_prime(9.8) # should raise exception, but is rounded as 9 False ================================= static PyObject *is_prime(PyObject *self, PyObject *arg) { PyObject *value; if (!PyArg_ParseTuple(arg, "O", value)) return NULL; if (PyInt_Check(value)) { // never raise (test case - in real its if(!PyInt_Check(value) which always raise) PyErr_SetString(PyExc_ValueError, "only integers are acceptable"); return NULL; } int number; if (!PyArg_ParseTuple(value, "i", &number)) return NULL; int result = is_prime_solve(number); Dne 24.5.2010 8:56, Jon Parise napsal(a): > On Sun, May 23, 2010 at 7:40 AM, Vojt?ch Rylko wrote: > > >> what am I doing wrong? 
>> >> C code: >> -------------- >> static PyObject *is_prime(PyObject *self, PyObject *value) >> { >> if (!PyInt_Check(value)) { >> PyErr_SetString(PyExc_TypeError, "only integers are acceptable"); >> return NULL; >> } >> -------------- >> >> Test (I'm using integer 3): >> ------------ >> >>>>> primes.is_prime(3) >>>>> >> Traceback (most recent call last): >> File "", line 1, in >> ValueError: only integers are acceptable >> ------------- >> > The second argument ('value', in your code) is actually a tuple > containing the arguments passed to your is_prime() function. You'll > need to unpack the individual values from that tuple. > > http://docs.python.org/extending/extending.html#extracting-parameters-in-extension-functions > > From daniel at stutzbachenterprises.com Mon May 24 15:50:23 2010 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Mon, 24 May 2010 08:50:23 -0500 Subject: [capi-sig] Checking int type In-Reply-To: <4BFA6378.2070707@seznam.cz> References: <4BF93E58.60408@seznam.cz> <4BFA6378.2070707@seznam.cz> Message-ID: On Mon, May 24, 2010 at 6:31 AM, Vojt?ch Rylko wrote: > PyObject *value; > if (!PyArg_ParseTuple(arg, "O", value)) > Shouldn't that be "&value"? -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC From emekamicro at gmail.com Mon May 24 17:18:00 2010 From: emekamicro at gmail.com (Emeka) Date: Mon, 24 May 2010 16:18:00 +0100 Subject: [capi-sig] New Here Message-ID: Hello All, Is there any tutorial I can read beside manual? Regards, Emeka From robertwb at math.washington.edu Mon May 24 18:18:27 2010 From: robertwb at math.washington.edu (Robert Bradshaw) Date: Mon, 24 May 2010 09:18:27 -0700 Subject: [capi-sig] Unicode compatibility In-Reply-To: <4BF96B0D.8010400@behnel.de> References: <4BF96B0D.8010400@behnel.de> Message-ID: <95D66BBB-FA03-40E1-9E61-C010EC006C9B@math.washington.edu> On May 23, 2010, at 10:51 AM, Stefan Behnel wrote: > Daniel Stutzbach, 21.05.2010 16:34: >> If you try to load an extension module that: >> - uses any of Python's Unicode functions, and >> - was compiled by a Python with the opposite Unicode setting (UCS2 >> vs UCS4) >> then you get an ugly "undefined symbol" error from the linker. > > Well known problem, yes. > > >> By default, extensions will compile in a "Unicode-agnostic" mode, >> where >> Py_UNICODE is an incomplete type. The extension's code can pass >> Py_UNICODE >> pointers back and forth between Python API functions, but it cannot >> dereference them nor use sizeof(Py_UNICODE). Unicode-agnostic >> modules will >> load and run in both UCS2 and UCS4 interpreters. Most extensions >> fall into >> this category. > > This is a pretty bad default for Cython code. Starting with version > 0.13, Cython will try to infer Py_UNICODE for single character > unicode strings and use that whenever possible, e.g. when for- > looping over unicode strings and during character comparisons. > Making Py_UNICODE an incomplete type will render this impossible. > > >> If a module needs to dereference Py_UNICODE, it can define >> PY_REAL_PY_UNICODE before including Python.h to make Py_UNICODE a >> complete >> type > > So that would be an option that all Cython modules (or at least > those that use Py_UNICODE and/or single unicode characters > somewhere) would use automatically. Not much to win here. > > >> Attempting to load such a module into a mismatched interpreter will >> cause an ImportError (instead of an ugly linker error). 
If an >> extension >> uses PY_REAL_PY_UNICODE in any .c file, it must also use it in >> the .c file >> that calls PyModule_Create to ensure the Unicode width is stored in >> the >> module's information. > > Cython modules should normally be self-contained, but it will not be > 100% sure that a module that wraps C code using Py_UNICODE will also > use Py_UNICODE somewhere, so that Cython could enable that option > automatically. Cython would therefore be forced to enable the option > for basically all code that calls into C code. > > >> 2) Would you prefer the default be reversed? i.e, that Py_UNICODE >> be a >> complete type by default, and an extension must have a #define to >> compile in >> Unicode-agnostic mode? > > Absolutely. IMHO, the only platform that always requires binaries > due to incomplete operating system support for source distributions > is MS Windows, where Py_UNICODE equals wchar_t anyway. In some > cases, MacOS-X is broken enough to require binary releases, too, but > the normal target on that platform is the system Python, which has a > universal setting for the Py_UNICODE size as well. > > So the only remaining platforms that suffer from binary > incompatibility problems here are Linux und Unix systems, where the > Py_UNICODE size differs between installations and distributions. > Given that these systems are best targeted with a source > distribution, it sounds like a bad default to complicate the usage > of Py_UNICODE for everyone, unless users explicitly disable this > behaviour. It's much better to provide this as an option for > extension writers who really want (or need) to provide portable > binary distributions for whatever reason. > > Personally, I think the drawbacks totally outweigh the single > advantage, though, so I could absolutely live without this change. > It's easy enough to drop the linkage error message into a web search > engine. I (unsurprisingly) be against this change as well, given the reasons listed above, but would like to suggest some alternatives. First, is there a way to easily get the runtime size of Py_UNICODE? Then the module could be sure to raise an error itself when there's a mismatch before doing anything dangerous. A potentially better alternative would be to store record the UCS2/UCS4 distinction as part of the binary specification/name, with support for choosing the right one added into the package management infrastructures. Of course this will double the number of binaries, but that's just a reflection of the choice to make UCS2/UCS4 a binary incompatible compile time decision. - Robert From vojta.rylko at seznam.cz Mon May 24 19:02:38 2010 From: vojta.rylko at seznam.cz (=?UTF-8?B?Vm9qdMSbY2ggUnlsa28=?=) Date: Mon, 24 May 2010 19:02:38 +0200 Subject: [capi-sig] Checking int type In-Reply-To: References: <4BF93E58.60408@seznam.cz> <4BFA6378.2070707@seznam.cz> Message-ID: <4BFAB12E.1070300@seznam.cz> You are right, thank you very much. Vojt?ch Rylko Dne 24.5.2010 15:50, Daniel Stutzbach napsal(a): > On Mon, May 24, 2010 at 6:31 AM, Vojt?ch Rylko > wrote: > > PyObject *value; > if (!PyArg_ParseTuple(arg, "O", value)) > > Shouldn't that be "&value"? > -- > Daniel Stutzbach, Ph.D. 
> President, Stutzbach Enterprises, LLC From philip at semanchuk.com Mon May 24 18:38:40 2010 From: philip at semanchuk.com (Philip Semanchuk) Date: Mon, 24 May 2010 12:38:40 -0400 Subject: [capi-sig] New Here In-Reply-To: References: Message-ID: <68BC4968-BAE9-4B5E-A5FB-676E0851D4D0@semanchuk.com> On May 24, 2010, at 11:18 AM, Emeka wrote: > Hello All, > > Is there any tutorial I can read beside manual? You mean besides the tutorial that's linked to from the main page of the Python documentation? http://docs.python.org/extending/index.html The tutorial is sparse, like a lot of the documentation for the C interface. When I was writing my modules I looked at the source of other modules (some in the standard library, some 3rd party) to see how they did the things I wanted to do. Hope this helps Philip From daniel at stutzbachenterprises.com Mon May 24 19:09:52 2010 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Mon, 24 May 2010 12:09:52 -0500 Subject: [capi-sig] Unicode compatibility In-Reply-To: <95D66BBB-FA03-40E1-9E61-C010EC006C9B@math.washington.edu> References: <4BF96B0D.8010400@behnel.de> <95D66BBB-FA03-40E1-9E61-C010EC006C9B@math.washington.edu> Message-ID: Robert, Stefan, thank you for your feedback. How about the following variation, which I believe will address your concerns: By default, Py_UNICODE will be a fully-specified type. In a nutshell, the default will behave just like Python 2 or 3.1, except that trying to load a mismatched module will raise an ImportError with a more helpful error message (much friendlier to novice programmers). Cython would continue to use this mode. Extension authors who want a Unicode-agnostic build can specify an option in their setup.py that will instruct distutils to pass a -D_Py_UNICODE_AGNOSTIC compiler flag to ensure that all of their .c files are built in Unicode-independent mode. That way, the whole extension is compiled in the same mode. It would indeed be great if package managers included the Unicode setting as part of the platform type. PJE's proposed implementation of that feature ( http://bit.ly/1bO62) would allow eggs to specify UCS2, UCS4, or "Don't Care". My patch greatly increases the number of eggs that could label themselves "Don't Care", reducing maintenance work for package maintainers who like to distribute binary eggs [1]. In other words, they are complimentary solutions. [1] A quick Google search of PyPi reveals many packages offering Linux binary eggs. -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC From craigcitro at gmail.com Mon May 24 19:22:31 2010 From: craigcitro at gmail.com (Craig Citro) Date: Mon, 24 May 2010 10:22:31 -0700 Subject: [capi-sig] New Here In-Reply-To: <68BC4968-BAE9-4B5E-A5FB-676E0851D4D0@semanchuk.com> References: <68BC4968-BAE9-4B5E-A5FB-676E0851D4D0@semanchuk.com> Message-ID: > The tutorial is sparse, like a lot of the documentation for the C interface. > When I was writing my modules I looked at the source of other modules (some > in the standard library, some 3rd party) to see how they did the things I > wanted to do. > Along these same lines, it can also be useful to take some Python code, run it through Cython [1] or Pyrex [2], and look at the generated C source code. 
I'm not trying to say they generate perfect code -- but I find it's often faster to check out generated source than it is to find the appropriate entry in the reference manual, or use it to figure out *where to look* in the reference manual, especially before you know exactly what you're looking for. ;) -cc [1] http://www.cython.org [2] http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/ From python_capi at behnel.de Mon May 24 20:19:51 2010 From: python_capi at behnel.de (Stefan Behnel) Date: Mon, 24 May 2010 20:19:51 +0200 Subject: [capi-sig] Checking int type In-Reply-To: <4BFA6378.2070707@seznam.cz> References: <4BF93E58.60408@seznam.cz> <4BFA6378.2070707@seznam.cz> Message-ID: <4BFAC347.4070504@behnel.de> Vojt?ch Rylko, 24.05.2010 13:31: > But after unpacking, I still cannot detect no-integer type object. > > >>> from primes import is_prime > >>> is_prime(7) > True > >>> is_prime(5.5) # should raise exception, but is rounded as 5 > __main__:1: DeprecationWarning: integer argument expected, got float > True > >>> is_prime(9.8) # should raise exception, but is rounded as 9 > False > > ================================= > static PyObject *is_prime(PyObject *self, PyObject *arg) > { > PyObject *value; > if (!PyArg_ParseTuple(arg, "O", value)) > return NULL; > > if (PyInt_Check(value)) { > // never raise (test case - in real its if(!PyInt_Check(value) which > always raise) > PyErr_SetString(PyExc_ValueError, "only integers are acceptable"); > return NULL; > } > int number; > if (!PyArg_ParseTuple(value, "i", &number)) > return NULL; > > int result = is_prime_solve(number); You should consider giving Cython a try, where the above spells cdef extern from "yourcode.h": bint is_prime_solve(int number) def is_prime(int number): """ >>> is_prime(7) True """ return is_prime_solve(number) It will raise a TypeError for you if a user passes something that can't coerce to an int, or an OverflowError for something that is too large for a C int. However, note that coercion to a C int means calling int() in Python, which also works for float values. If you *really* want to prevent non-int values, you can do this: def is_prime(number): if not isinstance(number, int): raise TypeError("please pass an int") return is_prime_solve(number) Stefan From stefan_ml at behnel.de Mon May 24 14:19:38 2010 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 24 May 2010 14:19:38 +0200 Subject: [capi-sig] Checking int type In-Reply-To: <4BFA6378.2070707@seznam.cz> References: <4BF93E58.60408@seznam.cz> <4BFA6378.2070707@seznam.cz> Message-ID: <4BFA6EDA.8090604@behnel.de> Vojt?ch Rylko, 24.05.2010 13:31: > But after unpacking, I still cannot detect no-integer type object. 
> > >>> from primes import is_prime > >>> is_prime(7) > True > >>> is_prime(5.5) # should raise exception, but is rounded as 5 > __main__:1: DeprecationWarning: integer argument expected, got float > True > >>> is_prime(9.8) # should raise exception, but is rounded as 9 > False > > ================================= > static PyObject *is_prime(PyObject *self, PyObject *arg) > { > PyObject *value; > if (!PyArg_ParseTuple(arg, "O", value)) > return NULL; > > if (PyInt_Check(value)) { > // never raise (test case - in real its if(!PyInt_Check(value) which > always raise) > PyErr_SetString(PyExc_ValueError, "only integers are acceptable"); > return NULL; > } > int number; > if (!PyArg_ParseTuple(value, "i", &number)) > return NULL; > > int result = is_prime_solve(number); You should consider giving Cython a try, where the above spells cdef extern from "yourcode.h": bint is_prime_solve(int number) def is_prime(int number): """ >>> is_prime(7) True """ return is_prime_solve(number) It will raise a TypeError for you if a user passes something that can't coerce to an int, or an OverflowError for something that is too large for a C int. However, note that coercion to a C int means calling int() in Python, which also works for float values. If you *really* want to prevent non-int values, you can do this: def is_prime(number): if not isinstance(number, int): raise TypeError("please pass an int") return is_prime_solve(number) Stefan From ulf.worsoe at mosek.com Tue May 25 09:39:55 2010 From: ulf.worsoe at mosek.com (Ulf Worsoe) Date: Tue, 25 May 2010 09:39:55 +0200 Subject: [capi-sig] Some type inference for python? In-Reply-To: <20100521175243.90160@gmx.com> References: <20100521175243.90160@gmx.com> Message-ID: Hi, The C API does not provide any code-inspection functionality like that directly, but you can access the byte-code objects of a python-function (the "func_code" attribute) and use the "inspect" module from the Python library. If I understand you correctly, I don't think it will help you much to inspect the byte-code. It is nearly impossible in general to conclude, say, the return types of a function from the argument list given the python code for the function. I doubt that there is a feasible way to make type inference analysis on Python in a robust way since types are a bit vague in Python. -- Ulf Wors?e On Fri, May 21, 2010 at 7:47 PM, Issyk Kol wrote: > Hello.? > > I'd like to do some ML-like type inference for Python code. I'm not trying to resurrect the static typing vs. dynamic typing debate here. It's simply that I'd need to get static typing for Python for code-generation reasons. > > Therefore, I'd like to ask: Does the Python C API support a way to inspect the bytecode or the source code of a given piece of Python code? What approach would be promising enough to type things in ML-way (I'm unfortunately not interested in cartesian-product type inference), and what help may the C API provide to meet this goal? (Maybe I haven't looked far enough, but I could not discern clearly what could be useful...) > > (Sorry for posting in non-text-mode. Using a webmail because port 25 policies in this bloody country ar sooooo restrictive.) > > Thanks. > > Issyk Kol. 
> _______________________________________________ > capi-sig mailing list > capi-sig at python.org > http://mail.python.org/mailman/listinfo/capi-sig > From stefan_ml at behnel.de Tue May 25 10:24:11 2010 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 25 May 2010 10:24:11 +0200 Subject: [capi-sig] Some type inference for python? In-Reply-To: <20100521175243.90160@gmx.com> References: <20100521175243.90160@gmx.com> Message-ID: <4BFB892B.8050400@behnel.de> Issyk Kol, 21.05.2010 19:47: > I'd like to do some ML-like type inference for Python code. I'm not > trying to resurrect the static typing vs. dynamic typing debate here. > It's simply that I'd need to get static typing for Python for > code-generation reasons. I think your best bet is to look at an alternative implementation of Python, such as PyPy, Jython, IronPython or Cython. CPython doesn't do any type inference, but at least PyPy and Cython infer types to a certain extent, and I would expect the others to do it, too. There's also ShedSkin which is supposed to have a pretty good type inferer for (static) Python-like code. Might be enough for your purpose. In any case, the C-API won't help you here, so this is the wrong forum to discuss this. Stefan From vojta.rylko at seznam.cz Tue May 25 17:48:43 2010 From: vojta.rylko at seznam.cz (=?UTF-8?B?Vm9qdMSbY2ggUnlsa28=?=) Date: Tue, 25 May 2010 17:48:43 +0200 Subject: [capi-sig] New Here In-Reply-To: References: Message-ID: <4BFBF15B.1000605@seznam.cz> Dne 24.5.2010 17:18, Emeka napsal(a): > Hello All, > > Is there any tutorial I can read beside manual? > > Regards, > Emeka > _______________________________________________ > capi-sig mailing list > capi-sig at python.org > http://mail.python.org/mailman/listinfo/capi-sig > > Maybe simple real project could help: http://ginstrom.com/code/subdist.html (Quick Links -> installer package) I'm not author. Regards -- Vojt?ch Rylko vojta.rylko at seznam.cz Czech Republic From mal at egenix.com Wed May 26 12:45:32 2010 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 26 May 2010 12:45:32 +0200 Subject: [capi-sig] Unicode compatibility In-Reply-To: References: <4BF96B0D.8010400@behnel.de> <95D66BBB-FA03-40E1-9E61-C010EC006C9B@math.washington.edu> Message-ID: <4BFCFBCC.5020203@egenix.com> Daniel Stutzbach wrote: > Robert, Stefan, thank you for your feedback. > > How about the following variation, which I believe will address your > concerns: > > By default, Py_UNICODE will be a fully-specified type. In a nutshell, the > default will behave just like Python 2 or 3.1, except that trying to load a > mismatched module will raise an ImportError with a more helpful error > message (much friendlier to novice programmers). Cython would continue to > use this mode. > > Extension authors who want a Unicode-agnostic build can specify an option in > their setup.py that will instruct distutils to pass a -D_Py_UNICODE_AGNOSTIC > compiler flag to ensure that all of their .c files are built in > Unicode-independent mode. That way, the whole extension is compiled in the > same mode. That would be our (eGenix) preferred implementation variant as well. Building Unicode agnostic extensions should be a feature that the extension writers turn on explicitly, rather than being the default that has to be turned off. 
However, rather than using a distutils options to specify enable the agnostic mode, I would presume that extension writers simply write: #define _Py_UNICODE_AGNOSTIC 1 #include "Python.h" in their code and then add [build_ext] unicode-agnostic=1 to their setup.cfg. > It would indeed be great if package managers included the Unicode setting as > part of the platform type. PJE's proposed implementation of that feature ( > http://bit.ly/1bO62) would allow eggs to specify UCS2, UCS4, or "Don't > Care". My patch greatly increases the number of eggs that could label > themselves "Don't Care", reducing maintenance work for package maintainers > who like to distribute binary eggs [1]. In other words, they are > complimentary solutions. Rather than waiting for package managers to include support for this (I've been trying to get some awareness for this problem for years, without much success), it's probably better to just fix distutils to include a UCS2/UCS4 marker in the platform string. > [1] A quick Google search of PyPi reveals many packages offering Linux > binary eggs. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 26 2010) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2010-07-19: EuroPython 2010, Birmingham, UK 53 days to go ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From daniel at stutzbachenterprises.com Wed May 26 15:57:20 2010 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Wed, 26 May 2010 08:57:20 -0500 Subject: [capi-sig] Unicode compatibility In-Reply-To: <4BFCFBCC.5020203@egenix.com> References: <4BF96B0D.8010400@behnel.de> <95D66BBB-FA03-40E1-9E61-C010EC006C9B@math.washington.edu> <4BFCFBCC.5020203@egenix.com> Message-ID: On Wed, May 26, 2010 at 5:45 AM, M.-A. Lemburg wrote: > However, rather than using a distutils options to specify enable > the agnostic mode, I would presume that extension writers simply > write: > > #define _Py_UNICODE_AGNOSTIC 1 > #include "Python.h" > > in their code and then add > > [build_ext] > unicode-agnostic=1 > > to their setup.cfg. > I think I was much too vague when I said "distutils option". I fear that I implied a command-line option, which is not at all what I intended. I was picturing that the module author would include something like the following in their setup.py: Extension("foo", ["foo.c"], unicode_agnostic=True) which would arrange to add _Py_UNICODE_AGNOSTIC to their define_macros. The module author would not (and should not) define the macro themselves at the top of a .c file. By enabling it in setup.py, we guarantee that it will be defined when compiling all of the module's .c files or not at all. > Rather than waiting for package managers to include support > for this (I've been trying to get some awareness for this problem > for years, without much success), it's probably better to just fix > distutils to include a UCS2/UCS4 marker in the platform string. > In principle, I agree. 
I don't personally have enough familiarity with the innards of distutils to feel comfortable writing a patch that alters the platform string. -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC From support at sheepsystems.com Thu May 27 16:26:39 2010 From: support at sheepsystems.com (Jerry Krinock) Date: Thu, 27 May 2010 07:26:39 -0700 Subject: [capi-sig] Easy way to Return Value from 1 line of Embedded Python? Message-ID: <66C071FC-068A-4E5C-9C30-D3F2CDD20411@sheepsystems.com> In a C program I need to unpickle a Python pickle which is generated by someone's Python program. A Python newbie, I was able to do this, but only by writing a little python script file and loading it as a module. But reading from a script file is fragile, has lots of possible errors to handle as you can see below, and is silly since there's only one line of Python. I'd therefore like to create the Python module from a string constant, as with PyRun_SimpleString(), instead of reading a file. But I can't find a function like that which returns a value. Or I just don't know how. Certainly there must be a simpler way to do this... Thank you, Jerry Krinock ***** somePython.py ***************************** #!/usr/bin/env python from pickle import loads def unpickle(x): return loads(x) ***** My C "Proof of Concept" Function ********** #include // Running on Mac OS X char* PythonUnpickle(char* pickle) { PyObject *pName, *pModule, *pFunc; PyObject *pArgs, *pValue; char* unpickledString = NULL ; Py_Initialize(); PyRun_SimpleString("import sys"); PyRun_SimpleString("sys.path.append(\"/Users/jk/Documents/Programming/Python/\")"); pName = PyString_FromString("somePython"); pModule = PyImport_Import(pName); Py_DECREF(pName); if (pModule != NULL) { pFunc = PyObject_GetAttrString(pModule, "unpickle"); /* pFunc is a new reference */ if (pFunc && PyCallable_Check(pFunc)) { pArgs = PyTuple_New(1); // Create the argument to unpickle() pValue = PyString_FromString(pickle); if (!pValue) { Py_DECREF(pArgs); Py_DECREF(pModule); fprintf(stderr, "Cannot convert argument\n"); return NULL ; } // Set the argument into the pArgs tuple PyTuple_SetItem(pArgs, 0, pValue); // Run the python unpickle() function, get result pValue pValue = PyObject_CallObject(pFunc, pArgs) ; Py_DECREF(pArgs) ; // Convert the Python string pValue to a C string if (pValue != NULL) { unpickledString = PyString_AsString(pValue) ; Py_DECREF(pValue); } else { Py_DECREF(pFunc); Py_DECREF(pModule); PyErr_Print(); fprintf(stderr,"Call failed\n"); return NULL ; } } else { if (PyErr_Occurred()) PyErr_Print(); fprintf(stderr, "Cannot find 'unpickle' function\n") ; } Py_XDECREF(pFunc); Py_DECREF(pModule); } else { PyErr_Print(); fprintf(stderr, "Failed to load python script file\n"); } Py_Finalize() ; return unpickledString ; } From mal at egenix.com Fri May 28 10:54:38 2010 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 28 May 2010 10:54:38 +0200 Subject: [capi-sig] Easy way to Return Value from 1 line of Embedded Python? In-Reply-To: <66C071FC-068A-4E5C-9C30-D3F2CDD20411@sheepsystems.com> References: <66C071FC-068A-4E5C-9C30-D3F2CDD20411@sheepsystems.com> Message-ID: <4BFF84CE.7050005@egenix.com> Jerry Krinock wrote: > In a C program I need to unpickle a Python pickle which is generated by someone's Python program. A Python newbie, I was able to do this, but only by writing a little python script file and loading it as a module. 
But reading from a script file is fragile, has lots of possible errors to handle as you can see below, and is silly since there's only one line of Python. > > I'd therefore like to create the Python module from a string constant, as with PyRun_SimpleString(), instead of reading a file. But I can't find a function like that which returns a value. Or I just don't know how. > > Certainly there must be a simpler way to do this... You can compile the code into code object using Py_CompileString() and then pass this to PyEval_EvalCode() for execution. This will evaluate the code and return the resulting Python object. In your case, it's probably easier to just load the pickle module from C and call the loads() function directly rather than going through an extra layer of Python code. > Thank you, > > Jerry Krinock > > ***** somePython.py ***************************** > > #!/usr/bin/env python > > from pickle import loads > > def unpickle(x): > return loads(x) > > > ***** My C "Proof of Concept" Function ********** > > #include // Running on Mac OS X > > char* PythonUnpickle(char* pickle) { > PyObject *pName, *pModule, *pFunc; > PyObject *pArgs, *pValue; > char* unpickledString = NULL ; > > Py_Initialize(); > PyRun_SimpleString("import sys"); > PyRun_SimpleString("sys.path.append(\"/Users/jk/Documents/Programming/Python/\")"); > > pName = PyString_FromString("somePython"); > > pModule = PyImport_Import(pName); > Py_DECREF(pName); > > if (pModule != NULL) { > > pFunc = PyObject_GetAttrString(pModule, "unpickle"); > /* pFunc is a new reference */ > > if (pFunc && PyCallable_Check(pFunc)) { > pArgs = PyTuple_New(1); > > // Create the argument to unpickle() > pValue = PyString_FromString(pickle); > if (!pValue) { > Py_DECREF(pArgs); > Py_DECREF(pModule); > fprintf(stderr, "Cannot convert argument\n"); > return NULL ; > } > // Set the argument into the pArgs tuple > PyTuple_SetItem(pArgs, 0, pValue); > > // Run the python unpickle() function, get result pValue > pValue = PyObject_CallObject(pFunc, pArgs) ; > Py_DECREF(pArgs) ; > // Convert the Python string pValue to a C string > if (pValue != NULL) { > unpickledString = PyString_AsString(pValue) ; > Py_DECREF(pValue); > } > else { > Py_DECREF(pFunc); > Py_DECREF(pModule); > PyErr_Print(); > fprintf(stderr,"Call failed\n"); > return NULL ; > } > } > else { > if (PyErr_Occurred()) > PyErr_Print(); > fprintf(stderr, "Cannot find 'unpickle' function\n") ; > } > Py_XDECREF(pFunc); > > > Py_DECREF(pModule); > } > else { > PyErr_Print(); > fprintf(stderr, "Failed to load python script file\n"); > } > Py_Finalize() ; > > return unpickledString ; > } The string object you get from the Python function will only be allocated while the interpreter is initialized. In your example the unpickledString will point to unallocated memory when the function returns. It is not even guaranteed to still have the correct string data. To correct this, you will have to get the pointer to the string data, copy it to a buffer you allocate in your app and only then finalize the interpreter. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, May 28 2010) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2010-07-19: EuroPython 2010, Birmingham, UK 51 days to go ::: Try our new mxODBC.Connect Python Database Interface for free ! 
:::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From support at sheepsystems.com Sun May 30 07:38:02 2010 From: support at sheepsystems.com (Jerry Krinock) Date: Sat, 29 May 2010 22:38:02 -0700 Subject: [capi-sig] Easy way to Return Value from 1 line of Embedded Python? In-Reply-To: <4BFF84CE.7050005@egenix.com> References: <66C071FC-068A-4E5C-9C30-D3F2CDD20411@sheepsystems.com> <4BFF84CE.7050005@egenix.com> Message-ID: On 2010 May 28, at 01:54, M.-A. Lemburg wrote: > You can compile the code into code object using Py_CompileString() > and then pass this to PyEval_EvalCode() for execution. This > will evaluate the code and return the resulting Python object. Thank you, Mark-Andre. I had trouble getting that to work. Let's move on to your preferred answer. > In your case, it's probably easier to just load the pickle > module from C and call the loads() function directly rather > than going through an extra layer of Python code. I'm not sure if you meant to use PyFunction_New() or PyRun_String(). I got it to work using the latter. See code below. > The string object you get from the Python function will only be allocated while the interpreter is initialized. ... To correct this, you will have to get the pointer to the string data, copy it to a buffer you allocate in your app and only then finalize the interpreter. Thanks. I've corrected that now by creating and returning a Cocoa object. So, here is the working code using PyRun_String(). The NSString is the Cocoa object used in Mac OS X or iPhone OS. Otherwise, use your favorite string object. #import #include NSString* PythonStringCodeUnpickle(char* pickle) { NSString* answer = nil ; char* myErrorDesc = NULL ; Py_Initialize() ; // Create Python Namespace (dictionary of variables) PyObject* pythonStringArg = PyString_FromString(pickle); if (!pythonStringArg) { myErrorDesc = "Cannot convert string arg to Python\n" ; goto end ; } PyObject* pythonVarsDic = PyDict_New(); PyDict_SetItemString( pythonVarsDic, "__builtins__", PyEval_GetBuiltins()); PyDict_SetItemString( pythonVarsDic, "x", pythonStringArg) ; // Create "hard" source code string to unpickle // (For some strange reason, they call it loads() // The "s" in "loads" stands for "string".) char* pythonSource = "from pickle import loads\n\n" "y = loads(x)\n" ; // Run the python source code PyRun_String(pythonSource, Py_file_input, pythonVarsDic, pythonVarsDic) ; PyObject* unpickledPythonString = PyDict_GetItemString(pythonVarsDic, "y") ; if (!unpickledPythonString) { myErrorDesc = "Unpickling returned NULL\n" ; goto end ; } // Convert the unpickled Python string to a C string char* unpickledString = PyString_AsString(unpickledPythonString) ; Py_DECREF(unpickledPythonString); if (!unpickledString) { myErrorDesc = "Failed converting unpickled string to C string\n" ; goto end ; } // Convert the C string into a string object answer = [NSString stringWithUTF8String:unpickledString] ; end: if (myErrorDesc) { printf("%s\n", myErrorDesc) ; } if (PyErr_Occurred()) { PyErr_Print(); } Py_Finalize() ; return answer ; } int main(int argc, char *argv[]) { NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init] ; // The following string is an actual Python pickle. // Of course, one would never hard code such a thing in a real program. 
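    // (For reference: this is a protocol-0 pickle. 'V' pushes a
    // newline-terminated unicode string, 'p1' stores it in memo slot 1,
    // and '.' ends the pickle stream.)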
    // This is just a test :)
    char* s = "V/Users/jk/Dropbox2/Dropbox\np1\n.\n" ;

    printf("Unpickling using Python code from string:\n") ;
    printf(" Unpickled result: %s\n\n", [PythonStringCodeUnpickle(s) UTF8String]) ;
    // The correct answer for Unpickled result is:
    // "/Users/jk/Dropbox2/Dropbox"

    [pool release] ;

    return 0 ;
}

From mal at egenix.com Sun May 30 23:28:01 2010
From: mal at egenix.com (M.-A. Lemburg)
Date: Sun, 30 May 2010 23:28:01 +0200
Subject: [capi-sig] Easy way to Return Value from 1 line of Embedded Python?
In-Reply-To: 
References: <66C071FC-068A-4E5C-9C30-D3F2CDD20411@sheepsystems.com> <4BFF84CE.7050005@egenix.com>
Message-ID: <4C02D861.8080007@egenix.com>

Jerry Krinock wrote:
>
> On 2010 May 28, at 01:54, M.-A. Lemburg wrote:
>
>> You can compile the code into a code object using Py_CompileString()
>> and then pass this to PyEval_EvalCode() for execution. This
>> will evaluate the code and return the resulting Python object.
>
> Thank you, Marc-Andre. I had trouble getting that to work. Let's move on to your preferred answer.
>
>> In your case, it's probably easier to just load the pickle
>> module from C and call the loads() function directly rather
>> than going through an extra layer of Python code.
>
> I'm not sure if you meant to use PyFunction_New() or PyRun_String(). I got it to work using the latter. See code below.

I was thinking of PyImport_ImportModule() to import the pickle module,
PyModule_GetDict() and PyDict_GetItemString() to get the function
object and finally PyEval_CallFunction() to call the function.

You can also use a short-cut directly from the module object to the
function call by using PyEval_CallMethod(pickle_module, "loads", ...) -
top-level symbols in a module are available as attributes of the
module object, e.g.

pickle_module = PyImport_ImportModule("pickle");
result = PyEval_CallMethod(pickle_module, "loads", "s#", ...);

This avoids having to compile any Python code just to unpickle a
string.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source (#1, May 30 2010)
>>> Python/Zope Consulting and Support ... http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/
________________________________________________________________________
2010-07-19: EuroPython 2010, Birmingham, UK 49 days to go

::: Try our new mxODBC.Connect Python Database Interface for free ! ::::

eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
Registered at Amtsgericht Duesseldorf: HRB 46611
http://www.egenix.com/company/contact/

From stefan_ml at behnel.de Mon May 31 08:36:45 2010
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Mon, 31 May 2010 08:36:45 +0200
Subject: [capi-sig] Easy way to Return Value from 1 line of Embedded Python?
In-Reply-To: <66C071FC-068A-4E5C-9C30-D3F2CDD20411@sheepsystems.com>
References: <66C071FC-068A-4E5C-9C30-D3F2CDD20411@sheepsystems.com>
Message-ID: <4C0358FD.3000806@behnel.de>

Jerry Krinock, 27.05.2010 16:26:
> In a C program I need to unpickle a Python pickle which is generated by someone's Python program. A Python newbie, I was able to do this, but only by writing a little python script file and loading it as a module. But reading from a script file is fragile, has lots of possible errors to handle as you can see below, and is silly since there's only one line of Python.
> > I'd therefore like to create the Python module from a string constant, as with PyRun_SimpleString(), instead of reading a file. But I can't find a function like that which returns a value. Or I just don't know how. > > Certainly there must be a simpler way to do this... > > Thank you, > > Jerry Krinock > > ***** somePython.py ***************************** > > #!/usr/bin/env python > > from pickle import loads > > def unpickle(x): > return loads(x) Try compiling this with Cython, the generated C code will show you what you need to do. Stefan From amnorvend at gmail.com Mon May 31 18:15:34 2010 From: amnorvend at gmail.com (Jason Baker) Date: Mon, 31 May 2010 11:15:34 -0500 Subject: [capi-sig] Python to Scheme type conversion in C? Message-ID: I'm working on a Python-to-mzscheme binding using mzscheme's ffi library (mzscheme's rough equivalent of the Python ctypes library). I'm looking for a way to convert Python objects into Scheme values. Essentially what I'm trying to do is say "Is this a Python integer? Ok, convert it to a Scheme integer." or "Is this a Python string? Ok, convert it into a Scheme string." ... etc. What is the best way to do this? My first intuition was to call Py*_Check to determine the type, but it turns out that's a macro that can't be used in non-c code. I'm sure I could translate that into the appropriate C code, but that gives me a bad feeling. Right now, I'm looking for simple and easy more than efficient and complete (but I would also like to know what the efficient and complete approach would be). Can anyone tell me what the best way to tackle this problem is? From stefan_ml at behnel.de Mon May 31 21:05:30 2010 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 31 May 2010 21:05:30 +0200 Subject: [capi-sig] Python to Scheme type conversion in C? In-Reply-To: References: Message-ID: <4C04087A.4060704@behnel.de> Jason Baker, 31.05.2010 18:15: > I'm working on a Python-to-mzscheme binding using mzscheme's ffi > library (mzscheme's rough equivalent of the Python ctypes library). > I'm looking for a way to convert Python objects into Scheme values. > Essentially what I'm trying to do is say "Is this a Python integer? > Ok, convert it to a Scheme integer." or "Is this a Python string? > Ok, convert it into a Scheme string." ... etc. > > What is the best way to do this? My first intuition was to call > Py*_Check to determine the type, but it turns out that's a macro that > can't be used in non-c code. I'm sure I could translate that into the > appropriate C code, but that gives me a bad feeling. Those are macros because that makes them really, really fast, especially in Py3. However, their definition changes between Python versions, so replacing them by regular C code is really not a good idea. In general, I wouldn't try to make the wrapper too generic. You might want to inspect the underlying Scheme code (potentially at runtime), and infer the possible types (or the required conversion) from that. > Right now, I'm > looking for simple and easy more than efficient and complete (but I > would also like to know what the efficient and complete approach would > be). Have you considered integrating this with Cython via a thin C bridge, instead of talking to Python's C-API directly? Without knowing anything about the FFI you are using here, I suspect that it likely wouldn't be a generic wrapper in that case. Instead, it would allow users to write specialised and well optimised Scheme wrappers for their specific use case. 
So far, Cython supports calling into C, C++ and Fortran, where the Fortran wrapper is also implemented as a C-bridge (via fwrap). A similar Scheme binding would allow users to take advantage of Cython's static typing features, so that your code wouldn't have to guess types in the first place. Stefan From amnorvend at gmail.com Mon May 31 22:31:23 2010 From: amnorvend at gmail.com (Jason Baker) Date: Mon, 31 May 2010 15:31:23 -0500 Subject: [capi-sig] Python to Scheme type conversion in C? In-Reply-To: <4C0416A8.2080709@behnel.de> References: <4C04087A.4060704@behnel.de> <4C0416A8.2080709@behnel.de> Message-ID: On Mon, May 31, 2010 at 3:06 PM, Stefan Behnel wrote: > [please always reply to the list] Sorry. :-/ > Jason Baker, 31.05.2010 21:45: >> >> On Mon, May 31, 2010 at 2:05 PM, Stefan Behnel wrote: >>> >>> Jason Baker, 31.05.2010 18:15: >>>> >>>> I'm working on a Python-to-mzscheme binding using mzscheme's ffi >>>> library (mzscheme's rough equivalent of the Python ctypes library). > > Since you didn't provide the link (and "mzscheme ffi" doesn't get me > anything that looks right at first sight), am I right in guessing that you > might be referring to this? > > http://download.plt-scheme.org/doc/351/html/foreign/ That's an older version of the docs, but yes. Here's a more recent link: http://docs.plt-scheme.org/foreign/index.html >>>> I'm looking for a way to convert Python objects into Scheme values. >>>> Essentially what I'm trying to do is say "Is this a Python ?integer? >>>> Ok, convert it to a Scheme integer." ?or "Is this a Python string? >>>> Ok, convert it into a Scheme string." ?... etc. >>>> >>>> What is the best way to do this? ?My first intuition was to call >>>> Py*_Check to determine the type, but it turns out that's a macro that >>>> can't be used in non-c code. ?I'm sure I could translate that into the >>>> appropriate C code, but that gives me a bad feeling. >>> >>> Those are macros because that makes them really, really fast, especially >>> in >>> Py3. However, their definition changes between Python versions, so >>> replacing >>> them by regular C code is really not a good idea. >>> >>> In general, I wouldn't try to make the wrapper too generic. You might >>> want >>> to inspect the underlying Scheme code (potentially at runtime), and infer >>> the possible types (or the required conversion) from that. >> >> Perhaps I should clarify. For now, I'm just trying to embed the python >> interpreter in Scheme. ?Thus, I know what types the scheme values are, >> and I know how to translate them into the appropriate Python values. >> What I'm trying to figure out is how to go the other way. ?That is, >> how do I figure out what type a PyObject is so I know how to create >> the appropriate Scheme value? > > Why not write a type checking cascade in plain C or Cython? I suppose I could, but I'd rather write it in pure Scheme or Python if at all possible. If that's not efficient enough, I can look into adding C into the mix. >>>> Right now, I'm >>>> looking for simple and easy more than efficient and complete (but I >>>> would also like to know what the efficient and complete approach would >>>> be). >>> >>> Have you considered integrating this with Cython via a thin C bridge, >>> instead of talking to Python's C-API directly? Without knowing anything >>> about the FFI you are using here, I suspect that it likely wouldn't be a >>> generic wrapper in that case. Instead, it would allow users to write >>> specialised and well optimised Scheme wrappers for their specific use >>> case. 
>>> So far, Cython supports calling into C, C++ and Fortran, where the >>> Fortran >>> wrapper is also implemented as a C-bridge (via fwrap). A similar Scheme >>> binding would allow users to take advantage of Cython's static typing >>> features, so that your code wouldn't have to guess types in the first >>> place. >> >> While I'm sure this approach works for C, C++, and Fortran, I'm afraid >> I don't see how this would apply to Scheme. ?Scheme is a dynamically >> typed language just like Python. ?Why do I want a layer of static >> typing in between two dynamically typed languages? > > Scheme is a strictly typed language, though, and your Scheme code won't run > with (or even accept) arbitrary Python input value types, just like your > Python code won't run with arbitrary Scheme value types as input. So you > have to draw the line somewhere. And since you are using an FFI, which, I > assume, passes through C anyway, you might just as well restrict your glue > code to suitable types. I suppose I could use tagged pointers[1], but I was wanting something more generic (in other words, an interface that automatically does the right thing). Perhaps I'm biting off more than I can chew though. [1] http://docs.plt-scheme.org/foreign/Derived_Utilities.html#(part._foreign~3atagged-pointers) From stefan_ml at behnel.de Mon May 31 22:06:00 2010 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 31 May 2010 22:06:00 +0200 Subject: [capi-sig] Python to Scheme type conversion in C? In-Reply-To: References: <4C04087A.4060704@behnel.de> Message-ID: <4C0416A8.2080709@behnel.de> [please always reply to the list] Jason Baker, 31.05.2010 21:45: > On Mon, May 31, 2010 at 2:05 PM, Stefan Behnel wrote: >> Jason Baker, 31.05.2010 18:15: >>> >>> I'm working on a Python-to-mzscheme binding using mzscheme's ffi >>> library (mzscheme's rough equivalent of the Python ctypes library). Since you didn't provide the link (and "mzscheme ffi" doesn't get me anything that looks right at first sight), am I right in guessing that you might be referring to this? http://download.plt-scheme.org/doc/351/html/foreign/ >>> I'm looking for a way to convert Python objects into Scheme values. >>> Essentially what I'm trying to do is say "Is this a Python integer? >>> Ok, convert it to a Scheme integer." or "Is this a Python string? >>> Ok, convert it into a Scheme string." ... etc. >>> >>> What is the best way to do this? My first intuition was to call >>> Py*_Check to determine the type, but it turns out that's a macro that >>> can't be used in non-c code. I'm sure I could translate that into the >>> appropriate C code, but that gives me a bad feeling. >> >> Those are macros because that makes them really, really fast, especially in >> Py3. However, their definition changes between Python versions, so replacing >> them by regular C code is really not a good idea. >> >> In general, I wouldn't try to make the wrapper too generic. You might want >> to inspect the underlying Scheme code (potentially at runtime), and infer >> the possible types (or the required conversion) from that. > > Perhaps I should clarify. For now, I'm just trying to embed the python > interpreter in Scheme. Thus, I know what types the scheme values are, > and I know how to translate them into the appropriate Python values. > What I'm trying to figure out is how to go the other way. That is, > how do I figure out what type a PyObject is so I know how to create > the appropriate Scheme value? Why not write a type checking cascade in plain C or Cython? 
>>> Right now, I'm >>> looking for simple and easy more than efficient and complete (but I >>> would also like to know what the efficient and complete approach would >>> be). >> >> Have you considered integrating this with Cython via a thin C bridge, >> instead of talking to Python's C-API directly? Without knowing anything >> about the FFI you are using here, I suspect that it likely wouldn't be a >> generic wrapper in that case. Instead, it would allow users to write >> specialised and well optimised Scheme wrappers for their specific use case. >> So far, Cython supports calling into C, C++ and Fortran, where the Fortran >> wrapper is also implemented as a C-bridge (via fwrap). A similar Scheme >> binding would allow users to take advantage of Cython's static typing >> features, so that your code wouldn't have to guess types in the first place. > > While I'm sure this approach works for C, C++, and Fortran, I'm afraid > I don't see how this would apply to Scheme. Scheme is a dynamically > typed language just like Python. Why do I want a layer of static > typing in between two dynamically typed languages? Scheme is a strictly typed language, though, and your Scheme code won't run with (or even accept) arbitrary Python input value types, just like your Python code won't run with arbitrary Scheme value types as input. So you have to draw the line somewhere. And since you are using an FFI, which, I assume, passes through C anyway, you might just as well restrict your glue code to suitable types. Stefan From stefan_ml at behnel.de Mon May 31 23:12:52 2010 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 31 May 2010 23:12:52 +0200 Subject: [capi-sig] Python to Scheme type conversion in C? In-Reply-To: References: <4C04087A.4060704@behnel.de> <4C0416A8.2080709@behnel.de> Message-ID: <4C042654.5040402@behnel.de> Jason Baker, 31.05.2010 22:31: > On Mon, May 31, 2010 at 3:06 PM, Stefan Behnel wrote: >> Jason Baker, 31.05.2010 21:45: >>> On Mon, May 31, 2010 at 2:05 PM, Stefan Behnel wrote: >>>> Jason Baker, 31.05.2010 18:15: >>>>> I'm looking for a way to convert Python objects into Scheme values. >>>>> Essentially what I'm trying to do is say "Is this a Python integer? >>>>> Ok, convert it to a Scheme integer." or "Is this a Python string? >>>>> Ok, convert it into a Scheme string." ... etc. >>>>> >>>>> What is the best way to do this? My first intuition was to call >>>>> Py*_Check to determine the type, but it turns out that's a macro that >>>>> can't be used in non-c code. I'm sure I could translate that into the >>>>> appropriate C code, but that gives me a bad feeling. >>>> >>>> Those are macros because that makes them really, really fast, especially >>>> in >>>> Py3. However, their definition changes between Python versions, so >>>> replacing >>>> them by regular C code is really not a good idea. >>>> >>>> In general, I wouldn't try to make the wrapper too generic. You might >>>> want >>>> to inspect the underlying Scheme code (potentially at runtime), and infer >>>> the possible types (or the required conversion) from that. >>> >>> Perhaps I should clarify. For now, I'm just trying to embed the python >>> interpreter in Scheme. Thus, I know what types the scheme values are, >>> and I know how to translate them into the appropriate Python values. >>> What I'm trying to figure out is how to go the other way. That is, >>> how do I figure out what type a PyObject is so I know how to create >>> the appropriate Scheme value? >> >> Why not write a type checking cascade in plain C or Cython? 
> > I suppose I could, but I'd rather write it in pure Scheme or Python if > at all possible. If that's not efficient enough, I can look into > adding C into the mix. You could build a dict that maps any combination of a builtin Python type and a Scheme type to a way to convert the first to the second. That might even be faster than a type checking cascade in C. Structured types would fall back to the standard Python type conversion rules. Stefan
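For reference, the "type checking cascade in plain C" suggested earlier in this thread might look roughly like the sketch below (Python 2.x C API; the scheme_tag enum and the classify() function are purely illustrative names, not part of any existing library or FFI):

#include <Python.h>

/* Illustrative tags that the Scheme side of the FFI could dispatch on. */
typedef enum {
    AS_NONE, AS_INT, AS_FLOAT, AS_STRING, AS_SEQUENCE, AS_DICT, AS_OPAQUE
} scheme_tag;

/* Map a PyObject to a conversion tag; anything unrecognised is passed
 * through unchanged, e.g. as an opaque tagged pointer. */
scheme_tag classify(PyObject* obj)
{
    if (obj == Py_None)                              return AS_NONE;
    if (PyInt_Check(obj) || PyLong_Check(obj))       return AS_INT;
    if (PyFloat_Check(obj))                          return AS_FLOAT;
    if (PyString_Check(obj) || PyUnicode_Check(obj)) return AS_STRING;
    if (PyList_Check(obj) || PyTuple_Check(obj))     return AS_SEQUENCE;
    if (PyDict_Check(obj))                           return AS_DICT;
    return AS_OPAQUE;
}

Because PyInt_Check() and friends are macros rather than functions, wrapping them in a small C helper like this (or its Cython equivalent) is also the usual way to make them callable from an FFI that can only bind real C functions.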