From gr at componic.co.nz Mon Oct 3 04:54:22 2011 From: gr at componic.co.nz (Glenn Ramsey) Date: Mon, 03 Oct 2011 15:54:22 +1300 Subject: [C++-sig] Linking debug/release using MSVC Message-ID: <4E8923DE.10905@componic.co.nz> When making a debug build (using MSVC on Windows) of an extension module using Boost.Python it is possible to link with a release build of pythonXX.lib. However this appears to not be possible for modules wrapped using SIP or SWIG E.g. [1]. How does Boost.Python do this? The reason I ask is that I would like to be able to do the same with SIP, if that is possible, to avoid having to link the debug python lib and having to use debug builds of all the dependencies of the program. Glenn [1] From dave at boostpro.com Mon Oct 3 14:34:58 2011 From: dave at boostpro.com (Dave Abrahams) Date: Mon, 03 Oct 2011 08:34:58 -0400 Subject: [C++-sig] [Boost.Python v3] Planning and Logistics References: <4E56B7A6.2030008@gmail.com> <4E5A61B2.17574.3A790CA9@s_sourceforge.nedprod.com> Message-ID: on Sun Aug 28 2011, "Niall Douglas" wrote: > On 27 Aug 2011 at 12:29, Dave Abrahams wrote: > >> In that case, if I were you, I would actually start using Git with the >> modularized / CMake-ified Boost at http://github.com/boost-lib. > > If you do go for git, I have found repo embedded per-branch issue > tracking (e.g. http://bugseverywhere.org/) to be a god send for > productivity because you can raise issues with your own code without > bothering the mainline issue tracker about branch specific (and > indeed often personal) issues. It has made as much difference to my > productivity as adopting git did because I no longer need to keep > (and often misplace) post it notes reminding me of things to do. Last time I looked at BE it was not really ready for primetime. You're motivating me to check it out again. *checks it out* Hmm... not sure this will get beyond the personal, though: it lacks so much of what people have come to expect, like hyperlinking, and especially an *interactive* web interface for bug creation. > I even coded up a GUI for it for the TortoiseXXX family of revision > tracking GUIs which you can find at > http://www.nedprod.com/programs/Win32/BEurtle/. This lets you mark > off BE issue fixes with GIT/whatever commits. Cool! Maybe with your GUI it would be enough to get people to use it. Still needs a web interface for bug creation. I'm going to have to keep a closer eye on BE. -- Dave Abrahams BoostPro Computing http://www.boostpro.com From s_sourceforge at nedprod.com Mon Oct 3 16:11:28 2011 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Mon, 03 Oct 2011 15:11:28 +0100 Subject: [C++-sig] Off-topic: Personal bug tracking (was: Re: [Boost.Python v3] Planning and Logistics) In-Reply-To: References: <4E56B7A6.2030008@gmail.com>, Message-ID: <4E89C290.29928.2482FD58@s_sourceforge.nedprod.com> On 3 Oct 2011 at 8:34, Dave Abrahams wrote: > Last time I looked at BE it was not really ready for primetime. You're > motivating me to check it out again. > > *checks it out* > > Hmm... not sure this will get beyond the personal, though: it lacks so > much of what people have come to expect, like hyperlinking, and > especially an *interactive* web interface for bug creation. I agree it isn't fit to go beyond the personal. I'd even say, actually, that when you go beyond bug tracking within a (very) small group it's time for Redmine or Trac. I'd also say personal bug trackers are much more of a design/development tool than a deployment/support tool. 
They're really a type of post-it note tied into the repository. As I know you know Dave, you also often get emails containing good ideas for some software project of yours, but right now you don't have the time to really look into them. Here BE also excels - you dump out the email and attach it to a BE feature issue. That prevents you forgetting about them, or much later on wasting time searching your email not quite remembering the correct search terms. > > I even coded up a GUI for it for the TortoiseXXX family of revision > > tracking GUIs which you can find at > > http://www.nedprod.com/programs/Win32/BEurtle/. This lets you mark > > off BE issue fixes with GIT/whatever commits. > > Cool! Maybe with your GUI it would be enough to get people to use it. > Still needs a web interface for bug creation. I'm going to have to keep > a closer eye on BE. I develop for Windows first and POSIX later, so something which integrates into the Tortoise family of SCM GUIs is much more important to me than a web interface. I really, really wish KDE would integrate proper multi-SCM support into itself too. I agree though that a JSON-RPC interface to BE would be a very useful addition, not just to aid BEurtle's performance, and of course from that a dynamic web interface becomes much easier. My thread regarding the topic (http://void.printf.net/pipermail/be-devel/2011- September/thread.html) went nowhere though. Again, I might add the support myself for the next BEurtle release and upload it as a fork to pypi as BE's author is unwilling to permit BE to go onto pypi. [BTW to anyone interested, my GUI needs a new release as several show stopping bugs have since been found. Hopefully I'll get a chance before Christmas, meanwhile GIT HEAD has already fixed the worst of them] Niall -- Technology & Consulting Services - ned Productions Limited. http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Company no: 472909. From dave at boostpro.com Mon Oct 3 17:13:15 2011 From: dave at boostpro.com (Dave Abrahams) Date: Mon, 03 Oct 2011 11:13:15 -0400 Subject: [C++-sig] Off-topic: Personal bug tracking References: <4E56B7A6.2030008@gmail.com> <4E89C290.29928.2482FD58@s_sourceforge.nedprod.com> Message-ID: on Mon Oct 03 2011, "Niall Douglas" wrote: > On 3 Oct 2011 at 8:34, Dave Abrahams wrote: > >> Last time I looked at BE it was not really ready for primetime. You're >> motivating me to check it out again. >> >> *checks it out* >> >> Hmm... not sure this will get beyond the personal, though: it lacks so >> much of what people have come to expect, like hyperlinking, and >> especially an *interactive* web interface for bug creation. > > I agree it isn't fit to go beyond the personal. I'd even say, > actually, that when you go beyond bug tracking within a (very) small > group it's time for Redmine or Trac. > > I'd also say personal bug trackers are much more of a > design/development tool than a deployment/support tool. They're > really a type of post-it note tied into the repository. I'd like to try it. But now, suppose you have a project on github and you want to make it easy for people to report bugs. Clearly you need to enable the GitHub issue tracker, right? How do you integrate that with your be workflow? > As I know you know Dave, you also often get emails containing good > ideas for some software project of yours, but right now you don't > have the time to really look into them. Here BE also excels - you > dump out the email and attach it to a BE feature issue. 
That prevents > you forgetting about them, or much later on wasting time searching > your email not quite remembering the correct search terms. Yeah, I can see the appeal. >> > I even coded up a GUI for it for the TortoiseXXX family of revision >> > tracking GUIs which you can find at >> > http://www.nedprod.com/programs/Win32/BEurtle/. This lets you mark >> > off BE issue fixes with GIT/whatever commits. >> >> Cool! Maybe with your GUI it would be enough to get people to use it. >> Still needs a web interface for bug creation. I'm going to have to keep >> a closer eye on BE. > > I develop for Windows first and POSIX later, so something which > integrates into the Tortoise family of SCM GUIs is much more > important to me than a web interface. I really, really wish KDE would > integrate proper multi-SCM support into itself too. > > I agree though that a JSON-RPC interface to BE would be a very useful > addition, not just to aid BEurtle's performance, and of course from > that a dynamic web interface becomes much easier. My thread regarding > the topic (http://void.printf.net/pipermail/be-devel/2011- > September/thread.html) went nowhere though. Again, I might add the > support myself for the next BEurtle release and upload it as a fork > to pypi as BE's author is unwilling to permit BE to go onto pypi. Seriously? That's not a very good sign. Why not? I found installing BE to be a bit of a struggle. -- Dave Abrahams BoostPro Computing http://www.boostpro.com From s_sourceforge at nedprod.com Tue Oct 4 17:59:11 2011 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Tue, 04 Oct 2011 16:59:11 +0100 Subject: [C++-sig] Off-topic: Personal bug tracking In-Reply-To: References: <4E56B7A6.2030008@gmail.com>, Message-ID: <4E8B2D4F.28025.2A18442E@s_sourceforge.nedprod.com> On 3 Oct 2011 at 11:13, Dave Abrahams wrote: > > I'd also say personal bug trackers are much more of a > > design/development tool than a deployment/support tool. They're > > really a type of post-it note tied into the repository. > > I'd like to try it. But now, suppose you have a project on github and > you want to make it easy for people to report bugs. Clearly you need to > enable the GitHub issue tracker, right? How do you integrate that with > your be workflow? That problem, for me personally at least, comes up infrequently enough that copying & pasting is sufficient for now. I absolutely agree that it would be simply great that should say a github bug be assigned to dabrahams on github then it automagically gets pushed into your personal branch. You'd probably need a custom tracker plugin for github though given github's design (github is weird and doesn't appear to export its issues as RSS). Redmine and Trac are much more accommodating. A post-fetch hook in git could pull the RSS bug feed, scan it for things referencing you and push them into your personal BE easily enough once it had a JSON-RPC interface. > > I agree though that a JSON-RPC interface to BE would be a very useful > > addition, not just to aid BEurtle's performance, and of course from > > that a dynamic web interface becomes much easier. My thread regarding > > the topic (http://void.printf.net/pipermail/be-devel/2011- > > September/thread.html) went nowhere though. Again, I might add the > > support myself for the next BEurtle release and upload it as a fork > > to pypi as BE's author is unwilling to permit BE to go onto pypi. > > Seriously? That's not a very good sign. Why not? I found installing > BE to be a bit of a struggle. 
I agree :) not helped by unspecified dependencies.

BE's author feels that BE ought to be supplied as a vendor distro package, e.g. .deb for debian, .rpm for Fedora etc. He feels that anything which parallels that is unhelpful. He did mention that once BE is ported to Python3k he'd support using distribute.py.

In http://void.printf.net/pipermail/be-devel/2011-September/000839.html I pointed out that Linux still occupies a small minority of developer eyeballs. Windows is still the overwhelming majority, and pypi is the best you have there. I argued for a pypi-based windows distro, especially as it makes dealing with Windows Installer easier (I hope you have never had to deal with Windows Installer. It sucks, even with WiX to hand).

In http://void.printf.net/pipermail/be-devel/2011-September/000845.html I asked for permission to upload a windows fork of BE fixing several show-stopping Windows-specific bugs to pypi, but there was no answer. I'll give it a while longer out of grace.

I might make a new BEurtle release a Christmas holiday personal project :) I originally wrote nedmalloc that way too.

Niall

--
Technology & Consulting Services - ned Productions Limited.
http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Company no: 472909.

From brandsmeier at gmx.de  Wed Oct  5 09:08:08 2011
From: brandsmeier at gmx.de (Holger Brandsmeier)
Date: Wed, 5 Oct 2011 09:08:08 +0200
Subject: [C++-sig] Custom smart pointer with const Types
Message-ID:

Dear list,

how should I export functions to python which return smart pointers to const types, e.g. shared_ptr<const T>?

For my own classes I always tried to avoid this problem by always providing a method which returns shared_ptr<T>.

Now I need to export the following method in a class provided by some other software package (Trilinos). Its implementation I do not want to change; the function is declared as

  Teuchos::RCP< const Teuchos::Comm< int > > getComm () const

(if you need details: http://trilinos.sandia.gov/packages/docs/r10.6/packages/tpetra/doc/html/classTpetra_1_1MpiPlatform.html )

I believe I already exported the custom smart pointer `Teuchos::RCP` to python, and I also exported the class `Teuchos::Comm< int >` to python, but I get the error

  No to_python (by-value) converter found for C++ type: Teuchos::RCP<Teuchos::Comm<int> const>

which is perfectly true, as I did not export the class `const Teuchos::Comm<int>`.

I briefly tried to also export the const version of this class (all methods that I need are available for the const version), but I failed exporting the class with varying error messages, depending on how I tried to export it. I realized that I don't know how, or even if, I should export a const version of a type.

Is there another workaround to this problem? Is there something I'm missing in the implementation of my custom smart pointer?

I could wrap the function getComm() above to cast away the const'ness, but do I need to?

I also found some old messages on the list titled "[Boost.Python] shared_ptr" with some workaround proposed by providing get_pointer for const T, but I believe that I have a different problem that I can not solve by modifying get_pointer.
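For concreteness, the const-stripping wrapper mentioned here might look roughly like the sketch below. Everything in it is illustrative: `Platform` is a hypothetical stand-in for the wrapped Trilinos class, the to-python converter for Teuchos::RCP<Teuchos::Comm<int> > is assumed to be registered elsewhere, and Teuchos::rcp_const_cast is what removes the const.

    #include <boost/python.hpp>
    #include <Teuchos_RCP.hpp>
    #include <Teuchos_Comm.hpp>

    // Hypothetical stand-in for the real Trilinos class; only the getComm()
    // signature matters for this sketch.
    struct Platform
    {
        Teuchos::RCP<const Teuchos::Comm<int> > getComm() const
        {
            return Teuchos::null;  // placeholder body
        }
    };

    // Thin wrapper that strips the const so the (already registered) converter
    // for Teuchos::RCP<Teuchos::Comm<int> > can handle the return value.
    Teuchos::RCP<Teuchos::Comm<int> > getCommNonConst(const Platform& p)
    {
        return Teuchos::rcp_const_cast<Teuchos::Comm<int> >(p.getComm());
    }

    BOOST_PYTHON_MODULE(platform_example)
    {
        boost::python::class_<Platform>("Platform")
            .def("getComm", &getCommNonConst);
    }

With this arrangement Python only ever sees the non-const smart pointer, so no converter for the const form is needed.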
-Holger From dave at boostpro.com Wed Oct 5 13:21:48 2011 From: dave at boostpro.com (Dave Abrahams) Date: Wed, 05 Oct 2011 07:21:48 -0400 Subject: [C++-sig] [Boost.Python v3] Conversions and Registries In-Reply-To: <4E78AC11.5043.B0CBE312@s_sourceforge.nedprod.com> (Niall Douglas's message of "Tue, 20 Sep 2011 16:06:57 +0100") References: <4E77AE2F.3070702@gmail.com> <4E78AC11.5043.B0CBE312@s_sourceforge.nedprod.com> Message-ID: on Tue Sep 20 2011, "Niall Douglas" wrote: > On 19 Sep 2011 at 17:03, Jim Bosch wrote: > >> I'd like to see support for static, template-based conversions. These >> would be defined by [partial-]specializing a traits class, and I tend to >> think they should only be invoked after attempting all registry-based >> conversions. > > Surely not! You'd want to let template specialisaton be the first > point of call so the compiler can compile in obvious conversions, > *then* and only then do you go to a runtime registry. I don't understand why you guys would want compile-time converters at all, really. Frankly, I think they should all be eliminated. They complicate the model Boost.Python needs to support and cause confusion when the built-in ones mask runtime conversions. > This also lets one override the runtime registry when needed in the > local compiland. I'm not against having another set of template > specialisations do something should the first set of specialisations > fail, and/or the runtime registry lookup fails. There are better ways to deal with conversion specialization, IMO. The runtime registry should be scoped, and it should be possible to find the "nearest eligible converter" based on the python module hierarchy. >> Users would have to include the same headers in groups of >> interdependent modules to avoid specializing the same traits class >> multiple times in different ways; I can't think of a way to protect them >> from this, but template-based specializations are a sufficiently >> advanced featured that I'm comfortable leaving it up to users to avoid >> this problem. > > Just make sure what you do works with precompiled headers :) Another problem that you avoid by not supporting compile-time selection of converters. >> 3) Considering that we will have a "best-match" overloading system, what >> should take precedence, an inexact match in a module-specific registry, >> or an exact match in a global registry? (Clearly this is a moot point >> for to-Python conversion). Nearer scopes should mask more distant scopes. This is unfortunately necessary, or you get unpredictable results depending on the context in which you're running (all the other modules in the system). > Imagine the following. Program A loads DLL B and DLL C. DLL B is > dependent on DLL D which uses BPL. DLL C is dependent on DLL E which > uses BPL. Jeez, I'm going to have to graph this A / \ B C | | D E \ / BPL > DLL D tells BPL that class foo is implicitly convertible with an > integer. > > DLL E tells BPL that class foo is actually a thin wrapper for > std::string. > > Right now with present BPL, we have to load two copies of BPL, one > for DLL D and one for DLL E. They maintain separate type registries, > so all is good. That's not correct. Boost.Python was designed to deal with scenarios like this and be run as a single instance in such a system, with a single registry. > But what if DLL B returns a python function to Program A, which then > installs it as a callback with DLL C? OMG, could you make this more convoluted, please? 
> In the normal case, BPL code in DLL E will call into BPL code DLL D > and all is well. > > But what if the function in DLL D throws an exception? > > This gets converted into a C++ exception by throwing > boost::error_already_set. > > Now the C++ runtime must figure where to send the exception. But what > is the C++ runtime supposed to do with such an exception type? It > isn't allowed to see the copy of BPL living in DLL E, so it will fire > the exception type into DLL D where it doesn't belong. At this point, > the program will almost certainly segfault. Sorry, you completely lost me here. > As I mentioned earlier, this is a very semantically similar problem > to supporting multiple python interpreters anyway with each calling > into one another. How exactly is one python interpreter supposed to "call into" another one? Are you suggesting they have their own threads and one blocks to wait for the other, or is it something completely different. -- Dave Abrahams BoostPro Computing http://www.boostpro.com From stefan at seefeld.name Wed Oct 5 15:03:48 2011 From: stefan at seefeld.name (Stefan Seefeld) Date: Wed, 05 Oct 2011 09:03:48 -0400 Subject: [C++-sig] [Boost.Python v3] Conversions and Registries In-Reply-To: References: <4E77AE2F.3070702@gmail.com> <4E78AC11.5043.B0CBE312@s_sourceforge.nedprod.com> Message-ID: <4E8C55B4.8060204@seefeld.name> On 10/05/2011 07:21 AM, Dave Abrahams wrote: > on Tue Sep 20 2011, "Niall Douglas" wrote: > >> On 19 Sep 2011 at 17:03, Jim Bosch wrote: >> >>> I'd like to see support for static, template-based conversions. These >>> would be defined by [partial-]specializing a traits class, and I tend to >>> think they should only be invoked after attempting all registry-based >>> conversions. >> Surely not! You'd want to let template specialisaton be the first >> point of call so the compiler can compile in obvious conversions, >> *then* and only then do you go to a runtime registry. > I don't understand why you guys would want compile-time converters at > all, really. Frankly, I think they should all be eliminated. They > complicate the model Boost.Python needs to support and cause confusion > when the built-in ones mask runtime conversions. Indeed, I never understood that myself. At the Python/C++ language boundary there is no such thing as "compile-time". >> This also lets one override the runtime registry when needed in the >> local compiland. I'm not against having another set of template >> specialisations do something should the first set of specialisations >> fail, and/or the runtime registry lookup fails. > There are better ways to deal with conversion specialization, IMO. The > runtime registry should be scoped, and it should be possible to find the > "nearest eligible converter" based on the python module hierarchy. ...combined with some hints users can add to their modules. Again, I think we should favor explicit conversion policy settings over implicit ones. Sorry, I haven't yet managed to find time to sketch this out in any detail. I hope to be able to do that to help with this project, though. Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin... 
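(For readers following the registry discussion: with the current Boost.Python API, a "runtime registration" is something like the sketch below, which teaches the registry to build a std::pair<int,int> from a Python 2-tuple. The pair type is only an illustration; the point is that the lookup happens at runtime, per type, in the shared registry.)

    #include <boost/python.hpp>
    #include <utility>
    #include <new>

    namespace bp = boost::python;

    // From-Python converter: accept a Python 2-tuple wherever a
    // std::pair<int,int> argument is expected.
    struct pair_from_tuple
    {
        static void* convertible(PyObject* obj)
        {
            return (PyTuple_Check(obj) && PyTuple_GET_SIZE(obj) == 2) ? obj : 0;
        }

        static void construct(PyObject* obj,
                              bp::converter::rvalue_from_python_stage1_data* data)
        {
            void* storage = reinterpret_cast<
                bp::converter::rvalue_from_python_storage<std::pair<int,int> >*>(
                    data)->storage.bytes;
            new (storage) std::pair<int,int>(
                bp::extract<int>(PyTuple_GET_ITEM(obj, 0)),
                bp::extract<int>(PyTuple_GET_ITEM(obj, 1)));
            data->convertible = storage;
        }
    };

    // Call once, e.g. from a BOOST_PYTHON_MODULE body.
    void register_pair_from_tuple()
    {
        bp::converter::registry::push_back(&pair_from_tuple::convertible,
                                           &pair_from_tuple::construct,
                                           bp::type_id<std::pair<int,int> >());
    }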
From talljimbo at gmail.com Wed Oct 5 15:18:27 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Wed, 05 Oct 2011 09:18:27 -0400 Subject: [C++-sig] [Boost.Python v3] Conversions and Registries In-Reply-To: References: <4E77AE2F.3070702@gmail.com> <4E78AC11.5043.B0CBE312@s_sourceforge.nedprod.com> Message-ID: <4E8C5923.7010102@gmail.com> On 10/05/2011 07:21 AM, Dave Abrahams wrote: > > on Tue Sep 20 2011, "Niall Douglas" wrote: > >> On 19 Sep 2011 at 17:03, Jim Bosch wrote: >> >>> I'd like to see support for static, template-based conversions. These >>> would be defined by [partial-]specializing a traits class, and I tend to >>> think they should only be invoked after attempting all registry-based >>> conversions. >> >> Surely not! You'd want to let template specialisaton be the first >> point of call so the compiler can compile in obvious conversions, >> *then* and only then do you go to a runtime registry. > > I don't understand why you guys would want compile-time converters at > all, really. Frankly, I think they should all be eliminated. They > complicate the model Boost.Python needs to support and cause confusion > when the built-in ones mask runtime conversions. > I have one (perhaps unusual) use case that's extremely important for me: I have a templated matrix/vector/array class, and I want to define converters between those types and numpy that work with any combination of template parameters. I can do that with compile-time converters, and after including the header everything just works. With runtime conversions, I have to explicitly declare all the template parameter combinations I intend to use. >> This also lets one override the runtime registry when needed in the >> local compiland. I'm not against having another set of template >> specialisations do something should the first set of specialisations >> fail, and/or the runtime registry lookup fails. > > There are better ways to deal with conversion specialization, IMO. The > runtime registry should be scoped, and it should be possible to find the > "nearest eligible converter" based on the python module hierarchy. > I think this might turn into something that approaches the same mass of complexity Niall describes, because a Python module can be imported into several places in a hierarchy at once, and it seems we'd have to track which instance of the module is active in order to resolve those scopes correctly. I do hope that most people won't mind if I don't implement something as completely general as what Niall has described - there is a lot of complexity there I think most users don't need, and I hope he'd be willing to help with that if he does need to deal with e.g. passing callbacks between multiple interpreters. But I'm also afraid he might be onto something in pointing out that fixing the more standard cases might already be more complicated than it seems. Jim From talljimbo at gmail.com Wed Oct 5 15:24:24 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Wed, 05 Oct 2011 09:24:24 -0400 Subject: [C++-sig] [Boost.Python v3] Conversions and Registries In-Reply-To: <4E8C55B4.8060204@seefeld.name> References: <4E77AE2F.3070702@gmail.com> <4E78AC11.5043.B0CBE312@s_sourceforge.nedprod.com> <4E8C55B4.8060204@seefeld.name> Message-ID: <4E8C5A88.9030303@gmail.com> On 10/05/2011 09:03 AM, Stefan Seefeld wrote: > ...combined with some hints users can add to their modules. Again, I > think we should favor explicit conversion policy settings over implicit > ones. 
> > Sorry, I haven't yet managed to find time to sketch this out in any > detail. I hope to be able to do that to help with this project, though. > Unfortunately, I have to admit there's no rush - I have plenty of other things taking most of my time at the moment, so you're in no danger of being left out of the discussion by being busy. I am very curious to see exactly how you see this working, however; to me the notion of explicit conversions between modules seems to require the developer of one module to know too much about the internals of another. But I'm sure you've got your reasons. Jim From stefan at seefeld.name Wed Oct 5 15:26:45 2011 From: stefan at seefeld.name (Stefan Seefeld) Date: Wed, 05 Oct 2011 09:26:45 -0400 Subject: [C++-sig] [Boost.Python v3] Conversions and Registries In-Reply-To: <4E8C5923.7010102@gmail.com> References: <4E77AE2F.3070702@gmail.com> <4E78AC11.5043.B0CBE312@s_sourceforge.nedprod.com> <4E8C5923.7010102@gmail.com> Message-ID: <4E8C5B15.8060309@seefeld.name> On 10/05/2011 09:18 AM, Jim Bosch wrote: > > I have one (perhaps unusual) use case that's extremely important for > me: I have a templated matrix/vector/array class, and I want to define > converters between those types and numpy that work with any > combination of template parameters. I can do that with compile-time > converters, and after including the header everything just works. > With runtime conversions, I have to explicitly declare all the > template parameter combinations I intend to use. Jim, I may be a little slow here, but I still don't see the issue. You need to export your classes to Python one at a time anyhow, i.e. not as a template, letting the Python runtime figure out all valid template argument permutations. So why can't the converter definitions simply be bound to those type definitions ? Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin... From talljimbo at gmail.com Wed Oct 5 15:42:58 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Wed, 05 Oct 2011 09:42:58 -0400 Subject: [C++-sig] [Boost.Python v3] Conversions and Registries In-Reply-To: <4E8C5B15.8060309@seefeld.name> References: <4E77AE2F.3070702@gmail.com> <4E78AC11.5043.B0CBE312@s_sourceforge.nedprod.com> <4E8C5923.7010102@gmail.com> <4E8C5B15.8060309@seefeld.name> Message-ID: <4E8C5EE2.7090903@gmail.com> On 10/05/2011 09:26 AM, Stefan Seefeld wrote: > On 10/05/2011 09:18 AM, Jim Bosch wrote: >> >> I have one (perhaps unusual) use case that's extremely important for >> me: I have a templated matrix/vector/array class, and I want to define >> converters between those types and numpy that work with any >> combination of template parameters. I can do that with compile-time >> converters, and after including the header everything just works. >> With runtime conversions, I have to explicitly declare all the >> template parameter combinations I intend to use. > > Jim, > > I may be a little slow here, but I still don't see the issue. You need > to export your classes to Python one at a time anyhow, i.e. not as a > template, letting the Python runtime figure out all valid template > argument permutations. So why can't the converter definitions simply be > bound to those type definitions ? > The key point is that I'm not exporting these with "class_"; I define converters that go directly to and from numpy.ndarray. 
So if I define template-based converters for my class ("ndarray::Array"), a function that takes one as an argument: void fillArray(ndarray::Array array); ...can be wrapped to take a numpy.ndarray as an argument, just by doing: #include "array-from-python.hpp" ... bp::def("fillArray", &fillArray); Without template converters, I also have to add something like: register_array_from_python< ndarray::Array >(); (where register_array_from_python is some custom runtime converter I'd have written) and repeat that for every instantiation of ndarray::Array I use. This involves looking through all my code, finding all the combinations of template parameters I use, and registering each one exactly once across all modules. That would get better with some sort of multi-module registry support, but I don't think I should have to declare the converters for each set of template parameters at all; it's better just to write a single compile-time converter. Jim From talljimbo at gmail.com Wed Oct 5 15:44:38 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Wed, 05 Oct 2011 09:44:38 -0400 Subject: [C++-sig] Custom smart pointer with const Types In-Reply-To: References: Message-ID: <4E8C5F46.1040209@gmail.com> On 10/05/2011 03:08 AM, Holger Brandsmeier wrote: > Dear list, > > how should I export functions to python which return smart pointers to > const-pointers, e.g. shared_ptr? > > For my own classes I always tried to avoid this problem by always > providing a methods which returns shared_ptr. > > Now I need to export the following method in a class provided by some > other software package (Trilinos). Its implementation I do not want to > change, the function is declared as > Teuchos::RCP< const Teuchos::Comm< int> > getComm () const > (if you need details: > http://trilinos.sandia.gov/packages/docs/r10.6/packages/tpetra/doc/html/classTpetra_1_1MpiPlatform.html > ) > > I believe I already exported the custom smart pointer `Teuchos::RCP` > to python, I also exported the class `Teuchos::Comm< int>` to python, > but I get the error > No to_python (by-value) converter found for C++ type: > Teuchos::RCP const> > which is perfectly true, as I did not export the class `const > Teuchos::Comm`. > > I briefly tried to export also the const version of this class (all > methods that I need are provided are available for the const version), > but I failed exporting the class with varying error message, depending > on how I tried to export it. I realized that I don't know how or even > if I should export a const version of a type? > > Is there another workaround to this problem? Is there something I'm > missing in the implementation of my custom smart pointer? > > I could wrap the function getComm() above to cast away the const'ness, > but do I need to? > > I also found some old messages on the list titled "[Boost.Python] > shared_ptr" with some workaround proposed by providing > get_pointer for const T, but I believe that I have a different problem > that I can not solve by modifying get_pointer. > I'm afraid you do have a different problem, though some of those ideas help. Essentially, Boost.Python's support for custom smart pointers isn't as complete as its support for shared_ptr, and its support for const smart pointers is basically nonexistent. You're pretty much in not-currently-supported territory. 
I think wrapping getComm() to cast away the constness is going to be the easiest way to make this work, but if you have many such functions it may be worth digging deeper into the Boost.Python internals to try and trick it into working by specializing some templates.

Jim

From brandsmeier at gmx.de  Wed Oct  5 17:31:53 2011
From: brandsmeier at gmx.de (Holger Brandsmeier)
Date: Wed, 5 Oct 2011 17:31:53 +0200
Subject: [C++-sig] Custom smart pointer with const Types
In-Reply-To: <4E8C5F46.1040209@gmail.com>
References: <4E8C5F46.1040209@gmail.com>
Message-ID:

Jim,

how do you handle smart_ptr in boost python? Do you simply cast away the constness?

For my custom smart pointer I provide a class extending
  to_python_converter<Teuchos::RCP<T>, rcp_to_python<T>, true>
now I decided to also implement
  to_python_converter<Teuchos::RCP<const T>, rcp_to_python_const<T>, true>
where I simply cast away the constness and use the above implementation.

This seems to be working so far. Did you provide a smarter implementation for shared_ptr?

-Holger

On Wed, Oct 5, 2011 at 15:44, Jim Bosch wrote:
> On 10/05/2011 03:08 AM, Holger Brandsmeier wrote:
>>
>> Dear list,
>>
>> how should I export functions to python which return smart pointers to
>> const types, e.g. shared_ptr<const T>?
>>
>> For my own classes I always tried to avoid this problem by always
>> providing a method which returns shared_ptr<T>.
>>
>> Now I need to export the following method in a class provided by some
>> other software package (Trilinos). Its implementation I do not want to
>> change; the function is declared as
>>   Teuchos::RCP< const Teuchos::Comm< int > >   getComm () const
>> (if you need details:
>> http://trilinos.sandia.gov/packages/docs/r10.6/packages/tpetra/doc/html/classTpetra_1_1MpiPlatform.html
>> )
>>
>> I believe I already exported the custom smart pointer `Teuchos::RCP`
>> to python, and I also exported the class `Teuchos::Comm< int >` to python,
>> but I get the error
>>   No to_python (by-value) converter found for C++ type:
>>   Teuchos::RCP<Teuchos::Comm<int> const>
>> which is perfectly true, as I did not export the class `const
>> Teuchos::Comm<int>`.
>>
>> I briefly tried to also export the const version of this class (all
>> methods that I need are available for the const version),
>> but I failed exporting the class with varying error messages, depending
>> on how I tried to export it. I realized that I don't know how, or even
>> if, I should export a const version of a type.
>>
>> Is there another workaround to this problem? Is there something I'm
>> missing in the implementation of my custom smart pointer?
>>
>> I could wrap the function getComm() above to cast away the const'ness,
>> but do I need to?
>>
>> I also found some old messages on the list titled "[Boost.Python]
>> shared_ptr" with some workaround proposed by providing
>> get_pointer for const T, but I believe that I have a different problem
>> that I can not solve by modifying get_pointer.
>>
>
> I'm afraid you do have a different problem, though some of those ideas help.
> Essentially, Boost.Python's support for custom smart pointers isn't as
> complete as its support for shared_ptr, and its support for const smart
> pointers is basically nonexistent. You're pretty much in
> not-currently-supported territory.
>
> I think wrapping getComm() to cast away the constness is going to be the
> easiest way to make this work, but if you have many such functions it may be
> worth digging deeper into the Boost.Python internals to try and trick it
> into working by specializing some templates.
> > Jim > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig > From dave at boostpro.com Wed Oct 5 17:30:47 2011 From: dave at boostpro.com (Dave Abrahams) Date: Wed, 05 Oct 2011 11:30:47 -0400 Subject: [C++-sig] [Boost.Python v3] Conversions and Registries References: <4E77AE2F.3070702@gmail.com> <4E78AC11.5043.B0CBE312@s_sourceforge.nedprod.com> <4E8C5923.7010102@gmail.com> Message-ID: on Wed Oct 05 2011, Jim Bosch wrote: > On 10/05/2011 07:21 AM, Dave Abrahams wrote: > >> I don't understand why you guys would want compile-time converters at >> all, really. Frankly, I think they should all be eliminated. They >> complicate the model Boost.Python needs to support and cause confusion >> when the built-in ones mask runtime conversions. >> > > I have one (perhaps unusual) use case that's extremely important for > me: I have a templated matrix/vector/array class, and I want to define > converters between those types and numpy that work with any > combination of template parameters. I can do that with compile-time > converters, and after including the header everything just works. Not really. In the end you can only expose particular specializations of the templates to Python, and you have to decide, somehow, what those are. > With runtime conversions, I have to explicitly declare all the > template parameter combinations I intend to use. Not really; a little metaprogramming makes it reasonably easy to generate all those combinations. You can also use compile-time triggers to register runtime converters. I'm happy to demonstrate if you like. >> There are better ways to deal with conversion specialization, IMO. The >> runtime registry should be scoped, and it should be possible to find the >> "nearest eligible converter" based on the python module hierarchy. > > I think this might turn into something that approaches the same mass > of complexity Niall describes, Nothing ever needs to be quite as complex as what Niall describes ;-) (no offense intended, Niall) > because a Python module can be imported into several places in a > hierarchy at once, and it seems we'd have to track which instance of > the module is active in order to resolve those scopes correctly. Meh. I think a module has an official identity, its __name__. > I do hope that most people won't mind if I don't implement something > as completely general as what Niall has described No problem. As the original author I think you should give what I describe a little more weight in this discussion, though ;-) > - there is a lot of complexity there I think most users don't need, > and I hope he'd be willing to help with that if he does need to deal > with e.g. passing callbacks between multiple interpreters. But I'm > also afraid he might be onto something in pointing out that fixing the > more standard cases might already be more complicated than it seems. Don't let him scare you off. He's a very smart guy, and a good guy, but he tends to describe things in a way that I find to be needlessly daunting. -- Dave Abrahams BoostPro Computing http://www.boostpro.com From David.Aldrich at EMEA.NEC.COM Wed Oct 5 18:12:51 2011 From: David.Aldrich at EMEA.NEC.COM (David Aldrich) Date: Wed, 5 Oct 2011 16:12:51 +0000 Subject: [C++-sig] How to configure makefile for different build platforms Message-ID: <41302A7145AC054FA7A96CFD03835A0A05E5FF@EX10MBX02.EU.NEC.COM> Hi I have a C++ application that uses Boost.Python. 
We build it on Centos 5.3, with Python 2.4 and Boost 1.34. Our makefile uses explicit paths to find Python and Boost. For the headers we use: PYTHON = /usr/include/python2.4 BOOST_INC = /usr/include/boost INCPATH=$(PYTHON) INCPATH+=$(BOOST_INC) CXXFLAGS += $(patsubst %,-I%,$(INCPATH)) $(CXX) -c $(CXXFLAGS) sourcefile etc Now I need to support building on Ubuntu 10.04 which has Python 2.6, not 2.4, installed. Please can someone suggest how I can modify the makefile to conveniently handle the different Python paths according to the build platform. Should I simply require the user to define PYTHON as an environment variable, or is there a better way without resorting to something complex like autoconf? Best regards David -------------- next part -------------- An HTML attachment was scrubbed... URL: From talljimbo at gmail.com Wed Oct 5 19:00:20 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Wed, 05 Oct 2011 13:00:20 -0400 Subject: [C++-sig] Custom smart pointer with const Types In-Reply-To: References: <4E8C5F46.1040209@gmail.com> Message-ID: <4E8C8D24.9010602@gmail.com> On 10/05/2011 11:31 AM, Holger Brandsmeier wrote: > Jim, > > how do you handle smart_ptr in boost python? Do you simply > cast away the constness? > > > For my custom smart pointer I provide a class extending > to_python_converter, rcp_to_python, true> > now I decided to also implement > to_python_converter, rcp_to_python_const, true> > where I simply cast away the constness and use the above implementation. > > This seems to be working so far. Did you provide a smarter > implementation for shared_ptr? > Personally, I pretty much always use shared_ptr, and I've written a rather large extension to support constness on the Python side, and dealing with shared_ptr is a side effect of that. I'm welcome to pass it on if you're interested, but I don't think it really addresses your problem. Jim > > On Wed, Oct 5, 2011 at 15:44, Jim Bosch wrote: >> On 10/05/2011 03:08 AM, Holger Brandsmeier wrote: >>> >>> Dear list, >>> >>> how should I export functions to python which return smart pointers to >>> const-pointers, e.g. shared_ptr? >>> >>> For my own classes I always tried to avoid this problem by always >>> providing a methods which returns shared_ptr. >>> >>> Now I need to export the following method in a class provided by some >>> other software package (Trilinos). Its implementation I do not want to >>> change, the function is declared as >>> Teuchos::RCP< const Teuchos::Comm< int> > getComm () const >>> (if you need details: >>> >>> http://trilinos.sandia.gov/packages/docs/r10.6/packages/tpetra/doc/html/classTpetra_1_1MpiPlatform.html >>> ) >>> >>> I believe I already exported the custom smart pointer `Teuchos::RCP` >>> to python, I also exported the class `Teuchos::Comm< int>` to python, >>> but I get the error >>> No to_python (by-value) converter found for C++ type: >>> Teuchos::RCP const> >>> which is perfectly true, as I did not export the class `const >>> Teuchos::Comm`. >>> >>> I briefly tried to export also the const version of this class (all >>> methods that I need are provided are available for the const version), >>> but I failed exporting the class with varying error message, depending >>> on how I tried to export it. I realized that I don't know how or even >>> if I should export a const version of a type? >>> >>> Is there another workaround to this problem? Is there something I'm >>> missing in the implementation of my custom smart pointer? 
>>> >>> I could wrap the function getComm() above to cast away the const'ness, >>> but do I need to? >>> >>> I also found some old messages on the list titled "[Boost.Python] >>> shared_ptr" with some workaround proposed by providing >>> get_pointer for const T, but I believe that I have a different problem >>> that I can not solve by modifying get_pointer. >>> >> >> I'm afraid you do have a different problem, though some of those ideas help. >> Essentially, Boost.Python's support for custom smart pointers isn't as >> complete as its support for shared_ptr, and its support for const smart >> pointers is basically nonexistent. You're pretty much in >> not-currently-supported territory. >> >> I think wrapping getComm() to cast away the constness is going to be the >> easiest way to make this work, but if you have many such functions it may be >> worth digging deeper into the Boost.Python internals to try and trick it >> into working by specializing some templates. >> >> Jim >> _______________________________________________ >> Cplusplus-sig mailing list >> Cplusplus-sig at python.org >> http://mail.python.org/mailman/listinfo/cplusplus-sig >> > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig From s_sourceforge at nedprod.com Wed Oct 5 21:18:59 2011 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Wed, 05 Oct 2011 20:18:59 +0100 Subject: [C++-sig] [Boost.Python v3] Conversions and Registries In-Reply-To: References: <4E77AE2F.3070702@gmail.com>, <4E78AC11.5043.B0CBE312@s_sourceforge.nedprod.com> (Niall Douglas's message of "Tue, 20 Sep 2011 16:06:57 +0100"), Message-ID: <4E8CADA3.4896.2FF78196@s_sourceforge.nedprod.com> On 5 Oct 2011 at 7:21, Dave Abrahams wrote: > >> I'd like to see support for static, template-based conversions. These > >> would be defined by [partial-]specializing a traits class, and I tend to > >> think they should only be invoked after attempting all registry-based > >> conversions. > > > > Surely not! You'd want to let template specialisaton be the first > > point of call so the compiler can compile in obvious conversions, > > *then* and only then do you go to a runtime registry. > > I don't understand why you guys would want compile-time converters at > all, really. Frankly, I think they should all be eliminated. They > complicate the model Boost.Python needs to support and cause confusion > when the built-in ones mask runtime conversions. What I was proposing was that the compile-time registry is identical to the runtime registry. Hence the order of lookup so a lot of the simpler conversion can be done inline by the compiler. Sure, the same system can be abused to have special per-compiland behaviours. I personally have found that rather useful for working around very special situations such as compiler bugs. I agree that you shouldn't have two separate systems, and 99% of the time both registries need to do the same thing. In my own code in fact I have a lot of unit tests ensuring that the compile-time and run-time registries behave identically. > > Imagine the following. Program A loads DLL B and DLL C. DLL B is > > dependent on DLL D which uses BPL. DLL C is dependent on DLL E which > > uses BPL. > > Jeez, I'm going to have to graph this > > A > / \ > B C > | | > D E > \ / > BPL You can't guarantee that Dave. It depends on what flags to dlopen the end user uses. And right now, Python itself defaults to multiple BPLs. 
> > Right now with present BPL, we have to load two copies of BPL, one > > for DLL D and one for DLL E. They maintain separate type registries, > > so all is good. > > That's not correct. Boost.Python was designed to deal with scenarios > like this and be run as a single instance in such a system, with a > single registry. http://muttley.hates-software.com/2006/01/25/c37456e6.html There are plenty more all over the net. > > But what if DLL B returns a python function to Program A, which then > > installs it as a callback with DLL C? > > OMG, could you make this more convoluted, please? No, it's a valid use case. Again, search google and you'll see. Lots of people with this same problem. > > As I mentioned earlier, this is a very semantically similar problem > > to supporting multiple python interpreters anyway with each calling > > into one another. > > How exactly is one python interpreter supposed to "call into" another > one? Are you suggesting they have their own threads and one blocks to > wait for the other, or is it something completely different. Right now BPL doesn't touch the GIL or current interpreter context. I'm saying it ought to manage both, because getting it right isn't obvious. And once again, if program A causes the loading of two DLLs each of which runs its own python interpreter, you can get all sorts of unfun when the two interpreters call into one another. Niall -- Technology & Consulting Services - ned Productions Limited. http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Company no: 472909. From s_sourceforge at nedprod.com Wed Oct 5 21:28:44 2011 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Wed, 05 Oct 2011 20:28:44 +0100 Subject: [C++-sig] [Boost.Python v3] Conversions and Registries In-Reply-To: <4E8C5923.7010102@gmail.com> References: <4E77AE2F.3070702@gmail.com>, , <4E8C5923.7010102@gmail.com> Message-ID: <4E8CAFEC.22464.30006D76@s_sourceforge.nedprod.com> On 5 Oct 2011 at 9:18, Jim Bosch wrote: > I think this might turn into something that approaches the same mass of > complexity Niall describes, because a Python module can be imported into > several places in a hierarchy at once, and it seems we'd have to track > which instance of the module is active in order to resolve those scopes > correctly. > > I do hope that most people won't mind if I don't implement something as > completely general as what Niall has described - there is a lot of > complexity there I think most users don't need, and I hope he'd be > willing to help with that if he does need to deal with e.g. passing > callbacks between multiple interpreters. But I'm also afraid he might > be onto something in pointing out that fixing the more standard cases > might already be more complicated than it seems. It's really not that complex when implemented, honestly. It's just complex creating that simple design to cover all the possible use cases. Once the design is down, you'd be amazed at how little code it turns into. Obviously Jim, you're the one who's implementing it, so you do what you like. However, I would suggest that you might consider setting up a wiki page on Boost's trac (https://svn.boost.org/trac/boost/ ?) describing the proposed design in detail. I'm also happy to offer a full project management host for your efforts on ned Productions' Redmine site (http://www.nedproductions.biz/redmine/) if you'd prefer. You'd get your own full self-contained project there. 
In either case, I'm sure people from the list here would be happy to comment and/or contribute to the design document even if they are unable to contribute code. Niall -- Technology & Consulting Services - ned Productions Limited. http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Company no: 472909. From s_sourceforge at nedprod.com Wed Oct 5 21:41:25 2011 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Wed, 05 Oct 2011 20:41:25 +0100 Subject: [C++-sig] [Boost.Python v3] Conversions and Registries In-Reply-To: References: <4E77AE2F.3070702@gmail.com>, Message-ID: <4E8CB2E5.29830.300C0BB4@s_sourceforge.nedprod.com> On 5 Oct 2011 at 11:30, Dave Abrahams wrote: > > I think this might turn into something that approaches the same mass > > of complexity Niall describes, > > Nothing ever needs to be quite as complex as what Niall describes ;-) > > (no offense intended, Niall) And here I am thinking I am clear as bell! :) No offence taken at all Dave. I often find your thinking confusing too. We just don't think similarly, but that's likely a good thing. [BTW, I have a small book shortly going on sale early December outlining my personal recommendations on how to make human civilisation sustainable. If you think my coding stuff hurts the head, I am told that said book is unbelievably complex. Can't see why myself :)] > > because a Python module can be imported into several places in a > > hierarchy at once, and it seems we'd have to track which instance of > > the module is active in order to resolve those scopes correctly. > > Meh. I think a module has an official identity, its __name__. And version and current state. It's like how a single piece of code can have multiple identities because multiple threads and processes can execute it. > Don't let him scare you off. He's a very smart guy, and a good guy, but > he tends to describe things in a way that I find to be needlessly > daunting. Thank you Dave. I actually didn't know you had an opinion on me and I am genuinely both surprised and pleased. Your opinion I take seriously. I hope you keep your high opinion when you see me on ISO SC22 (I hopefully will be becoming the Irish representative for ISO later this month). I agree entirely with Dave - don't let me scare you off! What you're doing Jim is great and keep at it. Do what you feel is best, in the end it's your code and your time. Niall -- Technology & Consulting Services - ned Productions Limited. http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Company no: 472909. From talljimbo at gmail.com Wed Oct 5 21:47:56 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Wed, 05 Oct 2011 15:47:56 -0400 Subject: [C++-sig] [Boost.Python v3] Conversions and Registries In-Reply-To: <4E8CAFEC.22464.30006D76@s_sourceforge.nedprod.com> References: <4E77AE2F.3070702@gmail.com>, , <4E8C5923.7010102@gmail.com> <4E8CAFEC.22464.30006D76@s_sourceforge.nedprod.com> Message-ID: <4E8CB46C.6000303@gmail.com> On 10/05/2011 03:28 PM, Niall Douglas wrote: > On 5 Oct 2011 at 9:18, Jim Bosch wrote: > >> I think this might turn into something that approaches the same mass of >> complexity Niall describes, because a Python module can be imported into >> several places in a hierarchy at once, and it seems we'd have to track >> which instance of the module is active in order to resolve those scopes >> correctly. 
>> >> I do hope that most people won't mind if I don't implement something as >> completely general as what Niall has described - there is a lot of >> complexity there I think most users don't need, and I hope he'd be >> willing to help with that if he does need to deal with e.g. passing >> callbacks between multiple interpreters. But I'm also afraid he might >> be onto something in pointing out that fixing the more standard cases >> might already be more complicated than it seems. > > It's really not that complex when implemented, honestly. It's just > complex creating that simple design to cover all the possible use > cases. Once the design is down, you'd be amazed at how little code it > turns into. > > Obviously Jim, you're the one who's implementing it, so you do what > you like. However, I would suggest that you might consider setting up > a wiki page on Boost's trac (https://svn.boost.org/trac/boost/ ?) > describing the proposed design in detail. > > I'm also happy to offer a full project management host for your > efforts on ned Productions' Redmine site > (http://www.nedproductions.biz/redmine/) if you'd prefer. You'd get > your own full self-contained project there. > Thanks for the suggestion and the offer. I should probably just go with getting a boost trac account; there are some aspects of trac I dislike, but it's also what I know, and I very much doubt my needs will exceed its abilities in this case. But this is indeed approaching the point where we need a concrete straw-man to pummel. > In either case, I'm sure people from the list here would be happy to > comment and/or contribute to the design document even if they are > unable to contribute code. Good to hear! Jim From talljimbo at gmail.com Wed Oct 5 21:47:57 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Wed, 05 Oct 2011 15:47:57 -0400 Subject: [C++-sig] [Boost.Python v3] Conversions and Registries In-Reply-To: References: <4E77AE2F.3070702@gmail.com> <4E78AC11.5043.B0CBE312@s_sourceforge.nedprod.com> <4E8C5923.7010102@gmail.com> Message-ID: <4E8CB46D.20600@gmail.com> On 10/05/2011 11:30 AM, Dave Abrahams wrote: > > on Wed Oct 05 2011, Jim Bosch wrote: > >> On 10/05/2011 07:21 AM, Dave Abrahams wrote: >> >>> I don't understand why you guys would want compile-time converters at >>> all, really. Frankly, I think they should all be eliminated. They >>> complicate the model Boost.Python needs to support and cause confusion >>> when the built-in ones mask runtime conversions. >>> >> >> I have one (perhaps unusual) use case that's extremely important for >> me: I have a templated matrix/vector/array class, and I want to define >> converters between those types and numpy that work with any >> combination of template parameters. I can do that with compile-time >> converters, and after including the header everything just works. > > Not really. In the end you can only expose particular specializations > of the templates to Python, and you have to decide, somehow, what those > are. > >> With runtime conversions, I have to explicitly declare all the >> template parameter combinations I intend to use. > > Not really; a little metaprogramming makes it reasonably easy to > generate all those combinations. You can also use compile-time triggers > to register runtime converters. I'm happy to demonstrate if you like. > The latter sounds more like what I'd want, though a brief demonstration would be great. You're right in guessing that I don't really care whether it's a runtime or compile-time conversion. 
The key is that I don't want to have to explicitly declare the conversions, even if I have some metaprogramming to make that easier - I'd like to only declare what's actually used, since that's potentially a much smaller number of declarations. >>> There are better ways to deal with conversion specialization, IMO. The >>> runtime registry should be scoped, and it should be possible to find the >>> "nearest eligible converter" based on the python module hierarchy. >> >> I think this might turn into something that approaches the same mass >> of complexity Niall describes, > > Nothing ever needs to be quite as complex as what Niall describes ;-) > > (no offense intended, Niall) > >> because a Python module can be imported into several places in a >> hierarchy at once, and it seems we'd have to track which instance of >> the module is active in order to resolve those scopes correctly. > > Meh. I think a module has an official identity, its __name__. > >> I do hope that most people won't mind if I don't implement something >> as completely general as what Niall has described > > No problem. As the original author I think you should give what I > describe a little more weight in this discussion, though ;-) > Doing something that's only a small modification to the current single-registry model is also very appealing from an ease-of-implementation standpoint too, and it would also be sufficient for my own needs. I'd like to see what Stefan's ideas are first, of course, and I should take a look at some of the code Niall has pointed me at to see if I can take some steps towards a design that would meet his needs as well. But at the moment I'm inclined to go with something pretty similar to the current design to keep this problem from overshadowing and swallowing all the other things I'd like to go into the upgrade. From dave at boostpro.com Wed Oct 5 22:04:36 2011 From: dave at boostpro.com (Dave Abrahams) Date: Wed, 05 Oct 2011 16:04:36 -0400 Subject: [C++-sig] [Boost.Python v3] Conversions and Registries References: <4E77AE2F.3070702@gmail.com> <4E78AC11.5043.B0CBE312@s_sourceforge.nedprod.com> <4E8CADA3.4896.2FF78196@s_sourceforge.nedprod.com> Message-ID: on Wed Oct 05 2011, "Niall Douglas" wrote: > On 5 Oct 2011 at 7:21, Dave Abrahams wrote: > > What I was proposing was that the compile-time registry is identical > to the runtime registry. I don't even know what that means. > Hence the order of lookup so a lot of the simpler conversion can be > done inline by the compiler. But AFAICT there's really almost no advantage in that, and it adds special cases to the model. > Sure, the same system can be abused to have special per-compiland > behaviours. That's fine; a scoped registry would allow the same thing. When you have a bunch of independently-developed modules flying around it's more than likely that there will be "ODR violations" across different modules, and that should be OK as long as they don't try to exchange those types. > I personally have found that rather useful for working around very > special situations such as compiler bugs. I agree that you shouldn't > have two separate systems, and 99% of the time both registries need to > do the same thing. In my own code in fact I have a lot of unit tests > ensuring that the compile-time and run-time registries behave > identically. > >> > Imagine the following. Program A loads DLL B and DLL C. DLL B is >> > dependent on DLL D which uses BPL. DLL C is dependent on DLL E which >> > uses BPL. 
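A sketch of the first option Dave mentions above - generating the registrations from a compile-time type list instead of writing them out one by one - could look like the following. register_converters_for<T>() is a hypothetical stand-in for whatever per-type runtime registration (e.g. an array-from-python converter) a module actually performs.

    #include <boost/mpl/vector.hpp>
    #include <boost/mpl/for_each.hpp>

    // Hypothetical per-type registration; in real code this would call
    // e.g. to_python_converter<...>() or converter::registry::push_back().
    template <class T>
    void register_converters_for()
    {
        // ... actual registration goes here ...
    }

    struct register_one
    {
        template <class T>
        void operator()(T) const { register_converters_for<T>(); }
    };

    // The element types this set of modules actually uses.
    typedef boost::mpl::vector<float, double, int> used_types;

    void register_all_converters()
    {
        boost::mpl::for_each<used_types>(register_one());
    }

This keeps the registry itself purely a runtime structure; the metaprogramming only spares the author from enumerating the combinations by hand.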
>> >> Jeez, I'm going to have to graph this
>>
>>       A
>>      / \
>>     B   C
>>     |   |
>>     D   E
>>      \ /
>>      BPL
> > You can't guarantee that Dave. It depends on what flags to dlopen the > end user uses. And on the OS, and on what order things are loaded in. I wasn't trying to make an assertion, just trying to picture what you were describing. > And right now, Python itself defaults to multiple BPLs. I wouldn't put it that way, not at all. Again, what happens depends on the platform and a lot of other factors. >> > Right now with present BPL, we have to load two copies of BPL, one >> > for DLL D and one for DLL E. They maintain separate type registries, >> > so all is good. >> >> That's not correct. Boost.Python was designed to deal with scenarios >> like this and be run as a single instance in such a system, with a >> single registry. > > http://muttley.hates-software.com/2006/01/25/c37456e6.html > > There are plenty more all over the net. Believe me, I'm fully aware of those problems, and note that your reference doesn't mention Boost at all. I happen to know that this very large project out of Lawrence Berkeley Labs has been successfully using Boost.Python in multi-module setups with a single instance of the library, across many different platforms, for years: http://cctbx.sourceforge.net/ You should also take a look at this whole thread http://gcc.gnu.org/ml/gcc/2002-05/msg02945.html if you want to have a clear sense of some of the issues. >> > But what if DLL B returns a python function to Program A, which then >> > installs it as a callback with DLL C? >> >> OMG, could you make this more convoluted, please? > > No, it's a valid use case. Again, search google and you'll see. Lots > of people with this same problem. I'm sure it's a valid use case, and I'm also sure you can illustrate whatever problem you're describing with no more than two Boost.Python modules. >> > As I mentioned earlier, this is a very semantically similar problem >> > to supporting multiple python interpreters anyway with each calling >> > into one another. >> >> How exactly is one python interpreter supposed to "call into" another >> one? Are you suggesting they have their own threads and one blocks to >> wait for the other, or is it something completely different. > > Right now BPL doesn't touch the GIL or current interpreter context. > > I'm saying it ought to manage both, because getting it right isn't > obvious. Sure. > And once again, if program A causes the loading of two DLLs each of > which runs its own python interpreter, you can get all sorts of unfun > when the two interpreters call into one another. Again, what does it mean for one interpreter to "call into another"? -- Dave Abrahams BoostPro Computing http://www.boostpro.com From wichert at wiggy.net Wed Oct 5 21:52:01 2011 From: wichert at wiggy.net (Wichert Akkerman) Date: Wed, 05 Oct 2011 21:52:01 +0200 Subject: [C++-sig] How to configure makefile for different build platforms In-Reply-To: <41302A7145AC054FA7A96CFD03835A0A05E5FF@EX10MBX02.EU.NEC.COM> References: <41302A7145AC054FA7A96CFD03835A0A05E5FF@EX10MBX02.EU.NEC.COM> Message-ID: <4E8CB561.8000202@wiggy.net> On 2011-10-5 18:12, David Aldrich wrote: > Hi > > I have a C++ application that uses Boost.Python. We build it on Centos > 5.3, with Python 2.4 and Boost 1.34. > > Our makefile uses explicit paths to find Python and Boost.
For the > headers we use: > > PYTHON = /usr/include/python2.4 > > BOOST_INC = /usr/include/boost > > INCPATH=$(PYTHON) > > INCPATH+=$(BOOST_INC) > > CXXFLAGS += $(patsubst %,-I%,$(INCPATH)) > > $(CXX) -c $(CXXFLAGS) sourcefile etc > > Now I need to support building on Ubuntu 10.04 which has Python 2.6, not > 2.4, installed. > > Please can someone suggest how I can modify the makefile to conveniently > handle the different Python paths according to the build platform. On Ubuntu you can call pkg-config to figure out the right compiler and linker options for both Boost and Python. I would expect CentOS to support that as well. Wichert. -- Wichert Akkerman It is simple to make things. http://www.wiggy.net/ It is hard to make things simple. From dave at boostpro.com Thu Oct 6 10:31:23 2011 From: dave at boostpro.com (Dave Abrahams) Date: Thu, 06 Oct 2011 04:31:23 -0400 Subject: [C++-sig] [Boost.Python v3] Conversions and Registries References: <4E77AE2F.3070702@gmail.com> <4E78AC11.5043.B0CBE312@s_sourceforge.nedprod.com> <4E8C5923.7010102@gmail.com> <4E8CB46D.20600@gmail.com> Message-ID: on Wed Oct 05 2011, Jim Bosch wrote: > On 10/05/2011 11:30 AM, Dave Abrahams wrote: > >> on Wed Oct 05 2011, Jim Bosch wrote: >> >>> With runtime conversions, I have to explicitly declare all the >>> template parameter combinations I intend to use. >> >> Not really; a little metaprogramming makes it reasonably easy to >> generate all those combinations. You can also use compile-time triggers >> to register runtime converters. I'm happy to demonstrate if you like. > > The latter sounds more like what I'd want, though a brief > demonstration would be great. You're right in guessing that I don't > really care whether it's a runtime or compile-time conversion. The > key is that I don't want to have to explicitly declare the > conversions, even if I have some metaprogramming to make that easier - > I'd like to only declare what's actually used, since that's > potentially a much smaller number of declarations. I'm not sure exactly what you have in mind when you say "declare what's actually used," because you haven't said what counts as "usage." That said, I can give you an abstract description. It's just a matter of being able to "hitch a ride" at its point-of-use: arrange for some customization point to be instantiated during "use" that you can customize elsewhere. That customization then is where you register the type. For example, let's imagine that by saying a type T is used you mean it's a parameter or return value of a wrapped function. Then you might designate this class template to be instantiated and its constructor called: namespace boost { namespace python { namespace user_hooks { // users are encouraged to specialize templates in this namespace template struct is_used { is_used() { /* do nothing by default */ } }; }}} In A user's wrapping code, she could make a partial specialization of this class template: namespace boost { namespace python { namespace user_hooks { template struct is_used > { is_used() { my_register_converter(); } }; }}} The other way to make customization points like this uses argument dependent lookup and is usually less verbose, though brings with it other sticky problems that you probably want to avoid. > Doing something that's only a small modification to the current > single-registry model is also very appealing from an > ease-of-implementation standpoint too, and it would also be sufficient > for my own needs. 
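The archive has stripped the angle-bracket template parameter lists out of the customization-point sketch Dave gives above. A guess at how it was meant to read, with my_matrix as a stand-in for whatever template the converters are for (is_used and my_register_converter are the names that survived; the parameter lists and bodies are reconstructed, not Dave's actual code):

namespace boost { namespace python { namespace user_hooks {

    // users are encouraged to specialize templates in this namespace
    template <class T>
    struct is_used
    {
        is_used() { /* do nothing by default */ }
    };
}}}

// In a user's wrapping code. my_matrix and the converter body are placeholders.
template <class T> class my_matrix {};
template <class U> void my_register_converter() { /* register runtime converters for U */ }

namespace boost { namespace python { namespace user_hooks {

    template <class T>
    struct is_used< my_matrix<T> >
    {
        is_used() { my_register_converter< my_matrix<T> >(); }
    };
}}}

Instantiating is_used<T> at the point of use would then trigger registration for exactly the my_matrix specializations that are actually wrapped.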
> > I'd like to see what Stefan's ideas are first, of course, and I should > take a look at some of the code Niall has pointed me at to see if I > can take some steps towards a design that would meet his needs as > well. But at the moment I'm inclined to go with something pretty > similar to the current design to keep this problem from overshadowing > and swallowing all the other things I'd like to go into the upgrade. Good idea. -- Dave Abrahams BoostPro Computing http://www.boostpro.com From David.Aldrich at EMEA.NEC.COM Thu Oct 6 14:55:26 2011 From: David.Aldrich at EMEA.NEC.COM (David Aldrich) Date: Thu, 6 Oct 2011 12:55:26 +0000 Subject: [C++-sig] How to configure makefile for different build platforms In-Reply-To: <4E8CB561.8000202@wiggy.net> References: <41302A7145AC054FA7A96CFD03835A0A05E5FF@EX10MBX02.EU.NEC.COM> <4E8CB561.8000202@wiggy.net> Message-ID: <41302A7145AC054FA7A96CFD03835A0A05EADF@EX10MBX02.EU.NEC.COM> > On Ubuntu you can call pkg-config to figure out the right compiler and linker > options for both Boost and Python. I would expect CentOS to support that as > well. Hi Wichert Thanks for your suggestion. However, on my Ubuntu system: pkg-config --list-all lists neither python-dev not libboost-all-dev which are what my makefile needs to reference. David From wichert at wiggy.net Thu Oct 6 14:59:45 2011 From: wichert at wiggy.net (Wichert Akkerman) Date: Thu, 06 Oct 2011 14:59:45 +0200 Subject: [C++-sig] How to configure makefile for different build platforms In-Reply-To: <41302A7145AC054FA7A96CFD03835A0A05EADF@EX10MBX02.EU.NEC.COM> References: <41302A7145AC054FA7A96CFD03835A0A05E5FF@EX10MBX02.EU.NEC.COM> <4E8CB561.8000202@wiggy.net> <41302A7145AC054FA7A96CFD03835A0A05EADF@EX10MBX02.EU.NEC.COM> Message-ID: <4E8DA641.8070309@wiggy.net> On 10/06/2011 02:55 PM, David Aldrich wrote: >> On Ubuntu you can call pkg-config to figure out the right compiler and linker >> options for both Boost and Python. I would expect CentOS to support that as >> well. > > Hi Wichert > > Thanks for your suggestion. However, on my Ubuntu system: > > pkg-config --list-all > > lists neither python-dev not libboost-all-dev > > which are what my makefile needs to reference. pkg-config does not use Debian packages names. Try using "python" for the current standard python 2 version, or pythonX.Y for specific versions. Boost appears to be installed in a standard location, so doesn't need any special compiler or linker options. Wichert. From David.Aldrich at EMEA.NEC.COM Thu Oct 6 15:09:05 2011 From: David.Aldrich at EMEA.NEC.COM (David Aldrich) Date: Thu, 6 Oct 2011 13:09:05 +0000 Subject: [C++-sig] How to configure makefile for different build platforms In-Reply-To: <4E8DA641.8070309@wiggy.net> References: <41302A7145AC054FA7A96CFD03835A0A05E5FF@EX10MBX02.EU.NEC.COM> <4E8CB561.8000202@wiggy.net> <41302A7145AC054FA7A96CFD03835A0A05EADF@EX10MBX02.EU.NEC.COM> <4E8DA641.8070309@wiggy.net> Message-ID: <41302A7145AC054FA7A96CFD03835A0A05EB1D@EX10MBX02.EU.NEC.COM> > pkg-config does not use Debian packages names. Try using "python" for the > current standard python 2 version, or pythonX.Y for specific versions. Boost > appears to be installed in a standard location, so doesn't need any special > compiler or linker options. Thanks but --list-all only lists notify-python and dbus-python. 
David From wichert at wiggy.net Thu Oct 6 15:51:21 2011 From: wichert at wiggy.net (Wichert Akkerman) Date: Thu, 06 Oct 2011 15:51:21 +0200 Subject: [C++-sig] How to configure makefile for different build platforms In-Reply-To: <41302A7145AC054FA7A96CFD03835A0A05EB1D@EX10MBX02.EU.NEC.COM> References: <41302A7145AC054FA7A96CFD03835A0A05E5FF@EX10MBX02.EU.NEC.COM> <4E8CB561.8000202@wiggy.net> <41302A7145AC054FA7A96CFD03835A0A05EADF@EX10MBX02.EU.NEC.COM> <4E8DA641.8070309@wiggy.net> <41302A7145AC054FA7A96CFD03835A0A05EB1D@EX10MBX02.EU.NEC.COM> Message-ID: <4E8DB259.3090607@wiggy.net> On 10/06/2011 03:09 PM, David Aldrich wrote: >> pkg-config does not use Debian packages names. Try using "python" for the >> current standard python 2 version, or pythonX.Y for specific versions. Boost >> appears to be installed in a standard location, so doesn't need any special >> compiler or linker options. > Thanks but --list-all only lists notify-python and dbus-python. Odd. Try python2.6-config. From David.Aldrich at EMEA.NEC.COM Thu Oct 6 16:02:00 2011 From: David.Aldrich at EMEA.NEC.COM (David Aldrich) Date: Thu, 6 Oct 2011 14:02:00 +0000 Subject: [C++-sig] How to configure makefile for different build platforms In-Reply-To: <4E8DB259.3090607@wiggy.net> References: <41302A7145AC054FA7A96CFD03835A0A05E5FF@EX10MBX02.EU.NEC.COM> <4E8CB561.8000202@wiggy.net> <41302A7145AC054FA7A96CFD03835A0A05EADF@EX10MBX02.EU.NEC.COM> <4E8DA641.8070309@wiggy.net> <41302A7145AC054FA7A96CFD03835A0A05EB1D@EX10MBX02.EU.NEC.COM> <4E8DB259.3090607@wiggy.net> Message-ID: <41302A7145AC054FA7A96CFD03835A0A05EB62@EX10MBX02.EU.NEC.COM> > -----Original Message----- > From: cplusplus-sig-bounces+david.aldrich=emea.nec.com at python.org > [mailto:cplusplus-sig-bounces+david.aldrich=emea.nec.com at python.org] > On Behalf Of Wichert Akkerman > Sent: 06 October 2011 14:51 > To: cplusplus-sig at python.org > Subject: Re: [C++-sig] How to configure makefile for different build platforms > > On 10/06/2011 03:09 PM, David Aldrich wrote: > >> pkg-config does not use Debian packages names. Try using "python" for > >> the current standard python 2 version, or pythonX.Y for specific > >> versions. Boost appears to be installed in a standard location, so > >> doesn't need any special compiler or linker options. > > Thanks but --list-all only lists notify-python and dbus-python. > > Odd. Try python2.6-config. No, that's not there either. http://bugs.python.org/issue3585 suggests that pkg-config support in Python was not added until at least 2.6. Perhaps it did not make it in the version I have. Besides, I also wanted it to detect Python 2.4. So this method won't work unfortunately. 
From talljimbo at gmail.com Thu Oct 6 16:36:07 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Thu, 06 Oct 2011 10:36:07 -0400 Subject: [C++-sig] How to configure makefile for different build platforms In-Reply-To: <41302A7145AC054FA7A96CFD03835A0A05EB62@EX10MBX02.EU.NEC.COM> References: <41302A7145AC054FA7A96CFD03835A0A05E5FF@EX10MBX02.EU.NEC.COM> <4E8CB561.8000202@wiggy.net> <41302A7145AC054FA7A96CFD03835A0A05EADF@EX10MBX02.EU.NEC.COM> <4E8DA641.8070309@wiggy.net> <41302A7145AC054FA7A96CFD03835A0A05EB1D@EX10MBX02.EU.NEC.COM> <4E8DB259.3090607@wiggy.net> <41302A7145AC054FA7A96CFD03835A0A05EB62@EX10MBX02.EU.NEC.COM> Message-ID: <4E8DBCD7.8070500@gmail.com> On 10/06/2011 10:02 AM, David Aldrich wrote: > > >> -----Original Message----- >> From: cplusplus-sig-bounces+david.aldrich=emea.nec.com at python.org >> [mailto:cplusplus-sig-bounces+david.aldrich=emea.nec.com at python.org] >> On Behalf Of Wichert Akkerman >> Sent: 06 October 2011 14:51 >> To: cplusplus-sig at python.org >> Subject: Re: [C++-sig] How to configure makefile for different build platforms >> >> On 10/06/2011 03:09 PM, David Aldrich wrote: >>>> pkg-config does not use Debian packages names. Try using "python" for >>>> the current standard python 2 version, or pythonX.Y for specific >>>> versions. Boost appears to be installed in a standard location, so >>>> doesn't need any special compiler or linker options. >>> Thanks but --list-all only lists notify-python and dbus-python. >> >> Odd. Try python2.6-config. > > No, that's not there either. > > http://bugs.python.org/issue3585 suggests that pkg-config support in Python was not added until at least 2.6. Perhaps it did not make it in the version I have. Besides, I also wanted it to detect Python 2.4. So this method won't work unfortunately. You can also extract this information from various methods in the distutils package. Even if you aren't using distutils to control the build, you could ask Python itself to print out the configuration variables. For instance: python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_inc()" That should give you the Python include directory. There are other methods to get library names, compiler flags, and other things. Jim From David.Aldrich at EMEA.NEC.COM Thu Oct 6 17:08:44 2011 From: David.Aldrich at EMEA.NEC.COM (David Aldrich) Date: Thu, 6 Oct 2011 15:08:44 +0000 Subject: [C++-sig] How to configure makefile for different build platforms Message-ID: <41302A7145AC054FA7A96CFD03835A0A05EBDE@EX10MBX02.EU.NEC.COM> > You can also extract this information from various methods in the distutils > package. Even if you aren't using distutils to control the build, you could ask > Python itself to print out the configuration variables. For instance: > > python -c "import distutils.sysconfig; print > distutils.sysconfig.get_python_inc()" > > That should give you the Python include directory. There are other methods > to get library names, compiler flags, and other things. Hi Jim That is very useful. Thank you. How could I get the major version number (e.g. 
2.4) so that I can build the library type: EXTRA_LIBS_R+=-lpython2.4 BR David From wichert at wiggy.net Thu Oct 6 17:11:04 2011 From: wichert at wiggy.net (Wichert Akkerman) Date: Thu, 06 Oct 2011 17:11:04 +0200 Subject: [C++-sig] How to configure makefile for different build platforms In-Reply-To: <41302A7145AC054FA7A96CFD03835A0A05EBDE@EX10MBX02.EU.NEC.COM> References: <41302A7145AC054FA7A96CFD03835A0A05EBDE@EX10MBX02.EU.NEC.COM> Message-ID: <4E8DC508.2090805@wiggy.net> On 10/06/2011 05:08 PM, David Aldrich wrote: > >> You can also extract this information from various methods in the distutils >> package. Even if you aren't using distutils to control the build, you could ask >> Python itself to print out the configuration variables. For instance: >> >> python -c "import distutils.sysconfig; print >> distutils.sysconfig.get_python_inc()" >> >> That should give you the Python include directory. There are other methods >> to get library names, compiler flags, and other things. > Hi Jim > > That is very useful. Thank you. > > How could I get the major version number (e.g. 2.4) so that I can build the library type: > > EXTRA_LIBS_R+=-lpython2.4 sys.version_info From David.Aldrich at EMEA.NEC.COM Thu Oct 6 17:17:46 2011 From: David.Aldrich at EMEA.NEC.COM (David Aldrich) Date: Thu, 6 Oct 2011 15:17:46 +0000 Subject: [C++-sig] How to configure makefile for different build platforms In-Reply-To: <4E8DC508.2090805@wiggy.net> References: <41302A7145AC054FA7A96CFD03835A0A05EBDE@EX10MBX02.EU.NEC.COM> <4E8DC508.2090805@wiggy.net> Message-ID: <41302A7145AC054FA7A96CFD03835A0A05EC04@EX10MBX02.EU.NEC.COM> > sys.version_info How would I get that from the command line please? From wichert at wiggy.net Thu Oct 6 17:22:04 2011 From: wichert at wiggy.net (Wichert Akkerman) Date: Thu, 06 Oct 2011 17:22:04 +0200 Subject: [C++-sig] How to configure makefile for different build platforms In-Reply-To: <41302A7145AC054FA7A96CFD03835A0A05EC04@EX10MBX02.EU.NEC.COM> References: <41302A7145AC054FA7A96CFD03835A0A05EBDE@EX10MBX02.EU.NEC.COM> <4E8DC508.2090805@wiggy.net> <41302A7145AC054FA7A96CFD03835A0A05EC04@EX10MBX02.EU.NEC.COM> Message-ID: <4E8DC79C.3030202@wiggy.net> On 10/06/2011 05:17 PM, David Aldrich wrote: > > sys.version_info > > How would I get that from the command line please? python -c "import sys; print '%d.%d' % sys.version_info[:2]" From David.Aldrich at EMEA.NEC.COM Thu Oct 6 17:24:09 2011 From: David.Aldrich at EMEA.NEC.COM (David Aldrich) Date: Thu, 6 Oct 2011 15:24:09 +0000 Subject: [C++-sig] How to configure makefile for different build platforms In-Reply-To: <4E8DC79C.3030202@wiggy.net> References: <41302A7145AC054FA7A96CFD03835A0A05EBDE@EX10MBX02.EU.NEC.COM> <4E8DC508.2090805@wiggy.net> <41302A7145AC054FA7A96CFD03835A0A05EC04@EX10MBX02.EU.NEC.COM> <4E8DC79C.3030202@wiggy.net> Message-ID: <41302A7145AC054FA7A96CFD03835A0A05EC26@EX10MBX02.EU.NEC.COM> > > How would I get that from the command line please? > > python -c "import sys; print '%d.%d' % sys.version_info[:2]" Fantastic. Thank you very much. David From gr at componic.co.nz Thu Oct 6 22:36:54 2011 From: gr at componic.co.nz (Glenn Ramsey) Date: Fri, 07 Oct 2011 09:36:54 +1300 Subject: [C++-sig] Passing opaque pointers between modules Message-ID: <4E8E1166.20608@componic.co.nz> Is it possible to pass an opaque pointer from one bpl module to another? Following the documentation [1] I am able to successfully pass an opaque pointer from one function in a module to another function in that same module. 
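For reference, the single-module setup that does work is essentially the pattern from the return_opaque_pointer documentation. The sketch below follows that documented example rather than Glenn's actual code; the function and module names are invented, and the dummy pointer exists only to make the snippet self-contained:

#include <boost/python.hpp>
#include <boost/python/return_opaque_pointer.hpp>

struct opaque_;                                   // deliberately never defined
BOOST_PYTHON_OPAQUE_SPECIALIZED_TYPE_ID(opaque_)

opaque_* GetOpaque()
{
    static int token = 0;                         // any stable address will do for the sketch
    return reinterpret_cast<opaque_*>(&token);
}

void SetOpaque(opaque_* /*p*/) {}

BOOST_PYTHON_MODULE(Module1)
{
    namespace bp = boost::python;
    bp::def("GetOpaque", &GetOpaque,
            bp::return_value_policy<bp::return_opaque_pointer>());
    bp::def("SetOpaque", &SetOpaque);
}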
Attempting to pass the opaque pointer to another module results in an error like this: Python argument types in Module1.SetOpaque(Class1, class opaque_ *) did not match C++ signature: SetOpaque(class Class2 {lvalue}, class opaque_ *) If I try the same using void * and relying on the built-in converters I get a similar failure: No registered converter was able to extract a C++ pointer to type void * from this Python object of type void * Is there a way to do this? Glenn [1] From wichert at wiggy.net Thu Oct 6 23:30:20 2011 From: wichert at wiggy.net (Wichert Akkerman) Date: Thu, 06 Oct 2011 23:30:20 +0200 Subject: [C++-sig] Custom smart pointer with const Types In-Reply-To: <4E8C8D24.9010602@gmail.com> References: <4E8C5F46.1040209@gmail.com> <4E8C8D24.9010602@gmail.com> Message-ID: <4E8E1DEC.7040500@wiggy.net> On 2011-10-5 19:00, Jim Bosch wrote: > On 10/05/2011 11:31 AM, Holger Brandsmeier wrote: >> Jim, >> >> how do you handle smart_ptr in boost python? Do you simply >> cast away the constness? >> >> >> For my custom smart pointer I provide a class extending >> to_python_converter, rcp_to_python, true> >> now I decided to also implement >> to_python_converter, rcp_to_python_const, true> >> where I simply cast away the constness and use the above implementation. >> >> This seems to be working so far. Did you provide a smarter >> implementation for shared_ptr? >> > > Personally, I pretty much always use shared_ptr, and I've written a > rather large extension to support constness on the Python side, and > dealing with shared_ptr is a side effect of that. > > I'm welcome to pass it on if you're interested, but I don't think it > really addresses your problem. I would be definitely be interested in seeing how you're handling const objects. Wichert. -- Wichert Akkerman It is simple to make things. http://www.wiggy.net/ It is hard to make things simple. From talljimbo at gmail.com Thu Oct 6 23:56:57 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Thu, 06 Oct 2011 17:56:57 -0400 Subject: [C++-sig] Custom smart pointer with const Types In-Reply-To: <4E8E1DEC.7040500@wiggy.net> References: <4E8C5F46.1040209@gmail.com> <4E8C8D24.9010602@gmail.com> <4E8E1DEC.7040500@wiggy.net> Message-ID: <4E8E2429.30807@gmail.com> On 10/06/2011 05:30 PM, Wichert Akkerman wrote: > On 2011-10-5 19:00, Jim Bosch wrote: >> >> Personally, I pretty much always use shared_ptr, and I've written a >> rather large extension to support constness on the Python side, and >> dealing with shared_ptr is a side effect of that. >> >> I'm welcome to pass it on if you're interested, but I don't think it >> really addresses your problem. > > I would be definitely be interested in seeing how you're handling const > objects. > You can find a reasonable up-to-date version at https://svn.boost.org/svn/boost/sandbox/python_extensions There are a lot of other little extensions there too; you'll be most interested in the const_aware subdirectories. It's highly likely the build system is broken, but the code itself should still work. The best documentation is probably the const_aware example in the tests. Adding this to Boost.Python proper is a big part of my plans for a Boost.Python v3, but that's still a ways off, and unfortunately the code I'm pointing you at may have suffered from a little bitrot as I haven't looked at it in a while. 
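The template arguments in Holger Brandsmeier's converters quoted above were also eaten by the archive. The pattern he describes might look roughly like the sketch below, with my_ptr as a made-up stand-in for his smart pointer, double as an arbitrary element type, and the get_pytype argument dropped for brevity:

#include <boost/python.hpp>

template <class T>
struct my_ptr                        // stand-in for the real smart pointer type
{
    explicit my_ptr(T* p = 0) : p_(p) {}
    T* get() const { return p_; }
    T* p_;
};

template <class T>
struct rcp_to_python
    : boost::python::to_python_converter< my_ptr<T>, rcp_to_python<T> >
{
    static PyObject* convert(my_ptr<T> const& p)
    {
        if (!p.get())
            return boost::python::incref(Py_None);
        // Convert the pointee by value, just to keep the sketch concrete.
        return boost::python::incref(boost::python::object(*p.get()).ptr());
    }
};

template <class T>
struct rcp_to_python_const
    : boost::python::to_python_converter< my_ptr<T const>, rcp_to_python_const<T> >
{
    static PyObject* convert(my_ptr<T const> const& p)
    {
        // Cast the constness away and reuse the non-const conversion.
        return rcp_to_python<T>::convert(my_ptr<T>(const_cast<T*>(p.get())));
    }
};

BOOST_PYTHON_MODULE(ptr_example)
{
    rcp_to_python<double>();         // registers the my_ptr<double> converter
    rcp_to_python_const<double>();   // registers the my_ptr<double const> converter
}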
Jim From gr at componic.co.nz Fri Oct 7 01:56:37 2011 From: gr at componic.co.nz (Glenn Ramsey) Date: Fri, 07 Oct 2011 12:56:37 +1300 Subject: [C++-sig] Passing opaque pointers between modules In-Reply-To: <4E8E1166.20608@componic.co.nz> References: <4E8E1166.20608@componic.co.nz> Message-ID: <4E8E4035.5070103@componic.co.nz> On 07/10/11 09:36, Glenn Ramsey wrote: > Is it possible to pass an opaque pointer from one bpl module to another? > > Following the documentation [1] I am able to successfully pass an opaque pointer > from one function in a module to another function in that same module. > Attempting to pass the opaque pointer to another module results in an error like > this: > > Python argument types in > Module1.SetOpaque(Class1, class opaque_ *) > did not match C++ signature: > SetOpaque(class Class2 {lvalue}, class opaque_ *) > > If I try the same using void * and relying on the built-in converters I get a > similar failure: > > No registered converter was able to extract a C++ pointer to type void * from > this Python object of type void * > > Is there a way to do this? > > Glenn > > [1] > It works if BOOST_PYTHON_STATIC_LIB is not defined. Glenn From jonas.einarsson at gmail.com Tue Oct 11 16:39:58 2011 From: jonas.einarsson at gmail.com (Jonas Einarsson) Date: Tue, 11 Oct 2011 16:39:58 +0200 Subject: [C++-sig] Writing to numpy array: good practices? Message-ID: Dear list, First, sorry if this is a double-post, I got confused with the subscription. Anyhow, I seek an opinion on good practice. I'd like to write simple programs that 1) (In Python) allocates numpy array, 2) (In C/C++) fills said numpy array with data. To this end I use Boost.Python to compile an extension module. I use the (possibly obsolete?) boost/python/numeric.hpp to allow passing an ndarray to my C-functions. Then I use the numpy C API directly to extract a pointer to the underlying data. This seemingly works very well, and I can check for correct dimensions and data types, etcetera. As documentation is scarce, I ask you if this is an acceptable procedure? Any pitfalls nearby? Sample code: C++ void fill_array(numeric::array& y) { const int ndims = 2; // Get pointer to np array PyArrayObject* a = (PyArrayObject*)PyArray_FROM_O(y.ptr()); if (a == NULL) { throw std::exception("Could not get NP array."); } if (a->descr->elsize != sizeof(double)) { throw std::exception("Must be double ndarray"); } if (a->nd != ndims) { throw std::exception("Wrong dimension on array."); } int rows = *(a->dimensions); int cols = *(a->dimensions+1); double* data = (double*)a->data; for (int i = 0; i < rows; i++) { for (int j = 0; j < cols; j++) { *(data + i*cols + j) = really_cool_function(i,j); } } } BOOST_PYTHON_MODULE(Practical01) { import_array(); boost::python::numeric::array::set_module_and_type("numpy", "ndarray"); def("fill_array",&fill_array); } And in python this could be used such as: import Practical01 import numpy import matplotlib.pyplot as plt import matplotlib.cm as colormaps import time w=500 h=500 large_array = numpy.ones( (h,w) ); t1 = time.time() Practical01.fill_array(large_array) t2 = time.time() print 'Horrible calculation took %0.3f ms' % ((t2-t1)*1000.0) plt.imshow(large_array,cmap=colormaps.gray) plt.show() Simplicity is a major factor for me. I don't want a complete wrapper for ndarrays, I just want to compute and shuffle data to Python for further processing. Letting Python handle allocation and garbage collection also seems like a good idea. 
Sincerely, Jonas Einarsson -------------- next part -------------- An HTML attachment was scrubbed... URL: From talljimbo at gmail.com Tue Oct 11 17:01:39 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Tue, 11 Oct 2011 11:01:39 -0400 Subject: [C++-sig] Writing to numpy array: good practices? In-Reply-To: References: Message-ID: <4E945A53.7070201@gmail.com> On 10/11/2011 10:39 AM, Jonas Einarsson wrote: > Dear list, > > First, sorry if this is a double-post, I got confused with the > subscription. Anyhow, I seek an opinion on good practice. > > I'd like to write simple programs that > 1) (In Python) allocates numpy array, > 2) (In C/C++) fills said numpy array with data. > > To this end I use Boost.Python to compile an extension module. I use the > (possibly obsolete?) boost/python/numeric.hpp to allow passing an > ndarray to my C-functions. Then I use the numpy C API directly to > extract a pointer to the underlying data. > > This seemingly works very well, and I can check for correct dimensions > and data types, etcetera. > > As documentation is scarce, I ask you if this is an acceptable > procedure? Any pitfalls nearby? This is very much an acceptable procedure. It is a fairly low-level one, so you may want to be a little more careful in some respects (see below, and take a closer look at the Numpy C-API documentation). But the principal is fine. > > Sample code: C++ > > void fill_array(numeric::array& y) I'd recommend just passing boost::python::object, and using PyArray_Check() to ensure that it is indeed an array; I really don't know how good the old numeric interface is at matching the right types. But maybe I'm unnecessarily distrustful on that point. Alternately, you could use one of the Numpy C-API functions to get an array from just about anything. > { > const int ndims = 2; > > // Get pointer to np array > PyArrayObject* a = (PyArrayObject*)PyArray_FROM_O(y.ptr()); You might be leaking memory by throwing exceptions after this point; I'd suggest making "a" a boost::python::handle<>, which will automatically propagate a raised Python exception if you pass it a null pointer. You should probably use something other than PyArray_FROM_O (PyArray_FROM_ANY or PyArray_FROM_OTF, for instance), to ensure that the flags on the numpy array are what you're expecting. You can also have numpy do a check on the number of dimensions and the data type at the same time. > if (a == NULL) { > throw std::exception("Could not get NP array."); > } > if (a->descr->elsize != sizeof(double)) > { > throw std::exception("Must be double ndarray"); > } > if (a->nd != ndims) > { > throw std::exception("Wrong dimension on array."); > } > int rows = *(a->dimensions); > int cols = *(a->dimensions+1); > double* data = (double*)a->data; > > for (int i = 0; i < rows; i++) > { > for (int j = 0; j < cols; j++) > { > *(data + i*cols + j) = really_cool_function(i,j); This works for most ndarrays (those that are C_CONTIGUOUS), but it won't work for all of them. It will fail if you pass in an array you've called transpose() on, for instance. What you really want to do is multiply the indices by the strides. There are macros to do this in the Numpy C-API (PyArray_GETPTR). I'd recommend you use those. > } > } > } > > > > Simplicity is a major factor for me. I don't want a complete wrapper for > ndarrays, I just want to compute and shuffle data to Python for further > processing. Letting Python handle allocation and garbage collection also > seems like a good idea. 
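Putting those suggestions together, an in-place, stride-aware version of the fill function might look roughly like the sketch below. It is not code from the thread: it checks the incoming object rather than calling PyArray_FROM_O (the goal is to write into the caller's array, not into a possible copy), it uses the PyArray_GETPTR2 macro so transposed or otherwise non-contiguous arrays work, and it throws std::invalid_argument because std::exception has no string constructor outside MSVC:

#include <boost/python.hpp>
#include <numpy/arrayobject.h>
#include <stdexcept>

void fill_array(boost::python::object const& y)
{
    PyObject* obj = y.ptr();
    if (!PyArray_Check(obj))
        throw std::invalid_argument("expected a numpy.ndarray");

    PyArrayObject* a = reinterpret_cast<PyArrayObject*>(obj);
    if (PyArray_TYPE(a) != NPY_DOUBLE || PyArray_NDIM(a) != 2)
        throw std::invalid_argument("expected a 2-d float64 array");

    npy_intp const rows = PyArray_DIM(a, 0);
    npy_intp const cols = PyArray_DIM(a, 1);
    for (npy_intp i = 0; i < rows; ++i)
    {
        for (npy_intp j = 0; j < cols; ++j)
        {
            // PyArray_GETPTR2 applies the strides, so the memory layout doesn't matter.
            double* cell = static_cast<double*>(PyArray_GETPTR2(a, i, j));
            *cell = double(i * cols + j);   // whatever value belongs at (i, j)
        }
    }
}

BOOST_PYTHON_MODULE(Practical01)
{
    import_array();
    boost::python::def("fill_array", &fill_array);
}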
> This may be the best approach for you now in that case. There are also efforts underway to make the Numpy C-API available through a boost::python interface (https://svn.boost.org/svn/boost/sandbox/numpy), but it's not entirely stable yet. Jim From tuerke at cbs.mpg.de Thu Oct 13 17:34:00 2011 From: tuerke at cbs.mpg.de (=?ISO-8859-1?Q?Erik_T=FCrke?=) Date: Thu, 13 Oct 2011 17:34:00 +0200 Subject: [C++-sig] Inheritance problem Message-ID: <4E9704E8.8030608@cbs.mpg.de> Hi all, currently i am facing a problem regarding inheritance with boost::python Here is a simple code snippet: class Base { public: virtual void print() { std::cout << "hello" << std::endl; } }; class BaseWrapper : public Base, public wrapper { public: BaseWrapper( PyObject *p ) : self(p) {} BaseWrapper( PyObject *p, const Base &base) : Base(base), wrapper< Base >(), self(p) {} virtual void _print() { print(); } private: PyObject const *self; }; class Derived : public Base { public: Derived() {} Derived( PyObject *p ) : self(p){} private: PyObject const *self; }; BOOST_PYTHON_MODULE( my_module ) { class_( "Base", init<>() ) .def("printIt", &BaseWrapper::_print) ; class_ >( "Derived", init<>() ); } And in python i want to have the following reslut: >>import my_module >> derived = my_module.Derived() >> derived.printIt() Actually this should print "hello" but instead throws an error saying: derived.printIt() Boost.Python.ArgumentError: Python argument types in Base.printIt(Derived) did not match C++ signature: printIt(_Base {lvalue}) I tried a lot of modification but always getting this message. Does somebody of you know what i am missing? Thanks a lot in advance! Best regards! -- Erik T?rke Department of Neurophysics Max-Planck-Institute for Human Cognitive and Brain Sciences Stephanstrasse 1A 04103 Leipzig Germany Tel: +49 341 99 40-2440 Email: tuerke at cbs.mpg.de www.cbs.mpg.de From Holger.Joukl at LBBW.de Fri Oct 14 10:29:16 2011 From: Holger.Joukl at LBBW.de (Holger Joukl) Date: Fri, 14 Oct 2011 10:29:16 +0200 Subject: [C++-sig] Inheritance problem In-Reply-To: <4E9704E8.8030608@cbs.mpg.de> References: <4E9704E8.8030608@cbs.mpg.de> Message-ID: Hi, > currently i am facing a problem regarding inheritance with boost::python > > Here is a simple code snippet: > > > class Base > { > public: > virtual void print() { std::cout << "hello" << std::endl; } > > }; > > [...] > > And in python i want to have the following reslut: > > >>import my_module > >> derived = my_module.Derived() > >> derived.printIt() > > Actually this should print "hello" but instead throws an error saying: > > derived.printIt() > Boost.Python.ArgumentError: Python argument types in > Base.printIt(Derived) > did not match C++ signature: > printIt(_Base {lvalue}) Maybe I'm oversimplifying but if all you need is exposing some derived class then I don't see why you'd need all the BaseWrapper, self-pointer etc. stuff. S.th. 
as simple as that should work: // file cppcode.hpp #include class Base { public: virtual void print() { std::cout << "hello Base" << std::endl; } }; class Derived : public Base { public: virtual void print() { std::cout << "hello Derived" << std::endl; } }; // only to show callback-into-python-overrides necessities void callback(Base& base) { base.print(); } // file wrap.cpp #include #include "cppcode.hpp" namespace bp = boost::python; BOOST_PYTHON_MODULE(cppcode) { bp::class_("Base") .def("printIt", &Base::print) ; bp::class_ >("Derived"); bp::def("callback", &callback); }; When run: # file test.py import cppcode derived = cppcode.Derived() derived.printIt() cppcode.callback(derived) class PythonDerived(cppcode.Base): def printIt(self): print "hello PythonDerived" pyderived = PythonDerived() pyderived.printIt() cppcode.callback(pyderived) $ python2.7 -i ./test.py hello Derived hello Derived hello PythonDerived hello Base >>> Note that you'd need a Base wrapper class to actually make callbacks to Python method-overrides work, just as documented in http://www.boost.org/doc/libs/1_47_0/libs/python/doc/tutorial/doc/html/python/exposing.html#python.class_virtual_functions Holger Landesbank Baden-Wuerttemberg Anstalt des oeffentlichen Rechts Hauptsitze: Stuttgart, Karlsruhe, Mannheim, Mainz HRA 12704 Amtsgericht Stuttgart From tuerke at cbs.mpg.de Fri Oct 14 11:22:48 2011 From: tuerke at cbs.mpg.de (=?ISO-8859-1?Q?Erik_T=FCrke?=) Date: Fri, 14 Oct 2011 11:22:48 +0200 Subject: [C++-sig] Inheritance problem In-Reply-To: References: <4E9704E8.8030608@cbs.mpg.de> Message-ID: <4E97FF68.5000704@cbs.mpg.de> On 10/14/11 10:29, Holger Joukl wrote: > Hi, > >> currently i am facing a problem regarding inheritance with boost::python >> >> Here is a simple code snippet: >> >> >> class Base >> { >> public: >> virtual void print() { std::cout<< "hello"<< std::endl; } >> >> }; >> >> [...] >> >> And in python i want to have the following reslut: >> >> >>import my_module >> >> derived = my_module.Derived() >> >> derived.printIt() >> >> Actually this should print "hello" but instead throws an error saying: >> >> derived.printIt() >> Boost.Python.ArgumentError: Python argument types in >> Base.printIt(Derived) >> did not match C++ signature: >> printIt(_Base {lvalue}) > Maybe I'm oversimplifying but if all you need is exposing some derived > class then > I don't see why you'd need all the BaseWrapper, self-pointer etc. stuff. > > S.th. 
as simple as that should work: > > // file cppcode.hpp > > #include > > class Base > { > public: > virtual void print() { std::cout<< "hello Base"<< std::endl; } > > }; > > > class Derived : public Base > { > public: > virtual void print() { std::cout<< "hello Derived"<< std::endl; } > > > }; > > // only to show callback-into-python-overrides necessities > void callback(Base& base) { > base.print(); > } > > // file wrap.cpp > > #include > #include "cppcode.hpp" > > namespace bp = boost::python; > > > > BOOST_PYTHON_MODULE(cppcode) > { > bp::class_("Base") > .def("printIt",&Base::print) > ; > bp::class_ >("Derived"); > bp::def("callback",&callback); > }; > > > > When run: > > # file test.py > import cppcode > > derived = cppcode.Derived() > derived.printIt() > cppcode.callback(derived) > > class PythonDerived(cppcode.Base): > def printIt(self): > print "hello PythonDerived" > > pyderived = PythonDerived() > pyderived.printIt() > cppcode.callback(pyderived) > > $ python2.7 -i ./test.py > hello Derived > hello Derived > hello PythonDerived > hello Base > Note that you'd need a Base wrapper class to actually make callbacks to > Python method-overrides work, > just as documented in > http://www.boost.org/doc/libs/1_47_0/libs/python/doc/tutorial/doc/html/python/exposing.html#python.class_virtual_functions > > Holger > > > Landesbank Baden-Wuerttemberg > Anstalt des oeffentlichen Rechts > Hauptsitze: Stuttgart, Karlsruhe, Mannheim, Mainz > HRA 12704 > Amtsgericht Stuttgart > > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig Hi Holger, thanks for your response. Ok lets say my BaseClass has a member function called init( vector4 ): class Base { public: void init( vector4 &vec ) { //doWhatEver } //a lot of other functions }; Unfortunetaly i can not expose this init function directly to python so i am writing a BaseWrapper class BaseWrapper : public Base, public bp::wrapper { public: void _init( int first, int second, int third, int fourth) { init( makeVec(first, second, third, fourth) ); } // a lot of other wrapper functions }; And i have a derived class: class Derived : Base { public: //some more functions }; So when i am exposing Base and Derived like: BOOST_PYTHON_MODULE( my_module ) { class_( "Base", init<>() ) .def("init", &BaseWrapper::_init) ; class_ >( "Derived", init<>() ); } I want to have all functions for objects of Derived that are available in Base. The thing is, that e.g. ipython recognizes the functions. So in ipython, when i have an object of type Derived with tab completion i see the functions from Base. But when i try to call them i always get this "signature" error. So i do not know how to use those callback approach you suggested. Especially if you are using function overloading. And additionally, this would mean, that i have to write such a callback function for each function in my base class as a global function. I think there is a much simpler way. One thing i have to mention is, that it is perfectly working if i omit the BaseWrapper class. 
So if the functions of Base can be exposed without using a wrapper class: class Base { public: void init( int first, int second, int third, int fourth ) { //doWhatEver } //a lot of other functions }; class Derived : public Base { }; BOOST_PYTHON_MODULE( my_module ) { class_( "Base", init<>() ) .def("init", &Base::init) ; class_ >( "Derived", init<>() ); } In python: >>derived = my_module.Derived() >>derived.init(3,1,2,2) ...works. But unfortunately not with the BaseWrapper Class :-( Sorry for the long post... Best regards! -- Erik T?rke Department of Neurophysics Max-Planck-Institute for Human Cognitive and Brain Sciences Stephanstrasse 1A 04103 Leipzig Germany Tel: +49 341 99 40-2440 Email: tuerke at cbs.mpg.de www.cbs.mpg.de From Holger.Joukl at LBBW.de Fri Oct 14 16:48:02 2011 From: Holger.Joukl at LBBW.de (Holger Joukl) Date: Fri, 14 Oct 2011 16:48:02 +0200 Subject: [C++-sig] Inheritance problem In-Reply-To: <4E97FF68.5000704@cbs.mpg.de> References: <4E9704E8.8030608@cbs.mpg.de> <4E97FF68.5000704@cbs.mpg.de> Message-ID: Hi, > Ok lets say my BaseClass has a member function called init( vector4 ): > > class Base > { > public: > void init( vector4 &vec ) { //doWhatEver } > //a lot of other functions > }; > > Unfortunetaly i can not expose this init function directly to python so > i am writing a BaseWrapper Why's that? Can't you expose vector4 to Python? > So when i am exposing Base and Derived like: > > > BOOST_PYTHON_MODULE( my_module ) > { > > class_( "Base", init<>() ) > .def("init", &BaseWrapper::_init) > ; > class_ >( "Derived", init<>() ); > } > > I want to have all functions for objects of Derived that are available > in Base. > The thing is, that e.g. ipython recognizes the functions. > So in ipython, when i have an object of type Derived with tab completion > i see the functions from Base. > But when i try to call them i always get this "signature" error. I think the problem is that the Derived class doesn't actually have any inheritance relationship with BaseWrapper, i.e. 
Base / \ / \ / \ BaseWrapper Derived So in an example like this // file cppcode.hpp #include class Base { protected: int m_area; public: Base() : m_area(0) {} void init(int area) { m_area = area; } virtual void print() { std::cout << "hello Base " << m_area << std::endl; } }; class Derived : public Base { public: virtual void print() { std::cout << "hello Derived " << m_area << std::endl; } }; // only to show callback-into-python-overrides necessities void callback(Base& base) { base.print(); } // file wrap.cpp #include #include "cppcode.hpp" namespace bp = boost::python; class BaseWrapper : public Base, public bp::wrapper { public: void _init(int x, int y) { init(x * y); } }; BOOST_PYTHON_MODULE(cppcode) { bp::class_("Base") .def("init", &BaseWrapper::_init) .def("printIt", &Base::print) ; bp::class_ >("Derived"); bp::def("callback", &callback); }; #!/apps/local/gcc/4.5.1/bin/python2.7 # file test.py import cppcode print "---> base" base = cppcode.Base() base.printIt() base.init(3, 4) base.printIt() print "---> derived" derived = cppcode.Derived() derived.printIt() derived.init(3, 4) derived.printIt() cppcode.callback(derived) class PythonDerived(cppcode.Base): def printIt(self): print "hello PythonDerived" print "---> python derived" pyderived = PythonDerived() pyderived.printIt() cppcode.callback(pyderived) I run into this error when trying to call .init() on the Derived object: $ python2.7 ./test.py ---> base hello Base 0 hello Base 12 ---> derived hello Derived 0 Traceback (most recent call last): File "./test.py", line 23, in derived.init(3, 4) Boost.Python.ArgumentError: Python argument types in Base.init(Derived, int, int) did not match C++ signature: init(BaseWrapper {lvalue}, int, int) Which makes sense since Derived does not inherit from BaseWrapper. > So i do not know how to use those callback approach you suggested. > Especially if you are using function overloading. And additionally, this > would mean, that i have to write such a callback function for each > function in my base class as a global function. Never mind the callback, I might have just confused you. The callback is only for showing that you'd need a Wrapper class if you want to inherit in Python and be able to call back from C++ into Python and actually call methods overridden in Python. > One thing i have to mention is, that it is perfectly working if i omit > the BaseWrapper class. So if the functions of Base can be exposed > without using a wrapper class: > [...] > ...works. But unfortunately not with the BaseWrapper Class :-( Because now you don't have the problem that Derived has no inheritance relationship with BaseWrapper. 
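The angle brackets in the exposure code above (and throughout this thread) were stripped by the archive. A reconstruction of the wrap.cpp that produces the error shown: the template parameter lists are guesses, but for base.init(3, 4) to succeed in the transcript the second class_ argument was presumably the wrapper type, and the truncated "class_ >" for Derived almost certainly read class_<Derived, bp::bases<Base> >:

// file wrap.cpp (reconstructed; the stripped template arguments are guesses)
#include <boost/python.hpp>
#include "cppcode.hpp"

namespace bp = boost::python;

class BaseWrapper : public Base, public bp::wrapper<Base>
{
public:
    void _init(int x, int y) {
        init(x * y);
    }
};

BOOST_PYTHON_MODULE(cppcode)
{
    bp::class_<Base, BaseWrapper>("Base")
        .def("init", &BaseWrapper::_init)
        .def("printIt", &Base::print)
        ;
    bp::class_<Derived, bp::bases<Base> >("Derived");
    bp::def("callback", &callback);
}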
Maybe you can just use a free function: // file wrap.cpp #include #include "cppcode.hpp" namespace bp = boost::python; void _init(Base& base, int x, int y) { base.init(x * y); } BOOST_PYTHON_MODULE(cppcode) { bp::class_("Base") .def("init", &_init) .def("printIt", &Base::print) ; bp::class_ >("Derived"); bp::def("callback", &callback); }; # file test.py import os import sys import cppcode print "---> base" base = cppcode.Base() base.printIt() base.init(3, 4) base.printIt() print "---> derived" derived = cppcode.Derived() derived.printIt() derived.init(3, 4) derived.printIt() cppcode.callback(derived) class PythonDerived(cppcode.Base): def printIt(self): print "hello PythonDerived" print "---> python derived" pyderived = PythonDerived() pyderived.printIt() # to make this invoke PythonDerived.printIt() you need a wrapper class cppcode.callback(pyderived) ===> $ python2.7 ./test.py ---> base hello Base 0 hello Base 12 ---> derived hello Derived 0 hello Derived 12 hello Derived 12 ---> python derived hello PythonDerived hello Base 0 >>> Holger Landesbank Baden-Wuerttemberg Anstalt des oeffentlichen Rechts Hauptsitze: Stuttgart, Karlsruhe, Mannheim, Mainz HRA 12704 Amtsgericht Stuttgart From tuerke at cbs.mpg.de Mon Oct 17 16:36:17 2011 From: tuerke at cbs.mpg.de (=?ISO-8859-1?Q?Erik_T=FCrke?=) Date: Mon, 17 Oct 2011 16:36:17 +0200 Subject: [C++-sig] Inheritance problem In-Reply-To: References: <4E9704E8.8030608@cbs.mpg.de> <4E97FF68.5000704@cbs.mpg.de> Message-ID: <4E9C3D61.7000606@cbs.mpg.de> On 10/14/11 16:48, Holger Joukl wrote: > Hi, > > > Ok lets say my BaseClass has a member function called init( vector4 ): >> class Base >> { >> public: >> void init( vector4&vec ) { //doWhatEver } >> //a lot of other functions >> }; >> >> Unfortunetaly i can not expose this init function directly to python so >> i am writing a BaseWrapper > Why's that? Can't you expose vector4 to Python? > > >> So when i am exposing Base and Derived like: >> >> >> BOOST_PYTHON_MODULE( my_module ) >> { >> >> class_( "Base", init<>() ) >> .def("init",&BaseWrapper::_init) >> ; >> class_ >( "Derived", init<>() ); >> } >> >> I want to have all functions for objects of Derived that are available >> in Base. >> The thing is, that e.g. ipython recognizes the functions. >> So in ipython, when i have an object of type Derived with tab completion >> i see the functions from Base. >> But when i try to call them i always get this "signature" error. > I think the problem is that the Derived class doesn't actually have any > inheritance > relationship with BaseWrapper, i.e. 
> Base > / \ > / \ > / \ > BaseWrapper Derived > > So in an example like this > > // file cppcode.hpp > > #include > > class Base > { > protected: > int m_area; > public: > Base() : m_area(0) {} > void init(int area) { > m_area = area; > } > virtual void print() { std::cout<< "hello Base "<< m_area<< > std::endl; } > > }; > > > class Derived : public Base > { > public: > virtual void print() { std::cout<< "hello Derived "<< m_area<< > std::endl; } > > > }; > > // only to show callback-into-python-overrides necessities > void callback(Base& base) { > base.print(); > } > > > // file wrap.cpp > > #include > #include "cppcode.hpp" > > namespace bp = boost::python; > > > class BaseWrapper : public Base, public bp::wrapper > { > public: > void _init(int x, int y) { > init(x * y); > } > }; > > > BOOST_PYTHON_MODULE(cppcode) > { > bp::class_("Base") > .def("init",&BaseWrapper::_init) > .def("printIt",&Base::print) > ; > bp::class_ >("Derived"); > bp::def("callback",&callback); > }; > > > #!/apps/local/gcc/4.5.1/bin/python2.7 > > # file test.py > > import cppcode > > print "---> base" > base = cppcode.Base() > base.printIt() > base.init(3, 4) > base.printIt() > > > print "---> derived" > derived = cppcode.Derived() > derived.printIt() > derived.init(3, 4) > derived.printIt() > cppcode.callback(derived) > > class PythonDerived(cppcode.Base): > def printIt(self): > print "hello PythonDerived" > > print "---> python derived" > pyderived = PythonDerived() > pyderived.printIt() > cppcode.callback(pyderived) > > I run into this error when trying to call .init() on the Derived object: > > $ python2.7 ./test.py > ---> base > hello Base 0 > hello Base 12 > ---> derived > hello Derived 0 > Traceback (most recent call last): > File "./test.py", line 23, in > derived.init(3, 4) > Boost.Python.ArgumentError: Python argument types in > Base.init(Derived, int, int) > did not match C++ signature: > init(BaseWrapper {lvalue}, int, int) > > > Which makes sense since Derived does not inherit from BaseWrapper. > >> So i do not know how to use those callback approach you suggested. >> Especially if you are using function overloading. And additionally, this >> would mean, that i have to write such a callback function for each >> function in my base class as a global function. > Never mind the callback, I might have just confused you. The callback is > only for showing > that you'd need a Wrapper class if you want to inherit in Python and be > able to call back > from C++ into Python and actually call methods overridden in Python. > >> One thing i have to mention is, that it is perfectly working if i omit >> the BaseWrapper class. So if the functions of Base can be exposed >> without using a wrapper class: >> [...] >> ...works. But unfortunately not with the BaseWrapper Class :-( > Because now you don't have the problem that Derived has no inheritance > relationship with BaseWrapper. 
> > Maybe you can just use a free function: > > // file wrap.cpp > > #include > #include "cppcode.hpp" > > namespace bp = boost::python; > > > void _init(Base& base, int x, int y) { > base.init(x * y); > } > > > BOOST_PYTHON_MODULE(cppcode) > { > bp::class_("Base") > .def("init",&_init) > .def("printIt",&Base::print) > ; > bp::class_ >("Derived"); > bp::def("callback",&callback); > }; > > > > # file test.py > > > import os > import sys > > > import cppcode > > print "---> base" > base = cppcode.Base() > base.printIt() > base.init(3, 4) > base.printIt() > > > print "---> derived" > derived = cppcode.Derived() > derived.printIt() > derived.init(3, 4) > derived.printIt() > cppcode.callback(derived) > > class PythonDerived(cppcode.Base): > def printIt(self): > print "hello PythonDerived" > > print "---> python derived" > pyderived = PythonDerived() > pyderived.printIt() > # to make this invoke PythonDerived.printIt() you need a wrapper class > cppcode.callback(pyderived) > > ===> > > $ python2.7 ./test.py > ---> base > hello Base 0 > hello Base 12 > ---> derived > hello Derived 0 > hello Derived 12 > hello Derived 12 > ---> python derived > hello PythonDerived > hello Base 0 > Holger > > Landesbank Baden-Wuerttemberg > Anstalt des oeffentlichen Rechts > Hauptsitze: Stuttgart, Karlsruhe, Mannheim, Mainz > HRA 12704 > Amtsgericht Stuttgart > > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig Hi Holger, well, ok, i understand now what the problem is. Many thanks for your help! Best Regards! -- Erik T?rke Department of Neurophysics Max-Planck-Institute for Human Cognitive and Brain Sciences Stephanstrasse 1A 04103 Leipzig Germany Tel: +49 341 99 40-2440 Email: tuerke at cbs.mpg.de www.cbs.mpg.de From rramaiyer at audience.com Fri Oct 21 01:57:00 2011 From: rramaiyer at audience.com (Ramesh Ramaiyer) Date: Thu, 20 Oct 2011 16:57:00 -0700 Subject: [C++-sig] Cannot build Boost.Python Message-ID: <53BA999806A7DC43BF443B1C7B4F9BA5017BB816E0@EXM.audience.local> Hi David, I am running to the same issue. I am new to Python as well. Can you let me know how you solved your issue ? Thanks Ramesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From David.Aldrich at EMEA.NEC.COM Fri Oct 21 11:52:39 2011 From: David.Aldrich at EMEA.NEC.COM (David Aldrich) Date: Fri, 21 Oct 2011 09:52:39 +0000 Subject: [C++-sig] Cannot build Boost.Python In-Reply-To: <53BA999806A7DC43BF443B1C7B4F9BA5017BB816E0@EXM.audience.local> References: <53BA999806A7DC43BF443B1C7B4F9BA5017BB816E0@EXM.audience.local> Message-ID: <41302A7145AC054FA7A96CFD03835A0A0784C9@EX10MBX02.EU.NEC.COM> > I am running to the same issue. I am new to Python as well. Can you let me know how you solved your issue ? Hi Ramesh I'm sorry, that was a long time ago. I suggest you start a new thread on the mailing list and explain your situation there with more detail. Best regards David From brandsmeier at gmx.de Tue Oct 25 12:50:04 2011 From: brandsmeier at gmx.de (Holger Brandsmeier) Date: Tue, 25 Oct 2011 12:50:04 +0200 Subject: [C++-sig] two pitfals when extending c++ classes from python In-Reply-To: References: Message-ID: Dear list, The boost_python Tutorial describes nicely and very brief how to subclass a boost::python exported class in python. I stumbeled over two pitfalls, which I did not expect by reading the tutorial. 
1) When you put a wrapper class around a pure virtual class, you have to remove the "no_init" argument, e.g. for the example in the tutorial class_("Base", no_init) is wrong, it should be class_("Base") This is also what is shown in the tutorial, but it is not mentioned that adding 'no_init' makes your wrapper unusable. If you don't override __init__ in your python class or if you call the super constructor in your overridden __init__, then you actually get an error when "no_init" is present: RuntimeError: This class cannot be instantiated from Python So, yes, you might argue that I should have been warned by this error. But unfortunately for me that error was misleading. Given this error I assumed that you didn't want me to call the super constructor in my __init__ if my super class is pure virtual (why not? pure virtual classes simply shouldn't have constructors). Then I spent a lot of time debugging why I couldn't pass my class back to C++ any more until I found that the tutorial didn't have a "no_init" argument. 2) The method that I export as overridable in python takes an argument `node` of type `const parfem::Node&`. The class Node is a virtual class. If I simply pass `node` to python then I obtain the misleading message TypeError: No to_python (by-value) converter found for C++ type: parfem::Node This message actually doesn't mean that there is no such converter, there is certainly one present. This probably has to do with the fact that Node is exported as "noncopyable" and "no_init". Admittedly the problem here is how to handle the lifetime of the object and after thinking about it I fully understand why boost::python has a problem here. My solution was then to pass a smart pointer instance. I just wanted to point out that the error message was very misleading here for me. -Holger -- Holger Brandsmeier, SAM, ETH Zürich http://www.sam.math.ethz.ch/people/bholger From rwgrosse-kunstleve at lbl.gov Tue Oct 25 13:07:48 2011 From: rwgrosse-kunstleve at lbl.gov (Ralf Grosse-Kunstleve) Date: Tue, 25 Oct 2011 04:07:48 -0700 Subject: [C++-sig] two pitfals when extending c++ classes from python In-Reply-To: References: Message-ID: Hi Holger, chances that Dave, Joel, or I get to work on the tutorial are very small. But if you send me updated files I'll check them in. Maybe simpler would be to add to the FAQ. Ralf On Tue, Oct 25, 2011 at 3:50 AM, Holger Brandsmeier wrote: > Dear list, > > The boost_python Tutorial describes nicely and very brief how to > subclass a boost::python exported class in python. > > I stumbeled over two pitfalls, which I did not expect by reading the > tutorial. > > 1) When you put a wrapper class around a pure virtual class, you have > to remove the "no_init" argument, e.g. for the example in the tutorial > class_("Base", no_init) > is wrong, it should be > class_("Base") > This is also what is shown in the tutorial, but it is not mentioned, > that adding 'no_init' makes your wrapper unusable. > > If you don't override __init__ in your python class or if you call the > super constructor in your overriden __init__, then you actually get an > error when "no_init" is present: > RuntimeError: This class cannot be instantiated from Python > So, yes, you might argue that I should have been warned by this error. > But unfortunately for me that error was misleading. Given this error I > assumed that you didn't want me to call the super constructor in my > __init__ if my super class is pure virtual (why not? pure virtual > classes simply shouldn't have constructors).
Then I spend a lot of > time debugging, why I couldn't pass my class back to C++ any more > until I found that the tutorial didn't have a "no_init" argument. > > 2) The method that I export as overridable in python takes on argument > `node` of type `const parfem::Node&`. The class Node is a virtual > class. If I simply pass `node` to python then I obtain the misleading > message > TypeError: No to_python (by-value) converter found for C++ type: > parfem::Node > This message actually doesn't mean that there is no such converter, > there is certainly one present. This method probably has to do with > the fact that Node is exported as "noncopyable" and "no_init". > Admittedly the problem here is how to handle the lifetime of the > object and after thinking about it I fully why boost::python has a > problem here. My solution was then to pass a smart pointer instance. > I just wanted to point out that the error message was very misleading > here for me. > > -Holger > > > -- > Holger Brandsmeier, SAM, ETH Z?rich > http://www.sam.math.ethz.ch/people/bholger > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From liguokong at gmail.com Wed Oct 26 15:54:46 2011 From: liguokong at gmail.com (Liguo Kong) Date: Wed, 26 Oct 2011 09:54:46 -0400 Subject: [C++-sig] failed to install boost.python on Lion Message-ID: Hello, I am trying to install boost.python on Mac Lion OS. I used python 2.6 installed by default and built boost.build with *./bootstrap.sh --set-toolset=gcc *and then *./b2 install --prefix=~/workspace/boost_build*. The gcc version is "gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)". I followed the instructions http://www.boost.org/doc/libs/1_41_0/more/getting_started/unix-variants.html#expected-build-output to install boost.python with bjam --build-dir=/Users/lkong/workspace/boost_python toolset=gcc --with-python stage It fails with the message: Component configuration: - chrono : not building - date_time : not building - exception : not building - filesystem : not building - graph : not building - graph_parallel : not building - iostreams : not building - math : not building - mpi : not building - program_options : not building - python : building - random : not building - regex : not building - serialization : not building - signals : not building - system : not building - test : not building - thread : not building - wave : not building ...patience... ...patience... ...found 1540 targets... ...updating 77 targets... 
common.mkdir stage common.mkdir stage/lib common.mkdir /Users/lkong/workspace/boost_python/boost common.mkdir /Users/lkong/workspace/boost_python/boost/bin.v2 common.mkdir /Users/lkong/workspace/boost_python/boost/bin.v2/libs common.mkdir /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python common.mkdir /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build common.mkdir /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1 common.mkdir /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release common.mkdir /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/numeric.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/list.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/long.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/dict.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/tuple.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/str.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/slice.o common.mkdir /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/converter gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/converter/from_python.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/converter/registry.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/converter/type_id.o common.mkdir /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/enum.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/class.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/function.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/inheritance.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/life_support.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/pickle_support.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/errors.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/module.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/converter/builtin_converters.o gcc.compile.c++ 
/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/converter/arg_to_python_base.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/iterator.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/stl_iterator.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object_protocol.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object_operators.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/wrapper.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/import.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/exec.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/function_doc_signature.o gcc.link.dll /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/libboost_python.dylib *ld: unknown option: -R* collect2: ld returned 1 exit status "g++" -L"/System/Library/Frameworks/Python.framework/Versions/2.6/lib" -L"/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/config" -Wl,-R -Wl,"/System/Library/Frameworks/Python.framework/Versions/2.6/lib" -Wl,-R -Wl,"/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/config" -o "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/libboost_python.dylib" -Wl,-h -Wl,libboost_python.dylib -shared -Wl,--start-group "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/numeric.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/list.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/long.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/dict.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/tuple.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/str.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/slice.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/converter/from_python.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/converter/registry.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/converter/type_id.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/enum.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/class.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/function.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/inheritance.o" 
"/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/life_support.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/pickle_support.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/errors.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/module.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/converter/builtin_converters.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/converter/arg_to_python_base.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/iterator.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/stl_iterator.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object_protocol.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object_operators.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/wrapper.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/import.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/exec.o" "/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/object/function_doc_signature.o" -Wl,-Bstatic -Wl,-Bdynamic -lpython2.6 -Wl,--end-group ...failed gcc.link.dll /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/threading-multi/libboost_python.dylib... ...skipped libboost_python.dylib for lack of
libboost_python.dylib... common.mkdir /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static common.mkdir /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/numeric.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/list.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/long.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/dict.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/tuple.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/str.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/slice.o common.mkdir /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/converter gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/converter/from_python.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/converter/registry.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/converter/type_id.o common.mkdir /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/object gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/object/enum.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/object/class.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/object/function.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/object/inheritance.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/object/life_support.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/object/pickle_support.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/errors.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/module.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/converter/builtin_converters.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/converter/arg_to_python_base.o gcc.compile.c++ 
/Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/object/iterator.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/object/stl_iterator.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/object_protocol.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/object_operators.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/wrapper.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/import.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/exec.o gcc.compile.c++ /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/object/function_doc_signature.o gcc.archive /Users/lkong/workspace/boost_python/boost/bin.v2/libs/python/build/gcc-4.2.1/release/link-static/threading-multi/libboost_python.a common.copy stage/lib/libboost_python.a ...failed updating 1 target... ...skipped 1 target... ...updated 75 targets...

The problem "*ld: unknown option: -R*" also appeared when I tried the quickstart example as in http://www.boost.org/doc/libs/1_47_0/libs/python/doc/building.html

Any ideas how to solve the problem? Thanks a lot.

Liguo
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From nat at lindenlab.com Wed Oct 26 21:43:56 2011 From: nat at lindenlab.com (Nat Goodspeed) Date: Wed, 26 Oct 2011 15:43:56 -0400 Subject: [C++-sig] failed to install boost.python on Lion In-Reply-To: References: Message-ID: <0E40F957-D900-4C20-9F34-E30E7AC42097@lindenlab.com> On Oct 26, 2011, at 9:54 AM, Liguo Kong wrote: > I am trying to install boost.python on Mac Lion OS. I used python 2.6 installed by default and > built boost.build with ./bootstrap.sh --set-toolset=gcc and then ./b2 install --prefix=~/workspace/boost_build. Uh, maybe toolset=darwin? -------------- next part -------------- An HTML attachment was scrubbed... URL:
From jvansanten at gmail.com Thu Oct 27 04:38:08 2011 From: jvansanten at gmail.com (Jakob van Santen) Date: Wed, 26 Oct 2011 21:38:08 -0500 Subject: [C++-sig] Memory corruption in exception translation on OS X 10.7 Message-ID: <46C096EC-A446-4CA4-9B99-C14F46DA6D76@gmail.com> Hi all, I've run across what I think is a strange memory corruption bug affecting C++/Python exception translation on OS X 10.7. A short program that reproduces the bug follows:

#include <boost/python.hpp>
#include <stdexcept>

static void
throwme() { throw std::invalid_argument("Bork!"); }

struct Bork {
    void throwy() { throwme(); }
};

#ifdef BORKED
BOOST_PYTHON_MODULE(borked)
{
    boost::python::class_<Bork>("Bork")
        .def("throwy", &Bork::throwy)
        ;
#else
BOOST_PYTHON_MODULE(good)
{
#endif
    boost::python::def("throwy", &throwme);
}

When compiled without the call to class_::def(), the exception is caught in handle_exception_impl() and presented to Python as ValueError, as expected:

[jakob at i3-dhcp-172-16-55-176:tmp/pybork]$ python -c "import good; good.throwy()"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ValueError: Bork!
When compiled with -DBORKED, however, I get a bad free() inside the std::logic_error destructor:

[jakob at i3-dhcp-172-16-55-176:tmp/pybork]$ python -c "import borked; borked.throwy()"
python(77633) malloc: *** error for object 0x7fec22c735a4: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug

I first encountered this bug in a private fork of 1.38, but I can reproduce it with a freshly built copy of 1.47. It occurs when building with both llvm-gcc and gcc-gcc, regardless of the optimization level. I built the two test libraries like so:

g++ bork.cxx -g -bundle -flat_namespace -undefined dynamic_lookup -Iboost/include -I/System/Library/Frameworks/Python.framework/Headers/ -Lboost/lib -lboost_python -DBORKED -o borked.so
g++ bork.cxx -g -bundle -flat_namespace -undefined dynamic_lookup -Iboost/include -I/System/Library/Frameworks/Python.framework/Headers/ -Lboost/lib -lboost_python -o good.so

Can anyone reproduce this?

Cheers,
Jakob

From s_sourceforge at nedprod.com Thu Oct 27 13:32:07 2011 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Thu, 27 Oct 2011 12:32:07 +0100 Subject: [C++-sig] Memory corruption in exception translation on OS X 10.7 In-Reply-To: <46C096EC-A446-4CA4-9B99-C14F46DA6D76@gmail.com> References: <46C096EC-A446-4CA4-9B99-C14F46DA6D76@gmail.com> Message-ID: <4EA94137.15266.9FA4446D@s_sourceforge.nedprod.com> OS X 10.6 and later have a very aggressive memory allocator in them - it is superbly quick, but it's still quite young code. If your BPL example runs fine under valgrind on Linux - and I would suspect that it does - it'll be a bug in either OS X or Apple's port of GCC to OS X (which would hardly be the first time). If it does trip on valgrind on Linux you'll get a much more enthusiastic response here. Niall On 26 Oct 2011 at 21:38, Jakob van Santen wrote: > Hi all, > > I've run across what I think is a strange memory corruption bug affecting C++/Python exception translation on OS X 10.7. A short program that reproduces the bug follows: > > #include <boost/python.hpp> > #include <stdexcept> > > static void > throwme() { throw std::invalid_argument("Bork!"); } > > struct Bork { > void throwy() { throwme(); } > }; > > #ifdef BORKED > BOOST_PYTHON_MODULE(borked) > { > boost::python::class_<Bork>("Bork") > .def("throwy", &Bork::throwy) > ; > #else > BOOST_PYTHON_MODULE(good) > { > #endif > boost::python::def("throwy", &throwme); > } > > When compiled without the call to class_::def(), the exception is caught in handle_exception_impl() and presented to Python as ValueError, as expected: > > [jakob at i3-dhcp-172-16-55-176:tmp/pybork]$ python -c "import good; good.throwy()" > Traceback (most recent call last): > File "<string>", line 1, in <module> > ValueError: Bork! > > When compiled with -DBORKED, however, I get a bad free() inside the std::logic_error destructor: > > [jakob at i3-dhcp-172-16-55-176:tmp/pybork]$ python -c "import borked; borked.throwy()" > python(77633) malloc: *** error for object 0x7fec22c735a4: pointer being freed was not allocated > *** set a breakpoint in malloc_error_break to debug > > I first encountered this bug in a private fork of 1.38, but I can reproduce it with a freshly built copy of 1.47. It occurs when building with both llvm-gcc and gcc-gcc, regardless of the optimization level.
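[Editor's aside on the example quoted above; the quoted build commands follow right after. One way to narrow the failure down would be to register an explicit translator for std::invalid_argument, so the exception is turned into a Python ValueError by user code rather than by the library's built-in handling of standard exceptions. This is only a diagnostic sketch, not a fix confirmed anywhere in the thread, and the module name is invented.]

#include <boost/python.hpp>
#include <boost/python/exception_translator.hpp>
#include <stdexcept>

static void throwme() { throw std::invalid_argument("Bork!"); }

struct Bork { void throwy() { throwme(); } };

// Convert std::invalid_argument to a Python ValueError ourselves.
static void translate_invalid_argument(const std::invalid_argument& e)
{
    PyErr_SetString(PyExc_ValueError, e.what());
}

BOOST_PYTHON_MODULE(borked_translated)   // hypothetical module name
{
    using namespace boost::python;
    register_exception_translator<std::invalid_argument>(&translate_invalid_argument);
    class_<Bork>("Bork")
        .def("throwy", &Bork::throwy);
    def("throwy", &throwme);
}

Whether the malloc abort on 10.7 still appears with this module is exactly the question such a test would answer; either outcome narrows down which code path destroys the standard exception badly.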
> I built the two test libraries like so: > > g++ bork.cxx -g -bundle -flat_namespace -undefined dynamic_lookup -Iboost/include -I/System/Library/Frameworks/Python.framework/Headers/ -Lboost/lib -lboost_python -DBORKED -o borked.so > g++ bork.cxx -g -bundle -flat_namespace -undefined dynamic_lookup -Iboost/include -I/System/Library/Frameworks/Python.framework/Headers/ -Lboost/lib -lboost_python -o good.so > > Can anyone reproduce this? > > Cheers, > Jakob > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig -- Technology & Consulting Services - ned Productions Limited. http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Company no: 472909.
From grant.tang at gmail.com Thu Oct 27 17:48:08 2011 From: grant.tang at gmail.com (Grant Tang) Date: Thu, 27 Oct 2011 10:48:08 -0500 Subject: [C++-sig] Is there any class member method number limit for boost.python? Message-ID: Hi, I have a class wrapped to Python which has 334 member .def() calls. After I added one more method, i.e. one more .def(), it segfaults when I import this class in Python. Strangely, this only happens on 64 bit CentOS 4.8 with gcc 3.4.6. It works fine with the 335th member method on 32 bit CentOS 4.8 with gcc 3.4.6, and it also works fine with both 32 and 64 bit CentOS 5.6 with gcc 4.1.2. I can comment out any one of the member methods to bring the number of methods in my class back to <= 334 and make it work on the 64 bit CentOS 4.8. It looks like there is a limit on the number of member methods on this 64 bit gcc 3.4.6 platform. Please help. Grant
From dave at boostpro.com Sat Oct 29 02:57:42 2011 From: dave at boostpro.com (Dave Abrahams) Date: Fri, 28 Oct 2011 16:57:42 -0800 Subject: [C++-sig] Is there any class member method number limit for boost.python? References: Message-ID: on Thu Oct 27 2011, "Grant Tang" wrote: > Hi, > > I have a class wrapped to Python which has 334 member .def() calls. After I > added one more method, i.e. one more .def(), it segfaults when I import > this class in Python. Strangely, this only happens on 64 bit CentOS > 4.8 with gcc 3.4.6. It works fine with the 335th member method on 32 > bit CentOS 4.8 with gcc 3.4.6, and it also works fine with both 32 and 64 > bit CentOS 5.6 with gcc 4.1.2. > > I can comment out any one of the member methods to bring the number of methods > in my class back to <= 334 and make it work on the 64 bit CentOS 4.8. It looks > like there is a limit on the number of member methods on this 64 bit gcc 3.4.6 > platform. Please help. My intuition is that you must be misinterpreting the results. If you can reproduce this problem in minimal code, I'd be highly surprised. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
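[Editor's sketch related to Grant's report and Dave's request for a minimal reproduction; the class, method and module names below are invented. Splitting the long .def() chain into helper functions makes it easy to comment out or move individual registrations while bisecting the crash.]

#include <boost/python.hpp>
using namespace boost::python;

struct Big
{
    void f001() {}
    void f002() {}
    // ... several hundred more trivial methods in the real class ...
    void f335() {}
};

// Each helper registers one slice of the interface; disabling a single
// call narrows down which registration (if any) triggers the segfault.
static void def_group_1(class_<Big>& c)
{
    c.def("f001", &Big::f001)
     .def("f002", &Big::f002);
}

static void def_group_2(class_<Big>& c)
{
    c.def("f335", &Big::f335);
}

BOOST_PYTHON_MODULE(big)
{
    class_<Big> c("Big");
    def_group_1(c);
    def_group_2(c);
}

If a skeleton like this with 335 trivial methods still segfaults on the 64 bit gcc 3.4.6 box, that is the minimal test case the thread asks for; if it does not, the apparent method-count limit is probably an artifact of something else in the real class.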