Re: [Python-Dev] GCC version compatibility
[Christoph, please keep the python-dev list in the loop here, at least until they get annoyed and decide we're off-topic. I think this is crucial to the way they package and deliver Python] Christoph Ludwig <cludwig@cdc.informatik.tu-darmstadt.de> writes:
On Thu, Jul 07, 2005 at 06:27:46PM -0400, David Abrahams wrote:
"Martin v. Löwis" <martin@v.loewis.de> writes:
David Abrahams wrote:
I'm wondering if there has been a well-known recent change either in Python or GCC that would account for these new reports. Any relevant information would be appreciated. [...] Python is linked with g++ if configure thinks this is necessary
Right. The question is, when should configure "think it's necessary?"
Just to add to the confusion... I encountered the case that configure decided to use gcc for linking but it should have used g++. (It is Python PR #1189330 <http://tinyurl.com/dlheb>. This was on an x86 Linux system with g++ 3.4.2.)
Background: The description of --with-cxx in the README of the Python 2.4.1 source distribution made me think that I need to configure my Python installation with --with-cxx=/opt/gcc/gcc-3.4.2/bin/g++ if I plan to use C++ extensions built with this compiler. (That was possibly a misunderstanding on my part,
AFAICT, yes.
but Python should build with this option anyway.)
configure set `LINKCC=$(PURIFY) $(CC)'. The result was that make failed when linking the python executable due to an unresolved reference to __gxx_personality_v0. I had to replace CC by CXX in the definition of LINKCC to finish the build of Python.
When I looked into this problem I saw that configure in fact builds a test executable that included an object file compiled with g++. If the link step with gcc succeeds then LINKCC is set as above, otherwise CXX is used. Obviously, on my system this test was successful so configure decided to link with gcc. However, minimal changes to the source of the test program caused the link step to fail. It was not obvious to me at all why the latter source code should cause a dependency on the C++ runtime if the original code does not. My conclusion was that this test is fragile and should be skipped.
Sounds like it. I have never understood what the test was really checking for since the moment it was first described to me, FWIW.
If Python is built with --with-cxx then it should be linked with CXX as well.
U betcha.
I gather from posts on the Boost mailing lists that you can import Boost.Python extensions even if Python was configured --without-cxx.
Yes, all the tests are passing that way.
(On ELF based Linux/x86, at least.) That leaves me wondering
* when is --with-cxx really necessary?
I think it's plausible that if you set sys.dlopenflags to share symbols it *might* end up being necessary, but IIRC Ralf does use sys.dlopenflags with a standard build of Python (no --with-cxx)... right, Ralf?
* what happens if I import extensions built with different g++ versions? Will there be a conflict between the different versions of libstdc++ those extensions depend on?
Not unless you set sys.dlopenflags to share symbols. It's conceivable that they might conflict through their shared use of libboost_python.so, but I think you have to accept that an extension module and the libboost_python.so it is linked with have to be built with compatible ABIs anyway. So in that case you may need to make sure each group of extensions built with a given ABI use their own libboost_python.so (or just link statically to libboost_python.a if you don't need cross-module conversions). HTH, -- Dave Abrahams Boost Consulting www.boost-consulting.com
--- David Abrahams <dave@boost-consulting.com> wrote:
Yes, all the tests are passing that way.
(On ELF based Linux/x86, at least.) That leaves me wondering
* when is --with-cxx really necessary?
I think it's plausible that if you set sys.dlopenflags to share symbols it *might* end up being necessary, but IIRC Ralf does use sys.dlopenflags with a standard build of Python (no --with-cxx)... right, Ralf?
Yes, I am using sys.setdlopenflags like this:

    if (sys.platform == "linux2"):
        sys.setdlopenflags(0x100|0x2)

where the flag values come from /usr/include/bits/dlfcn.h:

    #define RTLD_GLOBAL  0x00100
    #define RTLD_NOW     0x00002

Note that the default Python 2.4.1 installation links python with g++. I've never had any problems with this configuration under any Linux version, at least: Redhat 7.3, 8.0, 9.0, WS3, SuSE 9.2, Fedora Core 3, and some other versions I am not sure about. Specifically for this posting I've installed Python 2.4.1 --without-cxx. All of our 50 Boost.Python extensions still work without a problem. Cheers, Ralf
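[Editor's note: a modern sketch of Ralf's snippet, using the RTLD_* constants exposed by the os module instead of the hard-coded 0x100|0x2; assumes a POSIX system where these constants are available.]

```python
import os
import sys

# Equivalent of Ralf's hard-coded flags: RTLD_GLOBAL | RTLD_NOW.
# On Linux these match the dlfcn.h defines he quotes (0x100 and 0x2).
if sys.platform.startswith("linux"):
    sys.setdlopenflags(os.RTLD_GLOBAL | os.RTLD_NOW)
    # Extension modules imported from now on are dlopen()ed with their
    # symbols made visible to subsequently loaded shared objects.
```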
David Abrahams wrote:
When I looked into this problem I saw that configure in fact builds a test executable that included an object file compiled with g++. If the link step with gcc succeeds then LINKCC is set as above, otherwise CXX is used. Obviously, on my system this test was successful so configure decided to link with gcc. However, minimal changes to the source of the test program caused the link step to fail. It was not obvious to me at all why the latter source code should cause a dependency on the C++ runtime if the original code does not. My conclusion was that this test is fragile and should be skipped.
Sounds like it. I have never understood what the test was really checking for since the moment it was first described to me, FWIW.
I'll describe it once more: *If* a program is compiled with the C++ compiler, is it *then* possible to still link it with the C compiler? This is the question this test tries to answer.
If Python is built with --with-cxx then it should be linked with CXX as well.
U betcha.
Wrong. The test was introduced in response to complaints that Python unnecessarily links with libstdc++ on some Linux systems. On these Linux systems, it was well possible to build main() with a C++ compiler, and still link the entire thing with gcc. Since main() doesn't use any libstdc++ functionality, and since collect2/__main isn't used, one would indeed expect that linking with CXX is not necessary.
(On ELF based Linux/x86, at least.) That leaves me wondering
* when is --with-cxx really necessary?
I think it's plausible that if you set sys.dlopenflags
This has no relationship at all. --with-cxx is much older than sys.dlopenflags. It is used on systems where main() must be a C++ program for C++ extension modules to work (e.g. some Linux systems). Regards, Martin
"Martin v. Löwis" <martin@v.loewis.de> writes:
David Abrahams wrote:
When I looked into this problem I saw that configure in fact builds a test executable that included an object file compiled with g++. If the link step with gcc succeeds then LINKCC is set as above, otherwise CXX is used. Obviously, on my system this test was successful so configure decided to link with gcc. However, minimal changes to the source of the test program caused the link step to fail. It was not obvious to me at all why the latter source code should cause a dependency on the C++ runtime if the original code does not. My conclusion was that this test is fragile and should be skipped.
Sounds like it. I have never understood what the test was really checking for since the moment it was first described to me, FWIW.
I'll describe it once more: *If* a program is compiled with the C++ compiler, is it *then* possible to still link it with the C compiler? This is the question this test tries to answer.
Okay, I understand that. What I have never understood is why that should be an appropriate thing to test for the Python executable. There's no reason to compile any of Python with a C++ compiler.
If Python is built with --with-cxx then it should be linked with CXX as well.
U betcha.
Wrong. The test was introduced in response to complaints that Python unnecessarily links with libstdc++ on some Linux systems.
Apparently it still does.
On these Linux systems, it was well possible to build main() with a C++ compiler
Nobody would need to build Python's main() with a C++ compiler, if you'd just comment out the 'extern "C"'.
and still link the entire thing with gcc. Since main() doesn't use any libstdc++ functionality, and since collect2/__main isn't used, one would indeed expect that linking with CXX is not necessary.
It isn't.
(On ELF based Linux/x86, at least.) That leaves me wondering
* when is --with-cxx really necessary?
I think it's plausible that if you set sys.dlopenflags
This has no relationship at all. --with-cxx is much older than sys.dlopenflags. It is used on systems where main() must be a C++ program for C++ extension modules to work (e.g. some Linux systems).
I have tested Boost.Python and C++ extension modules on a wide variety of Linux systems, and have never seen this phenomenon. Everyone who is testing it on Linux is finding that if they build Python --without-cxx, everything works. And, yes, the mechanisms at the very *core* of Boost.Python rely on static initializers being run properly, so if there were anything wrong with that mechanism the tests would be breaking left and right.

I think either the ELF Linux loader changed substantially since 1995, or whoever introduced this test was just confused. C++ extension modules get their static initializers run when they are loaded by dlopen (or, strictly speaking, sometime between then and the time their code begins to execute), which happens long after main or __main is invoked. The executable doesn't know about these extension modules until they are loaded, and when it loads them it can't get its hooks into anything other than the initmodulename entry point. The executable does not trigger the static initializers; the dynamic loader does. Therefore, it doesn't matter whether the executable is linked with the C++ runtime. An appropriate C++ runtime is linked to the extension module, and that is what gets invoked when the module is loaded.
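[Editor's note: Dave's point -- that the dynamic loader, not the executable, runs a module's static initializers at dlopen() time -- can be demonstrated with a small sketch. It assumes g++ is on the PATH and skips itself otherwise; all file names are hypothetical.]

```python
import ctypes
import os
import shutil
import subprocess
import tempfile

# A C++ "module" whose only job is to prove its static initializer
# runs when the shared object is mapped by dlopen(), not at main().
CXX_SRC = r'''
#include <cstdio>
struct Init { Init() { std::puts("static initializer ran"); } };
static Init init_at_dlopen_time;   // constructed during dlopen()
'''

def demo():
    if shutil.which("g++") is None:
        return "skipped"            # no C++ compiler available
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "mod.cc")
        lib = os.path.join(d, "mod.so")
        with open(src, "w") as f:
            f.write(CXX_SRC)
        subprocess.check_call(["g++", "-shared", "-fPIC", "-o", lib, src])
        # The dlopen() below triggers the initializer, even though this
        # (pure C) interpreter executable never linked libstdc++ itself.
        ctypes.CDLL(lib)
        return "loaded"

print(demo())
```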
David Abrahams wrote:
I'll describe it once more: *If* a program is compiled with the C++ compiler, is it *then* possible to still link it with the C compiler? This is the question this test tries to answer.
Okay, I understand that. What I have never understood is why that should be an appropriate thing to test for the Python executable. There's no reason to compile any of Python with a C++ compiler.
I hope you understand now: it is necessary if you want to link C++ extension modules into a Python interpreter (statically, not dynamically).
Wrong. The test was introduced in response to complaints that Python unnecessarily links with libstdc++ on some Linux systems.
Apparently it still does.
Not "still", but "again", due to recent changes in gcc.
On these Linux systems, it was well possible to build main() with a C++ compiler
Nobody would need to build Python's main() with a C++ compiler, if you'd just comment out the 'extern "C"'.
Wrong. People that statically link C++ extensions modules on systems where the C++ compiler requires main to be compiled as C++ need to do such a thing. One such compiler is gcc on systems where the binary format does not support .init sections.
and still link the entire thing with gcc. Since main() doesn't use any libstdc++ functionality, and since collect2/__main isn't used, one would indeed expect that linking with CXX is not necessary.
It isn't.
Hmmm. It is. Try for yourself. You get an unresolved __gxx_personality_v0.
I have tested Boost.Python and C++ extension modules on a wide variety of Linux systems, and have seen this phenomenon.
Did you try on Linux systems released in 1994? That don't use ELF as the binary format?
I think either the ELF Linux loader changed substantially since 1995, or whoever introduced this test was just confused.
No. The ELF loader was *introduced* around 1995. Dynamic loading was not available before. Regards, Martin
On Sat, Jul 09, 2005 at 12:08:08AM +0200, "Martin v. Löwis" wrote:
David Abrahams wrote:
When I looked into this problem I saw that configure in fact builds a test executable that included an object file compiled with g++. If the link step with gcc succeeds then LINKCC is set as above, otherwise CXX is used. Obviously, on my system this test was successful so configure decided to link with gcc. However, minimal changes to the source of the test program caused the link step to fail. It was not obvious to me at all why the latter source code should cause a dependency on the C++ runtime if the original code does not. My conclusion was that this test is fragile and should be skipped.
Sounds like it. I have never understood what the test was really checking for since the moment it was first described to me, FWIW.
I'll describe it once more: *If* a program is compiled with the C++ compiler, is it *then* possible to still link it with the C compiler? This is the question this test tries to answer.
The keyword here is "tries" - my bug report #1189330 exhibits that the test fails to do its job. And looking at the test that's certainly no surprise: configure writes the following conftest.cc to disk (whitespace added for better readability):

    cludwig@lap200:~/tmp/python-config> cat conftest.cc
    void foo();
    int main() { foo(); }
    void foo() { }

This TU is compiled with the C++ compiler and then linked with the C compiler:

    cludwig@lap200:~/tmp/python-config> g++ -o conftest.o -c conftest.cc
    cludwig@lap200:~/tmp/python-config> gcc -o conftest conftest.o

Note that there is *no* reference to any symbol in another TU. The compiler can detect that foo() won't throw any exceptions, that there is no need for RTTI, and whatever else the C++ runtime provides. Consequently, the object file produced by g++ does not contain any reference to symbols in libstdc++.

However, python.c does reference a function from other TUs, in particular extern "C" int Py_Main(int, char**). The following test shows that in this situation you must link with g++:

    cludwig@lap200:~/tmp/python-config> cat conftest_a.cc
    extern "C" void foo();
    int main() { foo(); }
    cludwig@lap200:~/tmp/python-config> cat conftest_b.c
    void foo() { }
    cludwig@lap200:~/tmp/python-config> g++ -o conftest_a.o -c conftest_a.cc
    cludwig@lap200:~/tmp/python-config> gcc -o conftest_b.o -c conftest_b.c
    cludwig@lap200:~/tmp/python-config> gcc -o conftest conftest_a.o conftest_b.o
    conftest_a.o(.eh_frame+0x11): undefined reference to `__gxx_personality_v0'
    collect2: ld returned 1 exit status

(I ran this test with g++ 3.3.1, 3.4.2, and 4.0.0 with identical results.) The reason is, I guess, that even though foo() is declared to have "C" linkage, it can still throw a C++ exception.
If Python is built with --with-cxx then it should be linked with CXX as well.
U betcha.
Wrong. The test was introduced in response to complaints that Python unnecessarily links with libstdc++ on some Linux systems. On these Linux systems, it was well possible to build main() with a C++ compiler, and still link the entire thing with gcc. Since main() doesn't use any libstdc++ functionality, and since collect2/__main isn't used, one would indeed expect that linking with CXX is not necessary.
Of course, if you insist on this "dependency optimization" then you can try to fix Python's configure.in by using the second test above. But I would still not trust it to cover all configurations on all platforms supported by Python.
(On ELF based Linux/x86, at least.) That leaves me wondering
* when is --with-cxx really necessary?
I think it's plausible that if you set sys.dlopenflags
This has no relationship at all. --with-cxx is much older than sys.dlopenflags. It is used on systems where main() must be a C++ program for C++ extension modules to work (e.g. some Linux systems).
Can you provide a concrete example of such systems? The explanation of --with-cxx in the README mentions a.out. Are there other systems? Regards Christoph -- http://www.informatik.tu-darmstadt.de/TI/Mitarbeiter/cludwig.html LiDIA: http://www.informatik.tu-darmstadt.de/TI/LiDIA/Welcome.html
Christoph Ludwig wrote:
I'll describe it once more: *If* a program is compiled with the C++ compiler, is it *then* possible to still link it with the C compiler? This is the question this test tries to answer.
The keyword here is "tries"
Any such test would only "try": to really determine whether this is necessary for all possible programs, one would have to test all possible programs. Since there is an infinite number of programs, this test could take a while. The original test, on the original system, would cause __main to be undefined, and then decide to use C++. For a long time, on systems that don't use collect2, the test *correctly* determined that linking with g++ was not necessary. It is only recent changes to g++ that break the test, namely the introduction of this __gxx_personality_v0 thing.
- my bug report #1189330 exhibits that the test fails to do its job. And looking at the test that's certainly no surprise:
However, it *is* a surprise that your modified test fixes the problem.
Note that there is *no* reference to any symbol in another TU. The compiler can detect that foo() won't throw any exceptions, that there is no need for RTTI and whatever else the C++ runtime provides. Consequently, the object file produced by g++ does not contain any reference to symbols in libstdc++.
You are assuming implementation details here. I have seen implementations of C++ (e.g. g++ with collect2) where the test determines that linking with C++ is necessary (because __main was undefined), as well as systems where the test decides *correctly* that linking with C++ is not necessary (e.g. gcc 2.x on an ELF system). That some C++ compiler introduces the C++ runtime if some C function may throw an exception is a very specific detail of this C++ compiler.
Of course, if you insist on this "dependency optimization" then you can try to fix Python's configure.in by using the second test above. But I would still not trust it to cover all configurations on all platforms supported by Python.
Of course not. This is just autoconf: it does not allow magical porting to all possible future operating systems. Instead, from time to time, explicit porting activity is necessary. This is not just about this specific detail, but about many other details. Each new operating system, library, or compiler version might break the build process.
Can you provide a concrete examples of such systems? The explanation of --with-cxx in the README mentions a.out. Are there other systems?
I'm not sure. I think HP-UX (with OMF, and aCC) might have required the same code, as may have SysVR3 (with COFF). Regards, Martin
On Sun, Jul 10, 2005 at 09:45:25AM +0200, "Martin v. Löwis" wrote:
Christoph Ludwig wrote:
I'll describe it once more: *If* a program is compiled with the C++ compiler, is it *then* possible to still link it with the C compiler? This is the question this test tries to answer.
The keyword here is "tries"
Any such test would only "try": to really determine whether this is necessary for all possible programs, one would have to test all possible programs. Since there is an infinite number of programs, this test could take a while.
Sure. You cannot write a test that gives the correct result for all platforms you can think of, covering every compiler / linker quirk. I never claimed that is possible. My point is: The test implemented in the 2.4.1 configure script gives a wrong result if your platform happens to be x86 Linux with ELF binaries and g++ 4.0.
The original test, on the original system, would cause __main to be undefined, and then decide to use C++. For a long time, on systems that don't use collect2, the test *correctly* determined that linking with g++ was not necessary.
It is only recent changes to g++ that break the test, namely the introduction of this __gxx_personality_v0 thing.
The test broke due to a change in GCC 4.0, but the "__gxx_personality_v0 thing" was introduced long before. It is merely a symptom. I ran the tests with GCC 3.3.1, 3.4.2, and 4.0.0. Here are the results:

    GCC version    1 TU    2 TUs
    3.3.1          g++     g++
    3.4.2          g++     g++
    4.0.0          gcc     g++

(1 TU: test with one translation unit, as in Python 2.4.1. 2 TUs: test with two translation units, as in my last posting. g++ / gcc: test indicates linking the executable requires g++ / gcc, respectively.)

With GCC 3.3.1 and 3.4.2, linking of the executable conftest in the 1 TU test fails because of an unresolved symbol __gxx_personality_v0. Therefore, python is linked with g++.

The change that makes GCC 4.0.0 break the 1 TU test is that the compiler apparently does a better job eliminating unreachable code. In the 1 TU test, it recognizes that __gxx_personality_v0 (or the code that refers to this symbol) is unreachable and removes it. It seems there are no other symbols left that depend on libstdc++, so suddenly conftest can be linked with gcc.
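[Editor's note: the 1 TU test Christoph tabulates can be re-run by hand with a short script. This is a sketch; it assumes gcc and g++ are on the PATH and skips itself otherwise. The result depends on the installed GCC version, as the table shows.]

```python
import os
import shutil
import subprocess
import tempfile

# The exact conftest.cc that Python 2.4.1's configure writes out.
SRC = "void foo();\nint main() { foo(); }\nvoid foo() { }\n"

def one_tu_test():
    if not (shutil.which("g++") and shutil.which("gcc")):
        return "skipped"
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "conftest.cc")
        obj = os.path.join(d, "conftest.o")
        exe = os.path.join(d, "conftest")
        with open(src, "w") as f:
            f.write(SRC)
        # Compile with the C++ compiler, then try linking with the C driver.
        subprocess.check_call(["g++", "-c", "-o", obj, src])
        rc = subprocess.call(["gcc", "-o", exe, obj],
                             stderr=subprocess.DEVNULL)
        # rc == 0 is the case where configure would pick LINKCC=$(CC).
        return "gcc" if rc == 0 else "g++"

print(one_tu_test())
```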
- my bug report #1189330 exhibits that the test fails to do its job. And looking at the test that's certainly no surprise:
However, it *is* a surprise that your modified test fixes the problem.
Note that there is *no* reference to any symbol in another TU. The compiler can detect that foo() won't throw any exceptions, that there is no need for RTTI and whatever else the C++ runtime provides. Consequently, the object file produced by g++ does not contain any reference to symbols in libstdc++.
You are assuming implementation details here. I have seen implementations of C++ (e.g. g++ with collect2) where the test determines that linking with C++ is necessary (because __main was undefined), as well as systems where the test decides *correctly* that linking with C++ is not necessary (e.g. gcc 2.x on an ELF system). That some C++ compiler introduces the C++ runtime if some C function may throw an exception is a very specific detail of this C++ compiler.
I am not aware of any rule that makes the following program ill-formed:

    // in a.cc:
    extern "C" void foo();
    int main() { foo(); }

    // in b.cc
    extern "C" void foo() { throw 1; }

Provided the compiler does not do optimizations across translation units, it has no way to determine in a.cc whether foo() is really a C function (i.e., compiled by a C compiler) or a C++ function with "C" linkage. I think a conforming C++ compiler has to provide for the case that foo() might throw. It was a very specific detail of gcc 2.x if it failed to do so. (A venial omission, I admit.)

But I digress. It's not that important for our discussion whether a C++ compiler must / should / is allowed to add exception handling code to the call of an extern "C" function. The point is that some do so *unless* they see the function definition. I contend the test involving two TUs matches the situation with ccpython.cc more closely than the current test does.

I do not claim the 2 TUs test will cover all possible scenarios. I am not even sure this decision should be left to an automated test, because if the test breaks for some reason then the user is left with a linker error that is time-consuming to track down.
Of course, if you insist on this "dependency optimization" then you can try to fix Python's configure.in by using the second test above. But I would still not trust it to cover all configurations on all platforms supported by Python.
Of course not. This is just autoconf: it does not allow magical porting to all possible future operating systems. Instead, from time to time, explicit porting activity is necessary. This is not just about this specific detail, but about many other details. Each new operating system, library, or compiler version might break the build process.
Instead of having yet another test in configure.in that may break on a new platform and that needs maintenance, wouldn't it be better to assume that --with-cxx implies linking with the C++ compiler, and to tell users how to override this assumption? Would it cause so much inconvenience to users, provided the explanation of --with-cxx in the README were modified? I think of an explanation along the lines of:

    --with-cxx=<compiler>: If you plan to use C++ extension modules,
    then on some platforms you need to compile python's main() function
    with the C++ compiler. With this option, make will use <compiler>
    to compile main() *and* to link the python executable. It is likely
    that the resulting executable depends on the C++ runtime library of
    <compiler>.

    Note there are platforms that do not require you to build Python
    with a C++ compiler in order to use C++ extension modules. E.g.,
    x86 Linux with ELF shared binaries and GCC 3.x, 4.x is such a
    platform. We recommend that you configure Python --without-cxx on
    those platforms to avoid unnecessary dependencies.

    If you need to compile main() with <compiler>, but your platform
    does not require that you also link the python executable with
    <compiler> (e.g., <example platform>), then set
    LINKCC='$(PURIFY) $(CC)' prior to calling make. Then the python
    executable will not depend on the C++ runtime library of
    <compiler>.

BTW, I'd also change the short explanation output by `configure --help'. Something like:

    AC_HELP_STRING(--with-cxx=<compiler>,
                   use <compiler> to compile and link main())

In Python 2.4.1, the help message says "enable C++ support". That made me use this option even though it turned out it is not necessary on my platform. Regards Christoph
Christoph Ludwig <cludwig@cdc.informatik.tu-darmstadt.de> writes:
I do not claim the 2 TUs test will cover all possible scenarios. I am not even sure this decision should be left to an automated test. Because if the test breaks for some reason then the user is left with a linker error that is time-consuming to track down.
However, at least by the usual hierarchy of values, the sort of runtime error that results from the current needless linking with C++ on ELF/Linux is even worse.
--- David Abrahams <dave@boost-consulting.com> wrote:
Christoph Ludwig <cludwig@cdc.informatik.tu-darmstadt.de> writes:
I do not claim the 2 TUs test will cover all possible scenarios. I am not even sure this decision should be left to an automated test. Because if the test breaks for some reason then the user is left with a linker error that is time-consuming to track down.
However, at least by the usual hierarchy of values, the sort of runtime error that results from the current needless linking with C++ on ELF/Linux is even worse.
Indeed. A few months ago the current configure behavior led to a major loss of our time, probably a whole week between 4-5 people. The problem was that a Python compiled under RH 8.0 was used to build and run new C++ extensions under Fedora Core 3. Some extensions ran OK, others crashed without warning after running to a certain point. It was very confusing. To avoid this situation in the future, we permanently added a test to our setup scripts, comparing the result of

    ldd python | grep libstdc++

to the corresponding output for extension modules. Cheers, Ralf
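[Editor's note: a sketch of the sanity check Ralf describes, wrapped in Python. It assumes an ELF platform where ldd is available and skips itself otherwise; in practice one would compare the interpreter's result against each extension module's.]

```python
import shutil
import subprocess
import sys

def libstdcxx_deps(path):
    """Return the libstdc++ sonames that `path` depends on, per ldd."""
    if shutil.which("ldd") is None:
        return None                      # not an ELF/ldd platform: skip
    out = subprocess.run(["ldd", path],
                         capture_output=True, text=True).stdout
    return sorted(line.split()[0] for line in out.splitlines()
                  if "libstdc++" in line)

# The interpreter's dependencies; an extension's would be checked the
# same way, e.g. libstdcxx_deps("my_extension.so") (placeholder name).
print(libstdcxx_deps(sys.executable))
```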
On Sun, Jul 10, 2005 at 09:35:33AM -0400, David Abrahams wrote:
Christoph Ludwig <cludwig@cdc.informatik.tu-darmstadt.de> writes:
I do not claim the 2 TUs test will cover all possible scenarios. I am not even sure this decision should be left to an automated test. Because if the test breaks for some reason then the user is left with a linker error that is time-consuming to track down.
However, at least by the usual hierarchy of values, the sort of runtime error that results from the current needless linking with C++ on ELF/Linux is even worse.
Yes, but on ELF/Linux the default configuration should be --without-cxx in the first place. If the build instructions make it sufficiently clear that you should prefer this configuration whenever possible, then this should be a non-issue on platforms like ELF/Linux. We learned that there are indeed platforms that require --with-cxx. There is not much we can do for users on platforms that also require the final executable to be linked with the C++ compiler. They have to live with the dependency on the C++ runtime and the likely runtime errors if they import extension modules built with a different C++ compiler. What about the platforms that require compilation of main() with a C++ compiler but allow you to link with the C compiler - can you import a C++ extension module built with C++ compiler version X if the main() function of the Python interpreter was compiled with C++ compiler version Y, X != Y? If not, then we are back to the runtime error, no matter whether there is a dependency on the C++ runtime library or not. So the automated test in configure could spare users runtime errors if they must configure --with-cxx, and if they can link with the C compiler, and if the C++ compiler versions used for building the Python interpreter and the extension module do not need to coincide. I don't know how large the subset of platforms is that satisfy these conditions. Regards Christoph
Christoph Ludwig wrote:
Yes, but on ELF/Linux the default configuration should be --without-cxx in the first place. If the build instructions make it sufficiently clear that you should prefer this configuration whenever possible then this should be a non-issue on platforms like ELF/Linux.
Some users will complain about this. Specifying --without-cxx also causes configure not to look for a C++ compiler, meaning that distutils won't know what the C++ compiler is, meaning that it will link extension modules with the C compiler instead. Regards, Martin
On Tue, Jul 12, 2005 at 01:07:56AM +0200, "Martin v. Löwis" wrote:
Christoph Ludwig wrote:
Yes, but on ELF/Linux the default configuration should be --without-cxx in the first place. If the build instructions make it sufficiently clear that you should prefer this configuration whenever possible then this should be a non-issue on platforms like ELF/Linux.
Some users will complain about this. Specifying --without-cxx also causes configure not to look for a C++ compiler, meaning that distutils won't know what the C++ compiler is, meaning that it will link extension modules with the C compiler instead.
If I understood Dave Abrahams' reply somewhere above in this thread correctly, then you can build different C++ extension modules with different C++ compilers on ELF/Linux. (I don't have the time right now to actually try it, sorry.) There is no need to fix the C++ compiler at the time python is built. If distutils builds C++ extensions with the C compiler then I consider this a bug in distutils because it is unlikely to work. (Unless the compiler can figure out from the source file suffixes in the compilation step *and* some info in the object files in the linking step that it is supposed to act like a C++ compiler. None of the compilers I am familiar with does the latter.) distutils should rather look for a C++ compiler in the PATH or explicitly ask the user to specify the command that calls the C++ compiler. It is different if --with-cxx=<compiler> was used. I agree that in this case distutils should use <compiler> to build C++ extensions. (distutils does not behave correctly when building C++ extensions anyway. It calls the C compiler to compile the C++ source files and passes options that gcc accepts only in C mode. The compiler version I am using is docile and only issues warnings. But these warnings are unnecessary, and I would not blame gcc if the next compiler release refused to compile C++ sources if the command line contains C-specific options. But the distutils mailing list is a better place to eventually bring this up, I guess.) Regards Christoph
Christoph Ludwig wrote:
If I understood Dave Abrahams' reply somewhere above in this thread correctly then you can build different C++ extension modules with different C++ compilers on ELF/Linux. (I don't have the time right now to actually try it, sorry.) There is no need to fix the C++ compiler as soon as python is built.
There is, somewhat: how do you know the name of the C++ compiler?
If distutils builds C++ extensions with the C compiler then I consider this a bug in distutils because it is unlikely to work. (Unless the compiler can figure out from the source file suffixes in the compilation step *and* some info in the object files in the linking step that it is supposed to act like a C++ compiler. None of the compilers I am familiar with does the latter.) distutils should rather look for a C++ compiler in the PATH or explicitly ask the user to specify the command that calls the C++ compiler.
How should it do that? The logic is quite involved, and currently, distutils relies on configure to figure it out. If you think this should be changed, please contribute a patch.
(distutils does not behave correctly when building C++ extensions anyway. It calls the C compiler to compile the C++ source files and passes options that gcc accepts only in C mode. The compiler version I am using is docile and only issues warnings. But these warnings are unnecessary, and I would not blame gcc if the next compiler release refused to compile C++ sources if the command line contains C-specific options. But the distutils mailing list is a better place to bring this up eventually, I guess.)
The best way to "bring this up" is to contribute a patch. "Bringing it up" in the sense of sending an email message to some mailing list likely has no effect whatsoever. Regards, Martin
On 7/12/05, Christoph Ludwig <cludwig@cdc.informatik.tu-darmstadt.de> wrote:
If distutils builds C++ extensions with the C compiler then I consider this a bug in distutils because it is unlikely to work. (Unless the compiler can figure out from the source file suffixes in the compilation step *and* some info in the object files in the linking step that it is supposed to act like a C++ compiler. None of the compilers I am familiar with does the latter.) distutils should rather look for a C++ compiler in the PATH or explicitly ask the user to specify the command that calls the C++ compiler.
You practically always have to use --compiler with distutils when building C++ extensions anyhow, and even then it rarely does what I would consider 'The Right Thing(tm)'. The core assumption in distutils, that you want to build extension modules with the same compiler options that you built Python with, is in many cases the wrong thing to do for C++ extension modules, even if you built Python with --with-cxx.

This is even worse on Windows, where the MSVC compiler, until very recently, was crap for C++, and you really needed to use another compiler for C++, but Python was always built using MSVC (unless you jumped through hoops of fire).

The problem is that this is much more complicated than it seems: you can't just ask the user for the C++ compiler. You really need to provide an abstraction layer for all of the compiler and linker flags, so that a user could specify what those flags are for their compiler of choice. Of course, once you've done that, the user might as well have just written a new Compiler class for distutils, which wouldn't pay any attention to how Python was built (other than where Python.h is).

--
Nick
Nicholas Bastin wrote:
You practically always have to use --compiler with distutils when building C++ extensions anyhow, and even then it rarely does what I would consider 'The Right Thing(tm)'.
I see. In that case, I think something should be done about distutils as well (assuming somebody volunteers): it would be best if this worked in the usual cases, with some easy way for the setup.py author to indicate the preferences. Regards, Martin
Christoph Ludwig <cludwig@cdc.informatik.tu-darmstadt.de> writes:
--with-cxx=<compiler>: If you plan to use C++ extension modules, then on some platform you need to compile python's main() function with the C++ compiler. With this option, make will use <compiler> to compile main() *and* to link the python executable. It is likely that the resulting executable depends on the C++ runtime library of <compiler>.
Note there are platforms that do not require you to build Python with a C++ compiler in order to use C++ extension modules. E.g., x86 Linux with ELF shared binaries and GCC 3.x, 4.x is such a platform. We recommend that you configure Python --without-cxx on those platforms to avoid unnecessary dependencies.
I don't think that's strong enough. What happens is that dynamically loaded Python extension modules built with other, ABI-compatible versions of G++ may *crash*.
If you need to compile main() with <compiler>, but your platform does not require that you also link the python executable with <compiler> (e.g., <example platform>), then set LINKCC='$(PURIFY) $(CC)' prior to calling make. Then the python executable will not depend on the C++ runtime library of <compiler>.
Are we sure we have an actual use case for the above? Doesn't --without-cxx cover all the actual cases we know about?
BTW, I'd also change the short explanation output by `configure --help'. Something like:
AC_HELP_STRING(--with-cxx=<compiler>, use <compiler> to compile and link main())
In Python 2.4.1, the help message says "enable C++ support". That made me use this option even though it turned out it is not necessary on my platform.
Your suggestion is simple and powerful; I like it! -- Dave Abrahams Boost Consulting www.boost-consulting.com
Christoph Ludwig wrote:
My point is: The test implemented in the 2.4.1 configure script gives a wrong result if your platform happens to be x86 Linux with ELF binaries and g++ 4.0.
Point well taken.
It is only recent changes to g++ that break the test, namely the introduction of this __gxx_personality_v0 thing.
The test broke due to a change in GCC 4.0, but the "__gxx_personality_v0 thing" was introduced long before. It is merely a symptom. I ran the tests with GCC 3.3.1, 3.4.2, and 4.0.0. Here are the results:
As I say: it's a recent change (GCC 3.3 *is* recent).
You are assuming implementation details here.
I am not aware of any rule that makes the following program ill-formed:
And I didn't suggest that such a rules exists. Your proposed modification to the test program is fine with me.
But I digress. It's not that important for our discussion whether a C++ compiler must / should / is allowed to add exception handling code to the call of an extern "C" function. The point is that some do *unless* they see the function definition. I contend the test involving two TUs matches more closely the situation with ccpython.cc than the current test.
Maybe. For Python 2.4, feel free to contribute a more complex test. For Python 2.5, I would prefer if the entire code around ccpython.cc was removed.
Instead of having yet another test in configure.in that may break on a new platform and that needs maintenance, wouldn't it be better to assume that --with-cxx implies linking with the C++ compiler and telling users how to override this assumption? Would it cause so much inconvenience to users, provided the explanation of --with-cxx in the README were modified?
*If* the configure algorithm is modified, I think I would prefer if the feature was removed entirely.
In Python 2.4.1, the help message says "enable C++ support". That made me use this option even though it turned out it is not necessary on my platform.
For 2.4.2, things should essentially stay the way they are, except perhaps for fixing bugs. For 2.5, more radical changes are possible. Regards, Martin
On Sun, Jul 10, 2005 at 07:41:06PM +0200, "Martin v. Löwis" wrote:
Christoph Ludwig wrote:
My point is: The test implemented in the 2.4.1 configure script gives a wrong result if your platform happens to be x86 Linux with ELF binaries and g++ 4.0. [...] But I digress. It's not that important for our discussion whether a C++ compiler must / should / is allowed to add exception handling code to the call of an extern "C" function. The point is that some do *unless* they see the function definition. I contend the test involving two TUs matches more closely the situation with ccpython.cc than the current test.
Maybe. For Python 2.4, feel free to contribute a more complex test. For Python 2.5, I would prefer if the entire code around ccpython.cc was removed.
I submitted patch #1239112 that implements the test involving two TUs for Python 2.4. I plan to work on a more comprehensive patch for Python 2.5, but that will take some time.

Regards

Christoph

--
http://www.informatik.tu-darmstadt.de/TI/Mitarbeiter/cludwig.html
LiDIA: http://www.informatik.tu-darmstadt.de/TI/LiDIA/Welcome.html
Christoph Ludwig <cludwig@cdc.informatik.tu-darmstadt.de> writes:
I submitted patch #1239112 that implements the test involving two TUs for Python 2.4. I plan to work on a more comprehensive patch for Python 2.5 but that will take some time.
Thanks very much for your efforts, Christoph! -- Dave Abrahams Boost Consulting www.boost-consulting.com
On Saturday 16 July 2005 20:13, Christoph Ludwig wrote:
I submitted patch #1239112 that implements the test involving two TUs for Python 2.4. I plan to work on a more comprehensive patch for Python 2.5 but that will take some time.
I'm only vaguely aware of all of the issues here with linking, but if this is to be considered for 2.4.2, it needs to be low risk of breaking anything. 2.4.2 is a bugfix release, and I'd hate to have this break other systems that work...

Anthony

--
Anthony Baxter <anthony@interlink.com.au>
It's never too late to have a happy childhood.
On Sun, Jul 17, 2005 at 04:01:20PM +1000, Anthony Baxter wrote:
On Saturday 16 July 2005 20:13, Christoph Ludwig wrote:
I submitted patch #1239112 that implements the test involving two TUs for Python 2.4. I plan to work on a more comprehensive patch for Python 2.5 but that will take some time.
I'm only vaguely aware of all of the issues here with linking, but if this is to be considered for 2.4.2, it needs to be low risk of breaking anything. 2.4.2 is a bugfix release, and I'd hate to have this break other systems that work...
I prepared the patch for 2.4.2 since it is indeed a bugfix. The current test produces wrong results if the compiler is GCC 4.0, which prevents a successful build of Python 2.4. I tested the patch with GCC 2.95, 3.3, and 4.0; those are the only compilers I have easy access to right now.

I do not see how this patch could cause regressions on other platforms because it mimics the situation w.r.t. ccpython.cc: a C++ translation unit calls from main() an extern "C" function in a separate C translation unit. The test determines whether it is possible to produce an intact executable if the C compiler is used as the linker driver. If this test causes problems on some platform, then you'd expect trouble when linking the python executable out of ccpython.o and all the other C object modules anyway.

But, of course, I might be wrong. I do not claim that I am familiar with every platform's peculiarities. That's why the patch is up for review. I'd appreciate it if users on other platforms tested it.

Regards

Christoph

--
http://www.informatik.tu-darmstadt.de/TI/Mitarbeiter/cludwig.html
LiDIA: http://www.informatik.tu-darmstadt.de/TI/LiDIA/Welcome.html
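The two-TU check described above can be reproduced outside of configure. The sketch below writes a C++ main() that calls an extern "C" function defined in a separate C file, compiles each file with its own compiler, and then tries to link with the plain C driver. The compiler names "gcc" and "g++" are assumptions; it returns None if either compiler is unavailable or compilation fails:

```python
import os
import shutil
import subprocess
import tempfile

# Two translation units, mirroring the ccpython.cc situation:
# a C++ main() calling an extern "C" function from a C file.
CXX_SRC = 'extern "C" void foo(void);\nint main() { foo(); return 0; }\n'
C_SRC = 'void foo(void) {}\n'

def c_driver_links_cxx_main(cc="gcc", cxx="g++"):
    """True/False: can `cc` link the mixed objects? None if the
    compilers are unavailable or a compile step fails."""
    if shutil.which(cc) is None or shutil.which(cxx) is None:
        return None
    with tempfile.TemporaryDirectory() as tmp:
        main_cc = os.path.join(tmp, "main.cc")
        foo_c = os.path.join(tmp, "foo.c")
        with open(main_cc, "w") as f:
            f.write(CXX_SRC)
        with open(foo_c, "w") as f:
            f.write(C_SRC)
        try:
            subprocess.check_call([cxx, "-c", main_cc, "-o", main_cc + ".o"])
            subprocess.check_call([cc, "-c", foo_c, "-o", foo_c + ".o"])
        except subprocess.CalledProcessError:
            return None
        # The interesting step: link with the *C* driver only.  On some
        # GCC versions this fails with an undefined reference to
        # __gxx_personality_v0, which is what the configure test detects.
        link = subprocess.run(
            [cc, main_cc + ".o", foo_c + ".o",
             "-o", os.path.join(tmp, "prog")],
            stderr=subprocess.DEVNULL,
        )
        return link.returncode == 0
```

A False result corresponds to the case where configure must fall back to linking with the C++ compiler (or, per the later patch, use MAINCC as the linker front-end).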
I prepared the patch for 2.4.2 since it is indeed a bugfix. The current test produces wrong results if the compiler is GCC 4.0 which inhibits a successful build of Python 2.4.
I should probably add that I'm not flagging that I think there's a problem here. I'm mostly urging caution; I hate having to cut brown-paper-bag releases <wink>. If possible, can the folks on c++-sig try this patch out and put their results in the patch discussion?

If you're keen, you could try jumping onto HP's testdrive systems (http://www.testdrive.hp.com/). From what I recall, they have a bunch of systems with non-gcc C++ compilers, including the DEC^WDigital^WCompaq^WHP one on the alphas, and the HP C++ compiler on the HP/UX boxes[1].

(and, it should be added, I very much appreciate the work you've put into fixing this problem)

Anthony

[1] dunno how useful the HP/UX C++ compiler is going to be - last time I was exposed to it, many years ago, it was... not good.

--
Anthony Baxter <anthony@interlink.com.au>
It's never too late to have a happy childhood.
Anthony Baxter wrote:
I should probably add that I'm not flagging that I think there's a problem here. I'm mostly urging caution - I hate having to cut brown-paper-bag releases <wink>. If possible, can the folks on c++-sig try this patch out and put their results in the patch discussion? If you're keen, you could try jumping onto HP's testdrive systems (http://www.testdrive.hp.com/).
From what I recall, they have a bunch of systems with non-gcc C++ compilers, including the DEC^WDigital^Compaq^WHP one on the alphas, and the HP C++ compiler on the HP/UX boxes[1].
I've looked at the patch, and it looks fairly safe, so I committed it. Regards, Martin
On Sun, Aug 07, 2005 at 11:11:56PM +0200, "Martin v. Löwis" wrote:
I've looked at the patch, and it looks fairly safe, so I committed it.
Thanks. I did not forget my promise to look into a more comprehensive approach to the C++ build issues. But I first need to better understand the potential impact on distutils. And, foremost, I need to finish my thesis, so my spare-time projects progress very slowly.

Regards

Christoph

--
http://www.informatik.tu-darmstadt.de/TI/Mitarbeiter/cludwig.html
LiDIA: http://www.informatik.tu-darmstadt.de/TI/LiDIA/Welcome.html
Hi, this is to continue a discussion started back in July by a posting by Dave Abrahams <url:http://thread.gmane.org/gmane.comp.python.devel/69651> regarding the compiler (C vs. C++) used to compile python's main() and to link the executable. On Sat, Jul 16, 2005 at 12:13:58PM +0200, Christoph Ludwig wrote:
On Sun, Jul 10, 2005 at 07:41:06PM +0200, "Martin v. Löwis" wrote:
Maybe. For Python 2.4, feel free to contribute a more complex test. For Python 2.5, I would prefer if the entire code around ccpython.cc was removed.
I submitted patch #1239112 that implements the test involving two TUs for Python 2.4. I plan to work on a more comprehensive patch for Python 2.5 but that will take some time.
I finally had the spare time to look into this problem again and submitted patch #1324762. The proposed patch implements the following:

1) The configure option --with-cxx is renamed --with-cxx-main. This was done to avoid surprising the user by the changed meaning. Furthermore, it is now possible that CXX has a different value than provided by --with-cxx-main, so the old name would have been confusing.

2) The compiler used to translate python's main() function is stored in the configure / Makefile variable MAINCC. By default, MAINCC=$(CC). If --with-cxx-main is given (without an appended compiler name), then MAINCC=$(CXX). If --with-cxx-main=<compiler> is on the configure command line, then MAINCC=<compiler>. Additionally, configure sets CXX=<compiler> unless CXX was already set on the configure command line.

3) The command used to link the python executable is (as before) stored in LINKCC. By default, LINKCC='$(PURIFY) $(MAINCC)', i.e. the linker front-end is the compiler used to translate main(). If necessary, LINKCC can be set on the configure command line, in which case it won't be altered.

4) If CXX is not set by the user (on the command line or via --with-cxx-main), then configure tries several likely C++ compiler names. CXX is assigned the first name that refers to a callable program in the system. (CXX is set even if python is built with a C compiler only, so distutils can build C++ extensions.)

5) Modules/ccpython.cc is no longer used and can be removed.

I think that makes it possible to build python appropriately on every platform:

- By default, python is built with the C compiler only; CXX is assigned the name of a "likely" C++ compiler. This works fine, e.g., on ELF systems like x86 / Linux where python should not have any dependency on the C++ runtime to avoid conflicts with C++ extensions. distutils can still build C++ extensions since CXX names a callable C++ compiler.

- On platforms that require main() to be a C++ function if C++ extensions are to be imported, the user can configure python --with-cxx-main. On platforms where one must compile main() with a C++ compiler, but does not need to link the executable with the same compiler, the user can specify both --with-cxx-main and LINKCC on the configure command line.

Best regards

Christoph

--
http://www.informatik.tu-darmstadt.de/TI/Mitarbeiter/cludwig.html
LiDIA: http://www.informatik.tu-darmstadt.de/TI/LiDIA/Welcome.html
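The way the patch's defaults cascade can be modeled as a small function. This is only a sketch of the logic described in the message above, not configure itself; the "g++" fallback is a placeholder for "first likely C++ compiler found on the system":

```python
def resolve_build_vars(with_cxx_main=None, cxx=None, linkcc=None):
    """Model the MAINCC / CXX / LINKCC defaults described above.

    with_cxx_main: None  -> option absent
                   True  -> bare --with-cxx-main
                   str   -> --with-cxx-main=<compiler>
    cxx / linkcc:  values the user set explicitly, else None.
    """
    if with_cxx_main is None:
        maincc = "$(CC)"
    elif with_cxx_main is True:
        maincc = "$(CXX)"
    else:
        maincc = with_cxx_main
        if cxx is None:
            cxx = with_cxx_main          # configure also sets CXX
    if cxx is None:
        cxx = "g++"                      # placeholder for the PATH search
    if linkcc is None:
        linkcc = "$(PURIFY) " + maincc   # linker front-end = MAINCC
    return {"MAINCC": maincc, "CXX": cxx, "LINKCC": linkcc}
```

For example, the default build yields MAINCC=$(CC) and LINKCC='$(PURIFY) $(CC)', while --with-cxx-main=g++-3.4 yields MAINCC=g++-3.4 and sets CXX to the same compiler.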
Christoph Ludwig <cludwig@cdc.informatik.tu-darmstadt.de> writes:
Hi,
this is to continue a discussion started back in July by a posting by Dave Abrahams <url:http://thread.gmane.org/gmane.comp.python.devel/69651> regarding the compiler (C vs. C++) used to compile python's main() and to link the executable.
On Sat, Jul 16, 2005 at 12:13:58PM +0200, Christoph Ludwig wrote:
On Sun, Jul 10, 2005 at 07:41:06PM +0200, "Martin v. Löwis" wrote:
Maybe. For Python 2.4, feel free to contribute a more complex test. For Python 2.5, I would prefer if the entire code around ccpython.cc was removed.
I submitted patch #1239112 that implements the test involving two TUs for Python 2.4. I plan to work on a more comprehensive patch for Python 2.5 but that will take some time.
I finally had the spare time to look into this problem again and submitted patch #1324762. The proposed patch implements the following:
I just wanted to write to encourage some Python developers to look at (and accept!) Christoph's patch. This is really crucial for smooth interoperability between C++ and Python. Thank you, Dave -- Dave Abrahams Boost Consulting www.boost-consulting.com
David Abrahams wrote:
I just wanted to write to encourage some Python developers to look at (and accept!) Christoph's patch. This is really crucial for smooth interoperability between C++ and Python.
I did, and accepted the patch. If there is anything left to be done, please submit another patch. Regards, Martin
participants (6)
- "Martin v. Löwis"
- Anthony Baxter
- Christoph Ludwig
- David Abrahams
- Nicholas Bastin
- Ralf W. Grosse-Kunstleve