scipy.scons branch: building numpy and scipy with scons

Hi,

I've just reached a first usable scipy.scons branch, so that scipy can be built entirely with scons (assuming you build numpy with scons too). You can get it from http://svn.scipy.org/svn/scipy/branches/scipy.scons. To build it, you just need to use the numpy.scons branch instead of the trunk, and setupscons.py instead of setup.py. Again, I would be happy to hear about failures, successes (please report a ticket in this case), etc.

Some of the most interesting things I can think of which work with scons:

- you can control Fortran and C flags from the command line: CFLAGS and FFLAGS won't override necessary flags, only optimization flags, so you can easily play with warning and optimization flags. For example:

      CFLAGS='-W -Wall -Wextra -DDEBUG' FFLAGS='-DDEBUG -W -Wall -Wextra' python setupscons.py build

  will work for debugging. No need to care about -fPIC and co; all of this is handled automatically.
- dependencies are handled correctly thanks to scons: for example, if you change a library (e.g. by using MKL=None to disable MKL), only the link step will be redone.

platforms known to work
-----------------------
- Linux with gcc/g77 or gcc/gfortran (both ATLAS and MKL 9 were tested).
- Linux with Intel compilers (Intel and GNU compilers can also be mixed, AFAIK).
- Solaris with Sun compilers with sunperf, only tested on Indiana.

Notable non-working things:
---------------------------
- using netlib BLAS and LAPACK is not supported (only optimized ones are available: sunperf, ATLAS, MKL, and vecLib/Accelerate).
- parallel builds do NOT work (AFAICS, this is because f2py does some things which are not thread-safe, but I have not yet found the exact problem).
- I have not yet implemented the umfpack checker, so umfpack cannot be built yet.
- I have not yet tweaked Fortran compiler configurations for optimization, except for GNU compilers.
- C++ compiler configurations are not handled either.

cheers,

David

David,

I tried building the scons numpy and scipy. Numpy apparently built fine, since all tests pass. For scipy, I am having a problem during the build: something seems to be wrong with libdfftpack.a. I'm attaching the terminal output. Ubuntu 7.10, Xeon 64.

This is great stuff by the way, consider me a scons convert.

HTH,

David

2007/12/4, David Cournapeau <david@ar.media.kyoto-u.ac.jp>:
[...]
David

_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion

This is great stuff by the way, consider me a scons convert.
I've been convinced for some time already. Now, we just need an egg Builder :|

Matthieu
--
French PhD student
Website: http://miles.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

On Dec 4, 2007 11:23 PM, Matthieu Brucher <matthieu.brucher@gmail.com> wrote:
This is great stuff by the way, consider me a scons convert.
I'm already convinced for some time. Now, we just need an egg Builder :|
No we don't, since we are still using distutils: scons really just handles compiled extensions; everything else is still done by distutils, and hence can be handled by setuptools. I have not tested it, but since I try to keep the interaction with numpy.distutils as minimal as possible, it should be quick to fix any issues regarding eggs, if there are any.

David

On Dec 4, 2007 11:20 PM, David Huard <david.huard@gmail.com> wrote:
David,
I tried building the scons numpy and scipy. Numpy apparently built fine, since all tests pass. For scipy, I am having a problem during the build: something seems to be wrong with libdfftpack.a.
Grrrr, this is because of a scons bug. I temporarily fixed it using a quick hack in rev 4551 (the only clean way is to fix this upstream, but the scons developers are not always quick to respond to patches and suggestions, unfortunately).

David

David Cournapeau wrote:
Some of the most interesting things I can think of which work with scons: - you can control fortran and C flags from the command line: CFLAGS and FFLAGS won't override necessary flags, only optimization flags, so you can easily play with warning, optimization flags. For example:
CFLAGS='-W -Wall -Wextra -DDEBUG' FFLAGS='-DDEBUG -W -Wall -Wextra' python setupscons build
for debugging will work. No need to care about -fPIC and co, all this is handled automatically.
Can I override the flags which are handled automatically without modifying numpy? I just spent much of last night trying to get Intel Fortran on OS X working, and I had to dive into numpy.distutils.fcompiler.intel a lot. This is mildly acceptable, if irritating, for a numpy developer, but not acceptable for even a sophisticated user. Even if we were to keep our knowledge-base of Fortran compiler flags immediately up-to-date with every release of every Fortran compiler we follow, people will still be stuck with older versions of numpy.

numpy.distutils' behavior of using LDFLAGS, for example, to completely replace the flags instead of extending them mitigated this somewhat. It allowed someone to work around the stale flags in numpy.distutils in order to get something built. This is a hack, and it causes confusion when it isn't the desired behavior, but it worked.

But can we do something better with scons? One option which would work both with scons and the current numpy.distutils is to provide something like LDFLAGS_NO_REALLY which replaces the flags, and let LDFLAGS just extend the flags. That doesn't help the ultimate problem, but it makes the workaround more user-friendly. Another option is to have our Fortran compiler "knowledge-base" separable from the rest of the package. scons could try to import it from, say, numpy_fcompilers first and then look inside numpy.distutils if numpy_fcompilers is not found. That way, a user could download a fresh "knowledge-base" into their source tree (and possibly tweak it) without the burden of installing a new numpy.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
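The extend-vs-replace split proposed here can be sketched in a few lines. This is a hypothetical helper, not numpy code; LDFLAGS_NO_REALLY is the invented variable name from the message above:

```python
import os

def build_link_flags(default_flags, environ=os.environ):
    """Combine default link flags with user overrides.

    LDFLAGS extends the defaults; LDFLAGS_NO_REALLY (hypothetical
    name from this discussion) replaces them entirely.
    """
    override = environ.get("LDFLAGS_NO_REALLY")
    if override is not None:
        # Desperate mode: the user takes full responsibility.
        return override.split()
    flags = list(default_flags)
    flags.extend(environ.get("LDFLAGS", "").split())
    return flags
```

So `build_link_flags(["-shared"], {"LDFLAGS": "-L/opt/lib"})` extends, while setting only LDFLAGS_NO_REALLY discards the stale defaults outright.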

On Dec 4, 2007 12:27 PM, Robert Kern <robert.kern@gmail.com> wrote:
user-friendly. Another option is to have our Fortran compiler "knowledge-base" separable from the rest of the package. scons could try to import them from, say, numpy_fcompilers first and then look inside numpy.distutils if numpy_fcompilers is not found. That way, a user could download a fresh "knowledge-base" into their source tree (and possibly tweak it) without the burden of installing a new numpy.
Is this something that really needs to be a code package? Why can't this knowledge (or at least the easily overridable part of it) be packaged in one or more .conf/.ini plaintext files? That way, users could easily grab new data files or tweak the builtin ones, and at build time say

    setup.py install --compiler_conf=~/my_tweaked.conf

Is that impossible/unreasonable for some reason?

Cheers,

f
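As a sketch of what such a plaintext knowledge-base might look like, here is a hypothetical .ini layout and a reader for it. The section and key names are invented for illustration; no such file format existed in numpy.distutils:

```python
from configparser import ConfigParser

# Hypothetical layout for a compiler .conf file (names invented):
EXAMPLE_CONF = """
[ifort]
flags_arch = -xW -tpp7
flags_opt = -O3
flags_debug = -g
"""

def load_compiler_flags(text, compiler):
    """Read per-compiler flag lists from an .ini-style config."""
    parser = ConfigParser()
    parser.read_string(text)
    section = parser[compiler]
    return {key: section[key].split() for key in section}
```

A user could then drop a tweaked copy of the file next to their source tree and point the build at it, without touching installed numpy code.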

Fernando Perez wrote:
On Dec 4, 2007 12:27 PM, Robert Kern <robert.kern@gmail.com> wrote:
user-friendly. Another option is to have our Fortran compiler "knowledge-base" separable from the rest of the package. scons could try to import them from, say, numpy_fcompilers first and then look inside numpy.distutils if numpy_fcompilers is not found. That way, a user could download a fresh "knowledge-base" into their source tree (and possibly tweak it) without the burden of installing a new numpy.
Is this something that really needs to be a code package? Why can't this knowledge (or at least the easily overridable part of it) be packaged in one or more .conf/.ini plaintext files? In that way, users could easily grab new data files or tweak the builtin ones, and at build time say
setup.py install --compiler_conf=~/my_tweaked.conf
Is that impossible/unreasonable for some reason?
It's not impossible, but there are at least a couple of places where it might be unreasonable. For example, look at the get_flags_arch() for Intel compilers:

    def get_flags_arch(self):
        v = self.get_version()
        opt = []
        if cpu.has_fdiv_bug():
            opt.append('-fdiv_check')
        if cpu.has_f00f_bug():
            opt.append('-0f_check')
        if cpu.is_PentiumPro() or cpu.is_PentiumII() or cpu.is_PentiumIII():
            opt.extend(['-tpp6'])
        elif cpu.is_PentiumM():
            opt.extend(['-tpp7', '-xB'])
        elif cpu.is_Pentium():
            opt.append('-tpp5')
        elif cpu.is_PentiumIV() or cpu.is_Xeon():
            opt.extend(['-tpp7', '-xW'])
        if v and v <= '7.1':
            if cpu.has_mmx() and (cpu.is_PentiumII() or cpu.is_PentiumIII()):
                opt.append('-xM')
        elif v and v >= '8.0':
            if cpu.is_PentiumIII():
                opt.append('-xK')
                if cpu.has_sse3():
                    opt.extend(['-xP'])
            elif cpu.is_PentiumIV():
                opt.append('-xW')
                if cpu.has_sse2():
                    opt.append('-xN')
            elif cpu.is_PentiumM():
                opt.extend(['-xB'])
            if (cpu.is_Xeon() or cpu.is_Core2() or cpu.is_Core2Extreme()) and cpu.getNCPUs() == 2:
                opt.extend(['-xT'])
            if cpu.has_sse3() and (cpu.is_PentiumIV() or cpu.is_CoreDuo() or cpu.is_CoreSolo()):
                opt.extend(['-xP'])
        if cpu.has_sse2():
            opt.append('-arch SSE2')
        elif cpu.has_sse():
            opt.append('-arch SSE')
        return opt

Expressing that without code could be hairy. That said, using configuration files as override mechanisms for each of the get_flags_*() methods would be feasible, especially if there were a script to dump the current flag set to the configuration file.

--
Robert Kern
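The "script to dump the current flag set" idea could look something like the sketch below. The compiler object is a stand-in (a mapping of option names to callables), not the real numpy.distutils fcompiler class:

```python
from configparser import ConfigParser
import io

def dump_flags(compiler_name, flag_methods):
    """Serialize computed flag lists into an .ini-style text.

    flag_methods maps option names (e.g. 'flags_arch') to callables
    returning flag lists, mimicking the get_flags_*() methods.
    """
    parser = ConfigParser()
    parser[compiler_name] = {
        name: " ".join(func()) for name, func in flag_methods.items()
    }
    out = io.StringIO()
    parser.write(out)
    return out.getvalue()
```

The dumped file gives users a correct starting point to tweak, instead of writing a section from scratch.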

On Dec 4, 2007 1:24 PM, Robert Kern <robert.kern@gmail.com> wrote:
Fernando Perez wrote:
Is this something that really needs to be a code package? Why can't this knowledge (or at least the easily overridable part of it) be packaged in one or more .conf/.ini plaintext files? In that way, users could easily grab new data files or tweak the builtin ones, and at build time say
setup.py install --compiler_conf=~/my_tweaked.conf
Is that impossible/unreasonable for some reason?
It's not impossible, but there are at least a couple of places where it might be unreasonable. For example, look at the get_flags_arch() for Intel compilers:
[...]

I see. How about an alternate approach: exposing a simple API and allowing users to declare a *python* file to execfile() at load time looking for the config? Something like:

    setup.py install --compiler_conf=~/my_tweaked_config.py

where the config file would be (sketch, not real code here):

    def make_flags(compiler, etc...):
        flags = []
        ....
        return flags

There could be a simple API for what functions the config file can declare (down to their names and signatures), and if any of them are declared, they get called and their output is used. They get fed the default state of the same variables, so that they can choose to modify or outright replace them based on the user's need. The config code would then be

    user_ns = {}
    execfile(user_config_filename, user_ns)
    for name, val in user_ns.items():
        if name in approved_functions and callable(val):
            flags[name] = val(*approved_functions[name].default_args)

What say you?

f
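A runnable version of this hook mechanism, for the curious (Python 3 replaces execfile() with exec(); all names here come from the sketch, not from any real numpy API):

```python
# Names of hook functions a user config is allowed to declare.
APPROVED_HOOKS = {"make_flags"}

# Default state fed to each hook so it can extend or replace it.
DEFAULT_FLAGS = {"make_flags": ["-O2"]}

def load_user_config(source):
    """Execute a user config string and apply any approved hooks."""
    user_ns = {}
    exec(source, user_ns)
    flags = dict(DEFAULT_FLAGS)
    for name, val in user_ns.items():
        if name in APPROVED_HOOKS and callable(val):
            flags[name] = val(flags[name])
    return flags

# Example user config, as a string standing in for the file contents:
config = """
def make_flags(defaults):
    return defaults + ['-W', '-Wall']
"""
```

Only functions whose names appear in the approved set are ever called, so a stray helper in the config file does no harm.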

Fernando Perez wrote:
On Dec 4, 2007 1:24 PM, Robert Kern <robert.kern@gmail.com> wrote:
Fernando Perez wrote:
Is this something that really needs to be a code package? Why can't this knowledge (or at least the easily overridable part of it) be packaged in one or more .conf/.ini plaintext files? In that way, users could easily grab new data files or tweak the builtin ones, and at build time say
setup.py install --compiler_conf=~/my_tweaked.conf
Is that impossible/unreasonable for some reason? It's not impossible, but there are at least a couple of places where it might be unreasonable. For example, look at the get_flags_arch() for Intel compilers:
[...]
I see. How about an alternate approach: exposing a simple api and allowing users to declare a *python* file to execfile() at load time looking for the config?
Something like:
setup.py install --compiler_conf=~/my_tweaked_config.py
where the config file would be (sketch, not real code here):
    def make_flags(compiler, etc...):
        flags = []
        ....
        return flags
There could be a simple API for what functions the config file (down to their names and signatures) can declare, and if any of them are declared, they get called and their output is used. They get fed the default state of the same variables, so that they can choose to modify or outright replace them based on the user's need.
The config code would then
    user_ns = {}
    execfile(user_config_filename, user_ns)
    for name, val in user_ns.items():
        if name in approved_functions and callable(val):
            flags[name] = val(*approved_functions[name].default_args)
What say you?
Well, like I said, for tweaking, a simple data file works better than code. There's no need to do all of those "if" tests since I know what platform I'm on. We should definitely have a simple data file that we can read flags from. It's just the general case that requires code.

One thing in favor of numpy_fcompilers is that we can ship updates to the general case more frequently. This means that other packages using Fortran (but not tied to a particular platform) can ship the updated code instead of telling all of their users to read their Fortran compiler manuals or ask on the mailing list for the correct settings to work around our old defects.

Tweaking should be the province of the developer and the desperate. We have to be able to tweak, but we should spend more time on preventing the need for tweaking.

--
Robert Kern

Robert Kern wrote:
Tweaking should be the province of the developer and the desperate. We have to be able to tweak, but we should spend more time on preventing the need for tweaking.
Yes, I can only agree 100% with you. This is maybe *the* reason why I started this whole thing. Distutils' "architecture" (I am talking about distutils in the python stdlib) makes tweaking totally impossible for the mere mortal, because it modifies everything everywhere. We should be able to build out of the box without tweaking in more situations than now; tweaking should be possible, but not at the expense of the out-of-the-box experience.

cheers,

David

Robert Kern wrote:
David Cournapeau wrote:
Some of the most interesting things I can think of which work with scons: - you can control fortran and C flags from the command line: CFLAGS and FFLAGS won't override necessary flags, only optimization flags, so you can easily play with warning, optimization flags. For example:
CFLAGS='-W -Wall -Wextra -DDEBUG' FFLAGS='-DDEBUG -W -Wall -Wextra' python setupscons build
for debugging will work. No need to care about -fPIC and co, all this is handled automatically.
Can I override the flags which are handled automatically without modifying numpy? I just spent much of last night trying to get Intel Fortran on OS X working, and I had to dive into numpy.distutils.fcompiler.intel a lot. This is mildly acceptable, if irritating, for a numpy developer, but not acceptable for even a sophisticated user. Even if we were to keep our knowledge-base of Fortran compiler flags immediately up-to-date with every release of every Fortran compiler we follow, people will still be stuck with older versions of numpy.
numpy.distutils' behavior of using LDFLAGS, for example, to completely replace the flags instead of extending them mitigated this, somewhat. It allowed someone to work around the stale flags in numpy.distutils in order to get something built. This is a hack, and it causes confusion when this isn't the desired behavior, but it worked.
But can we do something better with scons? One option which would work both with scons and the current numpy.distutils is to provide something like LDFLAGS_NO_REALLY which replaces the flags and let LDFLAGS just extend the flags. That doesn't help the ultimate problem, but it makes the workaround more user-friendly. Another option is to have our Fortran compiler "knowledge-base" separable from the rest of the package. scons could try to import them from, say, numpy_fcompilers first and then look inside numpy.distutils if numpy_fcompilers is not found. That way, a user could download a fresh "knowledge-base" into their source tree (and possibly tweak it) without the burden of installing a new numpy.
First, let me briefly outline the basic flow of scons (scons is called once per package; since packages are totally independent, we can just consider one package):

- the distutils scons command finds the compilers and their paths, sets up the build directory, and then calls scons with those parameters on the command line (scons runs in its own process). This is done in numpy\distutils\command\scons.py.
- I create an environment using the parameters given by distutils: I initialize the scons CC, FORTRAN and CXX tools. Those tools define several flags, like -fPIC for shared objects, etc. This is done in numpy\distutils\scons\core\numpyenv.py (_GetNumpyEnvironment).
- I customize the flags depending on the compiler used: this is done in numpy\distutils\scons\core\default.py. This is where the optimization and warning flags are set. I then merge those flags with the scons environment, and those are the ones the user gets (in the SConstruct) from GetNumpyEnvironment.

So you really have two places where flags are set:

- optimization and warning flags are set in default.py. We could use a file to override those quite easily.
- flags like -fPIC are defined at the tool level. This is done inside scons; for cases where it does not work, I use my own modified tools (in numpy\distutils\scons\tools: any tool in this directory will be picked up first, before the scons ones).

So to go back to your problem: if I understand correctly, what is needed is to update the scons tools. Since those are kept in one place, I think it would be safe to update them independently. But I don't understand exactly how this could be installed in the source tree without reinstalling numpy? I think this is better than completely overriding the compilation flags, personally. But if you really want that possibility, I can add it, too, without too much trouble. Normally, the way I define and use compilation flags should be flexible enough to enable several approaches. I implemented the ones I most needed, but others are welcome.

David
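The "picked up first" behavior described above amounts to an ordered directory search. A minimal sketch, with stand-in directory names rather than the real numpy/distutils/scons paths:

```python
import os

def find_tool(tool_name, search_dirs):
    """Return the path of the first <tool_name>.py found, or None.

    Directories earlier in search_dirs (e.g. the local override
    directory) shadow later ones (e.g. scons' builtin Tool dir).
    """
    for directory in search_dirs:
        candidate = os.path.join(directory, tool_name + ".py")
        if os.path.exists(candidate):
            return candidate
    return None
```

With the override directory listed first, dropping a fixed ifort.py there shadows the stale builtin one without touching the scons installation.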

David Cournapeau wrote:
So to go back to your problem: if I understand correctly, what is needed is to update the scons tools. Since those are kept at one place, I think it would be safe to update them independently. But I don't understand exactly how this could be installed in the source tree without reinstalling numpy ? I think this is better than completely overriding the compilation flags, personally. But if you really want this possibility, I can add it, too, without too much trouble.
I don't think you could install it into an already-installed numpy package. My suggestion is to keep the implementations of the tools inside the numpy package as they are now, *except* that we look for another package first before going inside numpy.distutils.scons.tools. I called it "numpy_fcompilers", though I now suspect "numpy_scons_tools" might be more appropriate. If the package numpy_scons_tools doesn't exist, the implementations inside numpy.distutils.scons.tools are used. A variation on this would be to provide an --option to the scons command giving a "package path" to look for tools. E.g.

    python setup.py scons --tool-path=my_local_fcompilers,site_fcompilers,numpy.distutils.scons.tools

This, too, is a workaround for the less-than-desirable situation of having numpy's sizable build infrastructure bundled with the numpy package itself. If this stuff were an entirely separate package focused on providing this scons-based build infrastructure, then we wouldn't have a problem. We could just update it on its own release schedule. People would probably be more willing to use development versions of it, too, instead of having to also buy into the development version of their array package as well.

--
Robert Kern
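The package-fallback lookup suggested here is a straightforward try-import chain. Both candidate package names come from the message above; neither ever shipped, so this is purely illustrative:

```python
import importlib

def import_tools(candidates=("numpy_scons_tools",
                             "numpy.distutils.scons.tools")):
    """Import the first available tools package from candidates."""
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("no scons tools package found")
```

A freshly downloaded numpy_scons_tools in the path would then silently shadow the bundled implementations.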

On Dec 5, 2007 1:14 PM, Robert Kern <robert.kern@gmail.com> wrote:
David Cournapeau wrote:
So to go back to your problem: if I understand correctly, what is needed is to update the scons tools. Since those are kept at one place, I think it would be safe to update them independently. But I don't understand exactly how this could be installed in the source tree without reinstalling numpy ? I think this is better than completely overriding the compilation flags, personally. But if you really want this possibility, I can add it, too, without too much trouble.
I don't think you could install it into an already installed numpy package. My suggestion is to keep the implementations of the tools inside the numpy package as they are now *except* that we look for another package first before going inside numpy.distutils.scons.tools . I called it "numpy_fcompilers" though I now suspect "numpy_scons_tools" might be more appropriate. If the package numpy_scons_tools doesn't exist, the implementations inside numpy.distutils.scons.tools are used. A variation on this would be to provide an --option to the scons command to provide a "package path" to look for tools. E.g.
python setup.py scons --tool-path=my_local_fcompilers,site_fcompilers,numpy.distutils.scons.tools
Ok, I was confused by numpy_fcompilers; I thought you were talking about the numpy.distutils.fcompiler module. A priori, I don't think it would cause any trouble to add this option. Internally, I already use a list of directories to look up tools, so this should be a matter of prepending the new directories to this list.
This, too, is a workaround for the less-than-desirable situation of having numpy's sizable build infrastructure bundled with the numpy package itself. If this stuff were an entirely separate package focused on providing this scons-based build infrastructure, then we wouldn't have a problem. We could just update it on its own release schedule. People would probably be more willing to use the development versions of it, too, instead of having to also buy into the development version of their array package as well.
The only problem I see is that this increases the chance of losing synchronization. I don't know if this is significant. IMHO, the only real solution would be to fix distutils (how many people in the python community want a shared library builder, for example?), but well, it is not gonna happen in the foreseeable future, unfortunately.

cheers,

David

David Cournapeau wrote:
On Dec 5, 2007 1:14 PM, Robert Kern <robert.kern@gmail.com> wrote:
This, too, is a workaround for the less-than-desirable situation of having numpy's sizable build infrastructure bundled with the numpy package itself. If this stuff were an entirely separate package focused on providing this scons-based build infrastructure, then we wouldn't have a problem. We could just update it on its own release schedule. People would probably be more willing to use the development versions of it, too, instead of having to also buy into the development version of their array package as well.
The only problem I see is that this increases the chance of losing synchronization. I don't know if this is significant.
The problem I see is that numpy-the-array-library and numpy.distutils-the-build-infrastructure are two related packages with *over*-synchronized cycles. We aren't going to push out a micro-release of numpy-the-array-library just because a new version of Intel Fortran comes out and changes its flags.
IMHO, the only real solution would be to fix distutils (how many people want a shared library builder in python community, for example ?), but well, it is not gonna happen in a foreseeable future, unfortunately
I don't see how that's relevant to the problem I raised. Supporting Fortran in the standard library would make the problem even worse. distutils is certainly not broken because it doesn't support Fortran.

--
Robert Kern

On Dec 5, 2007 3:07 PM, Robert Kern <robert.kern@gmail.com> wrote:
I don't see how that's relevant to the problem I raised. Supporting Fortran in the standard library would make the problem even worse. distutils is certainly not broken because it doesn't support Fortran.
Fortran support is indeed certainly not in the scope of distutils. I was just responding to the general problem that we have a huge build infrastructure, not to the particular Fortran problem. An infrastructure for easily adding new tools that can be distributed separately is something that distutils severely lacks, IMHO.

David

David Cournapeau wrote:
On Dec 5, 2007 3:07 PM, Robert Kern <robert.kern@gmail.com> wrote:
I don't see how that's relevant to the problem I raised. Supporting Fortran in the standard library would make the problem even worse. distutils is certainly not broken because it doesn't support Fortran.
Fortran support is indeed certainly not in the scope of distutils. I was just answering to the general problem that we have a huge build infrastructure, not to the particular fortran problem. Having an infrastructure for adding easily new tools, that can be distributed separately, is something that distutils severely lacks IMHO.
Ah, yes. I agree.

--
Robert Kern

On Dec 5, 2007 1:14 PM, Robert Kern <robert.kern@gmail.com> wrote:
David Cournapeau wrote:
So to go back to your problem: if I understand correctly, what is needed is to update the scons tools. Since those are kept at one place, I think it would be safe to update them independently. But I don't understand exactly how this could be installed in the source tree without reinstalling numpy ? I think this is better than completely overriding the compilation flags, personally. But if you really want this possibility, I can add it, too, without too much trouble.
I don't think you could install it into an already installed numpy package. My suggestion is to keep the implementations of the tools inside the numpy package as they are now *except* that we look for another package first before going inside numpy.distutils.scons.tools . I called it "numpy_fcompilers" though I now suspect "numpy_scons_tools" might be more appropriate. If the package numpy_scons_tools doesn't exist, the implementations inside numpy.distutils.scons.tools are used. A variation on this would be to provide an --option to the scons command to provide a "package path" to look for tools. E.g.
python setup.py scons --tool-path=my_local_fcompilers,site_fcompilers,numpy.distutils.scons.tools
Done in 4558 (I named the option --scons-tool-path). The problem I can foresee with this, though, is that if a custom tool is buggy, scons will fail, and it is not always obvious why. cheers, David

David Cournapeau wrote:
- I have not yet tweaked fortran compiler configurations for optimizations except for gnu compilers
Can you give us a brief overview of how to do this? For example, the Intel Fortran compiler's SHLINKFLAGS in scons-local/.../SCons/Tool/ifort.py are incorrect for version 10 on OS X. Would I copy that file to scons/tools/ and make my edits there? Do I then add 'ifort' to the list in scons/core/default.py?

--
Robert Kern

Robert Kern wrote:
David Cournapeau wrote:
- I have not yet tweaked fortran compiler configurations for optimizations except for gnu compilers
Can you give us a brief overview about how to do this? For example, the Intel Fortran compiler's SHLINKFLAGS in scons-local/.../SCons/Tool/ifort.py are incorrect for version 10 on OS X. Would I copy that file to scons/tool/ and make my edits there? Do I then add 'ifort' to the list in scons/core/default.py?
The basic rule is: if the code cannot run without a flag, the flag should be put in a tool, or at worst (but really only if you have no choice) in numpyenv.py. If the flag is for optimization, warnings, etc., then it should be put into default.py.

Basically, tools are not always up-to-date in scons, particularly for Fortran. So I provided a way to override the tools: as you noticed, you can put tools in .../scons/tools/, and those will be picked up first. This is independent from adding ifort in scons/core/default.py.

For Mac OS X, you may be bitten by -undefined dynamic_lookup. This is my fault: this flag is added at the wrong place. I put it temporarily in the python extension builder, but this is not where it should be put. Depending on its meaning, I can put it in the right place: does it give the traditional unix semantics of allowing unresolved symbols, instead of the default one, which is similar to windows (even for shared code, every symbol must be resolved)?

cheers,

David
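For concreteness, an overriding tool module dropped into .../scons/tools/ifort.py might look like the sketch below. A scons tool is just a module with generate() and exists() functions; the flag values here are illustrative for the ifort-10-on-OS-X case being discussed, not authoritative:

```python
# Hypothetical override of the stale builtin ifort tool (a sketch;
# real scons tools set many more construction variables).
def generate(env):
    env["FORTRAN"] = "ifort"
    # Replace the stale '-shared' with what OS X actually needs,
    # passing the undefined-symbol option through the linker driver.
    env["SHLINKFLAGS"] = ["-dynamiclib", "-Wl,-undefined,dynamic_lookup"]

def exists(env):
    # Real tools check that the compiler can be found on the path.
    return env.WhereIs("ifort") is not None
```

Because the override directory is searched before scons' own Tool directory, this module shadows the bundled ifort.py without reinstalling scons.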

David Cournapeau wrote:
Robert Kern wrote:
David Cournapeau wrote:
- I have not yet tweaked fortran compiler configurations for optimizations except for gnu compilers
Can you give us a brief overview about how to do this? For example, the Intel Fortran compiler's SHLINKFLAGS in scons-local/.../SCons/Tool/ifort.py are incorrect for version 10 on OS X. Would I copy that file to scons/tool/ and make my edits there? Do I then add 'ifort' to the list in scons/core/default.py?
The basic rule is: if the code cannot run without a flag, the flag should be put in a tool, or at worst (but really only if you have no choice) in numpyenv.py. If the flag is for optimization, warnings, etc., then it should be put into default.py. Basically, tools are not always up to date in scons, particularly for Fortran. So I provided a way to override the tools: as you noticed, you can put tools in .../scons/tools/, and those will be picked up first. This is independent from adding ifort in scons/core/default.py.
Right. In this case, "-shared" needs to be "-dynamiclib" on OS X, so this should go into the tool.
For Mac OS X, you may be bitten by -undefined dynamic_lookup. This is my fault: the flag is added in the wrong place; I put it temporarily in the python extension builder, but that is not where it belongs. Depending on its meaning, I can put it in the right place: does it give the traditional unix semantics of allowing unresolved symbols, instead of the default behaviour, which is similar to windows (even for shared code, every symbol must be resolved)?
That's the basic idea. Rigorously, it's probably a bit more involved once you start considering two-level namespaces and frameworks. One thing to note is that this option is only valid for GNU compilers. Linking with ifort, I need to use -Wl,-undefined,dynamic_lookup.

David Cournapeau wrote:
Robert Kern wrote:
David Cournapeau wrote:
- I have not yet tweaked fortran compiler configurations for optimizations except for gnu compilers
Can you give us a brief overview about how to do this? For example, the Intel Fortran compiler's SHLINKFLAGS in scons-local/.../SCons/Tool/ifort.py are incorrect for version 10 on OS X. Would I copy that file to scons/tool/ and make my edits there? Do I then add 'ifort' to the list in scons/core/default.py?
The basic rule is: if the code cannot run without a flag, the flag should be put in a tool, or at worst (but really only if you have no choice) in numpyenv.py. If the flag is for optimization, warnings, etc., then it should be put into default.py. Basically, tools are not always up to date in scons, particularly for Fortran. So I provided a way to override the tools: as you noticed, you can put tools in .../scons/tools/, and those will be picked up first. This is independent from adding ifort in scons/core/default.py.
On Dec 5, 2007 1:19 PM, Robert Kern <robert.kern@gmail.com> wrote:
Right. In this case, "-shared" needs to be "-dynamiclib" on OS X, so this should go into the tool.
That's strange: -shared should not be used at all on Mac OS X. Either -bundle or -dynamiclib should be used (this is in the applelink tool, so it is independent of the compiler used, normally). But I may have done something wrong, because I don't know much about Mac OS X idiosyncrasies here: basically, what's the difference between -dynamiclib and -bundle? When I build python extensions, I used the module scons builder, which is the same as the shared-library builder except on Mac OS X (there, shared libraries use -dynamiclib and modules use -bundle). I must confess that I used the thing which worked in this case, without knowing exactly what I was doing.
That's the basic idea. Rigorously, it's probably a bit more involved when you start considering two-level namespaces and framework.
scons handles frameworks, but I feel that it is gcc specific.
One thing to note is that this option is only valid for GNU compilers. Linking with ifort, I need to use -Wl,-undefined,dynamic_lookup. Can't we just add a linker flag instead of passing it through the compiler? We still use the Apple linker with ifort/icc, no?
To sum it up: I think that, implementation-wise, scons on Mac OS X has many gcc-only idiosyncrasies; fortunately, once we know exactly what the flags should be in which cases, fixing this is only a matter of changing the Intel tools (for Intel compilers on Mac OS X). OTOH, API-wise, there are no gcc idiosyncrasies, which is what matters in the mid-term: the scons tools abstraction is quite good IMHO, and you don't have to fear breaking unrelated things as you do with distutils. All this should also be modified and sent upstream to scons. cheers, David

David Cournapeau wrote:
David Cournapeau wrote:
Robert Kern wrote:
David Cournapeau wrote:
- I have not yet tweaked fortran compiler configurations for optimizations except for gnu compilers
Can you give us a brief overview about how to do this? For example, the Intel Fortran compiler's SHLINKFLAGS in scons-local/.../SCons/Tool/ifort.py are incorrect for version 10 on OS X. Would I copy that file to scons/tool/ and make my edits there? Do I then add 'ifort' to the list in scons/core/default.py?
The basic rule is: if the code cannot run without a flag, the flag should be put in a tool, or at worst (but really only if you have no choice) in numpyenv.py. If the flag is for optimization, warnings, etc., then it should be put into default.py. Basically, tools are not always up to date in scons, particularly for Fortran. So I provided a way to override the tools: as you noticed, you can put tools in .../scons/tools/, and those will be picked up first. This is independent from adding ifort in scons/core/default.py.
On Dec 5, 2007 1:19 PM, Robert Kern <robert.kern@gmail.com> wrote:
Right. In this case, "-shared" needs to be "-dynamiclib" on OS X, so this should go into the tool.
That's strange: -shared should not be used at all on Mac OS X. Either -bundle or -dynamiclib should be used (this is in the applelink tool, so it is independent of the compiler used, normally).
I was only reading code; I haven't tested building Fortran extensions, yet. However, using a generic link tool would be the wrong thing to do for most Fortran extensions, I think. Where does it get the correct Fortran runtime libraries from? Some Fortran compilers really like to be the linker when mixing languages.
But I may have done something wrong, because I don't know much about Mac OS X idiosyncrasies here: basically, what's the difference between -dynamiclib and -bundle?
When I build python extensions, I used the module scons builder, which is the same as the shared-library builder except on Mac OS X (there, shared libraries use -dynamiclib and modules use -bundle). I must confess that I used the thing which worked in this case, without knowing exactly what I was doing.
ifort only supports -dynamiclib. For the regular linker, -bundle is correct for building Python extensions; I may have to rethink using ifort to link, then. Basically, a bundle can be dynamically loaded while dylibs can't, so Python uses bundles for extension modules. http://www.finkproject.org/doc/porting/shared.php What confuses me is that I successfully built some Fortran modules last night using numpy.distutils and ifort -dynamiclib. Hmm.
One thing to note is that this option is only valid for GNU compilers. Linking with ifort, I need to use -Wl,-undefined,dynamic_lookup . Can't we just add a linker flag instead of using it from the compiler ? We still use the apple linker with ifort/icc, no ?
I don't know. We'd have to locate all of the Fortran runtime libraries and add them. How do we do that? Or is that already done?

On Dec 5, 2007 3:11 PM, Robert Kern <robert.kern@gmail.com> wrote:
I was only reading code; I haven't tested building Fortran extensions, yet. However, using a generic link tool would be the wrong thing to do for most Fortran extensions, I think. Where does it get the correct Fortran runtime libraries from? Some Fortran compilers really like to be the linker when mixing languages.
Yes. SCons does not know how to do that, so I did it the "autoconf" way: I implemented a mini library to get those linker flags, so that I can still link C and Fortran code with gcc rather than with gfortran or g77 (for example). The relevant code (+ tests) is in scons/fortran.py and scons/fortran_scons.py, plus scons/tests. I have tested the parsing part with g77, gfortran, ifort and sunfort.
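The "autoconf way" described here amounts to compiling a dummy Fortran file with the compiler's verbose flag and scraping the resulting link line for the runtime -L/-l flags. A simplified sketch of that parsing step (an illustration of the idea, not the actual scons/fortran.py code; the sample link line below is made up):

```python
def parse_f77_link_line(line):
    """Extract -L and -l flags (the Fortran runtime) from a verbose link line."""
    # libraries the C linker would add on its own are skipped, since gcc
    # will supply them itself when it drives the final link
    default_libs = ('-lc', '-lgcc', '-lgcc_s')
    flags = []
    for token in line.split():
        if token.startswith('-L') or token.startswith('-l'):
            if token not in default_libs:
                flags.append(token)
    return flags

# Example: a (simplified, invented) link line of the kind printed by
# `gfortran -v dummy.f`
verbose = ("/usr/bin/ld -dynamic-linker /lib64/ld-linux-x86-64.so.2 "
           "dummy.o -L/usr/lib/gcc/x86_64-linux-gnu/4.2 "
           "-lgfortran -lm -lgcc_s -lgcc -lc")
print(parse_f77_link_line(verbose))
# ['-L/usr/lib/gcc/x86_64-linux-gnu/4.2', '-lgfortran', '-lm']
```

The extracted flags can then be handed to gcc at link time, which is what lets C and Fortran objects be linked without using the Fortran compiler as the linker.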
But I may have done something wrong, because I don't know much about mac os X idiosyncraties on this: basically, what's the difference between -dynamiclib and -bundle ?
When I build python extension, I used the module scons builder, which is the same than shared library except on mac os X (on mac os X, shared libraries use -dynamiclib, modules use -bundle). I must confess that I used the thing which worked in thise case, without knowing exactly what i was doing.
ifort only supports -dynamiclib. For the regular linker, -bundle is correct for building Python extensions; I may have to rethink about using ifort to link, then. Basically, a bundle can be dynamically loaded while dylibs can't, so Python uses bundles for extension modules.
Ah, that rings a bell, I remember now.
What confuses me is that I successfully built some Fortran modules last night using numpy.distutils and ifort -dynamiclib. Hmm.
One thing to note is that this option is only valid for GNU compilers. Linking with ifort, I need to use -Wl,-undefined,dynamic_lookup . Can't we just add a linker flag instead of using it from the compiler ? We still use the apple linker with ifort/icc, no ?
I don't know. We'd have to locate all of the Fortran runtime libraries and add them. How do we do that? Or is that already done?
Yes, this is already done. To see how it works concretely (from the package developer's POV), you can take a look here, for example: http://projects.scipy.org/scipy/scipy/browser/branches/scipy.scons/scipy/int... You use CheckF77Clib during the configuration stage, and if successful, this puts all the relevant link flags into env['F77_LDFLAGS']. This is more "hackish" than defining the fortran runtime and co. in the tools, but also more robust. What I like about this approach is that it is testable (you can have trivial unit tests for those checkers, without needing the tested tools to be present). cheers, David
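The pattern described here, as it might appear in a package's SConscript, can be sketched as follows. This is a non-runnable configuration sketch under assumptions: the helper names (`GetNumpyEnvironment`, `NumpyConfigure`) follow the branch's conventions as discussed in this thread, but the exact API may differ.

```python
# SConscript sketch (illustrative only; runs inside a scons build, not standalone)
from numpy.distutils.scons import GetNumpyEnvironment   # assumed import path
from numpy.distutils.scons import CheckF77Clib          # assumed import path

env = GetNumpyEnvironment(ARGUMENTS)

config = env.NumpyConfigure(custom_tests={'CheckF77Clib': CheckF77Clib})
if config.CheckF77Clib():
    # on success the checker has stored the Fortran runtime link flags
    env.AppendUnique(LINKFLAGS=env['F77_LDFLAGS'])
config.Finish()
```

The point of routing this through a configure-style checker rather than hard-coding runtimes in each tool is exactly the testability David mentions: the checker's parsing logic can be unit-tested against canned compiler output without the compilers being installed.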
participants (6):
- David Cournapeau
- David Cournapeau
- David Huard
- Fernando Perez
- Matthieu Brucher
- Robert Kern