From numpy-svn at scipy.org Tue May 1 01:59:54 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 1 May 2007 00:59:54 -0500 (CDT) Subject: [Numpy-svn] r3733 - trunk/numpy/doc Message-ID: <20070501055954.8C40C39C00B@new.scipy.org> Author: jarrod.millman Date: 2007-05-01 00:59:49 -0500 (Tue, 01 May 2007) New Revision: 3733 Modified: trunk/numpy/doc/DISTUTILS.txt Log: adding toc to distutils docs Modified: trunk/numpy/doc/DISTUTILS.txt =================================================================== --- trunk/numpy/doc/DISTUTILS.txt 2007-04-30 22:11:03 UTC (rev 3732) +++ trunk/numpy/doc/DISTUTILS.txt 2007-05-01 05:59:49 UTC (rev 3733) @@ -9,6 +9,8 @@ :Revision: $LastChangedRevision$ :SVN source: $HeadURL$ +.. contents:: + SciPy structure ''''''''''''''' From numpy-svn at scipy.org Tue May 1 07:43:45 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 1 May 2007 06:43:45 -0500 (CDT) Subject: [Numpy-svn] r3734 - in branches/multicore: . numpy numpy/core numpy/core/src numpy/core/tests numpy/distutils numpy/distutils/command numpy/distutils/fcompiler numpy/distutils/tests numpy/doc numpy/f2py numpy/testing Message-ID: <20070501114345.791FB39C03C@new.scipy.org> Author: eric Date: 2007-05-01 06:43:40 -0500 (Tue, 01 May 2007) New Revision: 3734 Added: branches/multicore/numpy/distutils/tests/test_fcompiler_gnu.py Modified: branches/multicore/ branches/multicore/COMPATIBILITY branches/multicore/numpy/__init__.py branches/multicore/numpy/core/numeric.py branches/multicore/numpy/core/src/arrayobject.c branches/multicore/numpy/core/src/multiarraymodule.c branches/multicore/numpy/core/src/umathmodule.c.src branches/multicore/numpy/core/tests/test_multiarray.py branches/multicore/numpy/core/tests/test_regression.py branches/multicore/numpy/core/tests/test_umath.py branches/multicore/numpy/distutils/ccompiler.py branches/multicore/numpy/distutils/command/build_clib.py branches/multicore/numpy/distutils/command/build_ext.py branches/multicore/numpy/distutils/fcompiler/__init__.py branches/multicore/numpy/distutils/fcompiler/gnu.py branches/multicore/numpy/distutils/fcompiler/intel.py branches/multicore/numpy/distutils/fcompiler/vast.py branches/multicore/numpy/distutils/system_info.py branches/multicore/numpy/doc/DISTUTILS.txt branches/multicore/numpy/f2py/f90mod_rules.py branches/multicore/numpy/testing/numpytest.py Log: Merged revisions 3719-3733 via svnmerge from http://svn.scipy.org/svn/numpy/trunk ........ r3720 | stefan | 2007-04-20 07:12:43 -0500 (Fri, 20 Apr 2007) | 2 lines Fix pointer size for F90 allocatable arrays on 64-bit platform. Closes ticket #147. ........ r3721 | oliphant | 2007-04-20 15:27:01 -0500 (Fri, 20 Apr 2007) | 1 line Fix byte-swapping error on conversion to Object array from big-endian array (byte-swapping was happening twice in that case). This fixes #503. ........ r3722 | oliphant | 2007-04-21 23:29:42 -0500 (Sat, 21 Apr 2007) | 1 line Add loadtxt and savetxt adapted from matplotlib. ........ r3723 | cookedm | 2007-04-22 15:57:58 -0500 (Sun, 22 Apr 2007) | 9 lines Better version handling for gnu and intel Fortran compilers - gnu compilers check if the version is >= 4, in which case it's gfortran - add a test file for gnu compiler check - simplify version matching on intel compilers to be more flexible - add FCompiler.find_executables so that subclasses can find executables at .customize() time, instead of when the class is created. ........ 
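To illustrate the rule described in r3723 in plain Python (a rough sketch only; the function name and regex here are invented and are not the numpy.distutils code, whose real implementation appears in the gnu.py diff further down):

    import re

    def classify_gnu_fortran(version_string):
        # Classify 'g77 --version' / 'gfortran --version' output following
        # the rule from r3723: a major version >= 4 means gfortran.
        if not version_string.startswith('GNU Fortran'):
            return None
        m = re.search(r'([0-9]+(?:\.[0-9]+)+)', version_string)
        if m is None:
            return None
        version = m.group(1)
        major = int(version.split('.')[0])
        if major >= 4:
            return ('gfortran', version)
        return ('g77', version)

    # sample version strings quoted in the gnu.py comments below
    print(classify_gnu_fortran('GNU Fortran (GCC 3.2) 3.2 20020814 (release)'))  # ('g77', '3.2')
    print(classify_gnu_fortran('GNU Fortran 95 (GCC) 4.1.0'))                    # ('gfortran', '4.1.0')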
r3724 | cookedm | 2007-04-22 16:12:57 -0500 (Sun, 22 Apr 2007) | 14 lines Some distutils work: - Add better support for C++ in numpy.distutils. Instead of munging the C compiler command, build_clib and build_ext call the new Compiler.cxx_compiler() method to get a version of the compiler suitable for C++ (this also takes care of the special needs of AIX). - If config_fc is specified in the Extension definition, merge that info instead of replacing it (otherwise, the name of the Fortran compiler is overwritten). This is done at the key level (ex., compiler options are replaced instead of appended). - clean up compiler.py a bit - clean up linking in build_ext ........ r3725 | cookedm | 2007-04-22 16:18:20 -0500 (Sun, 22 Apr 2007) | 4 lines NumpyTest.test() takes an extra argument, all, which, if true, makes it act like NumpyTest.testall(). This comes from some refactoring to remove duplicate code in .test and .testall(). ........ r3726 | oliphant | 2007-04-24 14:15:33 -0500 (Tue, 24 Apr 2007) | 1 line Add patch to system_info for building with MKL on Win32 #504 ........ r3727 | oliphant | 2007-04-24 16:56:06 -0500 (Tue, 24 Apr 2007) | 1 line Restore invariant of (x == (x/y)*y + (x%y)) by making integer division with mixed-sign operands match Python. ........ r3728 | cookedm | 2007-04-25 04:12:45 -0500 (Wed, 25 Apr 2007) | 2 lines Add test case for integer division ........ r3729 | oliphant | 2007-04-27 15:37:55 -0500 (Fri, 27 Apr 2007) | 1 line Fix silly initialization of input variable. ........ r3730 | stefan | 2007-04-30 03:53:00 -0500 (Mon, 30 Apr 2007) | 2 lines Add regression test. Fix order of arguments in test_multiarray. ........ r3731 | rkern | 2007-04-30 11:29:51 -0500 (Mon, 30 Apr 2007) | 1 line Fix typo. ........ r3733 | jarrod.millman | 2007-05-01 00:59:49 -0500 (Tue, 01 May 2007) | 2 lines adding toc to distutils docs ........ 
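The invariant restored in r3727 is the one plain Python integers already obey; a tiny standalone check (values chosen arbitrarily for illustration, not taken from the patch):

    # x == (x // y) * y + (x % y) must hold for every sign combination,
    # which forces floor division rather than C-style truncation.
    for x, y in [(7, 3), (-7, 3), (7, -3), (-7, -3)]:
        q, r = x // y, x % y
        assert x == q * y + r
        print('%4d // %2d = %2d, %4d %% %2d = %2d' % (x, y, q, x, y, r))
    # e.g. -7 // 3 is -3 with remainder 2, where C truncation would give -2 and -1.

The new check_division_int test in test_umath.py (in the merged diff below) exercises the same rule on integer arrays.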
Property changes on: branches/multicore ___________________________________________________________________ Name: svnmerge-integrated - /branches/distutils-revamp:1-2752 /trunk:1-3718 + /branches/distutils-revamp:1-2752 /trunk:1-3733 Modified: branches/multicore/COMPATIBILITY =================================================================== --- branches/multicore/COMPATIBILITY 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/COMPATIBILITY 2007-05-01 11:43:40 UTC (rev 3734) @@ -15,7 +15,7 @@ If you used typecode characters: -'c' -> 'S1' +'c' -> 'S1' or 'c' 'b' -> 'B' '1' -> 'b' 's' -> 'h' @@ -35,12 +35,12 @@ a->descr->zero --> PyArray_Zero(a) a->descr->one --> PyArray_One(a) -Numeric/arrayobject.h --> numpy/arrayobject.h +Numeric/arrayobject.h --> numpy/oldnumeric.h # These will actually work and are defines for PyArray_BYTE, # but you really should change it in your code -PyArray_CHAR --> PyArray_BYTE +PyArray_CHAR --> PyArray_CHAR (or PyArray_STRING which is more flexible) PyArray_SBYTE --> PyArray_BYTE Modified: branches/multicore/numpy/__init__.py =================================================================== --- branches/multicore/numpy/__init__.py 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/numpy/__init__.py 2007-05-01 11:43:40 UTC (rev 3734) @@ -84,11 +84,8 @@ testing --> NumpyTest """ - def test(level=1, verbosity=1): - if level <= 10: - return NumpyTest().test(level, verbosity) - else: - return NumpyTest().testall(level, verbosity) + def test(*args, **kw): + return NumpyTest().test(*args, **kw) test.__doc__ = NumpyTest.test.__doc__ import add_newdocs Modified: branches/multicore/numpy/core/numeric.py =================================================================== --- branches/multicore/numpy/core/numeric.py 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/numpy/core/numeric.py 2007-05-01 11:43:40 UTC (rev 3734) @@ -12,7 +12,7 @@ 'array_repr', 'array_str', 'set_string_function', 'little_endian', 'require', 'fromiter', 'array_equal', 'array_equiv', - 'indices', 'fromfunction', + 'indices', 'fromfunction', 'loadtxt', 'savetxt', 'load', 'loads', 'isscalar', 'binary_repr', 'base_repr', 'ones', 'identity', 'allclose', 'compare_chararrays', 'putmask', 'seterr', 'geterr', 'setbufsize', 'getbufsize', @@ -610,6 +610,177 @@ file = _file(file,"rb") return _cload(file) +# Adapted from matplotlib + +def _getconv(dtype): + typ = dtype.type + if issubclass(typ, bool_): + return lambda x: bool(int(x)) + if issubclass(typ, integer): + return int + elif issubclass(typ, floating): + return float + elif issubclass(typ, complex): + return complex + else: + return str + + +def _string_like(obj): + try: obj + '' + except (TypeError, ValueError): return 0 + return 1 + +def loadtxt(fname, dtype=float, comments='#', delimiter=None, converters=None, + skiprows=0, usecols=None, unpack=False): + """ + Load ASCII data from fname into an array and return the array. + + The data must be regular, same number of values in every row + + fname can be a filename or a file handle. Support for gzipped files is + automatic, if the filename ends in .gz + + See scipy.loadmat to read and write matfiles. + + Example usage: + + X = loadtxt('test.dat') # data in two columns + t = X[:,0] + y = X[:,1] + + Alternatively, you can do the same with "unpack"; see below + + X = loadtxt('test.dat') # a matrix of data + x = loadtxt('test.dat') # a single column of data + + + dtype - the data-type of the resulting array. 
If this is a + record data-type, the the resulting array will be 1-d and each row will + be interpreted as an element of the array. The number of columns + used must match the number of fields in the data-type in this case. + + comments - the character used to indicate the start of a comment + in the file + + delimiter is a string-like character used to seperate values in the + file. If delimiter is unspecified or none, any whitespace string is + a separator. + + converters, if not None, is a dictionary mapping column number to + a function that will convert that column to a float. Eg, if + column 0 is a date string: converters={0:datestr2num} + + skiprows is the number of rows from the top to skip + + usecols, if not None, is a sequence of integer column indexes to + extract where 0 is the first column, eg usecols=(1,4,5) to extract + just the 2nd, 5th and 6th columns + + unpack, if True, will transpose the matrix allowing you to unpack + into named arguments on the left hand side + + t,y = load('test.dat', unpack=True) # for two column data + x,y,z = load('somefile.dat', usecols=(3,5,7), unpack=True) + + """ + + if _string_like(fname): + if fname.endswith('.gz'): + import gzip + fh = gzip.open(fname) + else: + fh = file(fname) + elif hasattr(fname, 'seek'): + fh = fname + else: + raise ValueError('fname must be a string or file handle') + X = [] + + dtype = multiarray.dtype(dtype) + defconv = _getconv(dtype) + converterseq = None + if converters is None: + converters = {} + if dtype.names is not None: + converterseq = [_getconv(dtype.fields[name][0]) \ + for name in dtype.names] + + for i,line in enumerate(fh): + if i 0) != (y > 0)) && (x % y != 0)) tmp--; + *((@typ@ *)op)= tmp; + } + } +} +/**end repeat**/ + + +/**begin repeat +#TYP=BYTE,UBYTE,SHORT,USHORT,INT,UINT,LONG,ULONG,LONGLONG,ULONGLONG# +#typ=char, ubyte, short, ushort, int, uint, long, ulong, longlong, ulonglong# +#otyp=float*4, double*6# +*/ +static void @TYP at _true_divide(char **args, intp *dimensions, intp *steps, void *func) { register intp i, is1=steps[0],is2=steps[1],os=steps[2],n=dimensions[0]; @@ -1078,6 +1112,8 @@ #define @TYP at _floor_divide @TYP at _divide /**end repeat**/ + + /**begin repeat #TYP=FLOAT,DOUBLE,LONGDOUBLE# #typ=float,double,longdouble# Modified: branches/multicore/numpy/core/tests/test_multiarray.py =================================================================== --- branches/multicore/numpy/core/tests/test_multiarray.py 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/numpy/core/tests/test_multiarray.py 2007-05-01 11:43:40 UTC (rev 3734) @@ -380,7 +380,7 @@ byteorder = '=' if x.dtype.byteorder == '|': byteorder = '|' - assert_equal(byteorder,x.dtype.byteorder) + assert_equal(x.dtype.byteorder,byteorder) self._check_range(x,expected_min,expected_max) return x Modified: branches/multicore/numpy/core/tests/test_regression.py =================================================================== --- branches/multicore/numpy/core/tests/test_regression.py 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/numpy/core/tests/test_regression.py 2007-05-01 11:43:40 UTC (rev 3734) @@ -645,5 +645,13 @@ r = N.array([['abc']], dtype=[('var1', '|S20')]) assert str(r['var1'][0][0]) == 'abc' + def check_take_output(self, level=rlevel): + """Ensure that 'take' honours output parameter.""" + x = N.arange(12).reshape((3,4)) + a = N.take(x,[0,2],axis=1) + b = N.zeros_like(a) + N.take(x,[0,2],axis=1,out=b) + assert_array_equal(a,b) + if __name__ == "__main__": NumpyTest().run() Modified: 
branches/multicore/numpy/core/tests/test_umath.py =================================================================== --- branches/multicore/numpy/core/tests/test_umath.py 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/numpy/core/tests/test_umath.py 2007-05-01 11:43:40 UTC (rev 3734) @@ -5,6 +5,14 @@ from numpy import zeros, ndarray, array, choose restore_path() +class test_division(NumpyTestCase): + def check_division_int(self): + # int division should return the floor of the result, a la Python + x = array([5, 10, 90, 100, -5, -10, -90, -100, -120]) + assert_equal(x / 100, [0, 0, 0, 1, -1, -1, -1, -1, -2]) + assert_equal(x // 100, [0, 0, 0, 1, -1, -1, -1, -1, -2]) + assert_equal(x % 100, [5, 10, 90, 0, 95, 90, 10, 0, 80]) + class test_power(NumpyTestCase): def check_power_float(self): x = array([1., 2., 3.]) Modified: branches/multicore/numpy/distutils/ccompiler.py =================================================================== --- branches/multicore/numpy/distutils/ccompiler.py 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/numpy/distutils/ccompiler.py 2007-05-01 11:43:40 UTC (rev 3734) @@ -21,6 +21,10 @@ distutils.sysconfig._config_vars['OPT'] = '-Wall -g -O0' #distutils.sysconfig._init_posix = _new_init_posix +def replace_method(klass, method_name, func): + m = new.instancemethod(func, None, klass) + setattr(klass, method_name, m) + # Using customized CCompiler.spawn. def CCompiler_spawn(self, cmd, display=None): if display is None: @@ -37,8 +41,9 @@ print o raise DistutilsExecError,\ 'Command "%s" failed with exit status %d' % (cmd, s) -CCompiler.spawn = new.instancemethod(CCompiler_spawn,None,CCompiler) +replace_method(CCompiler, 'spawn', CCompiler_spawn) + def CCompiler_object_filenames(self, source_filenames, strip_dir=0, output_dir=''): if output_dir is None: output_dir = '' @@ -63,8 +68,7 @@ obj_names.append(obj_name) return obj_names -CCompiler.object_filenames = new.instancemethod(CCompiler_object_filenames, - None,CCompiler) +replace_method(CCompiler, 'object_filenames', CCompiler_object_filenames) def CCompiler_compile(self, sources, output_dir=None, macros=None, include_dirs=None, debug=0, extra_preargs=None, @@ -114,7 +118,7 @@ # Return *all* object filenames, not just the ones we just built. return objects -CCompiler.compile = new.instancemethod(CCompiler_compile,None,CCompiler) +replace_method(CCompiler, 'compile', CCompiler_compile) def CCompiler_customize_cmd(self, cmd): """ Customize compiler using distutils command. @@ -139,8 +143,7 @@ self.set_link_objects(cmd.link_objects) return -CCompiler.customize_cmd = new.instancemethod(\ - CCompiler_customize_cmd,None,CCompiler) +replace_method(CCompiler, 'customize_cmd', CCompiler_customize_cmd) def _compiler_to_string(compiler): props = [] @@ -179,10 +182,8 @@ print _compiler_to_string(self) print '*'*80 -CCompiler.show_customization = new.instancemethod(\ - CCompiler_show_customization,None,CCompiler) +replace_method(CCompiler, 'show_customization', CCompiler_show_customization) - def CCompiler_customize(self, dist, need_cxx=0): # See FCompiler.customize for suggested usage. 
log.info('customize %s' % (self.__class__.__name__)) @@ -203,7 +204,7 @@ if hasattr(self,'compiler') and self.compiler[0].find('cc')>=0: if not self.compiler_cxx: - if self.compiler[0][:3] == 'gcc': + if self.compiler[0].startswith('gcc'): a, b = 'gcc', 'g++' else: a, b = 'cc', 'c++' @@ -215,10 +216,23 @@ log.warn('Missing compiler_cxx fix for '+self.__class__.__name__) return -CCompiler.customize = new.instancemethod(\ - CCompiler_customize,None,CCompiler) +replace_method(CCompiler, 'customize', CCompiler_customize) -def simple_version_match(pat=r'[-.\d]+', ignore=None, start=''): +def simple_version_match(pat=r'[-.\d]+', ignore='', start=''): + """ + Simple matching of version numbers, for use in CCompiler and FCompiler + classes. + + :Parameters: + pat : regex matching version numbers. + ignore : false or regex matching expressions to skip over. + start : false or regex matching the start of where to start looking + for version numbers. + + :Returns: + A function that is appropiate to use as the .version_match + attribute of a CCompiler class. + """ def matcher(self, version_string): pos = 0 if start: @@ -271,15 +285,26 @@ self.version = version return version -CCompiler.get_version = new.instancemethod(\ - CCompiler_get_version,None,CCompiler) +replace_method(CCompiler, 'get_version', CCompiler_get_version) +def CCompiler_cxx_compiler(self): + cxx = copy(self) + cxx.compiler_so = [cxx.compiler_cxx[0]] + cxx.compiler_so[1:] + if sys.platform.startswith('aix') and 'ld_so_aix' in cxx.linker_so[0]: + # AIX needs the ld_so_aix script included with Python + cxx.linker_so = [cxx.linker_so[0]] + cxx.compiler_cxx[0] \ + + cxx.linker_so[2:] + else: + cxx.linker_so = [cxx.compiler_cxx[0]] + cxx.linker_so[1:] + return cxx + +replace_method(CCompiler, 'cxx_compiler', CCompiler_cxx_compiler) + compiler_class['intel'] = ('intelccompiler','IntelCCompiler', "Intel C Compiler for 32-bit applications") compiler_class['intele'] = ('intelccompiler','IntelItaniumCCompiler', "Intel C Itanium Compiler for Itanium-based applications") -ccompiler._default_compilers = ccompiler._default_compilers \ - + (('linux.*','intel'),('linux.*','intele')) +ccompiler._default_compilers += (('linux.*','intel'),('linux.*','intele')) if sys.platform == 'win32': compiler_class['mingw32'] = ('mingw32ccompiler', 'Mingw32CCompiler', Modified: branches/multicore/numpy/distutils/command/build_clib.py =================================================================== --- branches/multicore/numpy/distutils/command/build_clib.py 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/numpy/distutils/command/build_clib.py 2007-05-01 11:43:40 UTC (rev 3734) @@ -21,13 +21,11 @@ def initialize_options(self): old_build_clib.initialize_options(self) self.fcompiler = None - return def finalize_options(self): old_build_clib.finalize_options(self) self.set_undefined_options('build_ext', ('fcompiler', 'fcompiler')) - return def have_f_sources(self): for (lib_name, build_info) in self.libraries: @@ -84,7 +82,6 @@ self.fcompiler.show_customization() self.build_libraries(self.libraries) - return def get_source_files(self): self.check_library_list(self.libraries) @@ -94,8 +91,6 @@ return filenames def build_libraries(self, libraries): - - for (lib_name, build_info) in libraries: # default compilers compiler = self.compiler @@ -109,8 +104,6 @@ "a list of source filenames") % lib_name sources = list(sources) - - lib_file = compiler.library_filename(lib_name, output_dir=self.build_clib) @@ -124,9 +117,9 @@ config_fc = build_info.get('config_fc',{}) 
if fcompiler is not None and config_fc: - log.info('using setup script specified config_fc '\ + log.info('using additional config_fc from setup script '\ 'for fortran compiler: %s' \ - % (config_fc)) + % (config_fc,)) from numpy.distutils.fcompiler import new_fcompiler requiref90 = build_info.get('language','c')=='f90' fcompiler = new_fcompiler(compiler=self.fcompiler.compiler_type, @@ -134,7 +127,10 @@ dry_run=self.dry_run, force=self.force, requiref90=requiref90) - fcompiler.customize(config_fc) + dist = self.distribution + base_config_fc = dist.get_option_dict('config_fc').copy() + base_config_fc.update(config_fc) + fcompiler.customize(base_config_fc) macros = build_info.get('macros') include_dirs = build_info.get('include_dirs') @@ -165,19 +161,15 @@ if cxx_sources: log.info("compiling C++ sources") - old_compiler = self.compiler.compiler_so[0] - self.compiler.compiler_so[0] = self.compiler.compiler_cxx[0] - - cxx_objects = compiler.compile(cxx_sources, - output_dir=self.build_temp, - macros=macros, - include_dirs=include_dirs, - debug=self.debug, - extra_postargs=extra_postargs) + cxx_compiler = compiler.cxx_compiler() + cxx_objects = cxx_compiler.compile(cxx_sources, + output_dir=self.build_temp, + macros=macros, + include_dirs=include_dirs, + debug=self.debug, + extra_postargs=extra_postargs) objects.extend(cxx_objects) - self.compiler.compiler_so[0] = old_compiler - if f_sources: log.info("compiling Fortran sources") f_objects = fcompiler.compile(f_sources, @@ -193,10 +185,8 @@ debug=self.debug) clib_libraries = build_info.get('libraries',[]) - for lname,binfo in libraries: + for lname, binfo in libraries: if lname in clib_libraries: clib_libraries.extend(binfo[1].get('libraries',[])) if clib_libraries: build_info['libraries'] = clib_libraries - - return Modified: branches/multicore/numpy/distutils/command/build_ext.py =================================================================== --- branches/multicore/numpy/distutils/command/build_ext.py 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/numpy/distutils/command/build_ext.py 2007-05-01 11:43:40 UTC (rev 3734) @@ -201,17 +201,15 @@ if cxx_sources: log.info("compiling C++ sources") - old_compiler = self.compiler.compiler_so[0] - self.compiler.compiler_so[0] = self.compiler.compiler_cxx[0] + cxx_compiler = self.compiler.cxx_compiler() - c_objects += self.compiler.compile(cxx_sources, + c_objects += cxx_compiler.compile(cxx_sources, output_dir=output_dir, macros=macros, include_dirs=include_dirs, debug=self.debug, extra_postargs=extra_args, **kws) - self.compiler.compiler_so[0] = old_compiler check_for_f90_modules = not not fmodule_sources @@ -272,10 +270,7 @@ objects.extend(ext.extra_objects) extra_args = ext.extra_link_args or [] - try: - old_linker_so_0 = self.compiler.linker_so[0] - except: - pass + linker = self.compiler.link_shared_object use_fortran_linker = getattr(ext,'language','c') in ['f77','f90'] \ and self.fcompiler is not None @@ -308,38 +303,31 @@ if cxx_sources: # XXX: Which linker should be used, Fortran or C++? 
log.warn('mixing Fortran and C++ is untested') - link = self.fcompiler.link_shared_object + linker = self.fcompiler.link_shared_object language = ext.language or self.fcompiler.detect_language(f_sources) else: - link = self.compiler.link_shared_object + linker = self.compiler.link_shared_object if sys.version[:3]>='2.3': language = ext.language or self.compiler.detect_language(sources) else: language = ext.language if cxx_sources: - self.compiler.linker_so[0] = self.compiler.compiler_cxx[0] + linker = self.compiler.cxx_compiler().link_shared_object if sys.version[:3]>='2.3': kws = {'target_lang':language} else: kws = {} - link(objects, ext_filename, - libraries=self.get_libraries(ext) + c_libraries + clib_libraries, - library_dirs=ext.library_dirs + c_library_dirs + clib_library_dirs, - runtime_library_dirs=ext.runtime_library_dirs, - extra_postargs=extra_args, - export_symbols=self.get_export_symbols(ext), - debug=self.debug, - build_temp=self.build_temp,**kws) + linker(objects, ext_filename, + libraries=self.get_libraries(ext) + c_libraries + clib_libraries, + library_dirs=ext.library_dirs+c_library_dirs+clib_library_dirs, + runtime_library_dirs=ext.runtime_library_dirs, + extra_postargs=extra_args, + export_symbols=self.get_export_symbols(ext), + debug=self.debug, + build_temp=self.build_temp,**kws) - try: - self.compiler.linker_so[0] = old_linker_so_0 - except: - pass - - return - def _libs_with_msvc_and_fortran(self, c_libraries, c_library_dirs): # Always use system linker when using MSVC compiler. f_lib_dirs = [] Modified: branches/multicore/numpy/distutils/fcompiler/__init__.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/__init__.py 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/numpy/distutils/fcompiler/__init__.py 2007-05-01 11:43:40 UTC (rev 3734) @@ -28,7 +28,7 @@ Methods that subclasses may redefine: - get_version_cmd(), get_linker_so(), get_version() + find_executables(), get_version_cmd(), get_linker_so(), get_version() get_flags(), get_flags_opt(), get_flags_arch(), get_flags_debug() get_flags_f77(), get_flags_opt_f77(), get_flags_arch_f77(), get_flags_debug_f77(), get_flags_f90(), get_flags_opt_f90(), @@ -113,6 +113,11 @@ ## They are private to FCompiler class and may return unexpected ## results if used elsewhere. So, you have been warned.. + def find_executables(self): + """Modify self.executables to hold found executables, instead of + searching for them at class creation time.""" + pass + def get_version_cmd(self): """ Compiler command to print out version information. 
""" f77 = self.executables['compiler_f77'] @@ -267,6 +272,7 @@ noarch = conf.get('noarch',[None,noopt])[1] debug = conf.get('debug',[None,0])[1] + self.find_executables() f77 = self.__get_cmd('compiler_f77','F77',(conf,'f77exec')) f90 = self.__get_cmd('compiler_f90','F90',(conf,'f90exec')) Modified: branches/multicore/numpy/distutils/fcompiler/gnu.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/gnu.py 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/numpy/distutils/fcompiler/gnu.py 2007-05-01 11:43:40 UTC (rev 3734) @@ -4,7 +4,6 @@ import warnings from numpy.distutils.cpuinfo import cpu -from numpy.distutils.ccompiler import simple_version_match from numpy.distutils.fcompiler import FCompiler from numpy.distutils.exec_command import exec_command, find_executable from numpy.distutils.misc_util import mingw32, msvc_runtime_library @@ -12,8 +11,32 @@ class GnuFCompiler(FCompiler): compiler_type = 'gnu' - version_match = simple_version_match(start=r'GNU Fortran (?!95)') + def gnu_version_match(self, version_string): + """Handle the different versions of GNU fortran compilers""" + m = re.match(r'GNU Fortran', version_string) + if not m: + return None + m = re.match(r'GNU Fortran\s+95.*?([0-9-.]+)', version_string) + if m: + return ('gfortran', m.group(1)) + m = re.match(r'GNU Fortran.*?([0-9-.]+)', version_string) + if m: + v = m.group(1) + if v.startswith('0') or v.startswith('2') or v.startswith('3'): + # the '0' is for early g77's + return ('g77', v) + else: + # at some point in the 4.x series, the ' 95' was dropped + # from the version string + return ('gfortran', v) + + def version_match(self, version_string): + v = self.gnu_version_match(version_string) + if not v or v[0] != 'g77': + return None + return v[1] + # 'g77 --version' results # SunOS: GNU Fortran (GCC 3.2) 3.2 20020814 (release) # Debian: GNU Fortran (GCC) 3.3.3 20040110 (prerelease) (Debian) @@ -21,18 +44,15 @@ # GNU Fortran 0.5.25 20010319 (prerelease) # Redhat: GNU Fortran (GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)) 3.2.2 20030222 (Red Hat Linux 3.2.2-5) - for fc_exe in map(find_executable,['g77','f77']): - if os.path.isfile(fc_exe): - break executables = { - 'version_cmd' : [fc_exe,"--version"], - 'compiler_f77' : [fc_exe, "-g", "-Wall","-fno-second-underscore"], + 'version_cmd' : ["g77", "--version"], + 'compiler_f77' : ["g77", "-g", "-Wall","-fno-second-underscore"], 'compiler_f90' : None, # Use --fcompiler=gnu95 for f90 codes 'compiler_fix' : None, - 'linker_so' : [fc_exe, "-g", "-Wall"], + 'linker_so' : ["g77", "-g", "-Wall"], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"], - 'linker_exe' : [fc_exe, "-g", "-Wall"] + 'linker_exe' : ["g77", "-g", "-Wall"] } module_dir_switch = None module_include_switch = None @@ -51,6 +71,13 @@ suggested_f90_compiler = 'gnu95' + def find_executables(self): + for fc_exe in [find_executable(c) for c in ['g77','f77']]: + if os.path.isfile(fc_exe): + break + for key in ['version_cmd', 'compiler_f77', 'linker_so', 'linker_exe']: + self.executables[key][0] = fc_exe + #def get_linker_so(self): # # win32 linking should be handled by standard linker # # Darwin g77 cannot be used as a linker. 
@@ -218,12 +245,11 @@ if gnu_ver >= '3.4.4': if cpu.is_PentiumM(): march_opt = '-march=pentium-m' - # Future: # if gnu_ver >= '4.3': # if cpu.is_Core2(): # march_opt = '-march=core2' - + # Note: gcc 3.2 on win32 has breakage with -march specified if '3.1.1' <= gnu_ver <= '3.4' and sys.platform=='win32': march_opt = '' @@ -250,26 +276,32 @@ class Gnu95FCompiler(GnuFCompiler): compiler_type = 'gnu95' - version_match = simple_version_match(start='GNU Fortran (95|\(GCC\))') + def version_match(self, version_string): + v = self.gnu_version_match(version_string) + if not v or v[0] != 'gfortran': + return None + return v[1] + # 'gfortran --version' results: + # XXX is the below right? # Debian: GNU Fortran 95 (GCC 4.0.3 20051023 (prerelease) (Debian 4.0.2-3)) + # GNU Fortran 95 (GCC) 4.1.2 20061115 (prerelease) (Debian 4.1.1-21) # OS X: GNU Fortran 95 (GCC) 4.1.0 # GNU Fortran 95 (GCC) 4.2.0 20060218 (experimental) # GNU Fortran (GCC) 4.3.0 20070316 (experimental) - for fc_exe in map(find_executable,['gfortran','f95']): - if os.path.isfile(fc_exe): - break executables = { - 'version_cmd' : [fc_exe,"--version"], - 'compiler_f77' : [fc_exe,"-Wall","-ffixed-form","-fno-second-underscore"], - 'compiler_f90' : [fc_exe,"-Wall","-fno-second-underscore"], - 'compiler_fix' : [fc_exe,"-Wall","-ffixed-form","-fno-second-underscore"], - 'linker_so' : [fc_exe,"-Wall"], + 'version_cmd' : ["gfortran", "--version"], + 'compiler_f77' : ["gfortran", "-Wall", "-ffixed-form", + "-fno-second-underscore"], + 'compiler_f90' : ["gfortran", "-Wall", "-fno-second-underscore"], + 'compiler_fix' : ["gfortran", "-Wall", "-ffixed-form", + "-fno-second-underscore"], + 'linker_so' : ["gfortran", "-Wall"], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"], - 'linker_exe' : [fc_exe,"-Wall"] + 'linker_exe' : ["gfortran", "-Wall"] } # use -mno-cygwin flag for g77 when Python is not Cygwin-Python @@ -283,6 +315,14 @@ g2c = 'gfortran' + def find_executables(self): + for fc_exe in [find_executable(c) for c in ['gfortran','f95']]: + if os.path.isfile(fc_exe): + break + for key in ['version_cmd', 'compiler_f77', 'compiler_f90', + 'compiler_fix', 'linker_so', 'linker_exe']: + self.executables[key][0] = fc_exe + def get_libraries(self): opt = GnuFCompiler.get_libraries(self) if sys.platform == 'darwin': Modified: branches/multicore/numpy/distutils/fcompiler/intel.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/intel.py 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/numpy/distutils/fcompiler/intel.py 2007-05-01 11:43:40 UTC (rev 3734) @@ -1,3 +1,6 @@ +# -*- encoding: iso-8859-1 -*- +# above encoding b/c there's a non-ASCII character in the sample output +# of intele # http://developer.intel.com/software/products/compilers/flin/ import os @@ -4,14 +7,18 @@ import sys from numpy.distutils.cpuinfo import cpu +from numpy.distutils.ccompiler import simple_version_match from numpy.distutils.fcompiler import FCompiler, dummy_fortran_file from numpy.distutils.exec_command import find_executable +def intel_version_match(type): + # Match against the important stuff in the version string + return simple_version_match(start=r'Intel.*?Fortran.*?%s.*?Version' % (type,)) + class IntelFCompiler(FCompiler): compiler_type = 'intel' - version_pattern = r'Intel\(R\) Fortran Compiler for 32-bit '\ - 'applications, Version (?P[^\s*]*)' + version_match = intel_version_match('32-bit') for fc_exe in map(find_executable,['ifort','ifc']): if os.path.isfile(fc_exe): @@ -74,10 +81,9 @@ 
class IntelItaniumFCompiler(IntelFCompiler): compiler_type = 'intele' - version_pattern = r'Intel\(R\) Fortran (90 Compiler Itanium\(TM\)|Itanium\(R\)) Compiler'\ - ' for (the Itanium\(TM\)|Itanium\(R\))-based applications(,|)'\ - '\s+Version (?P[^\s*]*)' + version_match = intel_version_match('Itanium') + #Intel(R) Fortran Itanium(R) Compiler for Itanium(R)-based applications #Version 9.1? ? Build 20060928 Package ID: l_fc_c_9.1.039 #Copyright (C) 1985-2006 Intel Corporation.? All rights reserved. @@ -101,8 +107,7 @@ class IntelEM64TFCompiler(IntelFCompiler): compiler_type = 'intelem' - version_pattern = r'Intel\(R\) Fortran Compiler for Intel\(R\) EM64T-based '\ - 'applications, Version (?P[^\s*]*)' + version_match = intel_version_match('EM64T-based') for fc_exe in map(find_executable,['ifort','efort','efc']): if os.path.isfile(fc_exe): @@ -125,11 +130,13 @@ opt.extend(['-tpp7', '-xW']) return opt +# Is there no difference in the version string between the above compilers +# and the Visual compilers? + class IntelVisualFCompiler(FCompiler): compiler_type = 'intelv' - version_pattern = r'Intel\(R\) Fortran Compiler for 32-bit applications, '\ - 'Version (?P[^\s*]*)' + version_match = intel_version_match('32-bit') ar_exe = 'lib.exe' fc_exe = 'ifl' @@ -181,9 +188,7 @@ class IntelItaniumVisualFCompiler(IntelVisualFCompiler): compiler_type = 'intelev' - version_pattern = r'Intel\(R\) Fortran (90 Compiler Itanium\(TM\)|Itanium\(R\)) Compiler'\ - ' for (the Itanium\(TM\)|Itanium\(R\))-based applications(,|)'\ - '\s+Version (?P[^\s*]*)' + version_match = intel_version_match('Itanium') fc_exe = 'efl' # XXX this is a wild guess ar_exe = IntelVisualFCompiler.ar_exe Modified: branches/multicore/numpy/distutils/fcompiler/vast.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/vast.py 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/numpy/distutils/fcompiler/vast.py 2007-05-01 11:43:40 UTC (rev 3734) @@ -26,6 +26,9 @@ module_dir_switch = None #XXX Fix me module_include_switch = None #XXX Fix me + def find_executables(self): + pass + def get_version_cmd(self): f90 = self.compiler_f90[0] d,b = os.path.split(f90) Modified: branches/multicore/numpy/distutils/system_info.py =================================================================== --- branches/multicore/numpy/distutils/system_info.py 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/numpy/distutils/system_info.py 2007-05-01 11:43:40 UTC (rev 3734) @@ -811,9 +811,12 @@ info = {} dict_append(info,**mkl) dict_append(info, - libraries = ['pthread'], define_macros=[('SCIPY_MKL_H',None)], include_dirs = incl_dirs) + if sys.platform == 'win32': + pass # win32 has no pthread library + else: + dict_append(info, libraries=['pthread']) self.set_info(**info) class lapack_mkl_info(mkl_info): @@ -822,7 +825,11 @@ mkl = get_info('mkl') if not mkl: return - lapack_libs = self.get_libs('lapack_libs',['mkl_lapack32','mkl_lapack64']) + if sys.platform == 'win32': + lapack_libs = self.get_libs('lapack_libs',['mkl_lapack']) + else: + lapack_libs = self.get_libs('lapack_libs',['mkl_lapack32','mkl_lapack64']) + info = {'libraries': lapack_libs} dict_append(info,**mkl) self.set_info(**info) Copied: branches/multicore/numpy/distutils/tests/test_fcompiler_gnu.py (from rev 3733, trunk/numpy/distutils/tests/test_fcompiler_gnu.py) Modified: branches/multicore/numpy/doc/DISTUTILS.txt =================================================================== --- 
branches/multicore/numpy/doc/DISTUTILS.txt 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/numpy/doc/DISTUTILS.txt 2007-05-01 11:43:40 UTC (rev 3734) @@ -9,6 +9,8 @@ :Revision: $LastChangedRevision$ :SVN source: $HeadURL$ +.. contents:: + SciPy structure ''''''''''''''' Modified: branches/multicore/numpy/f2py/f90mod_rules.py =================================================================== --- branches/multicore/numpy/f2py/f90mod_rules.py 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/numpy/f2py/f90mod_rules.py 2007-05-01 11:43:40 UTC (rev 3734) @@ -24,6 +24,7 @@ show=pprint.pprint from auxfuncs import * +import numpy as N import capi_maps import cfuncs import rules @@ -44,7 +45,8 @@ fgetdims1 = """\ external f2pysetdata logical ns - integer s(*),r,i,j + integer r,i,j + integer(%d) s(*) ns = .FALSE. if (allocated(d)) then do i=1,r @@ -56,7 +58,7 @@ deallocate(d) end if end if - if ((.not.allocated(d)).and.(s(1).ge.1)) then""" + if ((.not.allocated(d)).and.(s(1).ge.1)) then""" % N.intp().itemsize fgetdims2="""\ end if Modified: branches/multicore/numpy/testing/numpytest.py =================================================================== --- branches/multicore/numpy/testing/numpytest.py 2007-05-01 05:59:49 UTC (rev 3733) +++ branches/multicore/numpy/testing/numpytest.py 2007-05-01 11:43:40 UTC (rev 3734) @@ -11,7 +11,7 @@ __all__ = ['set_package_path', 'set_local_path', 'restore_path', 'IgnoreException', 'NumpyTestCase', 'NumpyTest', 'ScipyTestCase', 'ScipyTest', # for backward compatibility - 'importall' + 'importall', ] DEBUG=0 @@ -338,7 +338,7 @@ short_module_name = self._rename_map.get(short_module_name,short_module_name) return short_module_name - def _get_module_tests(self,module,level,verbosity): + def _get_module_tests(self, module, level, verbosity): mstr = self._module_str short_module_name = self._get_short_module_name(module) @@ -395,9 +395,8 @@ def _get_suite_list(self, test_module, level, module_name='__main__', verbosity=1): - mstr = self._module_str suite_list = [] - if hasattr(test_module,'test_suite'): + if hasattr(test_module, 'test_suite'): suite_list.extend(test_module.test_suite(level)._tests) for name in dir(test_module): obj = getattr(test_module, name) @@ -410,44 +409,14 @@ if getattr(suite,'isrunnable',lambda mthname:1)(mthname): suite_list.append(suite) if verbosity>=0: - self.info(' Found %s tests for %s' % (len(suite_list),module_name)) + self.info(' Found %s tests for %s' % (len(suite_list), module_name)) return suite_list - def test(self,level=1,verbosity=1): - """ Run Numpy module test suite with level and verbosity. - - level: - None --- do nothing, return None - < 0 --- scan for tests of level=abs(level), - don't run them, return TestSuite-list - > 0 --- scan for tests of level, run them, - return TestRunner - - verbosity: - >= 0 --- show information messages - > 1 --- show warnings on missing tests - - It is assumed that package tests suite follows the following - convention: for each package module, there exists file - /tests/test_.py that defines - TestCase classes (with names having prefix 'test_') with methods - (with names having prefixes 'check_' or 'bench_'); each of - these methods are called when running unit tests. - """ - if level is None: # Do nothing. 
- return - - if isinstance(self.package, str): - exec 'import %s as this_package' % (self.package) - else: - this_package = self.package - + def _test_suite_from_modules(self, this_package, level, verbosity): package_name = this_package.__name__ - modules = [] for name, module in sys.modules.items(): - if package_name != name[:len(package_name)] \ - or module is None: + if not name.startswith(package_name) or module is None: continue if not hasattr(module,'__file__'): continue @@ -465,59 +434,18 @@ suites.extend(self._get_suite_list(sys.modules[package_name], abs(level), verbosity=verbosity)) + return unittest.TestSuite(suites) - all_tests = unittest.TestSuite(suites) - if level<0: - return all_tests - - runner = unittest.TextTestRunner(verbosity=verbosity) - # Use the builtin displayhook. If the tests are being run - # under IPython (for instance), any doctest test suites will - # fail otherwise. - old_displayhook = sys.displayhook - sys.displayhook = sys.__displayhook__ - try: - runner.run(all_tests) - finally: - sys.displayhook = old_displayhook - return runner - - def testall(self,level=1,verbosity=1): - """ Run Numpy module test suite with level and verbosity. - - level: - None --- do nothing, return None - < 0 --- scan for tests of level=abs(level), - don't run them, return TestSuite-list - > 0 --- scan for tests of level, run them, - return TestRunner - - verbosity: - >= 0 --- show information messages - > 1 --- show warnings on missing tests - - Different from .test(..) method, this method looks for - TestCase classes from all files in /tests/ - directory and no assumptions are made for naming the - TestCase classes or their methods. - """ - if level is None: # Do nothing. - return - - if isinstance(self.package, str): - exec 'import %s as this_package' % (self.package) - else: - this_package = self.package + def _test_suite_from_all_tests(self, this_package, level, verbosity): + importall(this_package) package_name = this_package.__name__ - importall(this_package) - + # Find all tests/ directories under the package test_dirs_names = {} for name, module in sys.modules.items(): - if package_name != name[:len(package_name)] \ - or module is None: + if not name.startswith(package_name) or module is None: continue - if not hasattr(module,'__file__'): + if not hasattr(module, '__file__'): continue d = os.path.dirname(module.__file__) if os.path.basename(d)=='tests': @@ -532,8 +460,10 @@ test_dirs = test_dirs_names.keys() test_dirs.sort() + # For each file in each tests/ directory with a test case in it, + # import the file, and add the test cases to our list suite_list = [] - testcase_match = re.compile(r'\s*class\s+[_\w]+\s*\(.*TestCase').match + testcase_match = re.compile(r'\s*class\s+\w+\s*\(.*TestCase').match for test_dir in test_dirs: test_dir_module = test_dirs_names[test_dir] @@ -547,9 +477,9 @@ f = os.path.join(test_dir, fn) # check that file contains TestCase class definitions: - fid = open(f) + fid = open(f, 'r') skip = True - for line in fid.readlines(): + for line in fid: if testcase_match(line): skip = False break @@ -559,31 +489,73 @@ # import the test file n = test_dir_module + '.' 
+ base - sys.path.insert(0,test_dir) # in case test files import local modules + # in case test files import local modules + sys.path.insert(0, test_dir) + fo = None try: - test_module = imp.load_module(n, - open(f), - f, - ('.py', 'U', 1)) - except Exception, msg: - print 'Failed importing %s: %s' % (f,msg) + try: + fo = open(f) + test_module = imp.load_module(n, fo, f, + ('.py', 'U', 1)) + except Exception, msg: + print 'Failed importing %s: %s' % (f,msg) + continue + finally: + if fo: + fo.close() del sys.path[0] - continue - del sys.path[0] - for name in dir(test_module): - obj = getattr(test_module, name) - if type(obj) is not type(unittest.TestCase) \ - or not issubclass(obj, unittest.TestCase) \ - or not self.check_testcase_name(obj.__name__): - continue - for mthname in self._get_method_names(obj,abs(level)): - suite = obj(mthname) - if getattr(suite,'isrunnable',lambda mthname:1)(mthname): - suite_list.append(suite) + suites = self._get_suite_list(test_module, level, + module_name=n, + verbosity=verbosity) + suite_list.extend(suites) all_tests = unittest.TestSuite(suite_list) - if level<0: + return all_tests + + def test(self, level=1, verbosity=1, all=False): + """Run Numpy module test suite with level and verbosity. + + level: + None --- do nothing, return None + < 0 --- scan for tests of level=abs(level), + don't run them, return TestSuite-list + > 0 --- scan for tests of level, run them, + return TestRunner + > 10 --- run all tests (same as specifying all=True). + (backward compatibility). + + verbosity: + >= 0 --- show information messages + > 1 --- show warnings on missing tests + + all: + True --- run all test files (like self.testall()) + False (default) --- only run test files associated with a module + + It is assumed (when all=False) that package tests suite follows the + following convention: for each package module, there exists file + /tests/test_.py that defines TestCase classes + (with names having prefix 'test_') with methods (with names having + prefixes 'check_' or 'bench_'); each of these methods are called when + running unit tests. + """ + if level is None: # Do nothing. + return + + if isinstance(self.package, str): + exec 'import %s as this_package' % (self.package) + else: + this_package = self.package + + if all: + all_tests = self._test_suite_from_all_tests(this_package, + level, verbosity) + else: + all_tests = self._test_suite_from_modules(this_package, + level, verbosity) + + if level < 0: return all_tests runner = unittest.TextTestRunner(verbosity=verbosity) @@ -598,6 +570,27 @@ sys.displayhook = old_displayhook return runner + def testall(self, level=1,verbosity=1): + """ Run Numpy module test suite with level and verbosity. + + level: + None --- do nothing, return None + < 0 --- scan for tests of level=abs(level), + don't run them, return TestSuite-list + > 0 --- scan for tests of level, run them, + return TestRunner + + verbosity: + >= 0 --- show information messages + > 1 --- show warnings on missing tests + + Different from .test(..) method, this method looks for + TestCase classes from all files in /tests/ + directory and no assumptions are made for naming the + TestCase classes or their methods. + """ + return self.test(level=level, verbosity=verbosity, all=True) + def run(self): """ Run Numpy module test suite with level and verbosity taken from sys.argv. Requires optparse module. 
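A sketch of how the refactored entry point from r3725 is meant to be called, based on the docstring above (the package name 'numpy' is just an example):

    from numpy.testing import NumpyTest

    # default: run the tests associated with imported package modules
    NumpyTest('numpy').test(level=1, verbosity=1)

    # equivalent of the old testall(): pick up every file under tests/
    NumpyTest('numpy').test(level=1, verbosity=1, all=True)

    # a negative level only collects; the TestSuite is returned unrun
    suite = NumpyTest('numpy').test(level=-1)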
From numpy-svn at scipy.org Fri May 4 00:08:22 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 3 May 2007 23:08:22 -0500 (CDT) Subject: [Numpy-svn] r3735 - trunk/numpy/oldnumeric Message-ID: <20070504040822.A268839C191@new.scipy.org> Author: oliphant Date: 2007-05-03 23:08:15 -0500 (Thu, 03 May 2007) New Revision: 3735 Modified: trunk/numpy/oldnumeric/mlab.py Log: Fix compatibility layer definition of std Modified: trunk/numpy/oldnumeric/mlab.py =================================================================== --- trunk/numpy/oldnumeric/mlab.py 2007-05-01 11:43:40 UTC (rev 3734) +++ trunk/numpy/oldnumeric/mlab.py 2007-05-04 04:08:15 UTC (rev 3735) @@ -55,7 +55,8 @@ return _Nprod(x, axis) def std(x, axis=0): - return _Nstd(x, axis) + N = asarray(x).shape[axis] + return _Nstd(x, axis)*sqrt(N/(N-1.)) def mean(x, axis=0): return _Nmean(x, axis) From numpy-svn at scipy.org Fri May 4 17:17:26 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 4 May 2007 16:17:26 -0500 (CDT) Subject: [Numpy-svn] r3736 - trunk/numpy/testing Message-ID: <20070504211726.BCD1C39C00F@new.scipy.org> Author: rkern Date: 2007-05-04 16:17:20 -0500 (Fri, 04 May 2007) New Revision: 3736 Modified: trunk/numpy/testing/utils.py Log: assert_approx_equal used significant digit more than requested. Modified: trunk/numpy/testing/utils.py =================================================================== --- trunk/numpy/testing/utils.py 2007-05-04 04:08:15 UTC (rev 3735) +++ trunk/numpy/testing/utils.py 2007-05-04 21:17:20 UTC (rev 3736) @@ -180,7 +180,7 @@ header='Items are not equal to %d significant digits:' % significant, verbose=verbose) - assert math.fabs(sc_desired - sc_actual) < pow(10.,-1*significant), msg + assert math.fabs(sc_desired - sc_actual) < pow(10.,-(significant-1)), msg def assert_array_compare(comparison, x, y, err_msg='', verbose=True, header=''): From numpy-svn at scipy.org Wed May 9 12:27:09 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 9 May 2007 11:27:09 -0500 (CDT) Subject: [Numpy-svn] r3737 - trunk/numpy/core Message-ID: <20070509162709.9937B39C012@new.scipy.org> Author: oliphant Date: 2007-05-09 11:27:06 -0500 (Wed, 09 May 2007) New Revision: 3737 Modified: trunk/numpy/core/records.py Log: Change recarray attribute getting to return a view using the class instead of pure recarray when fields are present. 
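A rough illustration of what the one-line change below buys (the subclass and dtype here are invented for the example and are not part of the patch):

    import numpy as np

    class MyRec(np.recarray):
        """Toy recarray subclass, purely illustrative."""

    dt = np.dtype([('pos', [('x', float), ('y', float)]), ('id', int)])
    r = np.zeros(2, dtype=dt).view(MyRec)

    # 'pos' is itself a structured field, so attribute access goes through
    # the branch being patched; with obj.view(obj.__class__) the result
    # keeps the MyRec type instead of degrading to a plain recarray.
    print(type(r.pos))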
Modified: trunk/numpy/core/records.py =================================================================== --- trunk/numpy/core/records.py 2007-05-04 21:17:20 UTC (rev 3736) +++ trunk/numpy/core/records.py 2007-05-09 16:27:06 UTC (rev 3737) @@ -136,7 +136,7 @@ # if it's a string return 'SU' return a chararray # otherwise return a normal array if obj.dtype.fields: - return obj.view(recarray) + return obj.view(obj.__class__) if obj.dtype.char in 'SU': return obj.view(chararray) return obj From numpy-svn at scipy.org Thu May 10 13:23:58 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 10 May 2007 12:23:58 -0500 (CDT) Subject: [Numpy-svn] r3738 - trunk/numpy/core/code_generators Message-ID: <20070510172358.B2E6839C0B0@new.scipy.org> Author: cookedm Date: 2007-05-10 12:23:52 -0500 (Thu, 10 May 2007) New Revision: 3738 Modified: trunk/numpy/core/code_generators/genapi.py Log: Add docstrings to numpy/core/code_generators/genapi.py Modified: trunk/numpy/core/code_generators/genapi.py =================================================================== --- trunk/numpy/core/code_generators/genapi.py 2007-05-09 16:27:06 UTC (rev 3737) +++ trunk/numpy/core/code_generators/genapi.py 2007-05-10 17:23:52 UTC (rev 3738) @@ -1,3 +1,10 @@ +""" +Get API information encoded in C files. + +See ``find_function`` for how functions should be formatted, and +``read_order`` for how the order of the functions should be +specified. +""" import sys, os, re import md5 import textwrap @@ -2,2 +9,5 @@ +__docformat__ = 'restructuredtext' + +# The files under src/ that are scanned for API functions API_FILES = ['arraymethods.c', @@ -128,6 +138,27 @@ def find_functions(filename, tag='API'): + """ + Scan the file, looking for tagged functions. + + Assuming ``tag=='API'``, a tagged function looks like:: + + /*API*/ + static returntype* + function_name(argtype1 arg1, argtype2 arg2) + { + } + + where the return type must be on a separate line, the function + name must start the line, and the opening ``{`` must start the line. + + An optional documentation comment in ReST format may follow the tag, + as in:: + + /*API + This function does foo... + */ + """ fo = open(filename, 'r') functions = [] return_type = None @@ -191,6 +222,11 @@ return functions def read_order(order_file): + """ + Read the order of the API functions from a file. 
+ + Comments can be put on lines starting with # + """ fo = open(order_file, 'r') order = {} i = 0 From numpy-svn at scipy.org Thu May 10 13:24:46 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 10 May 2007 12:24:46 -0500 (CDT) Subject: [Numpy-svn] r3739 - trunk/numpy/testing Message-ID: <20070510172446.CD63939C0B0@new.scipy.org> Author: cookedm Date: 2007-05-10 12:24:44 -0500 (Thu, 10 May 2007) New Revision: 3739 Modified: trunk/numpy/testing/numpytest.py Log: Better warning when using ScipyTest Modified: trunk/numpy/testing/numpytest.py =================================================================== --- trunk/numpy/testing/numpytest.py 2007-05-10 17:23:52 UTC (rev 3738) +++ trunk/numpy/testing/numpytest.py 2007-05-10 17:24:44 UTC (rev 3739) @@ -627,7 +627,7 @@ class ScipyTest(NumpyTest): def __init__(self, package=None): warnings.warn("ScipyTest is now called NumpyTest; please update your code", - DeprecationWarning) + DeprecationWarning, stacklevel=2) NumpyTest.__init__(self, package) From numpy-svn at scipy.org Thu May 10 13:26:21 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 10 May 2007 12:26:21 -0500 (CDT) Subject: [Numpy-svn] r3740 - trunk/numpy/distutils Message-ID: <20070510172621.00B0F39C0B0@new.scipy.org> Author: cookedm Date: 2007-05-10 12:26:20 -0500 (Thu, 10 May 2007) New Revision: 3740 Modified: trunk/numpy/distutils/core.py Log: Use a try/finally instead of try/except Exception for cleanup in numpy/distutils/core.py Modified: trunk/numpy/distutils/core.py =================================================================== --- trunk/numpy/distutils/core.py 2007-05-10 17:24:44 UTC (rev 3739) +++ trunk/numpy/distutils/core.py 2007-05-10 17:26:20 UTC (rev 3740) @@ -130,19 +130,17 @@ distutils.core._setup_stop_after = "commandline" try: dist = setup(**new_attr) + finally: distutils.core._setup_distribution = old_dist distutils.core._setup_stop_after = old_stop - except Exception,msg: - distutils.core._setup_distribution = old_dist - distutils.core._setup_stop_after = old_stop - raise msg if dist.help or not _command_line_ok(): # probably displayed help, skip running any commands return dist # create setup dictionary and append to new_attr config = configuration() - if hasattr(config,'todict'): config = config.todict() + if hasattr(config,'todict'): + config = config.todict() _dict_append(new_attr, **config) # Move extension source libraries to libraries From numpy-svn at scipy.org Thu May 10 14:14:38 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 10 May 2007 13:14:38 -0500 (CDT) Subject: [Numpy-svn] r3741 - in trunk/numpy/core: include/numpy src tests Message-ID: <20070510181438.31C9239C0BE@new.scipy.org> Author: cookedm Date: 2007-05-10 13:14:29 -0500 (Thu, 10 May 2007) New Revision: 3741 Modified: trunk/numpy/core/include/numpy/ndarrayobject.h trunk/numpy/core/src/arraytypes.inc.src trunk/numpy/core/src/multiarraymodule.c trunk/numpy/core/tests/test_multiarray.py Log: Improvement of separator handling for fromstring and fromfile. * fromstring and fromfile should behave identically on text. * added more test cases for fromstring * the dtype gets passed to the C code doing the type-specific string conversions. We don't use it, but someone making their own dtype could. * separator handling for fromfile is moved out of the type-specific conversion. I've left the argument in for backwards compatibility; when the API version is next bumped up, it can be removed. 
* separator handling in fromfile is now safe (no fscanf(fp, sep) anymore) Modified: trunk/numpy/core/include/numpy/ndarrayobject.h =================================================================== --- trunk/numpy/core/include/numpy/ndarrayobject.h 2007-05-10 17:26:20 UTC (rev 3740) +++ trunk/numpy/core/include/numpy/ndarrayobject.h 2007-05-10 18:14:29 UTC (rev 3741) @@ -9,17 +9,18 @@ extern "C" CONFUSE_EMACS #undef CONFUSE_EMACS #undef CONFUSE_EMACS2 -/* ... otherwise a semi-smart idententer (like emacs) tries to indent +/* ... otherwise a semi-smart identer (like emacs) tries to indent everything when you're typing */ #endif /* This is auto-generated by the installer */ #include "config.h" -/* There are several places in the code where an array of dimensions is */ -/* allocated statically. This is the size of that static allocation. */ -/* The array creation itself could have arbitrary dimensions but - * all the places where static allocation is used would need to - * be changed to dynamic (including inside of several structures) +/* There are several places in the code where an array of dimensions is + * allocated statically. This is the size of that static allocation. + * + * The array creation itself could have arbitrary dimensions but + * all the places where static allocation is used would need to + * be changed to dynamic (including inside of several structures) */ #define NPY_MAXDIMS 32 @@ -1004,6 +1005,8 @@ #define PyDimMem_RENEW(ptr,size) \ ((npy_intp *)PyArray_realloc(ptr,size*sizeof(npy_intp))) +/* forward declaration */ +struct _PyArray_Descr; /* These must deal with unaligned and swapped data if necessary */ typedef PyObject * (PyArray_GetItemFunc) (void *, void *); @@ -1028,8 +1031,12 @@ typedef void (PyArray_VectorUnaryFunc)(void *, void *, npy_intp, void *, void *); -typedef int (PyArray_ScanFunc)(FILE *, void *, void *, void *); -typedef int (PyArray_FromStrFunc)(char *, void *, char **, void *); +/* XXX the ignore argument should be removed next time the API version + is bumped. It used to be the separator. */ +typedef int (PyArray_ScanFunc)(FILE *fp, void *dptr, + char *ignore, struct _PyArray_Descr *); +typedef int (PyArray_FromStrFunc)(char *s, void *dptr, char **endptr, + struct _PyArray_Descr *); typedef int (PyArray_FillFunc)(void *, npy_intp, void *); @@ -1157,7 +1164,7 @@ PyDataType_FLAGCHK(dtype, NPY_ITEM_REFCOUNT) /* Change dtype hasobject to 32-bit in 1.1 and change its name */ -typedef struct { +typedef struct _PyArray_Descr { PyObject_HEAD PyTypeObject *typeobj; /* the type object representing an instance of this type -- should not Modified: trunk/numpy/core/src/arraytypes.inc.src =================================================================== --- trunk/numpy/core/src/arraytypes.inc.src 2007-05-10 17:26:20 UTC (rev 3740) +++ trunk/numpy/core/src/arraytypes.inc.src 2007-05-10 18:14:29 UTC (rev 3741) @@ -11,7 +11,7 @@ if (mylong == NULL) return (longlong) -1; vv = mylong; } - else Py_INCREF(vv); + else Py_INCREF(vv); ret = PyLong_AsLongLong(vv); Py_DECREF(vv); @@ -867,18 +867,9 @@ /****************** scan *************************************/ -#define _ENDSCAN \ - if (num != 1) { \ - if (num == 0) return -3; \ - if (num == EOF) return -4; \ - return -5; \ - } \ - if (sep != NULL) { \ - num = fscanf(fp, sep); \ - if (num == 0) return 0; \ - if (num == EOF) return -1; \ - } \ - return 0 +/* The first ignore argument is for backwards compatibility. + Should be removed when the API version is bumped up. 
+ */ /**begin repeat @@ -887,11 +878,9 @@ #format="hd","hu","d","u","ld","lu",LONGLONG_FMT,ULONGLONG_FMT,"f","lf","Lf"# */ static int - at fname@_scan (FILE *fp, @type@ *ip, char *sep, void *ignore) + at fname@_scan (FILE *fp, @type@ *ip, void *ignore, PyArray_Descr *ignore2) { - int num; - num = fscanf(fp, "%"@format@, ip); - _ENDSCAN; + return fscanf(fp, "%"@format@, ip); } /**end repeat**/ @@ -903,24 +892,24 @@ #format="d","u"# */ static int - at fname@_scan (FILE *fp, @type@ *ip, char *sep, void *ignore) + at fname@_scan (FILE *fp, @type@ *ip, void *ignore, PyArray_Descr *ignore2) { @btype@ temp; int num; num = fscanf(fp, "%"@format@, &temp); *ip = (@type@) temp; - _ENDSCAN; + return num; } /**end repeat**/ static int -BOOL_scan (FILE *fp, Bool *ip, char *sep, void *ignore) +BOOL_scan (FILE *fp, Bool *ip, void *ignore, PyArray_Descr *ignore2) { int temp; int num; num = fscanf(fp, "%d", &temp); *ip = (Bool) (temp != 0); - _ENDSCAN; + return num; } /**begin repeat @@ -929,8 +918,6 @@ #define @fname at _scan NULL /**end repeat**/ -#undef _ENDSCAN - /****************** fromstr *************************************/ /**begin repeat @@ -940,7 +927,7 @@ #btype=(long,ulong)*5# */ static int - at fname@_fromstr(char *str, @type@ *ip, char **endptr, void *ignore) + at fname@_fromstr(char *str, @type@ *ip, char **endptr, PyArray_Descr *ignore) { @btype@ result; @@ -956,7 +943,7 @@ */ #if (PY_VERSION_HEX >= 0x02040000) || defined(PyOS_ascii_strtod) static int - at fname@_fromstr(char *str, @type@ *ip, char **endptr, void *ignore) + at fname@_fromstr(char *str, @type@ *ip, char **endptr, PyArray_Descr *ignore) { double result; Modified: trunk/numpy/core/src/multiarraymodule.c =================================================================== --- trunk/numpy/core/src/multiarraymodule.c 2007-05-10 17:26:20 UTC (rev 3740) +++ trunk/numpy/core/src/multiarraymodule.c 2007-05-10 18:14:29 UTC (rev 3741) @@ -5828,34 +5828,237 @@ return Py_None; } + +/* Reading from a file or a string. + + As much as possible, we try to use the same code for both files and strings, + so the semantics for fromstring and fromfile are the same, especially with + regards to the handling of text representations. + */ + + +typedef int (*next_element)(void **, void *, PyArray_Descr *, void *); +typedef int (*skip_separator)(void **, const char *, void *); + static int -_skip_sep(char **ptr, char *sep) +fromstr_next_element(char **s, void *dptr, PyArray_Descr *dtype, + const char *end) { - char *a; - int n; - n = strlen(sep); - a = *ptr; - while(*a != '\0' && (strncmp(a, sep, n) != 0)) - a++; - if (*a == '\0') return -1; - *ptr = a+strlen(sep); - return 0; + int r = dtype->f->fromstr(*s, dptr, s, dtype); + if (end != NULL && *s > end) { + return -1; + } + return r; } -/* steals a reference to dtype -- accepts NULL */ -/*OBJECT_API*/ +static int +fromfile_next_element(FILE **fp, void *dptr, PyArray_Descr *dtype, + void *stream_data) +{ + /* the NULL argument is for backwards-compatibility */ + return dtype->f->scanfunc(*fp, dptr, NULL, dtype); +} + +/* Remove multiple whitespace from the separator, and add a space to the + beginning and end. This simplifies the separator-skipping code below. 
+*/ +static char * +swab_separator(char *sep) +{ + int skip_space = 0; + char *s, *start; + s = start = malloc(strlen(sep)+3); + /* add space to front if there isn't one */ + if (*sep != '\0' && !isspace(*sep)) { + *s = ' '; s++; + } + while (*sep != '\0') { + if (isspace(*sep)) { + if (skip_space) { + sep++; + } else { + *s = ' '; + s++; sep++; + skip_space = 1; + } + } else { + *s = *sep; + s++; sep++; + skip_space = 0; + } + } + /* add space to end if there isn't one */ + if (s != start && s[-1] == ' ') { + *s = ' '; + s++; + } + *s = '\0'; + return start; +} + +/* Assuming that the separator is the next bit in the string (file), skip it. + + Single spaces in the separator are matched to arbitrary-long sequences + of whitespace in the input. + + If we can't match the separator, return -2. + If we hit the end of the string (file), return -1. + Otherwise, return 0. + */ + +static int +fromstr_skip_separator(char **s, const char *sep, const char *end) +{ + char *string = *s; + int result = 0; + while (1) { + char c = *string; + if (c == '\0' || (end != NULL && string >= end)) { + result = -1; + break; + } else if (*sep == '\0') { + /* matched separator */ + result = 0; + break; + } else if (*sep == ' ') { + if (!isspace(c)) { + sep++; + continue; + } + } else if (*sep != c) { + result = -2; + break; + } else { + sep++; + } + string++; + } + *s = string; + return result; +} + +static int +fromfile_skip_separator(FILE **fp, const char *sep, void *stream_data) +{ + int result = 0; + while (1) { + int c = fgetc(*fp); + if (c == EOF) { + result = -1; + break; + } else if (*sep == '\0') { + /* matched separator */ + ungetc(c, *fp); + result = 0; + break; + } else if (*sep == ' ') { + if (!isspace(c)) { + sep++; + ungetc(c, *fp); + } + } else if (*sep != c) { + ungetc(c, *fp); + result = -2; + break; + } else { + sep++; + } + } + return result; +} + +/* Create an array by reading from the given stream, using the passed + next_element and skip_separator functions. + */ + +#define FROM_BUFFER_SIZE 4096 +static PyArrayObject * +array_from_text(PyArray_Descr *dtype, intp num, char *sep, size_t *nread, + void *stream, next_element next, skip_separator skip_sep, + void *stream_data) +{ + PyArrayObject *r; + intp i; + char *dptr, *clean_sep; + + intp thisbuf = 0; + intp size; + intp bytes, totalbytes; + + size = (num >= 0) ? num : FROM_BUFFER_SIZE; + + r = (PyArrayObject *) + PyArray_NewFromDescr(&PyArray_Type, + dtype, + 1, &size, + NULL, NULL, + 0, NULL); + if (r == NULL) return NULL; + clean_sep = swab_separator(sep); + NPY_BEGIN_ALLOW_THREADS; + totalbytes = bytes = size * dtype->elsize; + dptr = r->data; + for (i=0; num < 0 || i < num; i++) { + if (next(&stream, dptr, dtype, stream_data) < 0) + break; + *nread += 1; + thisbuf += 1; + dptr += dtype->elsize; + if (num < 0 && thisbuf == size) { + totalbytes += bytes; + r->data = PyDataMem_RENEW(r->data, totalbytes); + dptr = r->data + (totalbytes - bytes); + thisbuf = 0; + } + if (skip_sep(&stream, clean_sep, stream_data) < 0) + break; + } + if (num < 0) { + r->data = PyDataMem_RENEW(r->data, (*nread)*dtype->elsize); + PyArray_DIM(r,0) = *nread; + } + NPY_END_ALLOW_THREADS; + free(clean_sep); + if (PyErr_Occurred()) { + Py_DECREF(r); + return NULL; + } + return r; +} +#undef FROM_BUFFER_SIZE + +/*OBJECT_API + + Given a pointer to a string ``data``, a string length ``slen``, and + a ``PyArray_Descr``, return an array corresponding to the data + encoded in that string. + + If the dtype is NULL, the default array type is used (double). 
+ If non-null, the reference is stolen. + + If ``slen`` is < 0, then the end of string is used for text data. + It is an error for ``slen`` to be < 0 for binary data (since embedded NULLs + would be the norm). + + The number of elements to read is given as ``num``; if it is < 0, then + then as many as possible are read. + + If ``sep`` is NULL or empty, then binary data is assumed, else + text data, with ``sep`` as the separator between elements. Whitespace in + the separator matches any length of whitespace in the text, and a match + for whitespace around the separator is added. + */ static PyObject * PyArray_FromString(char *data, intp slen, PyArray_Descr *dtype, - intp n, char *sep) + intp num, char *sep) { int itemsize; PyArrayObject *ret; Bool binary; - if (dtype == NULL) dtype=PyArray_DescrFromType(PyArray_DEFAULT); - + if (PyDataType_FLAGCHK(dtype, NPY_ITEM_IS_POINTER)) { PyErr_SetString(PyExc_ValueError, "Cannot create an object array from" \ @@ -5874,7 +6077,7 @@ binary = ((sep == NULL) || (strlen(sep) == 0)); if (binary) { - if (n < 0 ) { + if (num < 0 ) { if (slen % itemsize != 0) { PyErr_SetString(PyExc_ValueError, "string size must be a "\ @@ -5882,9 +6085,9 @@ Py_DECREF(dtype); return NULL; } - n = slen/itemsize; + num = slen/itemsize; } else { - if (slen < n*itemsize) { + if (slen < num*itemsize) { PyErr_SetString(PyExc_ValueError, "string is smaller than " \ "requested size"); @@ -5893,111 +6096,40 @@ } } - if ((ret = (PyArrayObject *)\ - PyArray_NewFromDescr(&PyArray_Type, dtype, - 1, &n, NULL, NULL, - 0, NULL)) == NULL) - return NULL; - memcpy(ret->data, data, n*dtype->elsize); - return (PyObject *)ret; - } - else { /* read from character-based string */ - char *ptr; - PyArray_FromStrFunc *fromstr; - char *dptr; - intp nread=0; - intp index; - - fromstr = dtype->f->fromstr; - if (fromstr == NULL) { + ret = (PyArrayObject *) + PyArray_NewFromDescr(&PyArray_Type, dtype, + 1, &num, NULL, NULL, + 0, NULL); + if (ret == NULL) return NULL; + memcpy(ret->data, data, num*dtype->elsize); + } else { + /* read from character-based string */ + size_t nread = 0; + char *end; + if (dtype->f->scanfunc == NULL) { PyErr_SetString(PyExc_ValueError, "don't know how to read " \ - "character strings for given " \ + "character strings with that " \ "array type"); Py_DECREF(dtype); return NULL; } - - if (n!=-1) { - ret = (PyArrayObject *) \ - PyArray_NewFromDescr(&PyArray_Type, - dtype, 1, &n, NULL, - NULL, 0, NULL); - if (ret == NULL) return NULL; - NPY_BEGIN_ALLOW_THREADS - ptr = data; - dptr = ret->data; - for (index=0; index < n; index++) { - if (fromstr(ptr, dptr, &ptr, ret) < 0) - break; - nread += 1; - dptr += dtype->elsize; - if (_skip_sep(&ptr, sep) < 0) - break; - } - if (nread < n) { - fprintf(stderr, "%ld items requested but "\ - "only %ld read\n", - (long) n, (long) nread); - ret->data = \ - PyDataMem_RENEW(ret->data, - nread * \ - ret->descr->elsize); - PyArray_DIM(ret,0) = nread; - - } - NPY_END_ALLOW_THREADS + if (slen < 0) { + end = NULL; + } else { + end = data + slen; } - else { -#define _FILEBUFNUM 4096 - intp thisbuf=0; - intp size = _FILEBUFNUM; - intp bytes; - intp totalbytes; - char *end; - int val; - - ret = (PyArrayObject *)\ - PyArray_NewFromDescr(&PyArray_Type, - dtype, - 1, &size, - NULL, NULL, - 0, NULL); - if (ret==NULL) return NULL; - NPY_BEGIN_ALLOW_THREADS - totalbytes = bytes = size * dtype->elsize; - dptr = ret->data; - ptr = data; - end = data+slen; - while (ptr < end) { - val = fromstr(ptr, dptr, &ptr, ret); - if (val < 0) break; - nread += 1; - val = 
_skip_sep(&ptr, sep); - if (val < 0) break; - thisbuf += 1; - dptr += dtype->elsize; - if (thisbuf == size) { - totalbytes += bytes; - ret->data = PyDataMem_RENEW(ret->data, - totalbytes); - dptr = ret->data + \ - (totalbytes - bytes); - thisbuf = 0; - } - } - ret->data = PyDataMem_RENEW(ret->data, - nread*ret->descr->elsize); - PyArray_DIM(ret,0) = nread; -#undef _FILEBUFNUM - NPY_END_ALLOW_THREADS - } + ret = array_from_text(dtype, num, sep, &nread, + data, + (next_element) fromstr_next_element, + (skip_separator) fromstr_skip_separator, + end); } return (PyObject *)ret; } static PyObject * -array_fromString(PyObject *ignored, PyObject *args, PyObject *keywds) +array_fromstring(PyObject *ignored, PyObject *args, PyObject *keywds) { char *data; Py_ssize_t nin=-1; @@ -6018,6 +6150,148 @@ } + +static PyArrayObject * +array_fromfile_binary(FILE *fp, PyArray_Descr *dtype, intp num, size_t *nread) +{ + PyArrayObject *r; + intp start, numbytes; + + if (num < 0) { + int fail=0; + start = (intp )ftell(fp); + if (start < 0) fail=1; + if (fseek(fp, 0, SEEK_END) < 0) fail=1; + numbytes = (intp) ftell(fp); + if (numbytes < 0) fail=1; + numbytes -= start; + if (fseek(fp, start, SEEK_SET) < 0) fail=1; + if (fail) { + PyErr_SetString(PyExc_IOError, + "could not seek in file"); + Py_DECREF(dtype); + return NULL; + } + num = numbytes / dtype->elsize; + } + r = (PyArrayObject *)PyArray_NewFromDescr(&PyArray_Type, + dtype, + 1, &num, + NULL, NULL, + 0, NULL); + if (r==NULL) return NULL; + NPY_BEGIN_ALLOW_THREADS; + *nread = fread(r->data, dtype->elsize, num, fp); + NPY_END_ALLOW_THREADS; + return r; +} + +/*OBJECT_API + + Given a ``FILE *`` pointer ``fp``, and a ``PyArray_Descr``, return an + array corresponding to the data encoded in that file. + + If the dtype is NULL, the default array type is used (double). + If non-null, the reference is stolen. + + The number of elements to read is given as ``num``; if it is < 0, then + then as many as possible are read. + + If ``sep`` is NULL or empty, then binary data is assumed, else + text data, with ``sep`` as the separator between elements. Whitespace in + the separator matches any length of whitespace in the text, and a match + for whitespace around the separator is added. + + For memory-mapped files, use the buffer interface. No more data than + necessary is read by this routine. 
+*/ +static PyObject * +PyArray_FromFile(FILE *fp, PyArray_Descr *dtype, intp num, char *sep) +{ + PyArrayObject *ret; + size_t nread = 0; + + if (PyDataType_REFCHK(dtype)) { + PyErr_SetString(PyExc_ValueError, + "cannot read into object array"); + Py_DECREF(dtype); + return NULL; + } + if (dtype->elsize == 0) { + PyErr_SetString(PyExc_ValueError, "0-sized elements."); + Py_DECREF(dtype); + return NULL; + } + + if ((sep == NULL) || (strlen(sep) == 0)) { + ret = array_fromfile_binary(fp, dtype, num, &nread); + } else { + if (dtype->f->scanfunc == NULL) { + PyErr_SetString(PyExc_ValueError, + "don't know how to read " \ + "character files with that " \ + "array type"); + Py_DECREF(dtype); + return NULL; + } + ret = array_from_text(dtype, num, sep, &nread, + fp, + (next_element) fromfile_next_element, + (skip_separator) fromfile_skip_separator, + NULL); + } + if (((intp) nread) < num) { + fprintf(stderr, "%ld items requested but only %ld read\n", + (long) num, (long) nread); + ret->data = PyDataMem_RENEW(ret->data, + nread * ret->descr->elsize); + PyArray_DIM(ret,0) = nread; + } + return (PyObject *)ret; +} + +static PyObject * +array_fromfile(PyObject *ignored, PyObject *args, PyObject *keywds) +{ + PyObject *file=NULL, *ret; + FILE *fp; + char *sep=""; + Py_ssize_t nin=-1; + static char *kwlist[] = {"file", "dtype", "count", "sep", NULL}; + PyArray_Descr *type=NULL; + + if (!PyArg_ParseTupleAndKeywords(args, keywds, + "O|O&" NPY_SSIZE_T_PYFMT "s", + kwlist, + &file, + PyArray_DescrConverter, &type, + &nin, &sep)) { + return NULL; + } + + if (type == NULL) type = PyArray_DescrFromType(PyArray_DEFAULT); + + if (PyString_Check(file) || PyUnicode_Check(file)) { + file = PyObject_CallFunction((PyObject *)&PyFile_Type, + "Os", file, "rb"); + if (file==NULL) return NULL; + } + else { + Py_INCREF(file); + } + fp = PyFile_AsFile(file); + if (fp == NULL) { + PyErr_SetString(PyExc_IOError, + "first argument must be an open file"); + Py_DECREF(file); + return NULL; + } + ret = PyArray_FromFile(fp, type, (intp) nin, sep); + Py_DECREF(file); + return ret; +} + + /* steals a reference to dtype (which cannot be NULL) */ /*OBJECT_API */ static PyObject * @@ -6108,7 +6382,7 @@ } static PyObject * -array_fromIter(PyObject *ignored, PyObject *args, PyObject *keywds) +array_fromiter(PyObject *ignored, PyObject *args, PyObject *keywds) { PyObject *iter; Py_ssize_t nin=-1; @@ -6128,210 +6402,8 @@ } - - -/* This needs an open file object and reads it in directly. - memory-mapped files handled differently through buffer interface. 
- -file pointer number in resulting 1d array -(can easily reshape later, -1 for to end of file) -type of array -sep is a separator string for character-based data (or NULL for binary) - " " means whitespace -*/ - /*OBJECT_API*/ static PyObject * -PyArray_FromFile(FILE *fp, PyArray_Descr *typecode, intp num, char *sep) -{ - PyArrayObject *r; - size_t nread = 0; - PyArray_ScanFunc *scan; - Bool binary; - - if (PyDataType_REFCHK(typecode)) { - PyErr_SetString(PyExc_ValueError, "cannot read into" - "object array"); - Py_DECREF(typecode); - return NULL; - } - if (typecode->elsize == 0) { - PyErr_SetString(PyExc_ValueError, "0-sized elements."); - Py_DECREF(typecode); - return NULL; - } - - binary = ((sep == NULL) || (strlen(sep) == 0)); - if (num == -1 && binary) { /* Get size for binary file*/ - intp start, numbytes; - int fail=0; - start = (intp )ftell(fp); - if (start < 0) fail=1; - if (fseek(fp, 0, SEEK_END) < 0) fail=1; - numbytes = (intp) ftell(fp); - if (numbytes < 0) fail=1; - numbytes -= start; - if (fseek(fp, start, SEEK_SET) < 0) fail=1; - if (fail) { - PyErr_SetString(PyExc_IOError, - "could not seek in file"); - Py_DECREF(typecode); - return NULL; - } - num = numbytes / typecode->elsize; - } - - if (binary) { /* binary data */ - r = (PyArrayObject *)PyArray_NewFromDescr(&PyArray_Type, - typecode, - 1, &num, - NULL, NULL, - 0, NULL); - if (r==NULL) return NULL; - NPY_BEGIN_ALLOW_THREADS - nread = fread(r->data, typecode->elsize, num, fp); - NPY_END_ALLOW_THREADS - } - else { /* character reading */ - intp i; - char *dptr; - int done=0; - - scan = typecode->f->scanfunc; - if (scan == NULL) { - PyErr_SetString(PyExc_ValueError, - "don't know how to read " \ - "character files with that " \ - "array type"); - Py_DECREF(typecode); - return NULL; - } - - if (num != -1) { /* number to read is known */ - r = (PyArrayObject *)\ - PyArray_NewFromDescr(&PyArray_Type, - typecode, - 1, &num, - NULL, NULL, - 0, NULL); - if (r==NULL) return NULL; - NPY_BEGIN_ALLOW_THREADS - dptr = r->data; - for (i=0; i < num; i++) { - if (done) break; - done = scan(fp, dptr, sep, NULL); - if (done < -2) break; - nread += 1; - dptr += r->descr->elsize; - } - NPY_END_ALLOW_THREADS - if (PyErr_Occurred()) { - Py_DECREF(r); - return NULL; - } - } - else { /* we have to watch for the end of the file and - reallocate at the end */ -#define _FILEBUFNUM 4096 - intp thisbuf=0; - intp size = _FILEBUFNUM; - intp bytes; - intp totalbytes; - - r = (PyArrayObject *)\ - PyArray_NewFromDescr(&PyArray_Type, - typecode, - 1, &size, - NULL, NULL, - 0, NULL); - if (r==NULL) return NULL; - NPY_BEGIN_ALLOW_THREADS - totalbytes = bytes = size * typecode->elsize; - dptr = r->data; - while (!done) { - done = scan(fp, dptr, sep, NULL); - - /* end of file reached trying to - scan value. done is 1 or 2 - if end of file reached trying to - scan separator. Still good value. 
- */ - if (done < -2) break; - thisbuf += 1; - nread += 1; - dptr += r->descr->elsize; - if (!done && thisbuf == size) { - totalbytes += bytes; - r->data = PyDataMem_RENEW(r->data, - totalbytes); - dptr = r->data + (totalbytes - bytes); - thisbuf = 0; - } - } - r->data = PyDataMem_RENEW(r->data, nread*r->descr->elsize); - PyArray_DIM(r,0) = nread; - num = nread; - NPY_END_ALLOW_THREADS -#undef _FILEBUFNUM - } - if (PyErr_Occurred()) { - Py_DECREF(r); - return NULL; - } - - } - if (((intp) nread) < num) { - fprintf(stderr, "%ld items requested but only %ld read\n", - (long) num, (long) nread); - r->data = PyDataMem_RENEW(r->data, nread * r->descr->elsize); - PyArray_DIM(r,0) = nread; - } - return (PyObject *)r; -} - -static PyObject * -array_fromfile(PyObject *ignored, PyObject *args, PyObject *keywds) -{ - PyObject *file=NULL, *ret; - FILE *fp; - char *sep=""; - Py_ssize_t nin=-1; - static char *kwlist[] = {"file", "dtype", "count", "sep", NULL}; - PyArray_Descr *type=NULL; - - if (!PyArg_ParseTupleAndKeywords(args, keywds, - "O|O&" NPY_SSIZE_T_PYFMT "s", - kwlist, - &file, - PyArray_DescrConverter, &type, - &nin, &sep)) { - return NULL; - } - - if (type == NULL) type = PyArray_DescrFromType(PyArray_DEFAULT); - - if (PyString_Check(file) || PyUnicode_Check(file)) { - file = PyObject_CallFunction((PyObject *)&PyFile_Type, - "Os", file, "rb"); - if (file==NULL) return NULL; - } - else { - Py_INCREF(file); - } - fp = PyFile_AsFile(file); - if (fp == NULL) { - PyErr_SetString(PyExc_IOError, - "first argument must be an open file"); - Py_DECREF(file); - return NULL; - } - ret = PyArray_FromFile(fp, type, (intp) nin, sep); - Py_DECREF(file); - return ret; -} - -/*OBJECT_API*/ -static PyObject * PyArray_FromBuffer(PyObject *buf, PyArray_Descr *type, intp count, intp offset) { @@ -7213,9 +7285,9 @@ METH_VARARGS | METH_KEYWORDS, NULL}, {"putmask", (PyCFunction)array_putmask, METH_VARARGS | METH_KEYWORDS, NULL}, - {"fromstring",(PyCFunction)array_fromString, + {"fromstring",(PyCFunction)array_fromstring, METH_VARARGS|METH_KEYWORDS, NULL}, - {"fromiter",(PyCFunction)array_fromIter, + {"fromiter",(PyCFunction)array_fromiter, METH_VARARGS|METH_KEYWORDS, NULL}, {"concatenate", (PyCFunction)array_concatenate, METH_VARARGS|METH_KEYWORDS, NULL}, Modified: trunk/numpy/core/tests/test_multiarray.py =================================================================== --- trunk/numpy/core/tests/test_multiarray.py 2007-05-10 17:26:20 UTC (rev 3740) +++ trunk/numpy/core/tests/test_multiarray.py 2007-05-10 18:14:29 UTC (rev 3741) @@ -116,9 +116,28 @@ a = fromstring('\x00\x00\x80?\x00\x00\x00@\x00\x00@@\x00\x00\x80@',dtype=' Author: oliphant Date: 2007-05-10 16:49:22 -0500 (Thu, 10 May 2007) New Revision: 3742 Modified: trunk/numpy/core/src/arrayobject.c trunk/numpy/core/src/multiarraymodule.c Log: Remove wasteful check. Fix problem with PyArray_Transpose for large arrays. 
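To make the r3741 separator semantics above concrete: a single space in ``sep`` now matches any run of whitespace in the input, whitespace is tolerated around an explicit separator, and fromstring and fromfile share the same array_from_text() loop, so text parses identically from either source. A minimal NumPy-level sketch of that behaviour (the literal strings and the values.txt name are purely illustrative)::

    import numpy as np

    # A space in `sep` matches any amount of whitespace ...
    print(np.fromstring("1 2   3\t4", dtype=float, sep=" "))   # -> [1. 2. 3. 4.]

    # ... and whitespace around an explicit separator such as "," is skipped.
    print(np.fromstring("1, 2,3 ,  4", dtype=int, sep=","))    # -> [1 2 3 4]

    # fromfile is driven by the same text-reading loop, so a text file parses
    # with the same semantics (hypothetical file name):
    # np.fromfile("values.txt", dtype=float, sep=",")
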
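The arrayobject.c/multiarraymodule.c diff below changes PyArray_Transpose to create the result with the parent's dimensions and only afterwards permute the dimensions and strides, instead of passing the permutation array where a shape was expected. The invariant being preserved is that a transpose is a stride-permuted view of the same buffer, not a copy; a small illustrative sketch of that view behaviour (shapes and values are arbitrary)::

    import numpy as np

    a = np.array([[0, 1, 2],
                  [3, 4, 5]])
    t = a.transpose()
    print(t.shape, t.strides)   # (3, 2), with a's strides reversed
    a[0, 2] = 99
    print(t[2, 0])              # 99 -- t shares a's buffer; no data was copied
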
Modified: trunk/numpy/core/src/arrayobject.c =================================================================== --- trunk/numpy/core/src/arrayobject.c 2007-05-10 18:14:29 UTC (rev 3741) +++ trunk/numpy/core/src/arrayobject.c 2007-05-10 21:49:22 UTC (rev 3742) @@ -5368,7 +5368,7 @@ return NULL; } size *= dims[i]; - if (size <=0 || size > largest) { + if (size > largest) { PyErr_SetString(PyExc_ValueError, "dimensions too large."); Py_DECREF(descr); Modified: trunk/numpy/core/src/multiarraymodule.c =================================================================== --- trunk/numpy/core/src/multiarraymodule.c 2007-05-10 18:14:29 UTC (rev 3741) +++ trunk/numpy/core/src/multiarraymodule.c 2007-05-10 21:49:22 UTC (rev 3742) @@ -1875,7 +1875,7 @@ ret = (PyArrayObject *)\ PyArray_NewFromDescr(ap->ob_type, ap->descr, - n, permutation, + n, ap->dimensions, NULL, ap->data, ap->flags, (PyObject *)ap); if (ret == NULL) return NULL; @@ -1884,6 +1884,7 @@ ret->base = (PyObject *)ap; Py_INCREF(ap); + /* fix the dimensions and strides of the return-array */ for(i=0; idimensions[i] = ap->dimensions[permutation[i]]; ret->strides[i] = ap->strides[permutation[i]]; From numpy-svn at scipy.org Thu May 10 18:42:52 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 10 May 2007 17:42:52 -0500 (CDT) Subject: [Numpy-svn] r3743 - trunk/numpy/core/src Message-ID: <20070510224252.3557039C15C@new.scipy.org> Author: oliphant Date: 2007-05-10 17:42:48 -0500 (Thu, 10 May 2007) New Revision: 3743 Modified: trunk/numpy/core/src/arrayobject.c trunk/numpy/core/src/multiarraymodule.c Log: Fix ticket #514 (and probably others) due to inappropriate fixing of largest string type on common type conversion. Modified: trunk/numpy/core/src/arrayobject.c =================================================================== --- trunk/numpy/core/src/arrayobject.c 2007-05-10 21:49:22 UTC (rev 3742) +++ trunk/numpy/core/src/arrayobject.c 2007-05-10 22:42:48 UTC (rev 3743) @@ -6937,15 +6937,13 @@ else { outtype = PyArray_DescrFromType(outtype_num); } - if (PyTypeNum_ISEXTENDED(outtype->type_num) && \ - (PyTypeNum_ISEXTENDED(mintype->type_num) || \ - mintype->type_num==0)) { + if (PyTypeNum_ISEXTENDED(outtype->type_num)) { int testsize = outtype->elsize; register int chksize, minsize; chksize = chktype->elsize; minsize = mintype->elsize; /* Handle string->unicode case separately - because string itemsize is twice as large */ + because string itemsize is 4* as large */ if (outtype->type_num == PyArray_UNICODE && mintype->type_num == PyArray_STRING) { testsize = MAX(chksize, 4*minsize); Modified: trunk/numpy/core/src/multiarraymodule.c =================================================================== --- trunk/numpy/core/src/multiarraymodule.c 2007-05-10 21:49:22 UTC (rev 3742) +++ trunk/numpy/core/src/multiarraymodule.c 2007-05-10 22:42:48 UTC (rev 3743) @@ -2153,7 +2153,8 @@ else if ((stype != NULL) && (intypekind != scalarkind)) { \ /* we need to upconvert to type that handles both intype and stype - and don't forcecast the scalars. + + also don't forcecast the scalars. 
*/ if (!PyArray_CanCoerceScalar(stype->type_num, @@ -2187,7 +2188,7 @@ if (mps[i] == NULL) goto fail; } Py_DECREF(intype); - Py_XDECREF(stype); + Py_XDECREF(stype); return mps; fail: From numpy-svn at scipy.org Fri May 11 04:20:50 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 11 May 2007 03:20:50 -0500 (CDT) Subject: [Numpy-svn] r3744 - trunk/numpy/core Message-ID: <20070511082050.19D3CC7C014@new.scipy.org> Author: pearu Date: 2007-05-11 03:20:41 -0500 (Fri, 11 May 2007) New Revision: 3744 Modified: trunk/numpy/core/setup.py Log: Improved error message for missing Python.h. Modified: trunk/numpy/core/setup.py =================================================================== --- trunk/numpy/core/setup.py 2007-05-10 22:42:48 UTC (rev 3743) +++ trunk/numpy/core/setup.py 2007-05-11 08:20:41 UTC (rev 3744) @@ -42,10 +42,16 @@ tc = generate_testcode(target) from distutils import sysconfig python_include = sysconfig.get_python_inc() + python_h = join(python_include, 'Python.h') + if not os.path.isfile(python_h): + raise SystemError,\ + "Non-existing %s. Perhaps you need to install"\ + " python-dev|python-devel." % (python_h) result = config_cmd.try_run(tc,include_dirs=[python_include], library_dirs = default_lib_dirs) if not result: - raise "ERROR: Failed to test configuration" + raise SystemError,"Failed to test configuration. "\ + "See previous error messages for more information." # Python 2.3 causes a segfault when # trying to re-acquire the thread-state From numpy-svn at scipy.org Fri May 11 08:52:01 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 11 May 2007 07:52:01 -0500 (CDT) Subject: [Numpy-svn] r3745 - in trunk/numpy: core distutils distutils/tests distutils/tests/f2py_ext distutils/tests/f2py_f90_ext distutils/tests/gen_ext distutils/tests/pyrex_ext distutils/tests/swig_ext f2py/tests/array_from_pyobj fft lib linalg numarray oldnumeric random testing Message-ID: <20070511125201.DEC1F39C17F@new.scipy.org> Author: pearu Date: 2007-05-11 07:50:42 -0500 (Fri, 11 May 2007) New Revision: 3745 Modified: trunk/numpy/core/setup.py trunk/numpy/distutils/misc_util.py trunk/numpy/distutils/setup.py trunk/numpy/distutils/tests/f2py_ext/setup.py trunk/numpy/distutils/tests/f2py_f90_ext/setup.py trunk/numpy/distutils/tests/gen_ext/setup.py trunk/numpy/distutils/tests/pyrex_ext/setup.py trunk/numpy/distutils/tests/setup.py trunk/numpy/distutils/tests/swig_ext/setup.py trunk/numpy/f2py/tests/array_from_pyobj/setup.py trunk/numpy/fft/setup.py trunk/numpy/lib/setup.py trunk/numpy/linalg/setup.py trunk/numpy/numarray/setup.py trunk/numpy/oldnumeric/setup.py trunk/numpy/random/setup.py trunk/numpy/testing/setup.py Log: Clean up setup() calls. 
Modified: trunk/numpy/core/setup.py =================================================================== --- trunk/numpy/core/setup.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/core/setup.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -330,4 +330,5 @@ if __name__=='__main__': from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) + Modified: trunk/numpy/distutils/misc_util.py =================================================================== --- trunk/numpy/distutils/misc_util.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/distutils/misc_util.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -530,9 +530,10 @@ caller_frame = get_frame(caller_level) caller_name = eval('__name__',caller_frame.f_globals,caller_frame.f_locals) self.local_path = get_path(caller_name, top_path) + # local_path -- directory of a file (usually setup.py) that + # defines a configuration() function. if top_path is None: top_path = self.local_path - self.local_path = '.' if package_path is None: package_path = self.local_path elif os.path.isdir(njoin(self.local_path,package_path)): Modified: trunk/numpy/distutils/setup.py =================================================================== --- trunk/numpy/distutils/setup.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/distutils/setup.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -12,4 +12,4 @@ if __name__ == '__main__': from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: trunk/numpy/distutils/tests/f2py_ext/setup.py =================================================================== --- trunk/numpy/distutils/tests/f2py_ext/setup.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/distutils/tests/f2py_ext/setup.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -8,4 +8,4 @@ if __name__ == "__main__": from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: trunk/numpy/distutils/tests/f2py_f90_ext/setup.py =================================================================== --- trunk/numpy/distutils/tests/f2py_f90_ext/setup.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/distutils/tests/f2py_f90_ext/setup.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -13,4 +13,4 @@ if __name__ == "__main__": from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: trunk/numpy/distutils/tests/gen_ext/setup.py =================================================================== --- trunk/numpy/distutils/tests/gen_ext/setup.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/distutils/tests/gen_ext/setup.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -44,4 +44,4 @@ if __name__ == "__main__": from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: trunk/numpy/distutils/tests/pyrex_ext/setup.py =================================================================== --- trunk/numpy/distutils/tests/pyrex_ext/setup.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/distutils/tests/pyrex_ext/setup.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -9,4 +9,4 @@ if __name__ == "__main__": from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: trunk/numpy/distutils/tests/setup.py =================================================================== --- 
trunk/numpy/distutils/tests/setup.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/distutils/tests/setup.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -11,4 +11,4 @@ if __name__ == "__main__": from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: trunk/numpy/distutils/tests/swig_ext/setup.py =================================================================== --- trunk/numpy/distutils/tests/swig_ext/setup.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/distutils/tests/swig_ext/setup.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -15,4 +15,4 @@ if __name__ == "__main__": from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: trunk/numpy/f2py/tests/array_from_pyobj/setup.py =================================================================== --- trunk/numpy/f2py/tests/array_from_pyobj/setup.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/f2py/tests/array_from_pyobj/setup.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -22,4 +22,4 @@ if __name__ == "__main__": from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: trunk/numpy/fft/setup.py =================================================================== --- trunk/numpy/fft/setup.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/fft/setup.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -16,4 +16,4 @@ if __name__ == '__main__': from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: trunk/numpy/lib/setup.py =================================================================== --- trunk/numpy/lib/setup.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/lib/setup.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -18,4 +18,4 @@ if __name__=='__main__': from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: trunk/numpy/linalg/setup.py =================================================================== --- trunk/numpy/linalg/setup.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/linalg/setup.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -28,4 +28,4 @@ if __name__ == '__main__': from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: trunk/numpy/numarray/setup.py =================================================================== --- trunk/numpy/numarray/setup.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/numarray/setup.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -15,4 +15,4 @@ if __name__ == '__main__': from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: trunk/numpy/oldnumeric/setup.py =================================================================== --- trunk/numpy/oldnumeric/setup.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/oldnumeric/setup.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -5,4 +5,4 @@ if __name__ == '__main__': from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: trunk/numpy/random/setup.py =================================================================== --- trunk/numpy/random/setup.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/random/setup.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -50,4 +50,4 @@ if __name__ 
== '__main__': from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: trunk/numpy/testing/setup.py =================================================================== --- trunk/numpy/testing/setup.py 2007-05-11 08:20:41 UTC (rev 3744) +++ trunk/numpy/testing/setup.py 2007-05-11 12:50:42 UTC (rev 3745) @@ -12,5 +12,5 @@ description = "NumPy test module", url = "http://www.numpy.org", license = "NumPy License (BSD Style)", - **configuration(top_path='').todict() + configuration = configuration, ) From numpy-svn at scipy.org Fri May 11 08:58:40 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 11 May 2007 07:58:40 -0500 (CDT) Subject: [Numpy-svn] r3746 - trunk/numpy/distutils Message-ID: <20070511125840.A857439C030@new.scipy.org> Author: pearu Date: 2007-05-11 07:58:31 -0500 (Fri, 11 May 2007) New Revision: 3746 Modified: trunk/numpy/distutils/system_info.py Log: Using meaningful NotFoundError exception for blas_opt and lapack_opt resources. Modified: trunk/numpy/distutils/system_info.py =================================================================== --- trunk/numpy/distutils/system_info.py 2007-05-11 12:50:42 UTC (rev 3745) +++ trunk/numpy/distutils/system_info.py 2007-05-11 12:58:31 UTC (rev 3746) @@ -1158,6 +1158,8 @@ class lapack_opt_info(system_info): + notfounderror = LapackNotFoundError + def calc_info(self): if sys.platform=='darwin' and not os.environ.get('ATLAS',None): @@ -1253,6 +1255,8 @@ class blas_opt_info(system_info): + notfounderror = BlasNotFoundError + def calc_info(self): if sys.platform=='darwin' and not os.environ.get('ATLAS',None): From numpy-svn at scipy.org Fri May 11 09:37:36 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 11 May 2007 08:37:36 -0500 (CDT) Subject: [Numpy-svn] r3747 - trunk/numpy/distutils/command Message-ID: <20070511133736.D87D339C030@new.scipy.org> Author: pearu Date: 2007-05-11 08:37:31 -0500 (Fri, 11 May 2007) New Revision: 3747 Modified: trunk/numpy/distutils/command/build_src.py Log: Raise exception when pyrex is required. Modified: trunk/numpy/distutils/command/build_src.py =================================================================== --- trunk/numpy/distutils/command/build_src.py 2007-05-11 12:58:31 UTC (rev 3746) +++ trunk/numpy/distutils/command/build_src.py 2007-05-11 13:37:31 UTC (rev 3747) @@ -363,10 +363,14 @@ if pyrex_result.num_errors != 0: raise RuntimeError("%d errors in Pyrex compile" % pyrex_result.num_errors) - else: + elif os.path.isfile(target_file): log.warn("Pyrex needed to compile %s but not available."\ " Using old target %s"\ % (source, target_file)) + else: + raise SystemError,"Non-existing target %r. "\ + "Perhaps you need to install Pyrex."\ + % (target_file) new_sources.append(target_file) else: new_sources.append(source) From numpy-svn at scipy.org Fri May 11 17:49:18 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 11 May 2007 16:49:18 -0500 (CDT) Subject: [Numpy-svn] r3748 - in branches/multicore: . 
numpy/core numpy/core/code_generators numpy/core/include/numpy numpy/core/src numpy/core/tests numpy/distutils numpy/distutils/command numpy/distutils/tests numpy/distutils/tests/f2py_ext numpy/distutils/tests/f2py_f90_ext numpy/distutils/tests/gen_ext numpy/distutils/tests/pyrex_ext numpy/distutils/tests/swig_ext numpy/f2py/tests/array_from_pyobj numpy/fft numpy/lib numpy/linalg numpy/numarray numpy/oldnumeric numpy/random numpy/testing Message-ID: <20070511214918.20E5239C1EC@new.scipy.org> Author: eric Date: 2007-05-11 16:49:03 -0500 (Fri, 11 May 2007) New Revision: 3748 Modified: branches/multicore/ branches/multicore/numpy/core/code_generators/genapi.py branches/multicore/numpy/core/include/numpy/ndarrayobject.h branches/multicore/numpy/core/records.py branches/multicore/numpy/core/setup.py branches/multicore/numpy/core/src/arrayobject.c branches/multicore/numpy/core/src/arraytypes.inc.src branches/multicore/numpy/core/src/multiarraymodule.c branches/multicore/numpy/core/tests/test_multiarray.py branches/multicore/numpy/distutils/command/build_src.py branches/multicore/numpy/distutils/core.py branches/multicore/numpy/distutils/misc_util.py branches/multicore/numpy/distutils/setup.py branches/multicore/numpy/distutils/system_info.py branches/multicore/numpy/distutils/tests/f2py_ext/setup.py branches/multicore/numpy/distutils/tests/f2py_f90_ext/setup.py branches/multicore/numpy/distutils/tests/gen_ext/setup.py branches/multicore/numpy/distutils/tests/pyrex_ext/setup.py branches/multicore/numpy/distutils/tests/setup.py branches/multicore/numpy/distutils/tests/swig_ext/setup.py branches/multicore/numpy/f2py/tests/array_from_pyobj/setup.py branches/multicore/numpy/fft/setup.py branches/multicore/numpy/lib/setup.py branches/multicore/numpy/linalg/setup.py branches/multicore/numpy/numarray/setup.py branches/multicore/numpy/oldnumeric/mlab.py branches/multicore/numpy/oldnumeric/setup.py branches/multicore/numpy/random/setup.py branches/multicore/numpy/testing/numpytest.py branches/multicore/numpy/testing/setup.py branches/multicore/numpy/testing/utils.py Log: Merged revisions 3734-3747 via svnmerge from http://svn.scipy.org/svn/numpy/trunk ........ r3735 | oliphant | 2007-05-03 23:08:15 -0500 (Thu, 03 May 2007) | 1 line Fix compatibility layer definition of std ........ r3736 | rkern | 2007-05-04 16:17:20 -0500 (Fri, 04 May 2007) | 1 line assert_approx_equal used significant digit more than requested. ........ r3737 | oliphant | 2007-05-09 11:27:06 -0500 (Wed, 09 May 2007) | 1 line Change recarray attribute getting to return a view using the class instead of pure recarray when fields are present. ........ r3738 | cookedm | 2007-05-10 12:23:52 -0500 (Thu, 10 May 2007) | 2 lines Add docstrings to numpy/core/code_generators/genapi.py ........ r3739 | cookedm | 2007-05-10 12:24:44 -0500 (Thu, 10 May 2007) | 2 lines Better warning when using ScipyTest ........ r3740 | cookedm | 2007-05-10 12:26:20 -0500 (Thu, 10 May 2007) | 2 lines Use a try/finally instead of try/except Exception for cleanup in numpy/distutils/core.py ........ r3741 | cookedm | 2007-05-10 13:14:29 -0500 (Thu, 10 May 2007) | 10 lines Improvement of separator handling for fromstring and fromfile. * fromstring and fromfile should behave identically on text. * added more test cases for fromstring * the dtype gets passed to the C code doing the type-specific string conversions. We don't use it, but someone making their own dtype could. * separator handling for fromfile is moved out of the type-specific conversion. 
I've left the argument in for backwards compatibility; when the API version is next bumped up, it can be removed. * separator handling in fromfile is now safe (no fscanf(fp, sep) anymore) ........ r3742 | oliphant | 2007-05-10 16:49:22 -0500 (Thu, 10 May 2007) | 1 line Remove wasteful check. Fix problem with PyArray_Transpose for large arrays. ........ r3743 | oliphant | 2007-05-10 17:42:48 -0500 (Thu, 10 May 2007) | 1 line Fix ticket #514 (and probably others) due to inappropriate fixing of largest string type on common type conversion. ........ r3744 | pearu | 2007-05-11 03:20:41 -0500 (Fri, 11 May 2007) | 1 line Improved error message for missing Python.h. ........ r3745 | pearu | 2007-05-11 07:50:42 -0500 (Fri, 11 May 2007) | 1 line Clean up setup() calls. ........ r3746 | pearu | 2007-05-11 07:58:31 -0500 (Fri, 11 May 2007) | 1 line Using meaningful NotFoundError exception for blas_opt and lapack_opt resources. ........ r3747 | pearu | 2007-05-11 08:37:31 -0500 (Fri, 11 May 2007) | 1 line Raise exception when pyrex is required. ........ Property changes on: branches/multicore ___________________________________________________________________ Name: svnmerge-integrated - /branches/distutils-revamp:1-2752 /trunk:1-3733 + /branches/distutils-revamp:1-2752 /trunk:1-3747 Modified: branches/multicore/numpy/core/code_generators/genapi.py =================================================================== --- branches/multicore/numpy/core/code_generators/genapi.py 2007-05-11 13:37:31 UTC (rev 3747) +++ branches/multicore/numpy/core/code_generators/genapi.py 2007-05-11 21:49:03 UTC (rev 3748) @@ -1,3 +1,10 @@ +""" +Get API information encoded in C files. + +See ``find_function`` for how functions should be formatted, and +``read_order`` for how the order of the functions should be +specified. +""" import sys, os, re import md5 import textwrap @@ -2,2 +9,5 @@ +__docformat__ = 'restructuredtext' + +# The files under src/ that are scanned for API functions API_FILES = ['arraymethods.c', @@ -128,6 +138,27 @@ def find_functions(filename, tag='API'): + """ + Scan the file, looking for tagged functions. + + Assuming ``tag=='API'``, a tagged function looks like:: + + /*API*/ + static returntype* + function_name(argtype1 arg1, argtype2 arg2) + { + } + + where the return type must be on a separate line, the function + name must start the line, and the opening ``{`` must start the line. + + An optional documentation comment in ReST format may follow the tag, + as in:: + + /*API + This function does foo... + */ + """ fo = open(filename, 'r') functions = [] return_type = None @@ -191,6 +222,11 @@ return functions def read_order(order_file): + """ + Read the order of the API functions from a file. + + Comments can be put on lines starting with # + """ fo = open(order_file, 'r') order = {} i = 0 Modified: branches/multicore/numpy/core/include/numpy/ndarrayobject.h =================================================================== --- branches/multicore/numpy/core/include/numpy/ndarrayobject.h 2007-05-11 13:37:31 UTC (rev 3747) +++ branches/multicore/numpy/core/include/numpy/ndarrayobject.h 2007-05-11 21:49:03 UTC (rev 3748) @@ -9,17 +9,18 @@ extern "C" CONFUSE_EMACS #undef CONFUSE_EMACS #undef CONFUSE_EMACS2 -/* ... otherwise a semi-smart idententer (like emacs) tries to indent +/* ... 
otherwise a semi-smart identer (like emacs) tries to indent everything when you're typing */ #endif /* This is auto-generated by the installer */ #include "config.h" -/* There are several places in the code where an array of dimensions is */ -/* allocated statically. This is the size of that static allocation. */ -/* The array creation itself could have arbitrary dimensions but - * all the places where static allocation is used would need to - * be changed to dynamic (including inside of several structures) +/* There are several places in the code where an array of dimensions is + * allocated statically. This is the size of that static allocation. + * + * The array creation itself could have arbitrary dimensions but + * all the places where static allocation is used would need to + * be changed to dynamic (including inside of several structures) */ #define NPY_MAXDIMS 32 @@ -1008,6 +1009,8 @@ #define PyDimMem_RENEW(ptr,size) \ ((npy_intp *)PyArray_realloc(ptr,size*sizeof(npy_intp))) +/* forward declaration */ +struct _PyArray_Descr; /* These must deal with unaligned and swapped data if necessary */ typedef PyObject * (PyArray_GetItemFunc) (void *, void *); @@ -1032,8 +1035,12 @@ typedef void (PyArray_VectorUnaryFunc)(void *, void *, npy_intp, void *, void *); -typedef int (PyArray_ScanFunc)(FILE *, void *, void *, void *); -typedef int (PyArray_FromStrFunc)(char *, void *, char **, void *); +/* XXX the ignore argument should be removed next time the API version + is bumped. It used to be the separator. */ +typedef int (PyArray_ScanFunc)(FILE *fp, void *dptr, + char *ignore, struct _PyArray_Descr *); +typedef int (PyArray_FromStrFunc)(char *s, void *dptr, char **endptr, + struct _PyArray_Descr *); typedef int (PyArray_FillFunc)(void *, npy_intp, void *); @@ -1161,7 +1168,7 @@ PyDataType_FLAGCHK(dtype, NPY_ITEM_REFCOUNT) /* Change dtype hasobject to 32-bit in 1.1 and change its name */ -typedef struct { +typedef struct _PyArray_Descr { PyObject_HEAD PyTypeObject *typeobj; /* the type object representing an instance of this type -- should not Modified: branches/multicore/numpy/core/records.py =================================================================== --- branches/multicore/numpy/core/records.py 2007-05-11 13:37:31 UTC (rev 3747) +++ branches/multicore/numpy/core/records.py 2007-05-11 21:49:03 UTC (rev 3748) @@ -136,7 +136,7 @@ # if it's a string return 'SU' return a chararray # otherwise return a normal array if obj.dtype.fields: - return obj.view(recarray) + return obj.view(obj.__class__) if obj.dtype.char in 'SU': return obj.view(chararray) return obj Modified: branches/multicore/numpy/core/setup.py =================================================================== --- branches/multicore/numpy/core/setup.py 2007-05-11 13:37:31 UTC (rev 3747) +++ branches/multicore/numpy/core/setup.py 2007-05-11 21:49:03 UTC (rev 3748) @@ -42,10 +42,16 @@ tc = generate_testcode(target) from distutils import sysconfig python_include = sysconfig.get_python_inc() + python_h = join(python_include, 'Python.h') + if not os.path.isfile(python_h): + raise SystemError,\ + "Non-existing %s. Perhaps you need to install"\ + " python-dev|python-devel." % (python_h) result = config_cmd.try_run(tc,include_dirs=[python_include], library_dirs = default_lib_dirs) if not result: - raise "ERROR: Failed to test configuration" + raise SystemError,"Failed to test configuration. "\ + "See previous error messages for more information." 
# Python 2.3 causes a segfault when # trying to re-acquire the thread-state @@ -351,4 +357,5 @@ if __name__=='__main__': from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) + Modified: branches/multicore/numpy/core/src/arrayobject.c =================================================================== --- branches/multicore/numpy/core/src/arrayobject.c 2007-05-11 13:37:31 UTC (rev 3747) +++ branches/multicore/numpy/core/src/arrayobject.c 2007-05-11 21:49:03 UTC (rev 3748) @@ -5368,7 +5368,7 @@ return NULL; } size *= dims[i]; - if (size <=0 || size > largest) { + if (size > largest) { PyErr_SetString(PyExc_ValueError, "dimensions too large."); Py_DECREF(descr); @@ -6937,15 +6937,13 @@ else { outtype = PyArray_DescrFromType(outtype_num); } - if (PyTypeNum_ISEXTENDED(outtype->type_num) && \ - (PyTypeNum_ISEXTENDED(mintype->type_num) || \ - mintype->type_num==0)) { + if (PyTypeNum_ISEXTENDED(outtype->type_num)) { int testsize = outtype->elsize; register int chksize, minsize; chksize = chktype->elsize; minsize = mintype->elsize; /* Handle string->unicode case separately - because string itemsize is twice as large */ + because string itemsize is 4* as large */ if (outtype->type_num == PyArray_UNICODE && mintype->type_num == PyArray_STRING) { testsize = MAX(chksize, 4*minsize); Modified: branches/multicore/numpy/core/src/arraytypes.inc.src =================================================================== --- branches/multicore/numpy/core/src/arraytypes.inc.src 2007-05-11 13:37:31 UTC (rev 3747) +++ branches/multicore/numpy/core/src/arraytypes.inc.src 2007-05-11 21:49:03 UTC (rev 3748) @@ -11,7 +11,7 @@ if (mylong == NULL) return (longlong) -1; vv = mylong; } - else Py_INCREF(vv); + else Py_INCREF(vv); ret = PyLong_AsLongLong(vv); Py_DECREF(vv); @@ -867,18 +867,9 @@ /****************** scan *************************************/ -#define _ENDSCAN \ - if (num != 1) { \ - if (num == 0) return -3; \ - if (num == EOF) return -4; \ - return -5; \ - } \ - if (sep != NULL) { \ - num = fscanf(fp, sep); \ - if (num == 0) return 0; \ - if (num == EOF) return -1; \ - } \ - return 0 +/* The first ignore argument is for backwards compatibility. + Should be removed when the API version is bumped up. 
+ */ /**begin repeat @@ -887,11 +878,9 @@ #format="hd","hu","d","u","ld","lu",LONGLONG_FMT,ULONGLONG_FMT,"f","lf","Lf"# */ static int - at fname@_scan (FILE *fp, @type@ *ip, char *sep, void *ignore) + at fname@_scan (FILE *fp, @type@ *ip, void *ignore, PyArray_Descr *ignore2) { - int num; - num = fscanf(fp, "%"@format@, ip); - _ENDSCAN; + return fscanf(fp, "%"@format@, ip); } /**end repeat**/ @@ -903,24 +892,24 @@ #format="d","u"# */ static int - at fname@_scan (FILE *fp, @type@ *ip, char *sep, void *ignore) + at fname@_scan (FILE *fp, @type@ *ip, void *ignore, PyArray_Descr *ignore2) { @btype@ temp; int num; num = fscanf(fp, "%"@format@, &temp); *ip = (@type@) temp; - _ENDSCAN; + return num; } /**end repeat**/ static int -BOOL_scan (FILE *fp, Bool *ip, char *sep, void *ignore) +BOOL_scan (FILE *fp, Bool *ip, void *ignore, PyArray_Descr *ignore2) { int temp; int num; num = fscanf(fp, "%d", &temp); *ip = (Bool) (temp != 0); - _ENDSCAN; + return num; } /**begin repeat @@ -929,8 +918,6 @@ #define @fname at _scan NULL /**end repeat**/ -#undef _ENDSCAN - /****************** fromstr *************************************/ /**begin repeat @@ -940,7 +927,7 @@ #btype=(long,ulong)*5# */ static int - at fname@_fromstr(char *str, @type@ *ip, char **endptr, void *ignore) + at fname@_fromstr(char *str, @type@ *ip, char **endptr, PyArray_Descr *ignore) { @btype@ result; @@ -956,7 +943,7 @@ */ #if (PY_VERSION_HEX >= 0x02040000) || defined(PyOS_ascii_strtod) static int - at fname@_fromstr(char *str, @type@ *ip, char **endptr, void *ignore) + at fname@_fromstr(char *str, @type@ *ip, char **endptr, PyArray_Descr *ignore) { double result; Modified: branches/multicore/numpy/core/src/multiarraymodule.c =================================================================== --- branches/multicore/numpy/core/src/multiarraymodule.c 2007-05-11 13:37:31 UTC (rev 3747) +++ branches/multicore/numpy/core/src/multiarraymodule.c 2007-05-11 21:49:03 UTC (rev 3748) @@ -1875,7 +1875,7 @@ ret = (PyArrayObject *)\ PyArray_NewFromDescr(ap->ob_type, ap->descr, - n, permutation, + n, ap->dimensions, NULL, ap->data, ap->flags, (PyObject *)ap); if (ret == NULL) return NULL; @@ -1884,6 +1884,7 @@ ret->base = (PyObject *)ap; Py_INCREF(ap); + /* fix the dimensions and strides of the return-array */ for(i=0; idimensions[i] = ap->dimensions[permutation[i]]; ret->strides[i] = ap->strides[permutation[i]]; @@ -2152,7 +2153,8 @@ else if ((stype != NULL) && (intypekind != scalarkind)) { \ /* we need to upconvert to type that handles both intype and stype - and don't forcecast the scalars. + + also don't forcecast the scalars. */ if (!PyArray_CanCoerceScalar(stype->type_num, @@ -2186,7 +2188,7 @@ if (mps[i] == NULL) goto fail; } Py_DECREF(intype); - Py_XDECREF(stype); + Py_XDECREF(stype); return mps; fail: @@ -5828,34 +5830,237 @@ return Py_None; } + +/* Reading from a file or a string. + + As much as possible, we try to use the same code for both files and strings, + so the semantics for fromstring and fromfile are the same, especially with + regards to the handling of text representations. 
+ */ + + +typedef int (*next_element)(void **, void *, PyArray_Descr *, void *); +typedef int (*skip_separator)(void **, const char *, void *); + static int -_skip_sep(char **ptr, char *sep) +fromstr_next_element(char **s, void *dptr, PyArray_Descr *dtype, + const char *end) { - char *a; - int n; - n = strlen(sep); - a = *ptr; - while(*a != '\0' && (strncmp(a, sep, n) != 0)) - a++; - if (*a == '\0') return -1; - *ptr = a+strlen(sep); - return 0; + int r = dtype->f->fromstr(*s, dptr, s, dtype); + if (end != NULL && *s > end) { + return -1; + } + return r; } -/* steals a reference to dtype -- accepts NULL */ -/*OBJECT_API*/ +static int +fromfile_next_element(FILE **fp, void *dptr, PyArray_Descr *dtype, + void *stream_data) +{ + /* the NULL argument is for backwards-compatibility */ + return dtype->f->scanfunc(*fp, dptr, NULL, dtype); +} + +/* Remove multiple whitespace from the separator, and add a space to the + beginning and end. This simplifies the separator-skipping code below. +*/ +static char * +swab_separator(char *sep) +{ + int skip_space = 0; + char *s, *start; + s = start = malloc(strlen(sep)+3); + /* add space to front if there isn't one */ + if (*sep != '\0' && !isspace(*sep)) { + *s = ' '; s++; + } + while (*sep != '\0') { + if (isspace(*sep)) { + if (skip_space) { + sep++; + } else { + *s = ' '; + s++; sep++; + skip_space = 1; + } + } else { + *s = *sep; + s++; sep++; + skip_space = 0; + } + } + /* add space to end if there isn't one */ + if (s != start && s[-1] == ' ') { + *s = ' '; + s++; + } + *s = '\0'; + return start; +} + +/* Assuming that the separator is the next bit in the string (file), skip it. + + Single spaces in the separator are matched to arbitrary-long sequences + of whitespace in the input. + + If we can't match the separator, return -2. + If we hit the end of the string (file), return -1. + Otherwise, return 0. + */ + +static int +fromstr_skip_separator(char **s, const char *sep, const char *end) +{ + char *string = *s; + int result = 0; + while (1) { + char c = *string; + if (c == '\0' || (end != NULL && string >= end)) { + result = -1; + break; + } else if (*sep == '\0') { + /* matched separator */ + result = 0; + break; + } else if (*sep == ' ') { + if (!isspace(c)) { + sep++; + continue; + } + } else if (*sep != c) { + result = -2; + break; + } else { + sep++; + } + string++; + } + *s = string; + return result; +} + +static int +fromfile_skip_separator(FILE **fp, const char *sep, void *stream_data) +{ + int result = 0; + while (1) { + int c = fgetc(*fp); + if (c == EOF) { + result = -1; + break; + } else if (*sep == '\0') { + /* matched separator */ + ungetc(c, *fp); + result = 0; + break; + } else if (*sep == ' ') { + if (!isspace(c)) { + sep++; + ungetc(c, *fp); + } + } else if (*sep != c) { + ungetc(c, *fp); + result = -2; + break; + } else { + sep++; + } + } + return result; +} + +/* Create an array by reading from the given stream, using the passed + next_element and skip_separator functions. + */ + +#define FROM_BUFFER_SIZE 4096 +static PyArrayObject * +array_from_text(PyArray_Descr *dtype, intp num, char *sep, size_t *nread, + void *stream, next_element next, skip_separator skip_sep, + void *stream_data) +{ + PyArrayObject *r; + intp i; + char *dptr, *clean_sep; + + intp thisbuf = 0; + intp size; + intp bytes, totalbytes; + + size = (num >= 0) ? 
num : FROM_BUFFER_SIZE; + + r = (PyArrayObject *) + PyArray_NewFromDescr(&PyArray_Type, + dtype, + 1, &size, + NULL, NULL, + 0, NULL); + if (r == NULL) return NULL; + clean_sep = swab_separator(sep); + NPY_BEGIN_ALLOW_THREADS; + totalbytes = bytes = size * dtype->elsize; + dptr = r->data; + for (i=0; num < 0 || i < num; i++) { + if (next(&stream, dptr, dtype, stream_data) < 0) + break; + *nread += 1; + thisbuf += 1; + dptr += dtype->elsize; + if (num < 0 && thisbuf == size) { + totalbytes += bytes; + r->data = PyDataMem_RENEW(r->data, totalbytes); + dptr = r->data + (totalbytes - bytes); + thisbuf = 0; + } + if (skip_sep(&stream, clean_sep, stream_data) < 0) + break; + } + if (num < 0) { + r->data = PyDataMem_RENEW(r->data, (*nread)*dtype->elsize); + PyArray_DIM(r,0) = *nread; + } + NPY_END_ALLOW_THREADS; + free(clean_sep); + if (PyErr_Occurred()) { + Py_DECREF(r); + return NULL; + } + return r; +} +#undef FROM_BUFFER_SIZE + +/*OBJECT_API + + Given a pointer to a string ``data``, a string length ``slen``, and + a ``PyArray_Descr``, return an array corresponding to the data + encoded in that string. + + If the dtype is NULL, the default array type is used (double). + If non-null, the reference is stolen. + + If ``slen`` is < 0, then the end of string is used for text data. + It is an error for ``slen`` to be < 0 for binary data (since embedded NULLs + would be the norm). + + The number of elements to read is given as ``num``; if it is < 0, then + then as many as possible are read. + + If ``sep`` is NULL or empty, then binary data is assumed, else + text data, with ``sep`` as the separator between elements. Whitespace in + the separator matches any length of whitespace in the text, and a match + for whitespace around the separator is added. + */ static PyObject * PyArray_FromString(char *data, intp slen, PyArray_Descr *dtype, - intp n, char *sep) + intp num, char *sep) { int itemsize; PyArrayObject *ret; Bool binary; - if (dtype == NULL) dtype=PyArray_DescrFromType(PyArray_DEFAULT); - + if (PyDataType_FLAGCHK(dtype, NPY_ITEM_IS_POINTER)) { PyErr_SetString(PyExc_ValueError, "Cannot create an object array from" \ @@ -5874,7 +6079,7 @@ binary = ((sep == NULL) || (strlen(sep) == 0)); if (binary) { - if (n < 0 ) { + if (num < 0 ) { if (slen % itemsize != 0) { PyErr_SetString(PyExc_ValueError, "string size must be a "\ @@ -5882,9 +6087,9 @@ Py_DECREF(dtype); return NULL; } - n = slen/itemsize; + num = slen/itemsize; } else { - if (slen < n*itemsize) { + if (slen < num*itemsize) { PyErr_SetString(PyExc_ValueError, "string is smaller than " \ "requested size"); @@ -5893,111 +6098,40 @@ } } - if ((ret = (PyArrayObject *)\ - PyArray_NewFromDescr(&PyArray_Type, dtype, - 1, &n, NULL, NULL, - 0, NULL)) == NULL) - return NULL; - memcpy(ret->data, data, n*dtype->elsize); - return (PyObject *)ret; - } - else { /* read from character-based string */ - char *ptr; - PyArray_FromStrFunc *fromstr; - char *dptr; - intp nread=0; - intp index; - - fromstr = dtype->f->fromstr; - if (fromstr == NULL) { + ret = (PyArrayObject *) + PyArray_NewFromDescr(&PyArray_Type, dtype, + 1, &num, NULL, NULL, + 0, NULL); + if (ret == NULL) return NULL; + memcpy(ret->data, data, num*dtype->elsize); + } else { + /* read from character-based string */ + size_t nread = 0; + char *end; + if (dtype->f->scanfunc == NULL) { PyErr_SetString(PyExc_ValueError, "don't know how to read " \ - "character strings for given " \ + "character strings with that " \ "array type"); Py_DECREF(dtype); return NULL; } - - if (n!=-1) { - ret = (PyArrayObject 
*) \ - PyArray_NewFromDescr(&PyArray_Type, - dtype, 1, &n, NULL, - NULL, 0, NULL); - if (ret == NULL) return NULL; - NPY_BEGIN_ALLOW_THREADS - ptr = data; - dptr = ret->data; - for (index=0; index < n; index++) { - if (fromstr(ptr, dptr, &ptr, ret) < 0) - break; - nread += 1; - dptr += dtype->elsize; - if (_skip_sep(&ptr, sep) < 0) - break; - } - if (nread < n) { - fprintf(stderr, "%ld items requested but "\ - "only %ld read\n", - (long) n, (long) nread); - ret->data = \ - PyDataMem_RENEW(ret->data, - nread * \ - ret->descr->elsize); - PyArray_DIM(ret,0) = nread; - - } - NPY_END_ALLOW_THREADS + if (slen < 0) { + end = NULL; + } else { + end = data + slen; } - else { -#define _FILEBUFNUM 4096 - intp thisbuf=0; - intp size = _FILEBUFNUM; - intp bytes; - intp totalbytes; - char *end; - int val; - - ret = (PyArrayObject *)\ - PyArray_NewFromDescr(&PyArray_Type, - dtype, - 1, &size, - NULL, NULL, - 0, NULL); - if (ret==NULL) return NULL; - NPY_BEGIN_ALLOW_THREADS - totalbytes = bytes = size * dtype->elsize; - dptr = ret->data; - ptr = data; - end = data+slen; - while (ptr < end) { - val = fromstr(ptr, dptr, &ptr, ret); - if (val < 0) break; - nread += 1; - val = _skip_sep(&ptr, sep); - if (val < 0) break; - thisbuf += 1; - dptr += dtype->elsize; - if (thisbuf == size) { - totalbytes += bytes; - ret->data = PyDataMem_RENEW(ret->data, - totalbytes); - dptr = ret->data + \ - (totalbytes - bytes); - thisbuf = 0; - } - } - ret->data = PyDataMem_RENEW(ret->data, - nread*ret->descr->elsize); - PyArray_DIM(ret,0) = nread; -#undef _FILEBUFNUM - NPY_END_ALLOW_THREADS - } + ret = array_from_text(dtype, num, sep, &nread, + data, + (next_element) fromstr_next_element, + (skip_separator) fromstr_skip_separator, + end); } return (PyObject *)ret; } static PyObject * -array_fromString(PyObject *ignored, PyObject *args, PyObject *keywds) +array_fromstring(PyObject *ignored, PyObject *args, PyObject *keywds) { char *data; Py_ssize_t nin=-1; @@ -6018,6 +6152,148 @@ } + +static PyArrayObject * +array_fromfile_binary(FILE *fp, PyArray_Descr *dtype, intp num, size_t *nread) +{ + PyArrayObject *r; + intp start, numbytes; + + if (num < 0) { + int fail=0; + start = (intp )ftell(fp); + if (start < 0) fail=1; + if (fseek(fp, 0, SEEK_END) < 0) fail=1; + numbytes = (intp) ftell(fp); + if (numbytes < 0) fail=1; + numbytes -= start; + if (fseek(fp, start, SEEK_SET) < 0) fail=1; + if (fail) { + PyErr_SetString(PyExc_IOError, + "could not seek in file"); + Py_DECREF(dtype); + return NULL; + } + num = numbytes / dtype->elsize; + } + r = (PyArrayObject *)PyArray_NewFromDescr(&PyArray_Type, + dtype, + 1, &num, + NULL, NULL, + 0, NULL); + if (r==NULL) return NULL; + NPY_BEGIN_ALLOW_THREADS; + *nread = fread(r->data, dtype->elsize, num, fp); + NPY_END_ALLOW_THREADS; + return r; +} + +/*OBJECT_API + + Given a ``FILE *`` pointer ``fp``, and a ``PyArray_Descr``, return an + array corresponding to the data encoded in that file. + + If the dtype is NULL, the default array type is used (double). + If non-null, the reference is stolen. + + The number of elements to read is given as ``num``; if it is < 0, then + then as many as possible are read. + + If ``sep`` is NULL or empty, then binary data is assumed, else + text data, with ``sep`` as the separator between elements. Whitespace in + the separator matches any length of whitespace in the text, and a match + for whitespace around the separator is added. + + For memory-mapped files, use the buffer interface. No more data than + necessary is read by this routine. 
+*/ +static PyObject * +PyArray_FromFile(FILE *fp, PyArray_Descr *dtype, intp num, char *sep) +{ + PyArrayObject *ret; + size_t nread = 0; + + if (PyDataType_REFCHK(dtype)) { + PyErr_SetString(PyExc_ValueError, + "cannot read into object array"); + Py_DECREF(dtype); + return NULL; + } + if (dtype->elsize == 0) { + PyErr_SetString(PyExc_ValueError, "0-sized elements."); + Py_DECREF(dtype); + return NULL; + } + + if ((sep == NULL) || (strlen(sep) == 0)) { + ret = array_fromfile_binary(fp, dtype, num, &nread); + } else { + if (dtype->f->scanfunc == NULL) { + PyErr_SetString(PyExc_ValueError, + "don't know how to read " \ + "character files with that " \ + "array type"); + Py_DECREF(dtype); + return NULL; + } + ret = array_from_text(dtype, num, sep, &nread, + fp, + (next_element) fromfile_next_element, + (skip_separator) fromfile_skip_separator, + NULL); + } + if (((intp) nread) < num) { + fprintf(stderr, "%ld items requested but only %ld read\n", + (long) num, (long) nread); + ret->data = PyDataMem_RENEW(ret->data, + nread * ret->descr->elsize); + PyArray_DIM(ret,0) = nread; + } + return (PyObject *)ret; +} + +static PyObject * +array_fromfile(PyObject *ignored, PyObject *args, PyObject *keywds) +{ + PyObject *file=NULL, *ret; + FILE *fp; + char *sep=""; + Py_ssize_t nin=-1; + static char *kwlist[] = {"file", "dtype", "count", "sep", NULL}; + PyArray_Descr *type=NULL; + + if (!PyArg_ParseTupleAndKeywords(args, keywds, + "O|O&" NPY_SSIZE_T_PYFMT "s", + kwlist, + &file, + PyArray_DescrConverter, &type, + &nin, &sep)) { + return NULL; + } + + if (type == NULL) type = PyArray_DescrFromType(PyArray_DEFAULT); + + if (PyString_Check(file) || PyUnicode_Check(file)) { + file = PyObject_CallFunction((PyObject *)&PyFile_Type, + "Os", file, "rb"); + if (file==NULL) return NULL; + } + else { + Py_INCREF(file); + } + fp = PyFile_AsFile(file); + if (fp == NULL) { + PyErr_SetString(PyExc_IOError, + "first argument must be an open file"); + Py_DECREF(file); + return NULL; + } + ret = PyArray_FromFile(fp, type, (intp) nin, sep); + Py_DECREF(file); + return ret; +} + + /* steals a reference to dtype (which cannot be NULL) */ /*OBJECT_API */ static PyObject * @@ -6108,7 +6384,7 @@ } static PyObject * -array_fromIter(PyObject *ignored, PyObject *args, PyObject *keywds) +array_fromiter(PyObject *ignored, PyObject *args, PyObject *keywds) { PyObject *iter; Py_ssize_t nin=-1; @@ -6128,210 +6404,8 @@ } - - -/* This needs an open file object and reads it in directly. - memory-mapped files handled differently through buffer interface. 
- -file pointer number in resulting 1d array -(can easily reshape later, -1 for to end of file) -type of array -sep is a separator string for character-based data (or NULL for binary) - " " means whitespace -*/ - /*OBJECT_API*/ static PyObject * -PyArray_FromFile(FILE *fp, PyArray_Descr *typecode, intp num, char *sep) -{ - PyArrayObject *r; - size_t nread = 0; - PyArray_ScanFunc *scan; - Bool binary; - - if (PyDataType_REFCHK(typecode)) { - PyErr_SetString(PyExc_ValueError, "cannot read into" - "object array"); - Py_DECREF(typecode); - return NULL; - } - if (typecode->elsize == 0) { - PyErr_SetString(PyExc_ValueError, "0-sized elements."); - Py_DECREF(typecode); - return NULL; - } - - binary = ((sep == NULL) || (strlen(sep) == 0)); - if (num == -1 && binary) { /* Get size for binary file*/ - intp start, numbytes; - int fail=0; - start = (intp )ftell(fp); - if (start < 0) fail=1; - if (fseek(fp, 0, SEEK_END) < 0) fail=1; - numbytes = (intp) ftell(fp); - if (numbytes < 0) fail=1; - numbytes -= start; - if (fseek(fp, start, SEEK_SET) < 0) fail=1; - if (fail) { - PyErr_SetString(PyExc_IOError, - "could not seek in file"); - Py_DECREF(typecode); - return NULL; - } - num = numbytes / typecode->elsize; - } - - if (binary) { /* binary data */ - r = (PyArrayObject *)PyArray_NewFromDescr(&PyArray_Type, - typecode, - 1, &num, - NULL, NULL, - 0, NULL); - if (r==NULL) return NULL; - NPY_BEGIN_ALLOW_THREADS - nread = fread(r->data, typecode->elsize, num, fp); - NPY_END_ALLOW_THREADS - } - else { /* character reading */ - intp i; - char *dptr; - int done=0; - - scan = typecode->f->scanfunc; - if (scan == NULL) { - PyErr_SetString(PyExc_ValueError, - "don't know how to read " \ - "character files with that " \ - "array type"); - Py_DECREF(typecode); - return NULL; - } - - if (num != -1) { /* number to read is known */ - r = (PyArrayObject *)\ - PyArray_NewFromDescr(&PyArray_Type, - typecode, - 1, &num, - NULL, NULL, - 0, NULL); - if (r==NULL) return NULL; - NPY_BEGIN_ALLOW_THREADS - dptr = r->data; - for (i=0; i < num; i++) { - if (done) break; - done = scan(fp, dptr, sep, NULL); - if (done < -2) break; - nread += 1; - dptr += r->descr->elsize; - } - NPY_END_ALLOW_THREADS - if (PyErr_Occurred()) { - Py_DECREF(r); - return NULL; - } - } - else { /* we have to watch for the end of the file and - reallocate at the end */ -#define _FILEBUFNUM 4096 - intp thisbuf=0; - intp size = _FILEBUFNUM; - intp bytes; - intp totalbytes; - - r = (PyArrayObject *)\ - PyArray_NewFromDescr(&PyArray_Type, - typecode, - 1, &size, - NULL, NULL, - 0, NULL); - if (r==NULL) return NULL; - NPY_BEGIN_ALLOW_THREADS - totalbytes = bytes = size * typecode->elsize; - dptr = r->data; - while (!done) { - done = scan(fp, dptr, sep, NULL); - - /* end of file reached trying to - scan value. done is 1 or 2 - if end of file reached trying to - scan separator. Still good value. 
- */ - if (done < -2) break; - thisbuf += 1; - nread += 1; - dptr += r->descr->elsize; - if (!done && thisbuf == size) { - totalbytes += bytes; - r->data = PyDataMem_RENEW(r->data, - totalbytes); - dptr = r->data + (totalbytes - bytes); - thisbuf = 0; - } - } - r->data = PyDataMem_RENEW(r->data, nread*r->descr->elsize); - PyArray_DIM(r,0) = nread; - num = nread; - NPY_END_ALLOW_THREADS -#undef _FILEBUFNUM - } - if (PyErr_Occurred()) { - Py_DECREF(r); - return NULL; - } - - } - if (((intp) nread) < num) { - fprintf(stderr, "%ld items requested but only %ld read\n", - (long) num, (long) nread); - r->data = PyDataMem_RENEW(r->data, nread * r->descr->elsize); - PyArray_DIM(r,0) = nread; - } - return (PyObject *)r; -} - -static PyObject * -array_fromfile(PyObject *ignored, PyObject *args, PyObject *keywds) -{ - PyObject *file=NULL, *ret; - FILE *fp; - char *sep=""; - Py_ssize_t nin=-1; - static char *kwlist[] = {"file", "dtype", "count", "sep", NULL}; - PyArray_Descr *type=NULL; - - if (!PyArg_ParseTupleAndKeywords(args, keywds, - "O|O&" NPY_SSIZE_T_PYFMT "s", - kwlist, - &file, - PyArray_DescrConverter, &type, - &nin, &sep)) { - return NULL; - } - - if (type == NULL) type = PyArray_DescrFromType(PyArray_DEFAULT); - - if (PyString_Check(file) || PyUnicode_Check(file)) { - file = PyObject_CallFunction((PyObject *)&PyFile_Type, - "Os", file, "rb"); - if (file==NULL) return NULL; - } - else { - Py_INCREF(file); - } - fp = PyFile_AsFile(file); - if (fp == NULL) { - PyErr_SetString(PyExc_IOError, - "first argument must be an open file"); - Py_DECREF(file); - return NULL; - } - ret = PyArray_FromFile(fp, type, (intp) nin, sep); - Py_DECREF(file); - return ret; -} - -/*OBJECT_API*/ -static PyObject * PyArray_FromBuffer(PyObject *buf, PyArray_Descr *type, intp count, intp offset) { @@ -7213,9 +7287,9 @@ METH_VARARGS | METH_KEYWORDS, NULL}, {"putmask", (PyCFunction)array_putmask, METH_VARARGS | METH_KEYWORDS, NULL}, - {"fromstring",(PyCFunction)array_fromString, + {"fromstring",(PyCFunction)array_fromstring, METH_VARARGS|METH_KEYWORDS, NULL}, - {"fromiter",(PyCFunction)array_fromIter, + {"fromiter",(PyCFunction)array_fromiter, METH_VARARGS|METH_KEYWORDS, NULL}, {"concatenate", (PyCFunction)array_concatenate, METH_VARARGS|METH_KEYWORDS, NULL}, Modified: branches/multicore/numpy/core/tests/test_multiarray.py =================================================================== --- branches/multicore/numpy/core/tests/test_multiarray.py 2007-05-11 13:37:31 UTC (rev 3747) +++ branches/multicore/numpy/core/tests/test_multiarray.py 2007-05-11 21:49:03 UTC (rev 3748) @@ -116,9 +116,28 @@ a = fromstring('\x00\x00\x80?\x00\x00\x00@\x00\x00@@\x00\x00\x80@',dtype=' Author: oliphant Date: 2007-05-11 17:35:26 -0500 (Fri, 11 May 2007) New Revision: 3749 Modified: trunk/numpy/lib/function_base.py Log: Fix nan functions to allow sub-class. Modified: trunk/numpy/lib/function_base.py =================================================================== --- trunk/numpy/lib/function_base.py 2007-05-11 21:49:03 UTC (rev 3748) +++ trunk/numpy/lib/function_base.py 2007-05-11 22:35:26 UTC (rev 3749) @@ -730,7 +730,7 @@ def nansum(a, axis=None): """Sum the array over the given axis, treating NaNs as 0. """ - y = array(a) + y = array(a,subok=True) if not issubclass(y.dtype.type, _nx.integer): y[isnan(a)] = 0 return y.sum(axis) @@ -738,7 +738,7 @@ def nanmin(a, axis=None): """Find the minimium over the given axis, ignoring NaNs. 
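For context, the nan-aware helpers patched in this commit behave roughly as
follows (a sketch, assuming numpy is imported as np); the switch to
``array(a, subok=True)`` means ndarray subclasses such as matrix are now
passed through rather than being converted to a plain ndarray:

    >>> np.nansum(np.array([1., np.nan, 3.]))
    4.0
    >>> np.nanmin(np.array([1., np.nan, 3.]))
    1.0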
""" - y = array(a) + y = array(a,subok=True) if not issubclass(y.dtype.type, _nx.integer): y[isnan(a)] = _nx.inf return y.min(axis) @@ -746,7 +746,7 @@ def nanargmin(a, axis=None): """Find the indices of the minimium over the given axis ignoring NaNs. """ - y = array(a) + y = array(a, subok=True) if not issubclass(y.dtype.type, _nx.integer): y[isnan(a)] = _nx.inf return y.argmin(axis) @@ -754,7 +754,7 @@ def nanmax(a, axis=None): """Find the maximum over the given axis ignoring NaNs. """ - y = array(a) + y = array(a, subok=True) if not issubclass(y.dtype.type, _nx.integer): y[isnan(a)] = -_nx.inf return y.max(axis) @@ -762,7 +762,7 @@ def nanargmax(a, axis=None): """Find the maximum over the given axis ignoring NaNs. """ - y = array(a) + y = array(a,subok=True) if not issubclass(y.dtype.type, _nx.integer): y[isnan(a)] = -_nx.inf return y.argmax(axis) From numpy-svn at scipy.org Fri May 11 19:13:30 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 11 May 2007 18:13:30 -0500 (CDT) Subject: [Numpy-svn] r3750 - trunk/numpy/core/src Message-ID: <20070511231330.BD37039C1D4@new.scipy.org> Author: oliphant Date: 2007-05-11 18:13:27 -0500 (Fri, 11 May 2007) New Revision: 3750 Modified: trunk/numpy/core/src/multiarraymodule.c Log: Special check for common error in arange. Modified: trunk/numpy/core/src/multiarraymodule.c =================================================================== --- trunk/numpy/core/src/multiarraymodule.c 2007-05-11 22:35:26 UTC (rev 3749) +++ trunk/numpy/core/src/multiarraymodule.c 2007-05-11 23:13:27 UTC (rev 3750) @@ -6641,7 +6641,15 @@ double value; *next = PyNumber_Subtract(stop, start); - if (!(*next)) return -1; + if (!(*next)) { + if (PyTuple_Check(stop)) { + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "arange: scalar arguments expected "\ + "instead of a tuple."); + } + return -1; + } val = PyNumber_TrueDivide(*next, step); Py_DECREF(*next); *next=NULL; if (!val) return -1; From numpy-svn at scipy.org Sat May 12 15:58:50 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 12 May 2007 14:58:50 -0500 (CDT) Subject: [Numpy-svn] r3751 - in trunk/numpy: . core Message-ID: <20070512195850.3DB3D39C01A@new.scipy.org> Author: charris Date: 2007-05-12 14:58:43 -0500 (Sat, 12 May 2007) New Revision: 3751 Modified: trunk/numpy/add_newdocs.py trunk/numpy/core/defmatrix.py trunk/numpy/core/fromnumeric.py Log: Add/edit documentation for mean, std, var. Modified: trunk/numpy/add_newdocs.py =================================================================== --- trunk/numpy/add_newdocs.py 2007-05-11 23:13:27 UTC (rev 3750) +++ trunk/numpy/add_newdocs.py 2007-05-12 19:58:43 UTC (rev 3751) @@ -332,7 +332,7 @@ add_newdoc('numpy.core.multiarray','set_numeric_ops', """set_numeric_ops(op=func, ...) - Set some or all of the number methods for all array objects. Don't + Set some or all of the number methods for all array objects. Do not forget **dict can be used as the argument list. Return the functions that were replaced, which can be stored and set later. @@ -810,7 +810,7 @@ """a.getfield(dtype, offset) -> field of array as given type. Returns a field of the given array as a certain type. A field is a view of - the array's data with each itemsize determined by the given type and the + the array data with each itemsize determined by the given type and the offset into the current array. 
""")) @@ -832,16 +832,38 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('mean', - """a.mean(axis=None, dtype=None) + """a.mean(axis=None, dtype=None, out=None) -> mean - Average the array over the given axis. If the axis is None, - average over all dimensions of the array. Equivalent to + Returns the average of the array elements. The average is taken over the + flattened array by default, otherwise over the specified axis. + + :Parameters: + axis : integer + Axis along which the means are computed. The default is + to compute the standard deviation of the flattened array. + dtype : type + Type to use in computing the means. For arrays of + integer type the default is float32, for arrays of float types it + is the same as the array type. + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. - a.sum(axis, dtype) / size(a, axis). + :Returns: + mean : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. - The optional dtype argument is the data type for intermediate - calculations in the sum. + :SeeAlso: + - var : variance + - std : standard deviation + Notes + ----- + The mean is the sum of the elements along the axis divided by the + number of elements. + """)) @@ -1072,16 +1094,40 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('std', """a.std(axis=None, dtype=None, out=None) -> standard deviation. - The standard deviation isa measure of the spread of a - distribution. + Returns the standard deviation of the array elements, a measure of the + spread of a distribution. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. - The standard deviation is the square root of the average of the - squared deviations from the mean, i.e. - std = sqrt(mean((x - x.mean())**2,axis=0)). + :Parameters: + axis : integer + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : type + Type to use in computing the standard deviation. For arrays of + integer type the default is float32, for arrays of float types it + is the same as the array type. + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. - For multidimensional arrays, std is computed by default along the - first axis. + :Returns: + standard deviation : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. + :SeeAlso: + - var : variance + - mean : average + + Notes + ----- + + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e. var = sqrt(mean((x - x.mean())**2)). The + computed standard deviation is biased, i.e., the mean is computed by + dividing by the number of elements, N, rather than by N-1. + """)) @@ -1224,8 +1270,42 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('var', - """a.var(axis=None, dtype=None) + """a.var(axis=None, dtype=None, out=None) -> variance + Returns the variance of the array elements, a measure of the spread of a + distribution. The variance is computed for the flattened array by default, + otherwise over the specified axis. + + :Parameters: + axis : integer + Axis along which the variance is computed. 
The default is to + compute the variance of the flattened array. + dtype : type + Type to use in computing the variance. For arrays of integer type + the default is float32, for arrays of float types it is the same as + the array type. + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + + :Returns: + variance : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. + + :SeeAlso: + - std : standard deviation + - mean: average + + Notes + ----- + + The variance is the average of the squared deviations from the mean, i.e. + var = mean((x - x.mean())**2). The computed variance is biased, i.e., + the mean is computed by dividing by the number of elements, N, rather + than by N-1. + """)) Modified: trunk/numpy/core/defmatrix.py =================================================================== --- trunk/numpy/core/defmatrix.py 2007-05-11 23:13:27 UTC (rev 3750) +++ trunk/numpy/core/defmatrix.py 2007-05-12 19:58:43 UTC (rev 3751) @@ -241,12 +241,121 @@ return N.ndarray.sum(self, axis, dtype, out)._align(axis) def mean(self, axis=None, out=None): + """Compute the mean along the specified axis. + + Returns the average of the array elements. The average is taken over + the flattened array by default, otherwise over the specified axis. + + :Parameters: + axis : integer + Axis along which the means are computed. The default is + to compute the standard deviation of the flattened array. + dtype : type + Type to use in computing the means. For arrays of integer type + the default is float32, for arrays of float types it is the + same as the array type. + out : ndarray + Alternative output array in which to place the result. It must + have the same shape as the expected output but the type will be + cast if necessary. + + :Returns: + mean : The return type varies, see above. + A new array holding the result is returned unless out is + specified, in which case a reference to out is returned. + + :SeeAlso: + - var : variance + - std : standard deviation + + Notes + ----- + The mean is the sum of the elements along the axis divided by the + number of elements. + + """ return N.ndarray.mean(self, axis, out)._align(axis) def std(self, axis=None, dtype=None, out=None): + """Compute the standard deviation along the specified axis. + + Returns the standard deviation of the array elements, a measure of the + spread of a distribution. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. + + :Parameters: + axis : integer + Axis along which the standard deviation is computed. The + default is to compute the standard deviation of the flattened + array. + dtype : type + Type to use in computing the standard deviation. For arrays of + integer type the default is float32, for arrays of float types + it is the same as the array type. + out : ndarray + Alternative output array in which to place the result. It must + have the same shape as the expected output but the type will be + cast if necessary. + + :Returns: + standard deviation : The return type varies, see above. + A new array holding the result is returned unless out is + specified, in which case a reference to out is returned. 
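For concreteness, a quick sketch of the method on a small matrix (assuming
numpy is imported as np; the expected values follow from the divisor being
N, as the Notes below spell out, since the mean is 5 and the average
squared deviation is 4):

    >>> m = np.matrix([2., 4., 4., 4., 5., 5., 7., 9.])
    >>> m.std()
    2.0
    >>> m.var()
    4.0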
+ + :SeeAlso: + - var : variance + - mean : average + + Notes + ----- + + The standard deviation is the square root of the average of the + squared deviations from the mean, i.e. var = sqrt(mean((x - + x.mean())**2)). The computed standard deviation is biased, i.e., the + mean is computed by dividing by the number of elements, N, rather + than by N-1. + + """ return N.ndarray.std(self, axis, dtype, out)._align(axis) def var(self, axis=None, dtype=None, out=None): + """Compute the variance along the specified axis. + + Returns the variance of the array elements, a measure of the spread of + a distribution. The variance is computed for the flattened array by + default, otherwise over the specified axis. + + :Parameters: + axis : integer + Axis along which the variance is computed. The default is to + compute the variance of the flattened array. + dtype : type + Type to use in computing the variance. For arrays of integer + type the default is float32, for arrays of float types it is + the same as the array type. + out : ndarray + Alternative output array in which to place the result. It must + have the same shape as the expected output but the type will be + cast if necessary. + + :Returns: + variance : depends, see above + A new array holding the result is returned unless out is + specified, in which case a reference to out is returned. + + :SeeAlso: + - std : standard deviation + - mean : average + + Notes + ----- + + The variance is the average of the squared deviations from the mean, + i.e. var = mean((x - x.mean())**2). The computed variance is + biased, i.e., the mean is computed by dividing by the number of + elements, N, rather than by N-1. + + """ return N.ndarray.var(self, axis, dtype, out)._align(axis) def prod(self, axis=None, dtype=None, out=None): Modified: trunk/numpy/core/fromnumeric.py =================================================================== --- trunk/numpy/core/fromnumeric.py 2007-05-11 23:13:27 UTC (rev 3750) +++ trunk/numpy/core/fromnumeric.py 2007-05-12 19:58:43 UTC (rev 3751) @@ -691,12 +691,38 @@ around = round_ def mean(a, axis=None, dtype=None, out=None): - """mean(a, axis=None, dtype=None) - Return the arithmetic mean. + """Compute the mean along the specified axis. - The mean is the sum of the elements divided by the number of elements. + Returns the average of the array elements. The average is taken over the + flattened array by default, otherwise over the specified axis. - See also: average + :Parameters: + axis : integer + Axis along which the means are computed. The default is + to compute the standard deviation of the flattened array. + dtype : type + Type to use in computing the means. For arrays of + integer type the default is float32, for arrays of float types it + is the same as the array type. + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + + :Returns: + mean : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. + + :SeeAlso: + - var : variance + - std : standard deviation + + Notes + ----- + The mean is the sum of the elements along the axis divided by the + number of elements. 
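For illustration (a sketch, assuming numpy is imported as np):

    >>> a = np.array([[1., 2.], [3., 4.]])
    >>> np.mean(a)
    2.5
    >>> np.mean(a, axis=0)
    array([ 2.,  3.])

With axis=None the mean is taken over the flattened array; with axis=0 it
is taken down the columns.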
+ """ try: mean = a.mean @@ -704,14 +730,44 @@ return _wrapit(a, 'mean', axis, dtype, out) return mean(axis, dtype, out) + def std(a, axis=None, dtype=None, out=None): - """std(sample, axis=None, dtype=None) - Return the standard deviation, a measure of the spread of a distribution. + """Compute the standard deviation along the specified axis. - The standard deviation is the square root of the average of the squared - deviations from the mean, i.e. std = sqrt(mean((x - x.mean())**2)). + Returns the standard deviation of the array elements, a measure of the + spread of a distribution. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. - See also: var + :Parameters: + axis : integer + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : type + Type to use in computing the standard deviation. For arrays of + integer type the default is float32, for arrays of float types it + is the same as the array type. + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + + :Returns: + standard deviation : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. + + :SeeAlso: + - var : variance + - mean : average + + Notes + ----- + + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e. var = sqrt(mean((x - x.mean())**2)). The + computed standard deviation is biased, i.e., the mean is computed by + dividing by the number of elements, N, rather than by N-1. + """ try: std = a.std @@ -719,14 +775,44 @@ return _wrapit(a, 'std', axis, dtype, out) return std(axis, dtype, out) + def var(a, axis=None, dtype=None, out=None): - """var(sample, axis=None, dtype=None) - Return the variance, a measure of the spread of a distribution. + """Compute the variance along the specified axis. - The variance is the average of the squared deviations from the mean, - i.e. var = mean((x - x.mean())**2). + Returns the variance of the array elements, a measure of the spread of a + distribution. The variance is computed for the flattened array by default, + otherwise over the specified axis. - See also: std + :Parameters: + axis : integer + Axis along which the variance is computed. The default is to + compute the variance of the flattened array. + dtype : type + Type to use in computing the variance. For arrays of integer type + the default is float32, for arrays of float types it is the same as + the array type. + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + + :Returns: + variance : depends, see above + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. + + :SeeAlso: + - std : standard deviation + - mean : average + + Notes + ----- + + The variance is the average of the squared deviations from the mean, i.e. + var = mean((x - x.mean())**2). The computed variance is biased, i.e., + the mean is computed by dividing by the number of elements, N, rather + than by N-1. 
+ """ try: var = a.var From numpy-svn at scipy.org Sat May 12 20:35:12 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 12 May 2007 19:35:12 -0500 (CDT) Subject: [Numpy-svn] r3752 - in trunk/numpy: . core Message-ID: <20070513003512.D827E39C27B@new.scipy.org> Author: charris Date: 2007-05-12 19:35:09 -0500 (Sat, 12 May 2007) New Revision: 3752 Modified: trunk/numpy/add_newdocs.py trunk/numpy/core/defmatrix.py trunk/numpy/core/fromnumeric.py Log: Add documentation for diagonal. Reformat documentation of sort, argsort, lexsort, and searchsorted. Modified: trunk/numpy/add_newdocs.py =================================================================== --- trunk/numpy/add_newdocs.py 2007-05-12 19:58:43 UTC (rev 3751) +++ trunk/numpy/add_newdocs.py 2007-05-13 00:35:09 UTC (rev 3752) @@ -356,18 +356,37 @@ add_newdoc('numpy.core.multiarray','lexsort', - """lexsort(keys=, axis=-1) -> array of indices. argsort with list of keys. + """lexsort(keys=, axis=-1) -> array of indices. Argsort with list of keys. - Return an array of indices similar to argsort, except the sorting is - done using the provided sorting keys. First the sort is done using - key[0], then the resulting list of indices is further manipulated by - sorting on key[1], and so forth. The result is a sort on multiple - keys. If the keys represented columns of a spreadsheet, for example, - this would sort using multiple columns (the last key being used for the - primary sort order, the second-to-last key for the secondary sort order, - and so on). The keys argument must be a sequence of things that can be - converted to arrays of the same shape. + Perform an indirect sort using a list of keys. The first key is sorted, + then the second, and so on through the list of keys. At each step the + previous order is preserved when equal keys are encountered. The result is + a sort on multiple keys. If the keys represented columns of a spreadsheet, + for example, this would sort using multiple columns (the last key being + used for the primary sort order, the second-to-last key for the secondary + sort order, and so on). The keys argument must be a sequence of things + that can be converted to arrays of the same shape. + :Parameters: + + a : array type + Array containing values that the returned indices should sort. + + axis : integer + Axis to be indirectly sorted. None indicates that the flattened + array should be used. Default is -1. + + :Returns: + + indices : integer array + Array of indices that sort the keys along the specified axis. The + array has the same shape as the keys. + + :SeeAlso: + + - argsort : indirect sort + - sort : inplace sort + """) add_newdoc('numpy.core.multiarray','can_cast', @@ -650,24 +669,38 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('argsort', """a.argsort(axis=-1, kind='quicksort', order=None) -> indices - Return array of indices that sort a along the given axis. + Perform an indirect sort along the given axis using the algorithm specified + by the kind keyword. It returns an array of indices of the same shape as + 'a' that index data along the given axis in sorted order. - Keyword arguments: + :Parameters: - axis -- axis to be indirectly sorted (default -1) - kind -- sorting algorithm (default 'quicksort') - Possible values: 'quicksort', 'mergesort', or 'heapsort' - order -- If a has fields defined, then the order keyword can be the - field name to sort on or a list (or tuple) of field names - to indicate the order that fields should be used to define - the sort. 
+ axis : integer + Axis to be indirectly sorted. None indicates that the flattened + array should be used. Default is -1. - Returns: array of indices that sort a along the specified axis. + kind : string + Sorting algorithm to use. Possible values are 'quicksort', + 'mergesort', or 'heapsort'. Default is 'quicksort'. - This method executes an indirect sort along the given axis using the - algorithm specified by the kind keyword. It returns an array of indices of - the same shape as 'a' that index data along the given axis in sorted order. + order : list type or None + When a is an array with fields defined, this argument specifies + which fields to compare first, second, etc. Not all fields need be + specified. + :Returns: + + indices : integer array + Array of indices that sort 'a' along the specified axis. + + :SeeAlso: + + - lexsort : indirect stable sort with multiple keys + - sort : inplace sort + + :Notes: + ------ + The various sorts are characterized by average speed, worst case performance, need for work space, and whether they are stable. A stable sort keeps items with the same key in the same relative order. The three @@ -681,9 +714,9 @@ |'heapsort' | 3 | O(n*log(n)) | 0 | no | |------------------------------------------------------| - All the sort algorithms make temporary copies of the data when the sort is - not along the last axis. Consequently, sorts along the last axis are faster - and use less space than sorts along other axis. + All the sort algorithms make temporary copies of the data when the sort is not + along the last axis. Consequently, sorts along the last axis are faster and use + less space than sorts along other axis. """)) @@ -771,8 +804,59 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('diagonal', - """a.diagonal(offset=0, axis1=0, axis2=1) + """a.diagonal(offset=0, axis1=0, axis2=1) -> diagonals + If a is 2-d, return the diagonal of self with the given offset, i.e., the + collection of elements of the form a[i,i+offset]. If a is n-d with n > 2, + then the axes specified by axis1 and axis2 are used to determine the 2-d + subarray whose diagonal is returned. The shape of the resulting array can + be determined by removing axis1 and axis2 and appending an index to the + right equal to the size of the resulting diagonals. + + :Parameters: + offset : integer + Offset of the diagonal from the main diagonal. Can be both positive + and negative. Defaults to main diagonal. + axis1 : integer + Axis to be used as the first axis of the 2-d subarrays from which + the diagonals should be taken. Defaults to first index. + axis2 : integer + Axis to be used as the second axis of the 2-d subarrays from which + the diagonals should be taken. Defaults to second index. + + :Returns: + array_of_diagonals : same type as original array + If a is 2-d, then a 1-d array containing the diagonal is returned. + If a is n-d, n > 2, then an array of diagonals is returned. + + :SeeAlso: + - diag : matlab workalike for 1-d and 2-d arrays. + - diagflat : creates diagonal arrays + - trace : sum along diagonals + + Examples + -------- + + >>> a = arange(4).reshape(2,2) + >>> a + array([[0, 1], + [2, 3]]) + >>> a.diagonal() + array([0, 3]) + >>> a.diagonal(1) + array([1]) + + >>> a = arange(8).reshape(2,2,2) + >>> a + array([[[0, 1], + [2, 3]], + + [[4, 5], + [6, 7]]]) + >>> a.diagonal(0,-2,-1) + array([[0, 3], + [4, 7]]) + """)) @@ -836,31 +920,37 @@ Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified axis. 
- + :Parameters: + axis : integer Axis along which the means are computed. The default is to compute the standard deviation of the flattened array. + dtype : type Type to use in computing the means. For arrays of integer type the default is float32, for arrays of float types it is the same as the array type. + out : ndarray Alternative output array in which to place the result. It must have the same shape as the expected output but the type will be cast if necessary. :Returns: + mean : The return type varies, see above. A new array holding the result is returned unless out is specified, in which case a reference to out is returned. :SeeAlso: + - var : variance - std : standard deviation Notes ----- + The mean is the sum of the elements along the axis divided by the number of elements. @@ -943,7 +1033,7 @@ Return a new array from this one. The new array must have the same number of elements as self. Also always returns a view or raises a ValueError if - that is impossible.; + that is impossible. """)) @@ -988,46 +1078,38 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('searchsorted', """a.searchsorted(v, side='left') -> index array. - Required arguments: - v -- array of keys to be searched for in a. + Find the indices into a sorted array such that if the corresponding keys in + v were inserted before the indices the order of a would be preserved. If + side='left', then the first such index is returned. If side='right', then + the last such index is returned. If there is no such index because the key + is out of bounds, then the length of a is returned, i.e., the key would + need to be appended. The returned index array has the same shape as v. - Keyword arguments: - side -- {'left', 'right'}, (default 'left'). + :Parameters: - Returns: - index array with the same shape as keys. + v : array or list type + Array of keys to be searched for in a. - The array to be searched must be 1-D and is assumed to be sorted in - ascending order. + side : string + Possible values are : 'left', 'right'. Default is 'left'. Return + the first or last index where the key could be inserted. - The method call + :Returns: - a.searchsorted(v, side='left') + indices : integer array + The returned array has the same shape as v. - returns an index array with the same shape as v such that for each value i - in the index and the corresponding key in v the following holds: + :SeeAlso: - a[j] < key <= a[i] for all j < i, + - sort + - histogram - If such an index does not exist, a.size() is used. Consequently, i is the - index of the first item in 'a' that is >= key. If the key were to be - inserted into a in the slot before the index i, then the order of a would - be preserved and i would be the smallest index with that property. + :Notes: + ------- - The method call + The array a must be 1-d and is assumed to be sorted in ascending order. + Searchsorted uses binary search to find the required insertion points. - a.searchsorted(v, side='right') - - returns an index array with the same shape as v such that for each value i - in the index and the corresponding key in v the following holds: - - a[j] <= key < a[i] for all j < i, - - If such an index does not exist, a.size() is used. Consequently, i is the - index of the first item in 'a' that is > key. If the key were to be - inserted into a in the slot before the index i, then the order of a would - be preserved and i would be the largest index with that property. 
- """)) @@ -1047,28 +1129,41 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('sort', """a.sort(axis=-1, kind='quicksort', order=None) -> None. - Sort a along the given axis. + Perform an inplace sort along the given axis using the algorithm specified + by the kind keyword. - Keyword arguments: + :Parameters: - axis -- axis to be sorted (default -1) - kind -- sorting algorithm (default 'quicksort') - Possible values: 'quicksort', 'mergesort', or 'heapsort'. - order -- If a has fields defined, then the order keyword can be the - field name to sort on or a list (or tuple) of field names - to indicate the order that fields should be used to define - the sort. + axis : integer + Axis to be sorted along. None indicates that the flattened array + should be used. Default is -1. - Returns: None. + kind : string + Sorting algorithm to use. Possible values are 'quicksort', + 'mergesort', or 'heapsort'. Default is 'quicksort'. - This method sorts 'a' in place along the given axis using the algorithm - specified by the kind keyword. + order : list type or None + When a is an array with fields defined, this argument specifies + which fields to compare first, second, etc. Not all fields need be + specified. - The various sorts may characterized by average speed, worst case + :Returns: + + None + + :SeeAlso: + + - argsort : indirect sort + - lexsort : indirect stable sort on multiple keys + - searchsorted : find keys in sorted array + + :Notes: + ------ + + The various sorts are characterized by average speed, worst case performance, need for work space, and whether they are stable. A stable - sort keeps items with the same key in the same relative order and is most - useful when used with argsort where the key might differ from the items - being sorted. The three available algorithms have the following properties: + sort keeps items with the same key in the same relative order. The three + available algorithms have the following properties: |------------------------------------------------------| | kind | speed | worst case | work space | stable| @@ -1078,9 +1173,9 @@ |'heapsort' | 3 | O(n*log(n)) | 0 | no | |------------------------------------------------------| - All the sort algorithms make temporary copies of the data when the sort is - not along the last axis. Consequently, sorts along the last axis are faster - and use less space than sorts along other axis. + All the sort algorithms make temporary copies of the data when the sort is not + along the last axis. Consequently, sorts along the last axis are faster and use + less space than sorts along other axis. """)) @@ -1099,24 +1194,29 @@ flattened array by default, otherwise over the specified axis. :Parameters: + axis : integer Axis along which the standard deviation is computed. The default is to compute the standard deviation of the flattened array. + dtype : type Type to use in computing the standard deviation. For arrays of integer type the default is float32, for arrays of float types it is the same as the array type. + out : ndarray Alternative output array in which to place the result. It must have the same shape as the expected output but the type will be cast if necessary. :Returns: + standard deviation : The return type varies, see above. A new array holding the result is returned unless out is specified, in which case a reference to out is returned. :SeeAlso: + - var : variance - mean : average @@ -1277,24 +1377,29 @@ otherwise over the specified axis. :Parameters: + axis : integer Axis along which the variance is computed. 
The default is to compute the variance of the flattened array. + dtype : type Type to use in computing the variance. For arrays of integer type the default is float32, for arrays of float types it is the same as the array type. + out : ndarray Alternative output array in which to place the result. It must have the same shape as the expected output but the type will be cast if necessary. :Returns: + variance : The return type varies, see above. A new array holding the result is returned unless out is specified, in which case a reference to out is returned. :SeeAlso: + - std : standard deviation - mean: average @@ -1315,4 +1420,3 @@ Type can be either a new sub-type object or a data-descriptor object """)) - Modified: trunk/numpy/core/defmatrix.py =================================================================== --- trunk/numpy/core/defmatrix.py 2007-05-12 19:58:43 UTC (rev 3751) +++ trunk/numpy/core/defmatrix.py 2007-05-13 00:35:09 UTC (rev 3752) @@ -247,29 +247,35 @@ the flattened array by default, otherwise over the specified axis. :Parameters: + axis : integer Axis along which the means are computed. The default is to compute the standard deviation of the flattened array. + dtype : type Type to use in computing the means. For arrays of integer type the default is float32, for arrays of float types it is the same as the array type. + out : ndarray Alternative output array in which to place the result. It must have the same shape as the expected output but the type will be cast if necessary. :Returns: + mean : The return type varies, see above. A new array holding the result is returned unless out is specified, in which case a reference to out is returned. :SeeAlso: + - var : variance - std : standard deviation Notes ----- + The mean is the sum of the elements along the axis divided by the number of elements. @@ -284,25 +290,30 @@ flattened array by default, otherwise over the specified axis. :Parameters: + axis : integer Axis along which the standard deviation is computed. The default is to compute the standard deviation of the flattened array. + dtype : type Type to use in computing the standard deviation. For arrays of integer type the default is float32, for arrays of float types it is the same as the array type. + out : ndarray Alternative output array in which to place the result. It must have the same shape as the expected output but the type will be cast if necessary. :Returns: + standard deviation : The return type varies, see above. A new array holding the result is returned unless out is specified, in which case a reference to out is returned. :SeeAlso: + - var : variance - mean : average @@ -326,24 +337,29 @@ default, otherwise over the specified axis. :Parameters: + axis : integer Axis along which the variance is computed. The default is to compute the variance of the flattened array. + dtype : type Type to use in computing the variance. For arrays of integer type the default is float32, for arrays of float types it is the same as the array type. + out : ndarray Alternative output array in which to place the result. It must have the same shape as the expected output but the type will be cast if necessary. :Returns: + variance : depends, see above A new array holding the result is returned unless out is specified, in which case a reference to out is returned. 
:SeeAlso: + - std : standard deviation - mean : average Modified: trunk/numpy/core/fromnumeric.py =================================================================== --- trunk/numpy/core/fromnumeric.py 2007-05-12 19:58:43 UTC (rev 3751) +++ trunk/numpy/core/fromnumeric.py 2007-05-13 00:35:09 UTC (rev 3752) @@ -40,6 +40,7 @@ result = wrap(result) return result + def take(a, indices, axis=None, out=None, mode='raise'): """Return an array with values pulled from the given array at the given indices. @@ -54,7 +55,7 @@ The indices of the values to extract. - `axis` : None or int, optional (default=None) The axis over which to select values. None signifies that the operation - should be performed over the flattened array. + should be performed over the flattened array. - `out` : array, optional If provided, the result will be inserted into this array. It should be of the appropriate shape and dtype. @@ -76,6 +77,7 @@ return _wrapit(a, 'take', indices, axis, out, mode) return take(indices, axis, out, mode) + # not deprecated --- copy if necessary, view otherwise def reshape(a, newshape, order='C'): """Return an array that uses the data of the given array, but with a new @@ -104,6 +106,7 @@ return _wrapit(a, 'reshape', newshape, order=order) return reshape(newshape, order=order) + def choose(a, choices, out=None, mode='raise'): """Use an index array to construct a new array from a set of choices. @@ -134,7 +137,7 @@ numpy.ndarray.choose() is the equivalent method. :Example: - >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], + >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], ... [20, 21, 22, 23], [30, 31, 32, 33]] >>> choose([2, 3, 1, 0], choices) array([20, 31, 12, 3]) @@ -142,7 +145,7 @@ array([20, 31, 12, 3]) >>> choose([2, 4, 1, 0], choices, mode='wrap') array([20, 1, 12, 3]) - + """ try: choose = a.choose @@ -150,6 +153,7 @@ return _wrapit(a, 'choose', choices, out=out, mode=mode) return choose(choices, out=out, mode=mode) + def repeat(a, repeats, axis=None): """Repeat elements of an array. @@ -174,7 +178,7 @@ array([0, 0, 1, 1, 2, 2]) >>> repeat([0, 1, 2], [2, 3, 4]) array([0, 0, 1, 1, 1, 2, 2, 2, 2]) - + """ try: repeat = a.repeat @@ -182,6 +186,7 @@ return _wrapit(a, 'repeat', repeats, axis) return repeat(repeats, axis) + def put (a, ind, v, mode='raise'): """put(a, ind, v) results in a[n] = v[n] for all n in ind If v is shorter than mask it will be repeated as necessary. @@ -196,6 +201,7 @@ """ return a.put(ind, v, mode) + def swapaxes(a, axis1, axis2): """swapaxes(a, axis1, axis2) returns array a with axis1 and axis2 interchanged. @@ -206,6 +212,7 @@ return _wrapit(a, 'swapaxes', axis1, axis2) return swapaxes(axis1, axis2) + def transpose(a, axes=None): """transpose(a, axes=None) returns a view of the array with dimensions permuted according to axes. If axes is None @@ -217,31 +224,48 @@ return _wrapit(a, 'transpose', axes) return transpose(axes) + def sort(a, axis=-1, kind='quicksort', order=None): - """Returns copy of 'a' sorted along the given axis. + """Return copy of 'a' sorted along the given axis. - Keyword arguments: + Perform an inplace sort along the given axis using the algorithm specified + by the kind keyword. - axis -- axis to be sorted (default -1). Can be None - to indicate that a flattened and sorted array should - be returned (the array method does not support this). - kind -- sorting algorithm (default 'quicksort') - Possible values: 'quicksort', 'mergesort', or 'heapsort'. 
- order -- For an array with fields defined, this argument allows - specification of which fields to compare first, second, - etc. Not all fields need be specified. + :Parameters: + a : array type + Array to be sorted. - Returns: None. + axis : integer + Axis to be sorted along. None indicates that the flattened array + should be used. Default is -1. - This method sorts 'a' in place along the given axis using the algorithm - specified by the kind keyword. + kind : string + Sorting algorithm to use. Possible values are 'quicksort', + 'mergesort', or 'heapsort'. Default is 'quicksort'. - The various sorts may characterized by average speed, worst case + order : list type or None + When a is an array with fields defined, this argument specifies + which fields to compare first, second, etc. Not all fields need be + specified. + + :Returns: + + sorted array : type is unchanged. + + :SeeAlso: + + - argsort : indirect sort + - lexsort : indirect stable sort on multiple keys + - searchsorted : find keys in sorted array + + :Notes: + ------ + + The various sorts are characterized by average speed, worst case performance, need for work space, and whether they are stable. A stable - sort keeps items with the same key in the same relative order and is most - useful when used with argsort where the key might differ from the items - being sorted. The three available algorithms have the following properties: + sort keeps items with the same key in the same relative order. The three + available algorithms have the following properties: |------------------------------------------------------| | kind | speed | worst case | work space | stable| @@ -251,9 +275,9 @@ |'heapsort' | 3 | O(n*log(n)) | 0 | no | |------------------------------------------------------| - All the sort algorithms make temporary copies of the data when the sort is - not along the last axis. Consequently, sorts along the last axis are faster - and use less space than sorts along other axis. + All the sort algorithms make temporary copies of the data when the sort is not + along the last axis. Consequently, sorts along the last axis are faster and use + less space than sorts along other axis. """ if axis is None: @@ -264,26 +288,45 @@ a.sort(axis, kind, order) return a + def argsort(a, axis=-1, kind='quicksort', order=None): """Returns array of indices that index 'a' in sorted order. - Keyword arguments: + Perform an indirect sort along the given axis using the algorithm specified + by the kind keyword. It returns an array of indices of the same shape as + 'a' that index data along the given axis in sorted order. - axis -- axis to be indirectly sorted (default -1) - Can be None to indicate return indices into the - flattened array. - kind -- sorting algorithm (default 'quicksort') - Possible values: 'quicksort', 'mergesort', or 'heapsort' - order -- For an array with fields defined, this argument allows - specification of which fields to compare first, second, - etc. Not all fields need be specified. + :Parameters: - Returns: array of indices that sort 'a' along the specified axis. + a : array type + Array containing values that the returned indices should sort. - This method executes an indirect sort along the given axis using the - algorithm specified by the kind keyword. It returns an array of indices of - the same shape as 'a' that index data along the given axis in sorted order. + axis : integer + Axis to be indirectly sorted. None indicates that the flattened + array should be used. Default is -1. 
+ kind : string + Sorting algorithm to use. Possible values are 'quicksort', + 'mergesort', or 'heapsort'. Default is 'quicksort'. + + order : list type or None + When a is an array with fields defined, this argument specifies + which fields to compare first, second, etc. Not all fields need be + specified. + + :Returns: + + indices : integer array + Array of indices that sort 'a' along the specified axis. + + :SeeAlso: + + - lexsort : indirect stable sort with multiple keys + - sort : inplace sort + + :Notes: + ------ + The various sorts are characterized by average speed, worst case performance, need for work space, and whether they are stable. A stable sort keeps items with the same key in the same relative order. The three @@ -308,6 +351,7 @@ return _wrapit(a, 'argsort', axis, kind, order) return argsort(axis, kind, order) + def argmax(a, axis=None): """argmax(a,axis=None) returns the indices to the maximum value of the 1-D arrays along the given axis. @@ -318,6 +362,7 @@ return _wrapit(a, 'argmax', axis) return argmax(axis) + def argmin(a, axis=None): """argmin(a,axis=None) returns the indices to the minimum value of the 1-D arrays along the given axis. @@ -328,50 +373,45 @@ return _wrapit(a, 'argmin', axis) return argmin(axis) + def searchsorted(a, v, side='left'): - """-> index array. Inserting v[i] before a[index[i]] maintains a in order. + """Returns indices where keys in v should be inserted to maintain order. - Required arguments: - a -- sorted 1-D array to be searched. - v -- array of keys to be searched for in a. + Find the indices into a sorted array such that if the corresponding keys in + v were inserted before the indices the order of a would be preserved. If + side='left', then the first such index is returned. If side='right', then + the last such index is returned. If there is no such index because the key + is out of bounds, then the length of a is returned, i.e., the key would + need to be appended. The returned index array has the same shape as v. - Keyword arguments: - side -- {'left', 'right'}, default('left'). + :Parameters: - Returns: - array of indices with the same shape as v. + a : array + 1-d array sorted in ascending order. - The array to be searched must be 1-D and is assumed to be sorted in - ascending order. + v : array or list type + Array of keys to be searched for in a. - The function call + side : string + Possible values are : 'left', 'right'. Default is 'left'. Return + the first or last index where the key could be inserted. - searchsorted(a, v, side='left') + :Returns: - returns an index array with the same shape as v such that for each value i - in the index and the corresponding key in v the following holds: + indices : integer array + Array of insertion points with the same shape as v. - a[j] < key <= a[i] for all j < i, + :SeeAlso: - If such an index does not exist, a.size() is used. Consequently, i is the - index of the first item in 'a' that is >= key. If the key were to be - inserted into a in the slot before the index i, then the order of a would - be preserved and i would be the smallest index with that property. + - sort + - histogram - The function call + :Notes: + ------- - searchsorted(a, v, side='right') + The array a must be 1-d and is assumed to be sorted in ascending order. + Searchsorted uses binary search to find the required insertion points. 
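A brief sketch of the insertion-point semantics described above (assuming
numpy is imported as np):

    >>> a = np.array([1, 2, 3, 4, 5])
    >>> np.searchsorted(a, 3)
    2
    >>> np.searchsorted(a, 3, side='right')
    3
    >>> np.searchsorted(a, [0, 3, 6])   # 6 is out of bounds, so len(a) is returned
    array([0, 2, 5])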
- returns an index array with the same shape as v such that for each value i - in the index and the corresponding key in v the following holds: - - a[j] <= key < a[i] for all j < i, - - If such an index does not exist, a.size() is used. Consequently, i is the - index of the first item in 'a' that is > key. If the key were to be - inserted into a in the slot before the index i, then the order of a would - be preserved and i would be the largest index with that property. - """ try: searchsorted = a.searchsorted @@ -379,6 +419,7 @@ return _wrapit(a, 'searchsorted', v, side) return searchsorted(v, side) + def resize(a, new_shape): """resize(a,new_shape) returns a new array with the specified shape. The original array's total size can be any size. It @@ -410,6 +451,7 @@ return reshape(a, new_shape) + def squeeze(a): "Returns a with any ones from the shape of a removed" try: @@ -418,12 +460,65 @@ return _wrapit(a, 'squeeze') return squeeze() + def diagonal(a, offset=0, axis1=0, axis2=1): - """diagonal(a, offset=0, axis1=0, axis2=1) returns the given diagonals - defined by the last two dimensions of the array. + """Return specified diagonals. Uses first two indices by default. + + If a is 2-d, return the diagonal of self with the given offset, i.e., the + collection of elements of the form a[i,i+offset]. If a is n-d with n > 2, + then the axes specified by axis1 and axis2 are used to determine the 2-d + subarray whose diagonal is returned. The shape of the resulting array can be + determined by removing axis1 and axis2 and appending an index to the right + equal to the size of the resulting diagonals. + + :Parameters: + offset : integer + Offset of the diagonal from the main diagonal. Can be both positive + and negative. Defaults to main diagonal. + axis1 : integer + Axis to be used as the first axis of the 2-d subarrays from which + the diagonals should be taken. Defaults to first axis. + axis2 : integer + Axis to be used as the second axis of the 2-d subarrays from which + the diagonals should be taken. Defaults to second axis. + + :Returns: + array_of_diagonals : same type as original array + If a is 2-d, then a 1-d array containing the diagonal is returned. + If a is n-d, n > 2, then an array of diagonals is returned. + + :SeeAlso: + - diag : matlab workalike for 1-d and 2-d arrays + - diagflat : creates diagonal arrays + - trace : sum along diagonals + + Examples + -------- + + >>> a = arange(4).reshape(2,2) + >>> a + array([[0, 1], + [2, 3]]) + >>> a.diagonal() + array([0, 3]) + >>> a.diagonal(1) + array([1]) + + >>> a = arange(8).reshape(2,2,2) + >>> a + array([[[0, 1], + [2, 3]], + + [[4, 5], + [6, 7]]]) + >>> a.diagonal(0,-2,-1) + array([[0, 3], + [4, 7]]) + """ return asarray(a).diagonal(offset, axis1, axis2) + def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None): """trace(a,offset=0, axis1=0, axis2=1) returns the sum along diagonals (defined by the last two dimenions) of the array. @@ -697,29 +792,35 @@ flattened array by default, otherwise over the specified axis. :Parameters: + axis : integer Axis along which the means are computed. The default is to compute the standard deviation of the flattened array. + dtype : type Type to use in computing the means. For arrays of integer type the default is float32, for arrays of float types it is the same as the array type. + out : ndarray Alternative output array in which to place the result. It must have the same shape as the expected output but the type will be cast if necessary. 
:Returns: + mean : The return type varies, see above. A new array holding the result is returned unless out is specified, in which case a reference to out is returned. :SeeAlso: + - var : variance - std : standard deviation Notes ----- + The mean is the sum of the elements along the axis divided by the number of elements. @@ -739,24 +840,29 @@ flattened array by default, otherwise over the specified axis. :Parameters: + axis : integer Axis along which the standard deviation is computed. The default is to compute the standard deviation of the flattened array. + dtype : type Type to use in computing the standard deviation. For arrays of integer type the default is float32, for arrays of float types it is the same as the array type. + out : ndarray Alternative output array in which to place the result. It must have the same shape as the expected output but the type will be cast if necessary. :Returns: + standard deviation : The return type varies, see above. A new array holding the result is returned unless out is specified, in which case a reference to out is returned. :SeeAlso: + - var : variance - mean : average @@ -784,24 +890,29 @@ otherwise over the specified axis. :Parameters: + axis : integer Axis along which the variance is computed. The default is to compute the variance of the flattened array. + dtype : type Type to use in computing the variance. For arrays of integer type the default is float32, for arrays of float types it is the same as the array type. + out : ndarray Alternative output array in which to place the result. It must have the same shape as the expected output but the type will be cast if necessary. :Returns: + variance : depends, see above A new array holding the result is returned unless out is specified, in which case a reference to out is returned. :SeeAlso: + - std : standard deviation - mean : average From numpy-svn at scipy.org Sun May 13 02:23:15 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sun, 13 May 2007 01:23:15 -0500 (CDT) Subject: [Numpy-svn] r3753 - trunk/numpy/linalg Message-ID: <20070513062315.4432E39C057@new.scipy.org> Author: charris Date: 2007-05-13 01:23:08 -0500 (Sun, 13 May 2007) New Revision: 3753 Modified: trunk/numpy/linalg/linalg.py Log: Add documentation for eigvals, eigvalsh, eig, and eigh. Modified: trunk/numpy/linalg/linalg.py =================================================================== --- trunk/numpy/linalg/linalg.py 2007-05-13 00:35:09 UTC (rev 3752) +++ trunk/numpy/linalg/linalg.py 2007-05-13 06:23:08 UTC (rev 3753) @@ -344,7 +344,42 @@ # Eigenvalues + + def eigvals(a): + """Compute the eigenvalues of the general 2-d array a. + + A simple interface to the LAPACK routines dgeev and zgeev that sets the + flags to return only the eigenvalues of general real and complex arrays + respectively. + + :Parameters: + + a : 2-d array + A complex or real 2-d array whose eigenvalues and eigenvectors + will be computed. + + :Returns: + + w : 1-d double or complex array + The eigenvalues. The eigenvalues are not necessarily ordered, nor + are they necessarily real for real matrices. + + :SeeAlso: + + - eig : eigenvalues and right eigenvectors of general arrays + - eigvalsh : eigenvalues of symmetric or Hemitiean arrays. + - eigh : eigenvalues and eigenvectors of symmetric/Hermitean arrays. + + :Notes: + ------- + + The number w is an eigenvalue of a if there exists a vector v + satisfying the equation dot(a,v) = w*v. 
Alternately, if w is a root of + the characteristic equation det(a - w[i]*I) = 0, where det is the + determinant and I is the identity matrix. + + """ _assertRank2(a) _assertSquareness(a) _assertFinite(a) @@ -389,6 +424,44 @@ def eigvalsh(a, UPLO='L'): + """Compute the eigenvalues of the symmetric or Hermitean 2-d array a. + + A simple interface to the LAPACK routines dsyevd and zheevd that sets the + flags to return only the eigenvalues of real symmetric and complex + Hermetian arrays respectively. + + :Parameters: + + a : 2-d array + A complex or real 2-d array whose eigenvalues and eigenvectors + will be computed. + + UPLO : string + Specifies whether the pertinent array date is taken from the upper + or lower triangular part of a. Possible values are 'L', and 'U' for + upper and lower respectively. Default is 'L'. + + :Returns: + + w : 1-d double array + The eigenvalues. The eigenvalues are not necessarily ordered. + + :SeeAlso: + + - eigh : eigenvalues and eigenvectors of symmetric/Hermitean arrays. + - eigvals : eigenvalues of general real or complex arrays. + - eig : eigenvalues and eigenvectors of general real or complex arrays. + + :Notes: + ------- + + The number w is an eigenvalue of a if there exists a vector v + satisfying the equation dot(a,v) = w*v. Alternately, if w is a root of + the characteristic equation det(a - w[i]*I) = 0, where det is the + determinant and I is the identity matrix. The eigenvalues of real + symmetric or complex Hermitean matrices are always real. + + """ _assertRank2(a) _assertSquareness(a) t, result_t = _commonType(a) @@ -432,13 +505,58 @@ a = _fastCT(a.astype(t)) return a, t, result_t + # Eigenvectors + def eig(a): - """eig(a) returns u,v where u is the eigenvalues and -v is a matrix of eigenvectors with vector v[:,i] corresponds to -eigenvalue u[i]. Satisfies the equation dot(a, v[:,i]) = u[i]*v[:,i] -""" + """Eigenvalues and right eigenvectors of a general matrix. + + A simple interface to the LAPACK routines dgeev and zgeev that compute the + eigenvalues and eigenvectors of general real and complex arrays + respectively. + + :Parameters: + + a : 2-d array + A complex or real 2-d array whose eigenvalues and eigenvectors + will be computed. + + :Returns: + + w : 1-d double or complex array + The eigenvalues. The eigenvalues are not necessarily ordered, nor + are they necessarily real for real matrices. + + v : 2-d double or complex double array. + The normalized eigenvector corresponding to the eigenvalue w[i] is + the column v[:,i]. + + :SeeAlso: + + - eigvalsh : eigenvalues of symmetric or Hemitiean arrays. + - eig : eigenvalues and right eigenvectors for non-symmetric arrays + - eigvals : eigenvalues of non-symmetric array. + + :Notes: + ------- + + The number w is an eigenvalue of a if there exists a vector v + satisfying the equation dot(a,v) = w*v. Alternately, if w is a root of + the characteristic equation det(a - w[i]*I) = 0, where det is the + determinant and I is the identity matrix. The arrays a, w, and v + satisfy the equation dot(a,v[i]) = w[i]*v[:,i]. + + The array v of eigenvectors may not be of maximum rank, that is, some + of the columns may be dependent, although roundoff error may obscure + that fact. If the eigenvalues are all different, then theoretically the + eigenvectors are independent. Likewise, the matrix of eigenvectors is + unitary if the matrix a is normal, i.e., if dot(a, a.H) = dot(a.H, a). + + The left and right eigenvectors are not necessarily the (Hemitian) + transposes of each other. 
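A short doctest-style sketch of the defining relation stated in the notes above, using a diagonal matrix so that the eigenvalues and eigenvectors are easy to verify by hand::

    >>> from numpy import array, dot, allclose
    >>> from numpy.linalg import eig
    >>> a = array([[2., 0.],
    ...            [0., 3.]])
    >>> w, v = eig(a)
    >>> allclose(dot(a, v[:,0]), w[0]*v[:,0])   # dot(a, v[:,i]) == w[i]*v[:,i]
    True
    >>> allclose(dot(a, v[:,1]), w[1]*v[:,1])
    True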
+ + """ a, wrap = _makearray(a) _assertRank2(a) _assertSquareness(a) @@ -492,8 +610,51 @@ vt = v.transpose().astype(result_t) return w.astype(result_t), wrap(vt) + def eigh(a, UPLO='L'): """Compute eigenvalues for a Hermitian-symmetric matrix. + + A simple interface to the LAPACK routines dsyevd and zheevd that compute + the eigenvalues and eigenvectors of real symmetric and complex Hermitian + arrays respectively. + + :Parameters: + + a : 2-d array + A complex Hermitian or symmetric real 2-d array whose eigenvalues + and eigenvectors will be computed. + + UPLO : string + Specifies whether the pertinent array date is taken from the upper + or lower triangular part of a. Possible values are 'L', and 'U'. + Default is 'L'. + + :Returns: + + w : 1-d double array + The eigenvalues. The eigenvalues are not necessarily ordered. + + v : 2-d double or complex double array, depending on input array type + The normalized eigenvector corresponding to the eigenvalue w[i] is + the column v[:,i]. + + :SeeAlso: + + - eigvalsh : eigenvalues of symmetric or Hemitiean arrays. + - eig : eigenvalues and right eigenvectors for non-symmetric arrays + - eigvals : eigenvalues of non-symmetric array. + + :Notes: + ------- + + The number w is an eigenvalue of a if there exists a vector v + satisfying the equation dot(a,v) = w*v. Alternately, if w is a root of + the characteristic equation det(a - w[i]*I) = 0, where det is the + determinant and I is the identity matrix. The eigenvalues of real + symmetric or complex Hermitean matrices are always real. The array v + of eigenvectors is unitary and a, w, and v satisfy the equation + dot(a,v[i]) = w[i]*v[:,i]. + """ a, wrap = _makearray(a) _assertRank2(a) From numpy-svn at scipy.org Sun May 13 04:19:35 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sun, 13 May 2007 03:19:35 -0500 (CDT) Subject: [Numpy-svn] r3754 - trunk/numpy/core/tests Message-ID: <20070513081935.EAC6B39C04A@new.scipy.org> Author: stefan Date: 2007-05-13 03:19:11 -0500 (Sun, 13 May 2007) New Revision: 3754 Modified: trunk/numpy/core/tests/test_regression.py Log: Add regression tests for tickets 469, 503, 514 and 516. Modified: trunk/numpy/core/tests/test_regression.py =================================================================== --- trunk/numpy/core/tests/test_regression.py 2007-05-13 06:23:08 UTC (rev 3753) +++ trunk/numpy/core/tests/test_regression.py 2007-05-13 08:19:11 UTC (rev 3754) @@ -633,6 +633,11 @@ assert_equal(N.array("a\x00\x0b\x0c\x00").item(), 'a\x00\x0b\x0c') + def check_mem_string_concat(self, level=rlevel): + """Ticket #469""" + x = N.array([]) + N.append(x,'asdasd\tasdasd') + def check_matrix_multiply_by_1d_vector(self, level=rlevel) : """Ticket #473""" def mul() : @@ -653,5 +658,25 @@ N.take(x,[0,2],axis=1,out=b) assert_array_equal(a,b) + def check_frompyfunc_endian(self, level=rlevel): + """Ticket #503""" + from math import radians + uradians = N.frompyfunc(radians, 1, 1) + big_endian = N.array([83.4, 83.5], dtype='>f8') + little_endian = N.array([83.4, 83.5], dtype=' Author: stefan Date: 2007-05-13 12:36:19 -0500 (Sun, 13 May 2007) New Revision: 3755 Modified: trunk/numpy/lib/getlimits.py trunk/numpy/lib/tests/test_getlimits.py Log: Add iinfo based on a patch by Albert Strasheim (ticket #250). 
Modified: trunk/numpy/lib/getlimits.py =================================================================== --- trunk/numpy/lib/getlimits.py 2007-05-13 08:19:11 UTC (rev 3754) +++ trunk/numpy/lib/getlimits.py 2007-05-13 17:36:19 UTC (rev 3755) @@ -7,8 +7,8 @@ import numpy.core.numeric as numeric import numpy.core.numerictypes as ntypes from numpy.core.numeric import array +import numpy as N - def _frz(a): """fix rank-0 --> rank-1""" if a.ndim == 0: a.shape = (1,) @@ -21,7 +21,16 @@ } class finfo(object): + """Machine limits for floating point types. + :Parameters: + dtype : floating point type or instance + + :SeeAlso: + - numpy.lib.machar.MachAr + + """ + _finfo_cache = {} def __new__(cls, dtype): @@ -106,6 +115,50 @@ --------------------------------------------------------------------- ''' % self.__dict__ + +class iinfo: + """Limits for integer types. + + :Parameters: + type : integer type or instance + + """ + + # Should be using dtypes as keys, but hash-function isn't yet implemented + _min_values = {'int8': -2**7, + 'int16': -2**15, + 'int32': -2**31, + 'int64': -2**63, + 'uint8': 0, + 'uint16': 0, + 'uint32': 0, + 'uint64': 0} + + _max_values = {'int8': 2**7 - 1, + 'int16': 2**15 - 1, + 'int32': 2**31 - 1, + 'int64': 2**63 - 1, + 'uint8': 2**8 - 1, + 'uint16': 2**16 - 1, + 'uint32': 2**32 - 1, + 'uint64': 2**64 - 1} + + def __init__(self, type): + self.dtype = str(N.dtype(type)) + if not (self.dtype in self._min_values and \ + self.dtype in self._max_values): + raise ValueError("Invalid integer data type.") + + def min(self): + """Minimum value of given dtype.""" + return self._min_values[self.dtype] + min = property(min) + + def max(self): + """Maximum value of given dtype.""" + return self._max_values[self.dtype] + max = property(max) + if __name__ == '__main__': f = finfo(ntypes.single) print 'single epsilon:',f.eps Modified: trunk/numpy/lib/tests/test_getlimits.py =================================================================== --- trunk/numpy/lib/tests/test_getlimits.py 2007-05-13 08:19:11 UTC (rev 3754) +++ trunk/numpy/lib/tests/test_getlimits.py 2007-05-13 17:36:19 UTC (rev 3755) @@ -4,8 +4,9 @@ from numpy.testing import * set_package_path() import numpy.lib;reload(numpy.lib) -from numpy.lib.getlimits import finfo +from numpy.lib.getlimits import finfo, iinfo from numpy import single,double,longdouble +import numpy as N restore_path() ################################################## @@ -34,5 +35,21 @@ ftype2 = finfo(longdouble) assert_equal(id(ftype),id(ftype2)) +class test_iinfo(NumpyTestCase): + def check_basic(self): + dts = zip(['i1', 'i2', 'i4', 'i8', + 'u1', 'u2', 'u4', 'u8'], + [N.int8, N.int16, N.int32, N.int64, + N.uint8, N.uint16, N.uint32, N.uint64]) + for dt1, dt2 in dts: + assert_equal(iinfo(dt1).min, iinfo(dt2).min) + assert_equal(iinfo(dt1).max, iinfo(dt2).max) + self.assertRaises(ValueError, iinfo, 'f4') + + def check_unsigned_max(self): + types = N.sctypes['uint'] + for T in types: + assert_equal(iinfo(T).max, T(-1)) + if __name__ == "__main__": NumpyTest().run() From numpy-svn at scipy.org Sun May 13 16:15:11 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sun, 13 May 2007 15:15:11 -0500 (CDT) Subject: [Numpy-svn] r3756 - trunk/numpy/fft Message-ID: <20070513201511.C85B339C12E@new.scipy.org> Author: charris Date: 2007-05-13 15:15:09 -0500 (Sun, 13 May 2007) New Revision: 3756 Modified: trunk/numpy/fft/fftpack_litemodule.c Log: Fix ticket #506 by applying the patch from cdavid. 
Modified: trunk/numpy/fft/fftpack_litemodule.c =================================================================== --- trunk/numpy/fft/fftpack_litemodule.c 2007-05-13 17:36:19 UTC (rev 3755) +++ trunk/numpy/fft/fftpack_litemodule.c 2007-05-13 20:15:09 UTC (rev 3756) @@ -18,8 +18,8 @@ if(!PyArg_ParseTuple(args, "OO", &op1, &op2)) return NULL; data = (PyArrayObject *)PyArray_CopyFromObject(op1, PyArray_CDOUBLE, 1, 0); - if (data == NULL) return NULL; - if (PyArray_As1D(&op2, (char **)&wsave, &nsave, PyArray_DOUBLE) == -1) + if (data == NULL) return NULL; + if (PyArray_As1D(&op2, (char **)&wsave, &nsave, PyArray_DOUBLE) == -1) goto fail; if (data == NULL) goto fail; @@ -31,10 +31,12 @@ nrepeats = PyArray_SIZE(data)/npts; dptr = (double *)data->data; + NPY_SIGINT_ON for (i=0; idata; + NPY_SIGINT_ON for (i=0; idata); + NPY_SIGINT_OFF return (PyObject *)op; } @@ -115,12 +121,12 @@ if (data == NULL) return NULL; npts = data->dimensions[data->nd-1]; data->dimensions[data->nd-1] = npts/2+1; - ret = (PyArrayObject *)PyArray_Zeros(data->nd, data->dimensions, + ret = (PyArrayObject *)PyArray_Zeros(data->nd, data->dimensions, PyArray_DescrFromType(PyArray_CDOUBLE), 0); data->dimensions[data->nd-1] = npts; rstep = (ret->dimensions[ret->nd-1])*2; - if (PyArray_As1D(&op2, (char **)&wsave, &nsave, PyArray_DOUBLE) == -1) + if (PyArray_As1D(&op2, (char **)&wsave, &nsave, PyArray_DOUBLE) == -1) goto fail; if (data == NULL || ret == NULL) goto fail; @@ -132,7 +138,9 @@ nrepeats = PyArray_SIZE(data)/npts; rptr = (double *)ret->data; dptr = (double *)data->data; - + + + NPY_SIGINT_ON for (i=0; idimensions[data->nd-1]; - ret = (PyArrayObject *)PyArray_Zeros(data->nd, data->dimensions, + ret = (PyArrayObject *)PyArray_Zeros(data->nd, data->dimensions, PyArray_DescrFromType(PyArray_DOUBLE), 0); - if (PyArray_As1D(&op2, (char **)&wsave, &nsave, PyArray_DOUBLE) == -1) + if (PyArray_As1D(&op2, (char **)&wsave, &nsave, PyArray_DOUBLE) == -1) goto fail; if (data == NULL || ret == NULL) goto fail; @@ -181,7 +190,8 @@ nrepeats = PyArray_SIZE(ret)/npts; rptr = (double *)ret->data; dptr = (double *)data->data; - + + NPY_SIGINT_ON for (i=0; idata); + NPY_SIGINT_OFF return (PyObject *)op; } @@ -236,7 +249,7 @@ /* Initialization function for the module (*must* be called initfftpack) */ -static char fftpack_module_documentation[] = +static char fftpack_module_documentation[] = "" ; @@ -258,5 +271,5 @@ PyDict_SetItemString(d, "error", ErrorObject); /* XXXX Add constants here */ - + } From numpy-svn at scipy.org Sun May 13 19:22:20 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sun, 13 May 2007 18:22:20 -0500 (CDT) Subject: [Numpy-svn] r3757 - in trunk/numpy/lib: . tests Message-ID: <20070513232220.A725C39C0B8@new.scipy.org> Author: charris Date: 2007-05-13 18:22:17 -0500 (Sun, 13 May 2007) New Revision: 3757 Modified: trunk/numpy/lib/function_base.py trunk/numpy/lib/tests/test_function_base.py Log: Add patch from dhuard to histogramdd. Fixes ticket #509. Restructure restructured comments; avoid consolidated lists, they are too ugly to contemplate and move around where they aren't wanted. They can be fixed later if epydoc fixes things up. Modified: trunk/numpy/lib/function_base.py =================================================================== --- trunk/numpy/lib/function_base.py 2007-05-13 20:15:09 UTC (rev 3756) +++ trunk/numpy/lib/function_base.py 2007-05-13 23:22:17 UTC (rev 3757) @@ -71,29 +71,41 @@ def histogram(a, bins=10, range=None, normed=False): """Compute the histogram from a set of data. 
- :Parameters: - - `a` : array - The data to histogram. n-D arrays will be flattened. - - `bins` : int or sequence of floats, optional - If an int, then the number of equal-width bins in the given range. - Otherwise, a sequence of the lower bound of each bin. - - `range` : (float, float), optional - The lower and upper range of the bins. If not provided, then (a.min(), - a.max()) is used. Values outside of this range are allocated to the - closest bin. - - `normed` : bool, optional - If False, the result array will contain the number of samples in each bin. - If True, the result array is the value of the probability *density* - function at the bin normalized such that the *integral* over the range - is 1. Note that the sum of all of the histogram values will not usually - be 1; it is not a probability *mass* function. + Parameters: - :Returns: - - `hist` : array (n,) - The values of the histogram. See `normed` for a description of the - possible semantics. - - `lower_edges` : float array (n,) - The lower edges of each bin. + a : array + The data to histogram. n-D arrays will be flattened. + + bins : int or sequence of floats + If an int, then the number of equal-width bins in the given range. + Otherwise, a sequence of the lower bound of each bin. + + range : (float, float) + The lower and upper range of the bins. If not provided, then + (a.min(), a.max()) is used. Values outside of this range are + allocated to the closest bin. + + normed : bool + If False, the result array will contain the number of samples in + each bin. If True, the result array is the value of the + probability *density* function at the bin normalized such that the + *integral* over the range is 1. Note that the sum of all of the + histogram values will not usually be 1; it is not a probability + *mass* function. + + Returns: + + hist : array + The values of the histogram. See `normed` for a description of the + possible semantics. + + lower_edges : float array + The lower edges of each bin. + + SeeAlso: + + histogramdd + """ a = asarray(a).ravel() if not iterable(bins): @@ -120,38 +132,54 @@ return n, bins def histogramdd(sample, bins=10, range=None, normed=False, weights=None): - """histogramdd(sample, bins=10, range=None, normed=False, weights=None) + """histogramdd(sample, bins=10, range=None, normed=False, weights=None) - Return the D-dimensional histogram of the sample. + Return the N-dimensional histogram of the sample. - :Parameters: - - `sample` : A sequence of D arrays, or an NxD array. - - `bins` : A sequence of edge arrays, a sequence of bin number, - or a scalar (the number of bins for all dimensions.) - - `range` : A sequence of lower and upper bin edges (default: [min, max]). - - `normed` : Boolean, if False, return the number of samples in each bin, - if True, returns the density. - - `weights` : An array of weights. The weights are normed only if normed is True. - Should weights.sum() not equal N, the total bin count will - not be equal to the number of samples. + Parameters: - :Return: - - `hist` : Histogram array. - - `edges` : List of arrays defining the bin edges. - + sample : sequence or array + A sequence containing N arrays or an NxM array. Input data. - Example: - >>> x = random.randn(100,3) - >>> hist3d, edges = histogramdd(x, bins = (5, 6, 7)) + bins : sequence or scalar + A sequence of edge arrays, a sequence of bin counts, or a scalar + which is the bin count for all dimensions. Default is 10. - :SeeAlso: histogram + range : sequence + A sequence of lower and upper bin edges. 
Default is [min, max]. + normed : boolean + If False, return the number of samples in each bin, if True, + returns the density. + + weights : array + Array of weights. The weights are normed only if normed is True. + Should the sum of the weights not equal N, the total bin count will + not be equal to the number of samples. + + Returns: + + hist : array + Histogram array. + + edges : list + List of arrays defining the lower bin edges. + + SeeAlso: + + histogram + + Example + + >>> x = random.randn(100,3) + >>> hist3d, edges = histogramdd(x, bins = (5, 6, 7)) + """ - try: + try: # Sample is an ND-array. N, D = sample.shape - except (AttributeError, ValueError): + except (AttributeError, ValueError): # Sample is a sequence of 1D arrays. sample = atleast_2d(sample).T N, D = sample.shape @@ -161,7 +189,7 @@ dedges = D*[None] if weights is not None: weights = asarray(weights) - + try: M = len(bins) if M != D: @@ -172,14 +200,20 @@ # Select range for each dimension # Used only if number of bins is given. if range is None: - smin = atleast_1d(sample.min(0)) - smax = atleast_1d(sample.max(0)) + smin = atleast_1d(array(sample.min(0), float)) + smax = atleast_1d(array(sample.max(0), float)) else: smin = zeros(D) smax = zeros(D) for i in arange(D): smin[i], smax[i] = range[i] + # Make sure the bins have a finite width. + for i in arange(len(smin)): + if smin[i] == smax[i]: + smin[i] = smin[i] - .5 + smax[i] = smax[i] + .5 + # Create edge arrays for i in arange(D): if isscalar(bins[i]): @@ -189,14 +223,14 @@ edges[i] = asarray(bins[i], float) nbin[i] = len(edges[i])+1 # +1 for outlier bins dedges[i] = diff(edges[i]) - + nbin = asarray(nbin) - - # Compute the bin number each sample falls into. + + # Compute the bin number each sample falls into. Ncount = {} for i in arange(D): Ncount[i] = digitize(sample[:,i], edges[i]) - + # Using digitize, values that fall on an edge are put in the right bin. # For the rightmost bin, we want values equal to the right # edge to be counted in the last bin, and not as an outlier. @@ -206,7 +240,7 @@ decimal = int(-log10(dedges[i].min())) +6 # Find which points are on the rightmost edge. on_edge = where(around(sample[:,i], decimal) == around(edges[i][-1], decimal))[0] - # Shift these points one bin to the left. + # Shift these points one bin to the left. Ncount[i][on_edge] -= 1 # Flattened histogram matrix (1D) @@ -238,7 +272,7 @@ # Remove outliers (indices 0 and -1 for each dimension). core = D*[slice(1,-1)] hist = hist[core] - + # Normalize if normed is True if normed: s = hist.sum() Modified: trunk/numpy/lib/tests/test_function_base.py =================================================================== --- trunk/numpy/lib/tests/test_function_base.py 2007-05-13 20:15:09 UTC (rev 3756) +++ trunk/numpy/lib/tests/test_function_base.py 2007-05-13 23:22:17 UTC (rev 3757) @@ -60,7 +60,7 @@ def check_weighted(self): y1 = array([[1,2,3], [4,5,6]]) - actual = average(y1,weights=[1,2],axis=0) + actual = average(y1,weights=[1,2],axis=0) desired = array([3.,4.,5.]) assert_array_equal(actual, desired) @@ -394,12 +394,12 @@ Z[range(5), range(5), range(5)] = 1. 
H,edges = histogramdd([arange(5), arange(5), arange(5)], 5) assert_array_equal(H, Z) - + def check_shape(self): x = rand(100,3) hist3d, edges = histogramdd(x, bins = (5, 7, 6)) assert_array_equal(hist3d.shape, (5,7,6)) - + def check_weights(self): v = rand(100,2) hist, edges = histogramdd(v) @@ -410,8 +410,12 @@ assert_array_equal(w_hist, n_hist) w_hist, edges = histogramdd(v, weights=ones(100, int)*2) assert_array_equal(w_hist, 2*hist) - + def check_identical_samples(self): + x = zeros((10,2),int) + hist, edges = histogramdd(x, bins=2) + assert_array_equal(edges[0],array([-0.5, 0. , 0.5])) + class test_unique(NumpyTestCase): def check_simple(self): x = array([4,3,2,1,1,2,3,4, 0]) From numpy-svn at scipy.org Mon May 14 01:55:27 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 14 May 2007 00:55:27 -0500 (CDT) Subject: [Numpy-svn] r3758 - trunk/numpy/core Message-ID: <20070514055527.42BA739C05C@new.scipy.org> Author: charris Date: 2007-05-14 00:55:19 -0500 (Mon, 14 May 2007) New Revision: 3758 Modified: trunk/numpy/core/fromnumeric.py Log: Restructure documentation of sort, argsort, searchsorted, var, mean, std, and diagonal. Modified: trunk/numpy/core/fromnumeric.py =================================================================== --- trunk/numpy/core/fromnumeric.py 2007-05-13 23:22:17 UTC (rev 3757) +++ trunk/numpy/core/fromnumeric.py 2007-05-14 05:55:19 UTC (rev 3758) @@ -1,4 +1,5 @@ # Module containing non-deprecated functions borrowed from Numeric. +__docformat__ = "restructuredtext en" # functions that are now methods __all__ = ['take', 'reshape', 'choose', 'repeat', 'put', @@ -228,10 +229,12 @@ def sort(a, axis=-1, kind='quicksort', order=None): """Return copy of 'a' sorted along the given axis. + *Description* + Perform an inplace sort along the given axis using the algorithm specified by the kind keyword. - :Parameters: + *Parameters*: a : array type Array to be sorted. @@ -249,35 +252,39 @@ which fields to compare first, second, etc. Not all fields need be specified. - :Returns: + *Returns*: - sorted array : type is unchanged. + sorted_array : type is unchanged. - :SeeAlso: + *SeeAlso*: - - argsort : indirect sort - - lexsort : indirect stable sort on multiple keys - - searchsorted : find keys in sorted array + argsort + Indirect sort + lexsort + Indirect stable sort on multiple keys + searchsorted + Find keys in sorted array - :Notes: - ------ + *Notes* - The various sorts are characterized by average speed, worst case - performance, need for work space, and whether they are stable. A stable - sort keeps items with the same key in the same relative order. The three - available algorithms have the following properties: + The various sorts are characterized by average speed, worst case + performance, need for work space, and whether they are stable. A stable + sort keeps items with the same key in the same relative order. 
The + three available algorithms have the following properties: - |------------------------------------------------------| - | kind | speed | worst case | work space | stable| - |------------------------------------------------------| - |'quicksort'| 1 | O(n^2) | 0 | no | - |'mergesort'| 2 | O(n*log(n)) | ~n/2 | yes | - |'heapsort' | 3 | O(n*log(n)) | 0 | no | - |------------------------------------------------------| + +-----------+-------+-------------+------------+-------+ + | kind | speed | worst case | work space | stable| + +===========+=======+=============+============+=======+ + | quicksort | 1 | O(n^2) | 0 | no | + +-----------+-------+-------------+------------+-------+ + | mergesort | 2 | O(n*log(n)) | ~n/2 | yes | + +-----------+-------+-------------+------------+-------+ + | heapsort | 3 | O(n*log(n)) | 0 | no | + +-----------+-------+-------------+------------+-------+ - All the sort algorithms make temporary copies of the data when the sort is not - along the last axis. Consequently, sorts along the last axis are faster and use - less space than sorts along other axis. + All the sort algorithms make temporary copies of the data when the sort + is not along the last axis. Consequently, sorts along the last axis are + faster and use less space than sorts along other axis. """ if axis is None: @@ -292,11 +299,13 @@ def argsort(a, axis=-1, kind='quicksort', order=None): """Returns array of indices that index 'a' in sorted order. + *Description* + Perform an indirect sort along the given axis using the algorithm specified by the kind keyword. It returns an array of indices of the same shape as - 'a' that index data along the given axis in sorted order. + a that index data along the given axis in sorted order. - :Parameters: + *Parameters*: a : array type Array containing values that the returned indices should sort. @@ -314,35 +323,38 @@ which fields to compare first, second, etc. Not all fields need be specified. - :Returns: + *Returns*: indices : integer array Array of indices that sort 'a' along the specified axis. - :SeeAlso: + *SeeAlso*: - - lexsort : indirect stable sort with multiple keys - - sort : inplace sort + lexsort + Indirect stable sort with multiple keys + sort + Inplace sort - :Notes: - ------ + *Notes* - The various sorts are characterized by average speed, worst case - performance, need for work space, and whether they are stable. A stable - sort keeps items with the same key in the same relative order. The three - available algorithms have the following properties: + The various sorts are characterized by average speed, worst case + performance, need for work space, and whether they are stable. A stable + sort keeps items with the same key in the same relative order. 
The + three available algorithms have the following properties: - |------------------------------------------------------| - | kind | speed | worst case | work space | stable| - |------------------------------------------------------| - |'quicksort'| 1 | O(n^2) | 0 | no | - |'mergesort'| 2 | O(n*log(n)) | ~n/2 | yes | - |'heapsort' | 3 | O(n*log(n)) | 0 | no | - |------------------------------------------------------| + +-----------+-------+-------------+------------+-------+ + | kind | speed | worst case | work space | stable| + +===========+=======+=============+============+=======+ + | quicksort | 1 | O(n^2) | 0 | no | + +-----------+-------+-------------+------------+-------+ + | mergesort | 2 | O(n*log(n)) | ~n/2 | yes | + +-----------+-------+-------------+------------+-------+ + | heapsort | 3 | O(n*log(n)) | 0 | no | + +-----------+-------+-------------+------------+-------+ - All the sort algorithms make temporary copies of the data when the sort is not - along the last axis. Consequently, sorts along the last axis are faster and use - less space than sorts along other axis. + All the sort algorithms make temporary copies of the data when the sort + is not along the last axis. Consequently, sorts along the last axis are + faster and use less space than sorts along other axis. """ try: @@ -377,15 +389,18 @@ def searchsorted(a, v, side='left'): """Returns indices where keys in v should be inserted to maintain order. - Find the indices into a sorted array such that if the corresponding keys in - v were inserted before the indices the order of a would be preserved. If - side='left', then the first such index is returned. If side='right', then - the last such index is returned. If there is no such index because the key - is out of bounds, then the length of a is returned, i.e., the key would - need to be appended. The returned index array has the same shape as v. + *Description* - :Parameters: + Find the indices into a sorted array such that if the corresponding + keys in v were inserted before the indices the order of a would be + preserved. If side='left', then the first such index is returned. If + side='right', then the last such index is returned. If there is no such + index because the key is out of bounds, then the length of a is + returned, i.e., the key would need to be appended. The returned index + array has the same shape as v. + *Parameters*: + a : array 1-d array sorted in ascending order. @@ -396,19 +411,21 @@ Possible values are : 'left', 'right'. Default is 'left'. Return the first or last index where the key could be inserted. - :Returns: + *Returns*: indices : integer array Array of insertion points with the same shape as v. - :SeeAlso: + *SeeAlso*: - - sort - - histogram + sort + Inplace sort + histogram + Produce histogram from 1-d data - :Notes: - ------- + *Notes* + The array a must be 1-d and is assumed to be sorted in ascending order. Searchsorted uses binary search to find the required insertion points. @@ -464,56 +481,64 @@ def diagonal(a, offset=0, axis1=0, axis2=1): """Return specified diagonals. Uses first two indices by default. - If a is 2-d, return the diagonal of self with the given offset, i.e., the + *Description* + + If a is 2-d, returns the diagonal of self with the given offset, i.e., the collection of elements of the form a[i,i+offset]. If a is n-d with n > 2, then the axes specified by axis1 and axis2 are used to determine the 2-d subarray whose diagonal is returned. 
The shape of the resulting array can be determined by removing axis1 and axis2 and appending an index to the right equal to the size of the resulting diagonals. - :Parameters: + *Parameters*: + offset : integer Offset of the diagonal from the main diagonal. Can be both positive and negative. Defaults to main diagonal. + axis1 : integer Axis to be used as the first axis of the 2-d subarrays from which the diagonals should be taken. Defaults to first axis. + axis2 : integer Axis to be used as the second axis of the 2-d subarrays from which the diagonals should be taken. Defaults to second axis. - :Returns: - array_of_diagonals : same type as original array + *Returns*: + + array_of_diagonals : type of original array If a is 2-d, then a 1-d array containing the diagonal is returned. If a is n-d, n > 2, then an array of diagonals is returned. - :SeeAlso: - - diag : matlab workalike for 1-d and 2-d arrays - - diagflat : creates diagonal arrays - - trace : sum along diagonals + *SeeAlso*: - Examples - -------- + diag : + matlab workalike for 1-d and 2-d arrays + diagflat : + creates diagonal arrays + trace : + sum along diagonals - >>> a = arange(4).reshape(2,2) - >>> a - array([[0, 1], - [2, 3]]) - >>> a.diagonal() - array([0, 3]) - >>> a.diagonal(1) - array([1]) + *Examples*: - >>> a = arange(8).reshape(2,2,2) - >>> a - array([[[0, 1], - [2, 3]], + >>> a = arange(4).reshape(2,2) + >>> a + array([[0, 1], + [2, 3]]) + >>> a.diagonal() + array([0, 3]) + >>> a.diagonal(1) + array([1]) - [[4, 5], - [6, 7]]]) - >>> a.diagonal(0,-2,-1) - array([[0, 3], - [4, 7]]) + >>> a = arange(8).reshape(2,2,2) + >>> a + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> a.diagonal(0,-2,-1) + array([[0, 3], + [4, 7]]) """ return asarray(a).diagonal(offset, axis1, axis2) @@ -788,11 +813,13 @@ def mean(a, axis=None, dtype=None, out=None): """Compute the mean along the specified axis. - Returns the average of the array elements. The average is taken over the - flattened array by default, otherwise over the specified axis. + *Description* - :Parameters: + Returns the average of the array elements. The average is taken over + the flattened array by default, otherwise over the specified axis. + *Parameters*: + axis : integer Axis along which the means are computed. The default is to compute the standard deviation of the flattened array. @@ -807,19 +834,20 @@ the same shape as the expected output but the type will be cast if necessary. - :Returns: + *Returns*: mean : The return type varies, see above. A new array holding the result is returned unless out is specified, in which case a reference to out is returned. - :SeeAlso: + *SeeAlso*: - - var : variance - - std : standard deviation + var + Variance + std + Standard deviation - Notes - ----- + *Notes* The mean is the sum of the elements along the axis divided by the number of elements. @@ -835,12 +863,14 @@ def std(a, axis=None, dtype=None, out=None): """Compute the standard deviation along the specified axis. - Returns the standard deviation of the array elements, a measure of the - spread of a distribution. The standard deviation is computed for the - flattened array by default, otherwise over the specified axis. + *Description* - :Parameters: + Returns the standard deviation of the array elements, a measure of the + spread of a distribution. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. + *Parameters*: + axis : integer Axis along which the standard deviation is computed. 
The default is to compute the standard deviation of the flattened array. @@ -855,24 +885,25 @@ the same shape as the expected output but the type will be cast if necessary. - :Returns: + *Returns*: - standard deviation : The return type varies, see above. + standard_deviation : The return type varies, see above. A new array holding the result is returned unless out is specified, in which case a reference to out is returned. - :SeeAlso: + *SeeAlso*: - - var : variance - - mean : average + var + Variance + mean + Average - Notes - ----- + *Notes* - The standard deviation is the square root of the average of the squared - deviations from the mean, i.e. var = sqrt(mean((x - x.mean())**2)). The - computed standard deviation is biased, i.e., the mean is computed by - dividing by the number of elements, N, rather than by N-1. + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e. var = sqrt(mean((x - x.mean())**2)). + The computed standard deviation is biased, i.e., the mean is computed + by dividing by the number of elements, N, rather than by N-1. """ try: @@ -885,12 +916,14 @@ def var(a, axis=None, dtype=None, out=None): """Compute the variance along the specified axis. - Returns the variance of the array elements, a measure of the spread of a - distribution. The variance is computed for the flattened array by default, - otherwise over the specified axis. + *Description* - :Parameters: + Returns the variance of the array elements, a measure of the spread of + a distribution. The variance is computed for the flattened array by + default, otherwise over the specified axis. + *Parameters*: + axis : integer Axis along which the variance is computed. The default is to compute the variance of the flattened array. @@ -905,24 +938,25 @@ the same shape as the expected output but the type will be cast if necessary. - :Returns: + *Returns*: variance : depends, see above A new array holding the result is returned unless out is specified, in which case a reference to out is returned. - :SeeAlso: + *SeeAlso*: - - std : standard deviation - - mean : average + std + Standard deviation + mean + Average - Notes - ----- + *Notes* - The variance is the average of the squared deviations from the mean, i.e. - var = mean((x - x.mean())**2). The computed variance is biased, i.e., - the mean is computed by dividing by the number of elements, N, rather - than by N-1. + The variance is the average of the squared deviations from the mean, + i.e. var = mean((x - x.mean())**2). The computed variance is biased, + i.e., the mean is computed by dividing by the number of elements, N, + rather than by N-1. 
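The biased estimator described in the notes above can be checked with a small doctest-style sketch; the data are arbitrary::

    >>> import numpy as N
    >>> x = N.array([1., 2., 3., 4.])
    >>> x.var()                               # divides by N, not N-1
    1.25
    >>> N.mean((x - x.mean())**2)             # same quantity, computed by hand
    1.25
    >>> N.allclose(x.std(), N.sqrt(x.var()))  # std is the square root of var
    True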
""" try: From numpy-svn at scipy.org Mon May 14 05:25:15 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 14 May 2007 04:25:15 -0500 (CDT) Subject: [Numpy-svn] r3759 - trunk/numpy/distutils/fcompiler Message-ID: <20070514092515.35D3839C089@new.scipy.org> Author: cookedm Date: 2007-05-14 04:25:11 -0500 (Mon, 14 May 2007) New Revision: 3759 Modified: trunk/numpy/distutils/fcompiler/gnu.py Log: With gfortran, compile modern Xeon's with EM64T with -march=nocona (#515) Modified: trunk/numpy/distutils/fcompiler/gnu.py =================================================================== --- trunk/numpy/distutils/fcompiler/gnu.py 2007-05-14 05:55:19 UTC (rev 3758) +++ trunk/numpy/distutils/fcompiler/gnu.py 2007-05-14 09:25:11 UTC (rev 3759) @@ -225,6 +225,8 @@ march_opt = '-march=nocona' elif cpu.is_Core2(): march_opt = '-march=nocona' + elif cpu.is_Xeon() and cpu.is_64bit(): + march_opt = '-march=nocona' elif cpu.is_Prescott(): march_opt = '-march=prescott' elif cpu.is_PentiumIV(): From numpy-svn at scipy.org Mon May 14 06:20:46 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 14 May 2007 05:20:46 -0500 (CDT) Subject: [Numpy-svn] r3760 - trunk/numpy/f2py/lib/parser Message-ID: <20070514102046.4604939C032@new.scipy.org> Author: pearu Date: 2007-05-14 05:20:41 -0500 (Mon, 14 May 2007) New Revision: 3760 Modified: trunk/numpy/f2py/lib/parser/doc.txt Log: Fix doc rest formatting. Modified: trunk/numpy/f2py/lib/parser/doc.txt =================================================================== --- trunk/numpy/f2py/lib/parser/doc.txt 2007-05-14 09:25:11 UTC (rev 3759) +++ trunk/numpy/f2py/lib/parser/doc.txt 2007-05-14 10:20:41 UTC (rev 3760) @@ -311,9 +311,9 @@ * .tostr() - return string representation of Fortran type declaration * .astypedecl() - pure type declaration instance, it has no .entity_decls - and .attrspec. + and .attrspec. * .analyze() - processes .entity_decls and .attsspec attributes and adds - Variable instance to .parent.a.variables dictionary. + Variable instance to .parent.a.variables dictionary. 
The following block statements are defined in block_statements.py: @@ -329,16 +329,16 @@ In summary, .a attribute may hold different information sets as follows: - BeginSource - .module, .external_subprogram, .blockdata - Module - .attributes, .implicit_rules, .use, .use_provides, .variables, - .type_decls, .module_subprogram, .module_data - PythonModule - .implicit_rules, .use, .use_provides - Program - .attributes, .implicit_rules, .use, .use_provides - BlockData - .implicit_rules, .use, .use_provides, .variables - Interface - .implicit_rules, .use, .use_provides, .module_procedures - Function, Subroutine - .implicit_rules, .attributes, .use, .use_statements, - .variables, .type_decls, .internal_subprogram - TypeDecl - .variables, .attributes + * BeginSource - .module, .external_subprogram, .blockdata + * Module - .attributes, .implicit_rules, .use, .use_provides, .variables, + .type_decls, .module_subprogram, .module_data + * PythonModule - .implicit_rules, .use, .use_provides + * Program - .attributes, .implicit_rules, .use, .use_provides + * BlockData - .implicit_rules, .use, .use_provides, .variables + * Interface - .implicit_rules, .use, .use_provides, .module_procedures + * Function, Subroutine - .implicit_rules, .attributes, .use, .use_statements, + .variables, .type_decls, .internal_subprogram + * TypeDecl - .variables, .attributes Block statements have the following methods: From numpy-svn at scipy.org Mon May 14 06:24:39 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 14 May 2007 05:24:39 -0500 (CDT) Subject: [Numpy-svn] r3761 - trunk/numpy/f2py/lib/parser Message-ID: <20070514102439.958D239C032@new.scipy.org> Author: pearu Date: 2007-05-14 05:24:35 -0500 (Mon, 14 May 2007) New Revision: 3761 Modified: trunk/numpy/f2py/lib/parser/doc.txt Log: Fix doc rest formatting - 2. Modified: trunk/numpy/f2py/lib/parser/doc.txt =================================================================== --- trunk/numpy/f2py/lib/parser/doc.txt 2007-05-14 10:20:41 UTC (rev 3760) +++ trunk/numpy/f2py/lib/parser/doc.txt 2007-05-14 10:24:35 UTC (rev 3761) @@ -1,3 +1,4 @@ +.. -*- rest -*- Created: September 2006 Author: Pearu Peterson @@ -92,17 +93,17 @@ * .line - contains Fortran code line * .span - a 2-tuple containing the span of line numbers containing - Fortran code in the original Fortran file + Fortran code in the original Fortran file * .label - the label of Fortran code line * .reader - the FortranReaderBase class instance * .strline - if not None then contains Fortran code line with parenthesis - content and string literal constants saved in .strlinemap dictionary. + content and string literal constants saved in .strlinemap dictionary. * .is_f2py_directive - True if line started with f2py directive comment. and the following methods: * .get_line() - returns .strline (also evalutes it if None). Also - handles Hollerith contstants in fixed F77 mode. + handles Hollerith contstants in fixed F77 mode. * .isempty() - returns True if Fortran line contains no code. * .copy(line=None, apply_map=False) - returns a Line instance with given .span, .label, .reader information but line content @@ -137,7 +138,7 @@ * .comment - comment string * .span - a 2-tuple containing the span of line numbers containing - Fortran comment in the original Fortran file + Fortran comment in the original Fortran file * .reader - the FortranReaderBase class instance and .isempty() method. 
@@ -152,7 +153,7 @@ * .block - a list of lines * .suffix - the content of * .span - a 2-tuple containing the span of line numbers containing - multiline syntax in the original Fortran file + multiline syntax in the original Fortran file * .reader - the FortranReaderBase class instance and .isempty() method. @@ -191,12 +192,12 @@ FortranReaderBase has the following attributes: * .source - a file-like object with .next() method to retrive - a source code line + a source code line * .source_lines - a list of read source lines * .reader - a FortranReaderBase instance for reading files - from INCLUDE statements. + from INCLUDE statements. * .include_dirs - a list of directories where INCLUDE files - are searched. Default is ['.']. + are searched. Default is ['.']. and the following methods: @@ -240,20 +241,20 @@ * .parent - it is either parent block-type statement or FortranParser instance. * .item - Line instance containing Fortran statement line information, see above. * .isvalid - when False then processing this Statement instance will be skipped, - for example, when the content of .item does not match with - the Statement class. + for example, when the content of .item does not match with + the Statement class. * .ignore - when True then the Statement instance will be ignored. * .modes - a list of Fortran format modes where the Statement instance is valid. and the following methods: * .info(message), .warning(message), .error(message) - to spit messages to - sys.stderr stream. + sys.stderr stream. * .get_variable(name) - get Variable instance by name that is defined in - current namespace. If name is not defined, then the corresponding - Variable instance is created. + current namespace. If name is not defined, then the corresponding + Variable instance is created. * .analyze() - calculate various information about the Statement, this information - is saved in .a attribute that is AttributeHolder instance. + is saved in .a attribute that is AttributeHolder instance. All statement classes are derived from Statement class. Block statements are derived from BeginStatement class and is assumed to end with EndStatement @@ -261,7 +262,7 @@ have the following attributes: * .name - name of the block, blocks without names use line label - as the name. + as the name. * .blocktype - type of the block (derived from class name) * .content - a list of Statement (or Line) instances. @@ -331,19 +332,19 @@ * BeginSource - .module, .external_subprogram, .blockdata * Module - .attributes, .implicit_rules, .use, .use_provides, .variables, - .type_decls, .module_subprogram, .module_data + .type_decls, .module_subprogram, .module_data * PythonModule - .implicit_rules, .use, .use_provides * Program - .attributes, .implicit_rules, .use, .use_provides * BlockData - .implicit_rules, .use, .use_provides, .variables * Interface - .implicit_rules, .use, .use_provides, .module_procedures * Function, Subroutine - .implicit_rules, .attributes, .use, .use_statements, - .variables, .type_decls, .internal_subprogram + .variables, .type_decls, .internal_subprogram * TypeDecl - .variables, .attributes Block statements have the following methods: * .get_classes() - returns a list of Statement classes that are valid - as a content of given block statement. + as a content of given block statement. 
The following one line statements are defined: From numpy-svn at scipy.org Mon May 14 06:35:09 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 14 May 2007 05:35:09 -0500 (CDT) Subject: [Numpy-svn] r3762 - in trunk/numpy/f2py: . lib/parser Message-ID: <20070514103509.5796139C032@new.scipy.org> Author: pearu Date: 2007-05-14 05:35:03 -0500 (Mon, 14 May 2007) New Revision: 3762 Modified: trunk/numpy/f2py/f2py2e.py trunk/numpy/f2py/lib/parser/doc.txt Log: Fix f2py command line doc. Modified: trunk/numpy/f2py/f2py2e.py =================================================================== --- trunk/numpy/f2py/f2py2e.py 2007-05-14 10:24:35 UTC (rev 3761) +++ trunk/numpy/f2py/f2py2e.py 2007-05-14 10:35:03 UTC (rev 3762) @@ -64,7 +64,7 @@ Options: - --3g-numpy Use numpy.f2py.lib tool, the 3rd generation of F2PY, + --g3-numpy Use numpy.f2py.lib tool, the 3rd generation of F2PY, with NumPy support. --2d-numpy Use numpy.f2py tool with NumPy support. [DEFAULT] --2d-numeric Use f2py2e tool with Numeric support. Modified: trunk/numpy/f2py/lib/parser/doc.txt =================================================================== --- trunk/numpy/f2py/lib/parser/doc.txt 2007-05-14 10:24:35 UTC (rev 3761) +++ trunk/numpy/f2py/lib/parser/doc.txt 2007-05-14 10:35:03 UTC (rev 3762) @@ -19,6 +19,7 @@ tree of Fortran input. For example, :: + >>> from api import parse >>> code = """ ... c comment @@ -74,6 +75,7 @@ For example, :: + >>> from readfortran import * >>> import os >>> reader = FortranFileReader(os.path.expanduser('~/src/blas/daxpy.f')) From numpy-svn at scipy.org Mon May 14 08:17:54 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 14 May 2007 07:17:54 -0500 (CDT) Subject: [Numpy-svn] r3763 - trunk/numpy/distutils/command Message-ID: <20070514121754.A30AA39C199@new.scipy.org> Author: pearu Date: 2007-05-14 07:17:49 -0500 (Mon, 14 May 2007) New Revision: 3763 Modified: trunk/numpy/distutils/command/build_clib.py Log: Workaround Python distutils bug sf 1718574. 
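The one-line workaround in the patch below relies on a distutils convention: a user_options name that ends in '=' takes a value on the command line, while a bare name is parsed as a boolean flag. A minimal sketch of that convention in an ordinary Command subclass (names and help text are placeholders, not part of numpy)::

    from distutils.core import Command

    class example_cmd(Command):
        description = "illustrative command"
        # 'build-temp=' expects a value (--build-temp=DIR); 'force' is a flag.
        user_options = [('build-temp=', 't', 'temporary build directory'),
                        ('force', 'f', 'force the build')]

        def initialize_options(self):
            self.build_temp = None
            self.force = 0

        def finalize_options(self):
            pass

        def run(self):
            pass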
Modified: trunk/numpy/distutils/command/build_clib.py =================================================================== --- trunk/numpy/distutils/command/build_clib.py 2007-05-14 10:35:03 UTC (rev 3762) +++ trunk/numpy/distutils/command/build_clib.py 2007-05-14 12:17:49 UTC (rev 3763) @@ -9,6 +9,13 @@ from numpy.distutils.misc_util import filter_sources, has_f_sources,\ has_cxx_sources, all_strings, get_lib_source_files, is_sequence +# Fix Python distutils bug sf #1718574: +_l = old_build_clib.user_options +for _i in range(len(_l)): + if _l[_i][0] in ['build-clib', 'build-temp']: + _l[_i] = (_l[_i][0]+'=',)+_l[_i][1:] +# + class build_clib(old_build_clib): description = "build C/C++/F libraries used by Python extensions" From numpy-svn at scipy.org Mon May 14 20:35:51 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 14 May 2007 19:35:51 -0500 (CDT) Subject: [Numpy-svn] r3764 - trunk/numpy/distutils/fcompiler Message-ID: <20070515003551.616C739C00F@new.scipy.org> Author: cookedm Date: 2007-05-14 19:35:49 -0500 (Mon, 14 May 2007) New Revision: 3764 Modified: trunk/numpy/distutils/fcompiler/intel.py Log: #520: don't add arch-specific flags when linking with Intel Fortran Modified: trunk/numpy/distutils/fcompiler/intel.py =================================================================== --- trunk/numpy/distutils/fcompiler/intel.py 2007-05-14 12:17:49 UTC (rev 3763) +++ trunk/numpy/distutils/fcompiler/intel.py 2007-05-15 00:35:49 UTC (rev 3764) @@ -76,7 +76,6 @@ v = self.get_version() if v and v >= '8.0': opt.append('-nofor_main') - opt.extend(self.get_flags_arch()) return opt class IntelItaniumFCompiler(IntelFCompiler): From numpy-svn at scipy.org Tue May 15 07:19:36 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 15 May 2007 06:19:36 -0500 (CDT) Subject: [Numpy-svn] r3765 - in trunk/numpy: core/include/numpy/fenv numarray Message-ID: <20070515111936.10EBB39C07E@new.scipy.org> Author: cookedm Date: 2007-05-15 06:19:28 -0500 (Tue, 15 May 2007) New Revision: 3765 Modified: trunk/numpy/core/include/numpy/fenv/fenv.c trunk/numpy/core/include/numpy/fenv/fenv.h trunk/numpy/numarray/_capi.c trunk/numpy/numarray/setup.py Log: #513: fix up include of fenv.c in numarray for cygwin Modified: trunk/numpy/core/include/numpy/fenv/fenv.c =================================================================== --- trunk/numpy/core/include/numpy/fenv/fenv.c 2007-05-15 00:35:49 UTC (rev 3764) +++ trunk/numpy/core/include/numpy/fenv/fenv.c 2007-05-15 11:19:28 UTC (rev 3765) @@ -29,7 +29,7 @@ #include #include "fenv.h" -const fenv_t __fe_dfl_env = { +const fenv_t npy__fe_dfl_env = { 0xffff0000, 0xffff0000, 0xffffffff, Modified: trunk/numpy/core/include/numpy/fenv/fenv.h =================================================================== --- trunk/numpy/core/include/numpy/fenv/fenv.h 2007-05-15 00:35:49 UTC (rev 3764) +++ trunk/numpy/core/include/numpy/fenv/fenv.h 2007-05-15 11:19:28 UTC (rev 3765) @@ -62,8 +62,8 @@ __BEGIN_DECLS /* Default floating-point environment */ -extern const fenv_t __fe_dfl_env; -#define FE_DFL_ENV (&__fe_dfl_env) +extern const fenv_t npy__fe_dfl_env; +#define FE_DFL_ENV (&npy__fe_dfl_env) #define __fldcw(__cw) __asm __volatile("fldcw %0" : : "m" (__cw)) #define __fldenv(__env) __asm __volatile("fldenv %0" : : "m" (__env)) Modified: trunk/numpy/numarray/_capi.c =================================================================== --- trunk/numpy/numarray/_capi.c 2007-05-15 00:35:49 UTC (rev 3764) +++ trunk/numpy/numarray/_capi.c 2007-05-15 11:19:28 
UTC (rev 3765) @@ -4,6 +4,13 @@ #include "numpy/libnumarray.h" #include +#if defined(__GLIBC__) || defined(__APPLE__) || defined(__MINGW32__) +#include +#elif defined(__CYGWIN__) +#include "numpy/fenv/fenv.h" +#include "numpy/fenv/fenv.c" +#endif + static PyObject *pCfuncClass; static PyTypeObject CfuncType; static PyObject *pHandleErrorFunc; @@ -225,11 +232,6 @@ /* Likewise for Integer overflows */ #if defined(__GLIBC__) || defined(__APPLE__) || defined(__CYGWIN__) || defined(__MINGW32__) -#if defined(__GLIBC__) || defined(__APPLE__) || defined(__MINGW32__) -#include -#elif defined(__CYGWIN__) -#include "numpy/fenv/fenv.c" -#endif static int int_overflow_error(Float64 value) { /* For x86_64 */ feraiseexcept(FE_OVERFLOW); return (int) value; @@ -2938,11 +2940,6 @@ } #elif defined(__GLIBC__) || defined(__APPLE__) || defined(__CYGWIN__) || defined(__MINGW32__) -#if defined(__GLIBC__) || defined(darwin) || defined(__MINGW32__) -#include -#elif defined(__CYGWIN__) -#include "numpy/fenv/fenv.h" -#endif static int NA_checkFPErrors(void) Modified: trunk/numpy/numarray/setup.py =================================================================== --- trunk/numpy/numarray/setup.py 2007-05-15 00:35:49 UTC (rev 3764) +++ trunk/numpy/numarray/setup.py 2007-05-15 11:19:28 UTC (rev 3765) @@ -6,9 +6,8 @@ config.add_data_files('numpy/') - # Configure fftpack_lite config.add_extension('_capi', - sources=['_capi.c'] + sources=['_capi.c'], ) return config From numpy-svn at scipy.org Tue May 15 08:33:29 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 15 May 2007 07:33:29 -0500 (CDT) Subject: [Numpy-svn] r3766 - trunk/numpy/testing Message-ID: <20070515123329.1453A39C050@new.scipy.org> Author: cookedm Date: 2007-05-15 07:33:27 -0500 (Tue, 15 May 2007) New Revision: 3766 Modified: trunk/numpy/testing/numpytest.py Log: Add stacklevel=2 to DeprecationWarning for ScipyTestCase Modified: trunk/numpy/testing/numpytest.py =================================================================== --- trunk/numpy/testing/numpytest.py 2007-05-15 11:19:28 UTC (rev 3765) +++ trunk/numpy/testing/numpytest.py 2007-05-15 12:33:27 UTC (rev 3766) @@ -195,7 +195,7 @@ class ScipyTestCase(NumpyTestCase): def __init__(self, package=None): warnings.warn("ScipyTestCase is now called NumpyTestCase; please update your code", - DeprecationWarning) + DeprecationWarning, stacklevel=2) NumpyTestCase.__init__(self, package) From numpy-svn at scipy.org Tue May 15 19:24:00 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 15 May 2007 18:24:00 -0500 (CDT) Subject: [Numpy-svn] r3767 - in trunk/numpy/core: . src Message-ID: <20070515232400.0672C39C02F@new.scipy.org> Author: oliphant Date: 2007-05-15 18:23:57 -0500 (Tue, 15 May 2007) New Revision: 3767 Modified: trunk/numpy/core/records.py trunk/numpy/core/src/multiarraymodule.c Log: Fix problem with records with object elements and add pretty-printing to record objects. Remove the global _multiarray_module_loaded. 
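A small sketch of the pretty-printing added to record objects in the patch below; the field names are placeholders, and the column alignment follows from padding each name to the longest field name::

    >>> import numpy as N
    >>> r = N.rec.fromrecords([(1, 2.5)], names='x,height')[0]
    >>> print r.pprint()
         x: 1
    height: 2.5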
Modified: trunk/numpy/core/records.py =================================================================== --- trunk/numpy/core/records.py 2007-05-15 12:33:27 UTC (rev 3766) +++ trunk/numpy/core/records.py 2007-05-15 23:23:57 UTC (rev 3767) @@ -133,11 +133,15 @@ if res: obj = self.getfield(*res[:2]) # if it has fields return a recarray, - # if it's a string return 'SU' return a chararray - # otherwise return a normal array - if obj.dtype.fields: + # if it's a string ('SU') return a chararray + # otherwise return the object + try: + dt = obj.dtype + except AttributeError: + return obj + if dt.fields: return obj.view(obj.__class__) - if obj.dtype.char in 'SU': + if dt.char in 'SU': return obj.view(chararray) return obj else: @@ -160,6 +164,16 @@ raise AttributeError, "'record' object has no "\ "attribute '%s'" % attr + def pprint(self): + # pretty-print all fields + names = self.dtype.names + maxlen = max([len(name) for name in names]) + rows = [] + fmt = '%% %ds: %%s' %maxlen + for name in names: + rows.append(fmt%(name, getattr(self, name))) + return "\n".join(rows) + # The recarray is almost identical to a standard array (which supports # named fields already) The biggest difference is that it can use # attribute-lookup to find the fields and it is constructed using Modified: trunk/numpy/core/src/multiarraymodule.c =================================================================== --- trunk/numpy/core/src/multiarraymodule.c 2007-05-15 12:33:27 UTC (rev 3766) +++ trunk/numpy/core/src/multiarraymodule.c 2007-05-15 23:23:57 UTC (rev 3767) @@ -26,9 +26,7 @@ static PyObject *typeDict=NULL; /* Must be explicitly loaded */ static PyObject *_numpy_internal=NULL; /* A Python module for callbacks */ -static int _multiarray_module_loaded=0; - static PyArray_Descr * _arraydescr_fromobj(PyObject *obj) { @@ -7501,8 +7499,6 @@ PyObject *m, *d, *s; PyObject *c_api; - if (_multiarray_module_loaded) return; - _multiarray_module_loaded = 1; /* Create the module and add the functions */ m = Py_InitModule("multiarray", array_module_methods); if (!m) goto err; From numpy-svn at scipy.org Wed May 16 03:10:44 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 16 May 2007 02:10:44 -0500 (CDT) Subject: [Numpy-svn] r3768 - trunk/numpy/core/src Message-ID: <20070516071044.5ABDB39C22F@new.scipy.org> Author: oliphant Date: 2007-05-16 02:10:37 -0500 (Wed, 16 May 2007) New Revision: 3768 Modified: trunk/numpy/core/src/arrayobject.c Log: Fixed a place where unicode itemsize was being counted twice. This led to array([u'abc'],'U') returning the wrong itemsize. 
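With the fix below, the itemsize of a unicode array follows the expected 4 bytes per character (NumPy stores unicode data as UCS4 internally), e.g.::

    >>> import numpy as N
    >>> N.array([u'abc'], 'U').dtype.itemsize   # 3 characters * 4 bytes
    12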
Modified: trunk/numpy/core/src/arrayobject.c =================================================================== --- trunk/numpy/core/src/arrayobject.c 2007-05-15 23:23:57 UTC (rev 3767) +++ trunk/numpy/core/src/arrayobject.c 2007-05-16 07:10:37 UTC (rev 3768) @@ -6859,10 +6859,7 @@ if ((nd == 0) || PyString_Check(s) || \ PyUnicode_Check(s) || PyBuffer_Check(s)) { - if PyUnicode_Check(s) - *itemsize = MAX(*itemsize, 4*n); - else - *itemsize = MAX(*itemsize, n); + *itemsize = MAX(*itemsize, n); return 0; } for (i=0; i Author: cookedm Date: 2007-05-16 10:27:12 -0500 (Wed, 16 May 2007) New Revision: 3769 Modified: branches/distutils-revamp/ branches/distutils-revamp/command/build_clib.py branches/distutils-revamp/command/build_src.py branches/distutils-revamp/fcompiler/gnu.py branches/distutils-revamp/fcompiler/intel.py branches/distutils-revamp/misc_util.py branches/distutils-revamp/setup.py branches/distutils-revamp/system_info.py branches/distutils-revamp/tests/f2py_ext/setup.py branches/distutils-revamp/tests/f2py_f90_ext/setup.py branches/distutils-revamp/tests/gen_ext/setup.py branches/distutils-revamp/tests/pyrex_ext/setup.py branches/distutils-revamp/tests/setup.py branches/distutils-revamp/tests/swig_ext/setup.py Log: Merged revisions 3732-3768 via svnmerge from http://svn.scipy.org/svn/numpy/trunk/numpy/distutils ........ r3740 | cookedm | 2007-05-10 13:26:20 -0400 (Thu, 10 May 2007) | 2 lines Use a try/finally instead of try/except Exception for cleanup in numpy/distutils/core.py ........ r3745 | pearu | 2007-05-11 08:50:42 -0400 (Fri, 11 May 2007) | 1 line Clean up setup() calls. ........ r3746 | pearu | 2007-05-11 08:58:31 -0400 (Fri, 11 May 2007) | 1 line Using meaningful NotFoundError exception for blas_opt and lapack_opt resources. ........ r3747 | pearu | 2007-05-11 09:37:31 -0400 (Fri, 11 May 2007) | 1 line Raise exception when pyrex is required. ........ r3759 | cookedm | 2007-05-14 05:25:11 -0400 (Mon, 14 May 2007) | 2 lines With gfortran, compile modern Xeon's with EM64T with -march=nocona (#515) ........ r3763 | pearu | 2007-05-14 08:17:49 -0400 (Mon, 14 May 2007) | 1 line Workaround Python distutils bug sf 1718574. ........ r3764 | cookedm | 2007-05-14 20:35:49 -0400 (Mon, 14 May 2007) | 2 lines #520: don't add arch-specific flags when linking with Intel Fortran ........ 
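Among the changes merged here, the distutils test setup.py files switch from expanding the configuration into keyword arguments to passing the configuration function itself. A minimal sketch of the resulting pattern, with an invented package and source name:

# Hypothetical setup.py following the pattern adopted in this merge.
def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('example_pkg', parent_package, top_path)
    config.add_extension('spam', sources=['spam.c'])   # invented names
    return config

if __name__ == "__main__":
    from numpy.distutils.core import setup
    # numpy.distutils.core.setup calls configuration() itself when it is
    # passed as a keyword, instead of the caller expanding it to a dict.
    setup(configuration=configuration)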
Property changes on: branches/distutils-revamp ___________________________________________________________________ Name: svnmerge-integrated - /branches/distutils-revamp:1-2756 /trunk/numpy/distutils:1-3731 + /branches/distutils-revamp:1-2756 /trunk/numpy/distutils:1-3768 Modified: branches/distutils-revamp/command/build_clib.py =================================================================== --- branches/distutils-revamp/command/build_clib.py 2007-05-16 07:10:37 UTC (rev 3768) +++ branches/distutils-revamp/command/build_clib.py 2007-05-16 15:27:12 UTC (rev 3769) @@ -14,6 +14,13 @@ except NameError: from sets import Set as set +# Fix Python distutils bug sf #1718574: +_l = old_build_clib.user_options +for _i in range(len(_l)): + if _l[_i][0] in ['build-clib', 'build-temp']: + _l[_i] = (_l[_i][0]+'=',)+_l[_i][1:] +# + class build_clib(old_build_clib): description = "build C/C++/F libraries used by Python extensions" Modified: branches/distutils-revamp/command/build_src.py =================================================================== --- branches/distutils-revamp/command/build_src.py 2007-05-16 07:10:37 UTC (rev 3768) +++ branches/distutils-revamp/command/build_src.py 2007-05-16 15:27:12 UTC (rev 3769) @@ -356,10 +356,14 @@ if pyrex_result.num_errors != 0: raise RuntimeError("%d errors in Pyrex compile" % pyrex_result.num_errors) - else: + elif os.path.isfile(target_file): log.warn("Pyrex needed to compile %s but not available."\ " Using old target %s"\ % (source, target_file)) + else: + raise SystemError,"Non-existing target %r. "\ + "Perhaps you need to install Pyrex."\ + % (target_file) new_sources.append(target_file) else: new_sources.append(source) Modified: branches/distutils-revamp/fcompiler/gnu.py =================================================================== --- branches/distutils-revamp/fcompiler/gnu.py 2007-05-16 07:10:37 UTC (rev 3768) +++ branches/distutils-revamp/fcompiler/gnu.py 2007-05-16 15:27:12 UTC (rev 3769) @@ -218,6 +218,8 @@ march_opt = '-march=nocona' elif cpu.is_Core2(): march_opt = '-march=nocona' + elif cpu.is_Xeon() and cpu.is_64bit(): + march_opt = '-march=nocona' elif cpu.is_Prescott(): march_opt = '-march=prescott' elif cpu.is_PentiumIV(): Modified: branches/distutils-revamp/fcompiler/intel.py =================================================================== --- branches/distutils-revamp/fcompiler/intel.py 2007-05-16 07:10:37 UTC (rev 3768) +++ branches/distutils-revamp/fcompiler/intel.py 2007-05-16 15:27:12 UTC (rev 3769) @@ -79,7 +79,6 @@ v = self.get_version() if v and v >= '8.0': opt.append('-nofor_main') - opt.extend(self.get_flags_arch()) return opt class IntelItaniumFCompiler(IntelFCompiler): Modified: branches/distutils-revamp/misc_util.py =================================================================== --- branches/distutils-revamp/misc_util.py 2007-05-16 07:10:37 UTC (rev 3768) +++ branches/distutils-revamp/misc_util.py 2007-05-16 15:27:12 UTC (rev 3769) @@ -530,9 +530,10 @@ caller_frame = get_frame(caller_level) caller_name = eval('__name__',caller_frame.f_globals,caller_frame.f_locals) self.local_path = get_path(caller_name, top_path) + # local_path -- directory of a file (usually setup.py) that + # defines a configuration() function. if top_path is None: top_path = self.local_path - self.local_path = '.' 
if package_path is None: package_path = self.local_path elif os.path.isdir(njoin(self.local_path,package_path)): Modified: branches/distutils-revamp/setup.py =================================================================== --- branches/distutils-revamp/setup.py 2007-05-16 07:10:37 UTC (rev 3768) +++ branches/distutils-revamp/setup.py 2007-05-16 15:27:12 UTC (rev 3769) @@ -12,4 +12,4 @@ if __name__ == '__main__': from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: branches/distutils-revamp/system_info.py =================================================================== --- branches/distutils-revamp/system_info.py 2007-05-16 07:10:37 UTC (rev 3768) +++ branches/distutils-revamp/system_info.py 2007-05-16 15:27:12 UTC (rev 3769) @@ -1158,6 +1158,8 @@ class lapack_opt_info(system_info): + notfounderror = LapackNotFoundError + def calc_info(self): if sys.platform=='darwin' and not os.environ.get('ATLAS',None): @@ -1253,6 +1255,8 @@ class blas_opt_info(system_info): + notfounderror = BlasNotFoundError + def calc_info(self): if sys.platform=='darwin' and not os.environ.get('ATLAS',None): Modified: branches/distutils-revamp/tests/f2py_ext/setup.py =================================================================== --- branches/distutils-revamp/tests/f2py_ext/setup.py 2007-05-16 07:10:37 UTC (rev 3768) +++ branches/distutils-revamp/tests/f2py_ext/setup.py 2007-05-16 15:27:12 UTC (rev 3769) @@ -8,4 +8,4 @@ if __name__ == "__main__": from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: branches/distutils-revamp/tests/f2py_f90_ext/setup.py =================================================================== --- branches/distutils-revamp/tests/f2py_f90_ext/setup.py 2007-05-16 07:10:37 UTC (rev 3768) +++ branches/distutils-revamp/tests/f2py_f90_ext/setup.py 2007-05-16 15:27:12 UTC (rev 3769) @@ -13,4 +13,4 @@ if __name__ == "__main__": from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: branches/distutils-revamp/tests/gen_ext/setup.py =================================================================== --- branches/distutils-revamp/tests/gen_ext/setup.py 2007-05-16 07:10:37 UTC (rev 3768) +++ branches/distutils-revamp/tests/gen_ext/setup.py 2007-05-16 15:27:12 UTC (rev 3769) @@ -44,4 +44,4 @@ if __name__ == "__main__": from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: branches/distutils-revamp/tests/pyrex_ext/setup.py =================================================================== --- branches/distutils-revamp/tests/pyrex_ext/setup.py 2007-05-16 07:10:37 UTC (rev 3768) +++ branches/distutils-revamp/tests/pyrex_ext/setup.py 2007-05-16 15:27:12 UTC (rev 3769) @@ -9,4 +9,4 @@ if __name__ == "__main__": from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: branches/distutils-revamp/tests/setup.py =================================================================== --- branches/distutils-revamp/tests/setup.py 2007-05-16 07:10:37 UTC (rev 3768) +++ branches/distutils-revamp/tests/setup.py 2007-05-16 15:27:12 UTC (rev 3769) @@ -11,4 +11,4 @@ if __name__ == "__main__": from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) Modified: 
branches/distutils-revamp/tests/swig_ext/setup.py =================================================================== --- branches/distutils-revamp/tests/swig_ext/setup.py 2007-05-16 07:10:37 UTC (rev 3768) +++ branches/distutils-revamp/tests/swig_ext/setup.py 2007-05-16 15:27:12 UTC (rev 3769) @@ -15,4 +15,4 @@ if __name__ == "__main__": from numpy.distutils.core import setup - setup(**configuration(top_path='').todict()) + setup(configuration=configuration) From numpy-svn at scipy.org Thu May 17 03:23:27 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 17 May 2007 02:23:27 -0500 (CDT) Subject: [Numpy-svn] r3770 - trunk/numpy/core/src Message-ID: <20070517072327.46B7039C237@new.scipy.org> Author: oliphant Date: 2007-05-17 02:23:17 -0500 (Thu, 17 May 2007) New Revision: 3770 Modified: trunk/numpy/core/src/multiarraymodule.c Log: Perhaps fix the problem with multiarray_module_loaded. Modified: trunk/numpy/core/src/multiarraymodule.c =================================================================== --- trunk/numpy/core/src/multiarraymodule.c 2007-05-16 15:27:12 UTC (rev 3769) +++ trunk/numpy/core/src/multiarraymodule.c 2007-05-17 07:23:17 UTC (rev 3770) @@ -7584,9 +7584,10 @@ if (set_typeinfo(d) != 0) goto err; - _numpy_internal = \ - PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal != NULL) return; + if (_numpy_internal == NULL) { + _numpy_internal = PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal != NULL) return; + } err: if (!PyErr_Occurred()) { From numpy-svn at scipy.org Thu May 17 06:13:42 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 17 May 2007 05:13:42 -0500 (CDT) Subject: [Numpy-svn] r3771 - trunk/numpy/core/src Message-ID: <20070517101342.028E539C092@new.scipy.org> Author: oliphant Date: 2007-05-17 05:13:39 -0500 (Thu, 17 May 2007) New Revision: 3771 Modified: trunk/numpy/core/src/scalarmathmodule.c.src Log: Propagate changes made to umathmodule.c to fix the problem with division and remainder not being consistent for negative numbers. Modified: trunk/numpy/core/src/scalarmathmodule.c.src =================================================================== --- trunk/numpy/core/src/scalarmathmodule.c.src 2007-05-17 07:23:17 UTC (rev 3770) +++ trunk/numpy/core/src/scalarmathmodule.c.src 2007-05-17 10:13:39 UTC (rev 3771) @@ -218,7 +218,14 @@ } #endif else { +#if @neg@ + @name@ tmp; + tmp = a / b; + if (((a > 0) != (b > 0)) && (a % b != 0)) tmp--; + *out = tmp; +#else *out = a / b; +#endif } } #define @name at _ctype_floor_divide @name at _ctype_divide From numpy-svn at scipy.org Thu May 17 06:37:12 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 17 May 2007 05:37:12 -0500 (CDT) Subject: [Numpy-svn] r3772 - in trunk/numpy: core lib Message-ID: <20070517103712.B9AFC39C08F@new.scipy.org> Author: oliphant Date: 2007-05-17 05:37:08 -0500 (Thu, 17 May 2007) New Revision: 3772 Modified: trunk/numpy/core/numeric.py trunk/numpy/lib/ufunclike.py Log: Fix some bugs with isposinf and isneginf as well as with how allclose dealt with infinities. 
See ticket #519 Modified: trunk/numpy/core/numeric.py =================================================================== --- trunk/numpy/core/numeric.py 2007-05-17 10:13:39 UTC (rev 3771) +++ trunk/numpy/core/numeric.py 2007-05-17 10:37:08 UTC (rev 3772) @@ -835,8 +835,15 @@ """ x = array(a, copy=False) y = array(b, copy=False) - d = less_equal(absolute(x-y), atol + rtol * absolute(y)) - return d.ravel().all() + d1 = less_equal(absolute(x-y), atol + rtol * absolute(y)) + xinf = isinf(x) + yinf = isinf(y) + xneg = signbit(x) + yneg = signbit(y) + d2 = (xinf == yinf) + d3 = (xneg == yneg) + d4 = logical_not(d2) + return (d1.all() and not d4.any()) or (d2.all() and d3.all()) def array_equal(a1, a2): Modified: trunk/numpy/lib/ufunclike.py =================================================================== --- trunk/numpy/lib/ufunclike.py 2007-05-17 10:13:39 UTC (rev 3771) +++ trunk/numpy/lib/ufunclike.py 2007-05-17 10:37:08 UTC (rev 3772) @@ -29,6 +29,7 @@ If y is an array, the result replaces the contents of y. """ if y is None: + x = asarray(x) y = empty(x.shape, dtype=nx.bool_) umath.logical_and(isinf(x), ~signbit(x), y) return y @@ -39,6 +40,7 @@ If y is an array, the result replaces the contents of y. """ if y is None: + x = asarray(x) y = empty(x.shape, dtype=nx.bool_) umath.logical_and(isinf(x), signbit(x), y) return y From numpy-svn at scipy.org Thu May 17 07:55:14 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 17 May 2007 06:55:14 -0500 (CDT) Subject: [Numpy-svn] r3773 - in trunk/numpy/core: . src Message-ID: <20070517115514.DB5D739C00A@new.scipy.org> Author: oliphant Date: 2007-05-17 06:55:11 -0500 (Thu, 17 May 2007) New Revision: 3773 Modified: trunk/numpy/core/numeric.py trunk/numpy/core/src/arrayobject.c Log: Fix ticekt #511 and start to handle allclose problems. 
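The r3772/r3773 logs above concern floating-point comparison and mixed-type array construction. The snippet below is an illustrative sketch of the intended behaviour (run against a current NumPy, not code from the patches):

import numpy as np

# allclose: infinities only match infinities of the same sign and are
# never "close" to large finite values; finite entries still use rtol/atol.
print(np.allclose([np.inf, 1.0], [np.inf, 1.0 + 1e-9]))   # True
print(np.allclose([np.inf], [-np.inf]))                    # False
print(np.allclose([np.inf], [1e300]))                      # False

# isposinf/isneginf accept plain sequences once the input goes through
# asarray() (the ufunclike.py hunk in r3772).
print(np.isposinf([np.inf, -np.inf, 0.0]))                 # [ True False False]
print(np.isneginf([np.inf, -np.inf, 0.0]))                 # [False  True False]

# Common-type discovery: when an object-dtype element meets a string the
# result is an object array rather than a truncated string array.
print(np.array(['abc', {'key': 1}]).dtype)                 # object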
Modified: trunk/numpy/core/numeric.py =================================================================== --- trunk/numpy/core/numeric.py 2007-05-17 10:37:08 UTC (rev 3772) +++ trunk/numpy/core/numeric.py 2007-05-17 11:55:11 UTC (rev 3773) @@ -838,14 +838,18 @@ d1 = less_equal(absolute(x-y), atol + rtol * absolute(y)) xinf = isinf(x) yinf = isinf(y) - xneg = signbit(x) - yneg = signbit(y) - d2 = (xinf == yinf) - d3 = (xneg == yneg) - d4 = logical_not(d2) - return (d1.all() and not d4.any()) or (d2.all() and d3.all()) + if (not xinf.any() and not yinf.any()): + return d1.all() + d2 = (xinf != yinf) + d3 = (x[xinf] == y[yinf]) + d4 = (~xinf & ~yinf) + if d3.size == 0: + return False + if d3.all(): + return d1[d4].all() + else: + return False - def array_equal(a1, a2): try: a1, a2 = asarray(a1), asarray(a2) Modified: trunk/numpy/core/src/arrayobject.c =================================================================== --- trunk/numpy/core/src/arrayobject.c 2007-05-17 10:37:08 UTC (rev 3772) +++ trunk/numpy/core/src/arrayobject.c 2007-05-17 11:55:11 UTC (rev 3773) @@ -6918,10 +6918,18 @@ return mintype; } + if (chktype->type_num > mintype->type_num) outtype_num = chktype->type_num; - else - outtype_num = mintype->type_num; + else { + if (PyDataType_ISOBJECT(chktype) && \ + PyDataType_ISSTRING(mintype)) { + return PyArray_DescrFromType(NPY_OBJECT); + } + else { + outtype_num = mintype->type_num; + } + } save_num = outtype_num; while(outtype_num < PyArray_NTYPES && From numpy-svn at scipy.org Fri May 18 08:56:47 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 18 May 2007 07:56:47 -0500 (CDT) Subject: [Numpy-svn] r3774 - trunk/numpy/core/src Message-ID: <20070518125647.DADDBC7C04E@new.scipy.org> Author: cookedm Date: 2007-05-18 07:56:45 -0500 (Fri, 18 May 2007) New Revision: 3774 Modified: trunk/numpy/core/src/multiarraymodule.c Log: fix typo: iteratable -> iterator Modified: trunk/numpy/core/src/multiarraymodule.c =================================================================== --- trunk/numpy/core/src/multiarraymodule.c 2007-05-17 11:55:11 UTC (rev 3773) +++ trunk/numpy/core/src/multiarraymodule.c 2007-05-18 12:56:45 UTC (rev 3774) @@ -6356,7 +6356,7 @@ } if (i < count) { - PyErr_SetString(PyExc_ValueError, "iteratable too short"); + PyErr_SetString(PyExc_ValueError, "iterator too short"); goto done; } From numpy-svn at scipy.org Fri May 18 10:32:37 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 18 May 2007 09:32:37 -0500 (CDT) Subject: [Numpy-svn] r3775 - trunk/numpy/distutils/command Message-ID: <20070518143237.0A2DCC7C05E@new.scipy.org> Author: pearu Date: 2007-05-18 09:32:33 -0500 (Fri, 18 May 2007) New Revision: 3775 Modified: trunk/numpy/distutils/command/build_src.py Log: build_src: introduced --swig and other related options (as in std distutils build_ext command), use --f2py-opts instead of --f2pyflags, improved error messages. 
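For context on the r3775 options (a hypothetical project layout, not taken from the archive): an extension generated from a SWIG interface file is processed by the build_src command, and the new flags steer that step from the command line, for example 'python setup.py build_src --swig=/usr/local/bin/swig --swig-opts=-modern build_ext --inplace'; --f2py-opts plays the same role for .pyf sources, and the old --swigflags/--f2pyflags spellings remain accepted but are superseded.

# Sketch of a setup.py with a SWIG-generated extension; spam.i/_spam are
# invented names.  build_src turns spam.i into spam_wrap.c (or .cpp when
# --swig-cpp is given) before build_ext compiles and links _spam.
def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('swig_example', parent_package, top_path)
    config.add_extension('_spam', sources=['spam.i'])
    return config

if __name__ == "__main__":
    from numpy.distutils.core import setup
    setup(configuration=configuration)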
Modified: trunk/numpy/distutils/command/build_src.py =================================================================== --- trunk/numpy/distutils/command/build_src.py 2007-05-18 12:56:45 UTC (rev 3774) +++ trunk/numpy/distutils/command/build_src.py 2007-05-18 14:32:33 UTC (rev 3775) @@ -8,12 +8,14 @@ from distutils.command import build_ext from distutils.dep_util import newer_group, newer from distutils.util import get_platform +from distutils.errors import DistutilsError, DistutilsSetupError from numpy.distutils import log from numpy.distutils.misc_util import fortran_ext_match, \ appendpath, is_string, is_sequence from numpy.distutils.from_template import process_file as process_f_file from numpy.distutils.conv_template import process_file as process_c_file +from numpy.distutils.exec_command import splitcmdline class build_src(build_ext.build_ext): @@ -21,8 +23,12 @@ user_options = [ ('build-src=', 'd', "directory to \"build\" sources to"), - ('f2pyflags=', None, "additonal flags to f2py"), - ('swigflags=', None, "additional flags to swig"), + ('f2py-opts=', None, "list of f2py command line options"), + ('swig=', None, "path to the SWIG executable"), + ('swig-opts=', None, "list of SWIG command line options"), + ('swig-cpp', None, "make SWIG create C++ files (default is autodetected from sources)"), + ('f2pyflags=', None, "additional flags to f2py (use --f2py-opts= instead)"), # obsolete + ('swigflags=', None, "additional flags to swig (use --swig-opts= instead)"), # obsolete ('force', 'f', "forcibly build everything (ignore file timestamps)"), ('inplace', 'i', "ignore build-lib and put compiled extensions into the source " + @@ -44,8 +50,12 @@ self.force = None self.inplace = None self.package_dir = None - self.f2pyflags = None - self.swigflags = None + self.f2pyflags = None # obsolete + self.f2py_opts = None + self.swigflags = None # obsolete + self.swig_opts = None + self.swig_cpp = None + self.swig = None return def finalize_options(self): @@ -63,22 +73,48 @@ if self.build_src is None: plat_specifier = ".%s-%s" % (get_platform(), sys.version[0:3]) self.build_src = os.path.join(self.build_base, 'src'+plat_specifier) - if self.inplace is None: - build_ext = self.get_finalized_command('build_ext') - self.inplace = build_ext.inplace # py_modules_dict is used in build_py.find_package_modules self.py_modules_dict = {} - if self.f2pyflags is None: - self.f2pyflags = [] + if self.f2pyflags: + if self.f2py_opts: + log.warn('ignoring --f2pyflags as --f2py-opts already used') + else: + self.f2py_opts = self.f2pyflags + self.f2pyflags = None + if self.f2py_opts is None: + self.f2py_opts = [] else: - self.f2pyflags = self.f2pyflags.split() # XXX spaces?? + self.f2py_opts = splitcmdline(self.f2py_opts) - if self.swigflags is None: - self.swigflags = [] + if self.swigflags: + if self.swig_opts: + log.warn('ignoring --swigflags as --swig-opts already used') + else: + self.swig_opts = self.swigflags + self.swigflags = None + + if self.swig_opts is None: + self.swig_opts = [] else: - self.swigflags = self.swigflags.split() # XXX spaces?? 
+ self.swig_opts = splitcmdline(self.swig_opts) + + # use options from build_ext command + build_ext = self.get_finalized_command('build_ext') + if self.inplace is None: + self.inplace = build_ext.inplace + if self.swig_cpp is None: + self.swig_cpp = build_ext.swig_cpp + for c in ['swig','swig_opt']: + o = '--'+c.replace('_','-') + v = getattr(build_ext,c,None) + if v: + if getattr(self,c): + log.warn('both build_src and build_ext define %s option' % (o)) + else: + log.info('using "%s=%s" option from build_ext command' % (o,v)) + setattr(self, c, v) return def run(self): @@ -141,7 +177,7 @@ filenames = get_data_files((d,files)) new_data_files.append((d, filenames)) else: - raise + raise TypeError(repr(data)) self.data_files[:] = new_data_files return @@ -361,16 +397,14 @@ output_file=target_file) pyrex_result = Main.compile(source, options=options) if pyrex_result.num_errors != 0: - raise RuntimeError("%d errors in Pyrex compile" % - pyrex_result.num_errors) + raise DistutilsError,"%d errors while compiling %r with Pyrex" \ + % (pyrex_result.num_errors, source) elif os.path.isfile(target_file): - log.warn("Pyrex needed to compile %s but not available."\ - " Using old target %s"\ + log.warn("Pyrex required for compiling %r but not available,"\ + " using old target %r"\ % (source, target_file)) else: - raise SystemError,"Non-existing target %r. "\ - "Perhaps you need to install Pyrex."\ - % (target_file) + raise DistutilsError,"Pyrex required for compiling %r but not available" % (source) new_sources.append(target_file) else: new_sources.append(source) @@ -395,9 +429,9 @@ if os.path.isfile(source): name = get_f2py_modulename(source) if name != ext_name: - raise ValueError('mismatch of extension names: %s ' - 'provides %r but expected %r' % ( - source, name, ext_name)) + raise DistutilsSetupError('mismatch of extension names: %s ' + 'provides %r but expected %r' % ( + source, name, ext_name)) target_file = os.path.join(target_dir,name+'module.c') else: log.debug(' source %s does not exist: skipping f2py\'ing.' \ @@ -406,16 +440,16 @@ skip_f2py = 1 target_file = os.path.join(target_dir,name+'module.c') if not os.path.isfile(target_file): - log.debug(' target %s does not exist:\n '\ - 'Assuming %smodule.c was generated with '\ - '"build_src --inplace" command.' \ - % (target_file, name)) + log.warn(' target %s does not exist:\n '\ + 'Assuming %smodule.c was generated with '\ + '"build_src --inplace" command.' \ + % (target_file, name)) target_dir = os.path.dirname(base) target_file = os.path.join(target_dir,name+'module.c') if not os.path.isfile(target_file): - raise ValueError("%r missing" % (target_file,)) - log.debug(' Yes! Using %s as up-to-date target.' \ - % (target_file)) + raise DistutilsSetupError("%r missing" % (target_file,)) + log.info(' Yes! Using %r as up-to-date target.' 
\ + % (target_file)) target_dirs.append(target_dir) f2py_sources.append(source) f2py_targets[source] = target_file @@ -430,7 +464,7 @@ map(self.mkpath, target_dirs) - f2py_options = extension.f2py_options + self.f2pyflags + f2py_options = extension.f2py_options + self.f2py_opts if self.distribution.libraries: for name,build_info in self.distribution.libraries: @@ -441,7 +475,7 @@ if f2py_sources: if len(f2py_sources) != 1: - raise ValueError( + raise DistutilsSetupError( 'only one .pyf file is allowed per extension module but got'\ ' more: %r' % (f2py_sources,)) source = f2py_sources[0] @@ -478,7 +512,7 @@ % (target_file)) if not os.path.isfile(target_file): - raise ValueError("%r missing" % (target_file,)) + raise DistutilsError("f2py target file %r not generated" % (target_file,)) target_c = os.path.join(self.build_src,'fortranobject.c') target_h = os.path.join(self.build_src,'fortranobject.h') @@ -500,9 +534,9 @@ self.copy_file(source_h,target_h) else: if not os.path.isfile(target_c): - raise ValueError("%r missing" % (target_c,)) + raise DistutilsSetupError("f2py target_c file %r not found" % (target_c,)) if not os.path.isfile(target_h): - raise ValueError("%r missing" % (target_h,)) + raise DistutilsSetupError("f2py target_h file %r not found" % (target_h,)) for name_ext in ['-f2pywrappers.f','-f2pywrappers2.f90']: filename = os.path.join(target_dir,ext_name + name_ext) @@ -522,8 +556,12 @@ target_dirs = [] py_files = [] # swig generated .py files target_ext = '.c' - typ = None - is_cpp = 0 + if self.swig_cpp: + typ = 'c++' + is_cpp = True + else: + typ = None + is_cpp = False skip_swig = 0 ext_name = extension.name.split('.')[-1] @@ -539,35 +577,43 @@ if os.path.isfile(source): name = get_swig_modulename(source) if name != ext_name[1:]: - raise ValueError( + raise DistutilsSetupError( 'mismatch of extension names: %s provides %r' ' but expected %r' % (source, name, ext_name[1:])) if typ is None: typ = get_swig_target(source) is_cpp = typ=='c++' - if is_cpp: - target_ext = '.cpp' + if is_cpp: target_ext = '.cpp' else: - assert typ == get_swig_target(source), repr(typ) + typ2 = get_swig_target(source) + if typ!=typ2: + log.warn('expected %r but source %r defines %r swig target' \ + % (typ, source, typ2)) + if typ2=='c++': + log.warn('resetting swig target to c++ (some targets may have .c extension)') + is_cpp = True + target_ext = '.cpp' + else: + log.warn('assuming that %r has c++ swig target' % (source)) target_file = os.path.join(target_dir,'%s_wrap%s' \ % (name, target_ext)) else: - log.debug(' source %s does not exist: skipping swig\'ing.' \ + log.warn(' source %s does not exist: skipping swig\'ing.' \ % (source)) name = ext_name[1:] skip_swig = 1 target_file = _find_swig_target(target_dir, name) if not os.path.isfile(target_file): - log.debug(' target %s does not exist:\n '\ - 'Assuming %s_wrap.{c,cpp} was generated with '\ - '"build_src --inplace" command.' \ + log.warn(' target %s does not exist:\n '\ + 'Assuming %s_wrap.{c,cpp} was generated with '\ + '"build_src --inplace" command.' \ % (target_file, name)) target_dir = os.path.dirname(base) target_file = _find_swig_target(target_dir, name) if not os.path.isfile(target_file): - raise ValueError("%r missing" % (target_file,)) - log.debug(' Yes! Using %s as up-to-date target.' \ - % (target_file)) + raise DistutilsSetupError("%r missing" % (target_file,)) + log.warn(' Yes! Using %r as up-to-date target.' 
\ + % (target_file)) target_dirs.append(target_dir) new_sources.append(target_file) py_files.append(os.path.join(py_target_dir, name+'.py')) @@ -583,7 +629,7 @@ return new_sources + py_files map(self.mkpath, target_dirs) - swig = self.find_swig() + swig = self.swig or self.find_swig() swig_cmd = [swig, "-python"] if is_cpp: swig_cmd.append('-c++') @@ -595,7 +641,7 @@ if self.force or newer_group(depends, target, 'newer'): log.info("%s: %s" % (os.path.basename(swig) \ + (is_cpp and '++' or ''), source)) - self.spawn(swig_cmd + self.swigflags \ + self.spawn(swig_cmd + self.swig_opts \ + ["-o", target, '-outdir', py_target_dir, source]) else: log.debug(" skipping '%s' swig interface (up-to-date)" \ From numpy-svn at scipy.org Fri May 18 12:41:50 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 18 May 2007 11:41:50 -0500 (CDT) Subject: [Numpy-svn] r3776 - trunk/numpy/distutils/command Message-ID: <20070518164150.BAB6D39C0C0@new.scipy.org> Author: pearu Date: 2007-05-18 11:41:44 -0500 (Fri, 18 May 2007) New Revision: 3776 Modified: trunk/numpy/distutils/command/build_clib.py trunk/numpy/distutils/command/build_ext.py Log: Extension modules and libraries are built with suitable compilers/linkers. Improved failure handling. Modified: trunk/numpy/distutils/command/build_clib.py =================================================================== --- trunk/numpy/distutils/command/build_clib.py 2007-05-18 14:32:33 UTC (rev 3775) +++ trunk/numpy/distutils/command/build_clib.py 2007-05-18 16:41:44 UTC (rev 3776) @@ -1,8 +1,9 @@ """ Modified version of build_clib that handles fortran source files. """ +import os from distutils.command.build_clib import build_clib as old_build_clib -from distutils.errors import DistutilsSetupError +from distutils.errors import DistutilsSetupError, DistutilsError from numpy.distutils import log from distutils.dep_util import newer_group @@ -89,6 +90,7 @@ self.fcompiler.show_customization() self.build_libraries(self.libraries) + return def get_source_files(self): self.check_library_list(self.libraries) @@ -111,9 +113,21 @@ "a list of source filenames") % lib_name sources = list(sources) + c_sources, cxx_sources, f_sources, fmodule_sources \ + = filter_sources(sources) + requiref90 = not not fmodule_sources or \ + build_info.get('language','c')=='f90' + + # save source type information so that build_ext can use it. 
+ source_languages = [] + if c_sources: source_languages.append('c') + if cxx_sources: source_languages.append('c++') + if requiref90: source_languages.append('f90') + elif f_sources: source_languages.append('f77') + build_info['source_languages'] = source_languages + lib_file = compiler.library_filename(lib_name, output_dir=self.build_clib) - depends = sources + build_info.get('depends',[]) if not (self.force or newer_group(depends, lib_file, 'newer')): log.debug("skipping '%s' library (up-to-date)", lib_name) @@ -121,15 +135,13 @@ else: log.info("building '%s' library", lib_name) - config_fc = build_info.get('config_fc',{}) if fcompiler is not None and config_fc: log.info('using additional config_fc from setup script '\ 'for fortran compiler: %s' \ % (config_fc,)) from numpy.distutils.fcompiler import new_fcompiler - requiref90 = build_info.get('language','c')=='f90' - fcompiler = new_fcompiler(compiler=self.fcompiler.compiler_type, + fcompiler = new_fcompiler(compiler=fcompiler.compiler_type, verbose=self.verbose, dry_run=self.dry_run, force=self.force, @@ -139,23 +151,26 @@ base_config_fc.update(config_fc) fcompiler.customize(base_config_fc) + # check availability of Fortran compilers + if (f_sources or fmodule_sources) and fcompiler is None: + raise DistutilsError, "library %s has Fortran sources"\ + " but no Fortran compiler found" % (lib_name) + macros = build_info.get('macros') include_dirs = build_info.get('include_dirs') extra_postargs = build_info.get('extra_compiler_args') or [] - c_sources, cxx_sources, f_sources, fmodule_sources \ - = filter_sources(sources) - - if self.compiler.compiler_type=='msvc': + # where compiled F90 module files are: + module_dirs = build_info.get('module_dirs') or [] + module_build_dir = os.path.dirname(lib_file) + if requiref90: self.mkpath(module_build_dir) + + if compiler.compiler_type=='msvc': # this hack works around the msvc compiler attributes # problem, msvc uses its own convention :( c_sources += cxx_sources cxx_sources = [] - if fmodule_sources: - print 'XXX: Fortran 90 module support not implemented or tested' - f_sources.extend(fmodule_sources) - objects = [] if c_sources: log.info("compiling C sources") @@ -176,24 +191,67 @@ debug=self.debug, extra_postargs=extra_postargs) objects.extend(cxx_objects) + + if f_sources or fmodule_sources: + extra_postargs = [] + f_objects = [] - if f_sources: - log.info("compiling Fortran sources") - f_objects = fcompiler.compile(f_sources, - output_dir=self.build_temp, - macros=macros, - include_dirs=include_dirs, - debug=self.debug, - extra_postargs=[]) - objects.extend(f_objects) + if requiref90: + if fcompiler.module_dir_switch is None: + existing_modules = glob('*.mod') + extra_postargs += fcompiler.module_options(\ + module_dirs,module_build_dir) - self.compiler.create_static_lib(objects, lib_name, - output_dir=self.build_clib, - debug=self.debug) + if fmodule_sources: + log.info("compiling Fortran 90 module sources") + f_objects += fcompiler.compile(fmodule_sources, + output_dir=self.build_temp, + macros=macros, + include_dirs=include_dirs, + debug=self.debug, + extra_postargs=extra_postargs) + if requiref90 and self.fcompiler.module_dir_switch is None: + # move new compiled F90 module files to module_build_dir + for f in glob('*.mod'): + if f in existing_modules: + continue + t = os.path.join(module_build_dir, f) + if os.path.abspath(f)==os.path.abspath(t): + continue + if os.path.isfile(t): + os.remove(t) + try: + self.move_file(f, module_build_dir) + except DistutilsFileError: + log.warn('failed 
to move %r to %r' \ + % (f, module_build_dir)) + + if f_sources: + log.info("compiling Fortran sources") + f_objects += fcompiler.compile(f_sources, + output_dir=self.build_temp, + macros=macros, + include_dirs=include_dirs, + debug=self.debug, + extra_postargs=extra_postargs) + else: + f_objects = [] + + objects.extend(f_objects) + + # assume that default linker is suitable for + # linking Fortran object files + compiler.create_static_lib(objects, lib_name, + output_dir=self.build_clib, + debug=self.debug) + + # fix library dependencies clib_libraries = build_info.get('libraries',[]) for lname, binfo in libraries: if lname in clib_libraries: clib_libraries.extend(binfo[1].get('libraries',[])) if clib_libraries: build_info['libraries'] = clib_libraries + return +#EOF Modified: trunk/numpy/distutils/command/build_ext.py =================================================================== --- trunk/numpy/distutils/command/build_ext.py 2007-05-18 14:32:33 UTC (rev 3775) +++ trunk/numpy/distutils/command/build_ext.py 2007-05-18 16:41:44 UTC (rev 3776) @@ -8,7 +8,8 @@ from distutils.dep_util import newer_group from distutils.command.build_ext import build_ext as old_build_ext -from distutils.errors import DistutilsFileError, DistutilsSetupError +from distutils.errors import DistutilsFileError, DistutilsSetupError,\ + DistutilsError from distutils.file_util import copy_file from numpy.distutils import log @@ -52,7 +53,7 @@ if self.distribution.has_c_libraries(): self.run_command('build_clib') build_clib = self.get_finalized_command('build_clib') - self.library_dirs.append(build_clib.build_clib) + self.library_dirs.append(build_clib.build_clib) else: build_clib = None @@ -61,61 +62,143 @@ # bogus linking commands. Extensions must # explicitly specify the C libraries that they use. - # Determine if Fortran compiler is needed. - if build_clib and build_clib.fcompiler is not None: - need_f_compiler = 1 - else: - need_f_compiler = 0 - for ext in self.extensions: - if has_f_sources(ext.sources): - need_f_compiler = 1 - break - if getattr(ext,'language','c') in ['f77','f90']: - need_f_compiler = 1 - break + from distutils.ccompiler import new_compiler + from numpy.distutils.fcompiler import new_fcompiler - requiref90 = 0 - if need_f_compiler: - for ext in self.extensions: - if getattr(ext,'language','c')=='f90': - requiref90 = 1 - break - - # Determine if C++ compiler is needed. - need_cxx_compiler = 0 - for ext in self.extensions: - if has_cxx_sources(ext.sources): - need_cxx_compiler = 1 - break - if getattr(ext,'language','c')=='c++': - need_cxx_compiler = 1 - break - - from distutils.ccompiler import new_compiler - self.compiler = new_compiler(compiler=self.compiler, + compiler_type = self.compiler + # Initialize C compiler: + self.compiler = new_compiler(compiler=compiler_type, verbose=self.verbose, dry_run=self.dry_run, force=self.force) - self.compiler.customize(self.distribution,need_cxx=need_cxx_compiler) + self.compiler.customize(self.distribution) self.compiler.customize_cmd(self) self.compiler.show_customization() - # Initialize Fortran/C++ compilers if needed. 
- if need_f_compiler: - from numpy.distutils.fcompiler import new_fcompiler - self.fcompiler = new_fcompiler(compiler=self.fcompiler, - verbose=self.verbose, - dry_run=self.dry_run, - force=self.force, - requiref90=requiref90) - if self.fcompiler.get_version(): - self.fcompiler.customize(self.distribution) - self.fcompiler.customize_cmd(self) - self.fcompiler.show_customization() + # Create mapping of libraries built by build_clib: + clibs = {} + if build_clib is not None: + for libname,build_info in build_clib.libraries or []: + if clibs.has_key(libname): + log.warn('library %r defined more than once,'\ + ' overwriting build_info %r with %r.' \ + % (libname, clibs[libname], build_info)) + clibs[libname] = build_info + # .. and distribution libraries: + for libname,build_info in self.distribution.libraries or []: + if clibs.has_key(libname): + # build_clib libraries have a precedence before distribution ones + continue + clibs[libname] = build_info + + # Determine if C++/Fortran 77/Fortran 90 compilers are needed. + # Update extension libraries, library_dirs, and macros. + all_languages = [] + for ext in self.extensions: + ext_languages = [] + c_libs = [] + c_lib_dirs = [] + macros = [] + for libname in ext.libraries: + if clibs.has_key(libname): + binfo = clibs[libname] + c_libs += binfo.get('libraries',[]) + c_lib_dirs += binfo.get('library_dirs',[]) + for m in binfo.get('macros',[]): + if m not in macros: macros.append(m) + for l in clibs.get(libname,{}).get('source_languages',[]): + if l not in ext_languages: ext_languages.append(l) + if c_libs: + new_c_libs = ext.libraries + c_libs + log.info('updating extension %r libraries from %r to %r' \ + % (ext.name, ext.libraries, new_c_libs)) + ext.libraries = new_c_libs + ext.library_dirs = ext.library_dirs + c_lib_dirs + if macros: + log.info('extending extension %r defined_macros with %r' \ + % (ext.name, macros)) + ext.define_macros = ext.define_macros + macros + + # determine extension languages + if 'f77' not in ext_languages and has_f_sources(ext.sources): + ext_languages.append('f77') + if 'c++' not in ext_languages and has_cxx_sources(ext.sources): + ext_languages.append('c++') + if sys.version[:3]>='2.3': + l = ext.language or self.compiler.detect_language(ext.sources) else: - self.warn('fcompiler=%s is not available.' % (self.fcompiler.compiler_type)) - self.fcompiler = None + l = ext.language + if l and l not in ext_languages: ext_languages.append(l) + # reset language attribute for choosing proper linker + if 'c++' in ext_languages: + ext_language = 'c++' + elif 'f90' in ext_languages: + ext_language = 'f90' + elif 'f77' in ext_languages: + ext_language = 'f77' + else: + ext_language = 'c' # default + if l and l!=ext_language: + log.warn('resetting extension %r language from %r to %r.' 
% (ext.name,l,ext_language)) + ext.language = ext_language + # global language + for l in ext_languages: + if l not in all_languages: all_languages.append(l) + need_f90_compiler = 'f90' in all_languages + need_f77_compiler = 'f77' in all_languages + need_cxx_compiler = 'c++' in all_languages + + # Initialize C++ compiler: + if need_cxx_compiler: + self._cxx_compiler = new_compiler(compiler=compiler_type, + verbose=self.verbose, + dry_run=self.dry_run, + force=self.force) + compiler = self._cxx_compiler + compiler.customize(self.distribution,need_cxx=need_cxx_compiler) + compiler.customize_cmd(self) + compiler.show_customization() + self._cxx_compiler = compiler.cxx_compiler() + else: + self._cxx_compiler = None + + # Initialize Fortran 77 compiler: + if need_f77_compiler: + self._f77_compiler = new_fcompiler(compiler=self.fcompiler, + verbose=self.verbose, + dry_run=self.dry_run, + force=self.force, + requiref90=False) + fcompiler = self._f77_compiler + if fcompiler.get_version(): + fcompiler.customize(self.distribution) + fcompiler.customize_cmd(self) + fcompiler.show_customization() + else: + self.warn('f77_compiler=%s is not available.' % (fcompiler.compiler_type)) + self._f77_compiler = None + else: + self._f77_compiler = None + + # Initialize Fortran 90 compiler: + if need_f90_compiler: + self._f90_compiler = new_fcompiler(compiler=self.fcompiler, + verbose=self.verbose, + dry_run=self.dry_run, + force=self.force, + requiref90=True) + fcompiler = self._f90_compiler + if fcompiler.get_version(): + fcompiler.customize(self.distribution) + fcompiler.customize_cmd(self) + fcompiler.show_customization() + else: + self.warn('f90_compiler=%s is not available.' % (fcompiler.compiler_type)) + self._f90_compiler = None + else: + self._f90_compiler = None + # Build extensions self.build_extensions() return @@ -162,17 +245,11 @@ for undef in ext.undef_macros: macros.append((undef,)) - clib_libraries = [] - clib_library_dirs = [] - if self.distribution.libraries: - for libname,build_info in self.distribution.libraries: - if libname in ext.libraries: - macros.extend(build_info.get('macros',[])) - clib_libraries.extend(build_info.get('libraries',[])) - clib_library_dirs.extend(build_info.get('library_dirs',[])) - c_sources, cxx_sources, f_sources, fmodule_sources = \ filter_sources(ext.sources) + + + if self.compiler.compiler_type=='msvc': if cxx_sources: # Needed to compile kiva.agg._agg extension. @@ -182,6 +259,28 @@ c_sources += cxx_sources cxx_sources = [] + # Set Fortran/C++ compilers for compilation and linking. 
+ if ext.language=='f90': + fcompiler = self._f90_compiler + elif ext.language=='f77': + fcompiler = self._f77_compiler + else: # in case ext.language is c++, for instance + fcompiler = self._f90_compiler or self._f77_compiler + cxx_compiler = self._cxx_compiler + + # check for the availability of required compilers + if cxx_sources and cxx_compiler is None: + raise DistutilsError, "extension %r has C++ sources" \ + "but no C++ compiler found" % (ext.name) + if (f_sources or fmodule_sources) and fcompiler is None: + raise DistutilsError, "extension %r has Fortran sources " \ + "but no Fortran compiler found" % (ext.name) + if ext.language in ['f77','f90'] and fcompiler is None: + self.warn("extension %r has Fortran libraries " \ + "but no Fortran linker found, using default linker" % (ext.name)) + if ext.language=='c++' and cxx_compiler is None: + self.warn("extension %r has C++ libraries " \ + "but no C++ linker found, using default linker" % (ext.name)) kws = {'depends':ext.depends} output_dir = self.build_temp @@ -198,11 +297,9 @@ debug=self.debug, extra_postargs=extra_args, **kws) + if cxx_sources: - log.info("compiling C++ sources") - - cxx_compiler = self.compiler.cxx_compiler() - + log.info("compiling C++ sources") c_objects += cxx_compiler.compile(cxx_sources, output_dir=output_dir, macros=macros, @@ -211,127 +308,89 @@ extra_postargs=extra_args, **kws) - check_for_f90_modules = not not fmodule_sources - - if f_sources or fmodule_sources: - extra_postargs = [] + extra_postargs = [] + f_objects = [] + if fmodule_sources: + log.info("compiling Fortran 90 module sources") module_dirs = ext.module_dirs[:] - - #if self.fcompiler.compiler_type=='ibm': - macros = [] - - if check_for_f90_modules: - module_build_dir = os.path.join(\ + module_build_dir = os.path.join(\ self.build_temp,os.path.dirname(\ self.get_ext_filename(fullname))) - - self.mkpath(module_build_dir) - if self.fcompiler.module_dir_switch is None: - existing_modules = glob('*.mod') - extra_postargs += self.fcompiler.module_options(\ + + self.mkpath(module_build_dir) + if fcompiler.module_dir_switch is None: + existing_modules = glob('*.mod') + extra_postargs += fcompiler.module_options(\ module_dirs,module_build_dir) + f_objects += fcompiler.compile(fmodule_sources, + output_dir=self.build_temp, + macros=macros, + include_dirs=include_dirs, + debug=self.debug, + extra_postargs=extra_postargs, + depends=ext.depends) - f_objects = [] - if fmodule_sources: - log.info("compiling Fortran 90 module sources") - f_objects = self.fcompiler.compile(fmodule_sources, - output_dir=self.build_temp, - macros=macros, - include_dirs=include_dirs, - debug=self.debug, - extra_postargs=extra_postargs, - depends=ext.depends) - - if check_for_f90_modules \ - and self.fcompiler.module_dir_switch is None: + if fcompiler.module_dir_switch is None: for f in glob('*.mod'): if f in existing_modules: continue + t = os.path.join(module_build_dir, f) + if os.path.abspath(f)==os.path.abspath(t): + continue + if os.path.isfile(t): + os.remove(t) try: self.move_file(f, module_build_dir) - except DistutilsFileError: # already exists in destination - os.remove(f) + except DistutilsFileError: + log.warn('failed to move %r to %r' % (f, module_build_dir)) + if f_sources: + log.info("compiling Fortran sources") + f_objects += fcompiler.compile(f_sources, + output_dir=self.build_temp, + macros=macros, + include_dirs=include_dirs, + debug=self.debug, + extra_postargs=extra_postargs, + depends=ext.depends) - if f_sources: - log.info("compiling Fortran sources") - 
f_objects += self.fcompiler.compile(f_sources, - output_dir=self.build_temp, - macros=macros, - include_dirs=include_dirs, - debug=self.debug, - extra_postargs=extra_postargs, - depends=ext.depends) - else: - f_objects = [] - objects = c_objects + f_objects if ext.extra_objects: objects.extend(ext.extra_objects) extra_args = ext.extra_link_args or [] - + libraries = self.get_libraries(ext)[:] + library_dirs = ext.library_dirs[:] + linker = self.compiler.link_shared_object - - use_fortran_linker = getattr(ext,'language','c') in ['f77','f90'] \ - and self.fcompiler is not None - c_libraries = [] - c_library_dirs = [] - if use_fortran_linker or f_sources: - use_fortran_linker = 1 - elif self.distribution.has_c_libraries(): - build_clib = self.get_finalized_command('build_clib') - f_libs = [] - for (lib_name, build_info) in build_clib.libraries: - if has_f_sources(build_info.get('sources',[])): - f_libs.append(lib_name) - if lib_name in ext.libraries: - # XXX: how to determine if c_libraries contain - # fortran compiled sources? - c_libraries.extend(build_info.get('libraries',[])) - c_library_dirs.extend(build_info.get('library_dirs',[])) - for l in ext.libraries: - if l in f_libs: - use_fortran_linker = 1 - break - # Always use system linker when using MSVC compiler. - if self.compiler.compiler_type=='msvc' and use_fortran_linker: - self._libs_with_msvc_and_fortran(c_libraries, c_library_dirs) - use_fortran_linker = False + if self.compiler.compiler_type=='msvc': + # expand libraries with fcompiler libraries as we are + # not using fcompiler linker + self._libs_with_msvc_and_fortran(fcompiler, libraries, library_dirs) + elif ext.language in ['f77','f90'] and fcompiler is not None: + linker = fcompiler.link_shared_object + if ext.language=='c++' and cxx_compiler is not None: + linker = cxx_compiler.link_shared_object - if use_fortran_linker: - if cxx_sources: - # XXX: Which linker should be used, Fortran or C++? - log.warn('mixing Fortran and C++ is untested') - linker = self.fcompiler.link_shared_object - language = ext.language or self.fcompiler.detect_language(f_sources) - else: - linker = self.compiler.link_shared_object - if sys.version[:3]>='2.3': - language = ext.language or self.compiler.detect_language(sources) - else: - language = ext.language - if cxx_sources: - linker = self.compiler.cxx_compiler().link_shared_object - if sys.version[:3]>='2.3': - kws = {'target_lang':language} + kws = {'target_lang':ext.language} else: kws = {} linker(objects, ext_filename, - libraries=self.get_libraries(ext) + c_libraries + clib_libraries, - library_dirs=ext.library_dirs+c_library_dirs+clib_library_dirs, + libraries=libraries, + library_dirs=library_dirs, runtime_library_dirs=ext.runtime_library_dirs, extra_postargs=extra_args, export_symbols=self.get_export_symbols(ext), debug=self.debug, build_temp=self.build_temp,**kws) + return - def _libs_with_msvc_and_fortran(self, c_libraries, c_library_dirs): + def _libs_with_msvc_and_fortran(self, fcompiler, c_libraries, c_library_dirs): # Always use system linker when using MSVC compiler. 
f_lib_dirs = [] - for dir in self.fcompiler.library_dirs: + for dir in fcompiler.library_dirs: # correct path when compiling in Cygwin but with normal Win # Python if dir.startswith('/usr/lib'): @@ -343,7 +402,7 @@ # make g77-compiled static libs available to MSVC lib_added = False - for lib in self.fcompiler.libraries: + for lib in fcompiler.libraries: if not lib.startswith('msvcr'): c_libraries.append(lib) p = combine_paths(f_lib_dirs, 'lib' + lib + '.a') @@ -353,6 +412,7 @@ if not lib_added: c_library_dirs.append(self.build_temp) lib_added = True + return def get_source_files (self): self.check_extensions_list(self.extensions) From numpy-svn at scipy.org Fri May 18 12:45:18 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 18 May 2007 11:45:18 -0500 (CDT) Subject: [Numpy-svn] r3777 - in trunk/numpy/f2py/lib: . tests Message-ID: <20070518164518.D98CA39C0BF@new.scipy.org> Author: pearu Date: 2007-05-18 11:44:43 -0500 (Fri, 18 May 2007) New Revision: 3777 Added: trunk/numpy/f2py/lib/nary.py trunk/numpy/f2py/lib/tests/ trunk/numpy/f2py/lib/tests/test_derived_scalar.py trunk/numpy/f2py/lib/tests/test_module_module.py trunk/numpy/f2py/lib/tests/test_module_scalar.py trunk/numpy/f2py/lib/tests/test_scalar_function_in.py trunk/numpy/f2py/lib/tests/test_scalar_in_out.py Removed: trunk/numpy/f2py/lib/test_derived_scalar.py trunk/numpy/f2py/lib/test_module_module.py trunk/numpy/f2py/lib/test_module_scalar.py trunk/numpy/f2py/lib/test_scalar_function_in.py trunk/numpy/f2py/lib/test_scalar_in_out.py Modified: trunk/numpy/f2py/lib/main.py Log: g3 f2py: impl. compiling Fortran codes online (function numpy.f2py.lib.compile), clean up testing. Modified: trunk/numpy/f2py/lib/main.py =================================================================== --- trunk/numpy/f2py/lib/main.py 2007-05-18 16:41:44 UTC (rev 3776) +++ trunk/numpy/f2py/lib/main.py 2007-05-18 16:44:43 UTC (rev 3777) @@ -21,7 +21,7 @@ except ImportError: numpy_version = 'N/A' -__all__ = ['main'] +__all__ = ['main', 'compile'] __usage__ = """ F2PY G3 --- The third generation of Fortran to Python Interface Generator @@ -100,7 +100,8 @@ import re import shutil import parser.api -from parser.api import parse, PythonModule, EndStatement, Module, Subroutine, Function +from parser.api import parse, PythonModule, EndStatement, Module, Subroutine, Function,\ + get_reader def get_values(sys_argv, prefix='', suffix='', strip_prefix=False, strip_suffix=False): """ @@ -200,9 +201,11 @@ if os.path.isfile(signature_output): overwrite = get_option(sys_argv, '--overwrite-signature', False) if not overwrite: - print >> sys.stderr, 'Signature file %r exists. Use --overwrite-signature to overwrite.' % (signature_output) + print >> sys.stderr, 'Signature file %r exists. '\ + 'Use --overwrite-signature to overwrite.' 
% (signature_output) sys.exit() - modulename = get_option_value(sys_argv,'-m',os.path.basename(name),os.path.basename(name)) + modulename = get_option_value(sys_argv,'-m',os.path.basename(name), + os.path.basename(name)) output_stream = open(signature_output,'w') flag = 'file' @@ -217,7 +220,8 @@ elif word==':': flag = 'file' elif word.startswith('--'): options.append(word) else: - {'file': file_names,'only': only_names, 'skip': skip_names}[flag].append(word) + {'file': file_names,'only': only_names, + 'skip': skip_names}[flag].append(word) if options: sys.stderr.write('Unused options: %s\n' % (', '.join(options))) @@ -286,7 +290,7 @@ f = open(f_fn,'w') f.write(f_code) f.close() - f_lib = '%s_f_wrappers_f2py' % (block.name) + #f_lib = '%s_f_wrappers_f2py' % (block.name) module_info = {'name':block.name, 'c_sources':[c_fn], 'f_sources':[f_fn], 'language':'f90'} module_infos.append(module_info) @@ -371,7 +375,7 @@ if sources_only: return - def configuration(parent_package='', top_path=None): + def configuration(parent_package='', top_path=None or ''): from numpy.distutils.misc_util import Configuration config = Configuration('',parent_package,top_path) flibname = modulename + '_fortran_f2py' @@ -403,10 +407,18 @@ return config old_sys_argv = sys.argv[:] - new_sys_argv = [sys.argv[0]] + ['build', - '--build-temp',build_dir, - '--build-base',build_dir, - '--build-platlib','.'] + build_dir_ext_temp = os.path.join(build_dir,'ext_temp') + build_dir_clib_temp = os.path.join(build_dir,'clib_temp') + build_dir_clib_clib = os.path.join(build_dir,'clib_clib') + new_sys_argv = [sys.argv[0]] + ['build_ext', + '--build-temp',build_dir_ext_temp, + '--build-lib',build_dir, + 'build_clib', + '--build-temp',build_dir_clib_temp, + '--build-clib',build_dir_clib_clib, + ] + temp_dirs = [build_dir_ext_temp, build_dir_clib_temp, build_dir_clib_clib] + if fc_flags: new_sys_argv += ['config_fc'] + fc_flags sys.argv[:] = new_sys_argv @@ -418,9 +430,11 @@ sys.argv[:] = old_sys_argv - if clean_build_dir and os.path.exists(build_dir): - sys.stderr.write('Removing build directory %s\n'%(build_dir)) - shutil.rmtree(build_dir) + if 1 or clean_build_dir: + for d in temp_dirs: + if os.path.exists(d): + sys.stderr.write('Removing build directory %s\n'%(d)) + shutil.rmtree(d) return def main(sys_argv = None): @@ -449,3 +463,73 @@ build_extension(sys_argv, sources_only = True) return + +def compile(source, + jobname = 'untitled', + extra_args = [], + source_ext = None, + modulenames = None + ): + """ + Build extension module from processing source with f2py. + + jobname - the name of compile job. For non-module source + this will be also the name of extension module. + modulenames - the list of extension module names that + the given compilation job should create. + extra_args - a list of extra arguments for numpy style + setup.py command line. + source_ext - extension of the Fortran source file: .f90 or .f + + Extension modules are saved to current working directory. + Returns a list of module objects according to modulenames + input. 
+ """ + from nary import encode + tempdir = tempfile.gettempdir() + s = 'f2pyjob_%s_%s' % (jobname, encode(source)) + tmpdir = os.path.join(tempdir, s) + if source_ext is None: + reader = get_reader(source) + source_ext = {'free90':'.f90','fix90':'.f90','fix77':'.f','pyf':'.pyf'}[reader.mode] + + if modulenames is None: + modulenames = jobname, + if os.path.isdir(tmpdir): + try: + sys.path.insert(0, tmpdir) + modules = [] + for modulename in modulenames: + exec('import %s as m' % (modulename)) + modules.append(m) + sys.path.pop(0) + return modules + except ImportError: + pass + finally: + sys.path.pop(0) + else: + os.mkdir(tmpdir) + + fname = os.path.join(tmpdir,'%s_src%s' % (jobname, source_ext)) + + f = open(fname,'w') + f.write(source) + f.close() + + sys_argv = [] + sys_argv.extend(['--build-dir',tmpdir]) + #sys_argv.extend(['-DF2PY_DEBUG_PYOBJ_TOFROM']) + sys_argv.extend(['-m',jobname, fname]) + + build_extension(sys_argv + extra_args) + + sys.path.insert(0, tmpdir) + modules = [] + for modulename in modulenames: + exec('import %s as m' % (modulename)) + modules.append(m) + sys.path.pop(0) + return modules + +#EOF Added: trunk/numpy/f2py/lib/nary.py =================================================================== --- trunk/numpy/f2py/lib/nary.py 2007-05-18 16:41:44 UTC (rev 3776) +++ trunk/numpy/f2py/lib/nary.py 2007-05-18 16:44:43 UTC (rev 3777) @@ -0,0 +1,32 @@ +""" +nary - convert integer to a number with an arbitrary base. +""" + +__all__ = ['nary'] + +_alphabet='0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' +def _getalpha(r): + if r>=len(_alphabet): + return '_'+nary(r-len(_alphabet),len(_alphabet)) + return _alphabet[r] + +def nary(number, base=64): + """ + Return string representation of a number with a given base. + """ + if isinstance(number, str): + number = eval(number) + n = number + s = '' + while n: + n1 = n // base + r = n - n1*base + n = n1 + s = _getalpha(r) + s + return s + +def encode(string): + import md5 + return nary('0x'+md5.new(string).hexdigest()) + +#print nary(12345124254252525522512324,64) Deleted: trunk/numpy/f2py/lib/test_derived_scalar.py =================================================================== --- trunk/numpy/f2py/lib/test_derived_scalar.py 2007-05-18 16:41:44 UTC (rev 3776) +++ trunk/numpy/f2py/lib/test_derived_scalar.py 2007-05-18 16:44:43 UTC (rev 3777) @@ -1,99 +0,0 @@ -#!/usr/bin/env python -""" -Tests for intent(in,out) derived type arguments in Fortran subroutine's. - ------ -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. See http://scipy.org. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -Author: Pearu Peterson -Created: Oct 2006 ------ -""" - -import os -import sys -from numpy.testing import * - -def build(fortran_code, rebuild=True): - modulename = os.path.splitext(os.path.basename(__file__))[0]+'_ext' - try: - exec ('import %s as m' % (modulename)) - if rebuild and os.stat(m.__file__)[8] < os.stat(__file__)[8]: - del sys.modules[m.__name__] # soft unload extension module - os.remove(m.__file__) - raise ImportError,'%s is newer than %s' % (__file__, m.__file__) - except ImportError,msg: - assert str(msg).startswith('No module named'),str(msg) - print msg, ', recompiling %s.' 
% (modulename) - import tempfile - fname = tempfile.mktemp() + '.f90' - f = open(fname,'w') - f.write(fortran_code) - f.close() - sys_argv = [] - sys_argv.extend(['--build-dir','tmp']) - #sys_argv.extend(['-DF2PY_DEBUG_PYOBJ_TOFROM']) - from main import build_extension - sys_argv.extend(['-m',modulename, fname]) - build_extension(sys_argv) - os.remove(fname) - status = os.system(' '.join([sys.executable] + sys.argv)) - sys.exit(status) - return m - -fortran_code = ''' -subroutine foo(a) - type myt - integer flag - end type myt - type(myt) a -!f2py intent(in,out) a - a % flag = a % flag + 1 -end -function foo2(a) - type myt - integer flag - end type myt - type(myt) a - type(myt) foo2 - foo2 % flag = a % flag + 2 -end -''' - -# tester note: set rebuild=True when changing fortan_code and for SVN -m = build(fortran_code, rebuild=True) - -from numpy import * - -class test_m(NumpyTestCase): - - def check_foo_simple(self, level=1): - a = m.myt(2) - assert_equal(a.flag,2) - assert isinstance(a,m.myt),`a` - r = m.foo(a) - assert isinstance(r,m.myt),`r` - assert r is a - assert_equal(r.flag,3) - assert_equal(a.flag,3) - - a.flag = 5 - assert_equal(r.flag,5) - - #s = m.foo((5,)) - - def check_foo2_simple(self, level=1): - a = m.myt(2) - assert_equal(a.flag,2) - assert isinstance(a,m.myt),`a` - r = m.foo2(a) - assert isinstance(r,m.myt),`r` - assert r is not a - assert_equal(a.flag,2) - assert_equal(r.flag,4) - - -if __name__ == "__main__": - NumpyTest().run() Deleted: trunk/numpy/f2py/lib/test_module_module.py =================================================================== --- trunk/numpy/f2py/lib/test_module_module.py 2007-05-18 16:41:44 UTC (rev 3776) +++ trunk/numpy/f2py/lib/test_module_module.py 2007-05-18 16:44:43 UTC (rev 3777) @@ -1,83 +0,0 @@ -#!/usr/bin/env python -""" -Tests for module with scalar derived types and subprograms. - ------ -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. See http://scipy.org. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -Author: Pearu Peterson -Created: Oct 2006 ------ -""" - -import os -import sys -from numpy.testing import * - -def build(fortran_code, rebuild=True, build_dir = 'tmp'): - modulename = os.path.splitext(os.path.basename(__file__))[0] + '_ext' - try: - exec ('import %s as m' % (modulename)) - if rebuild and os.stat(m.__file__)[8] < os.stat(__file__)[8]: - del sys.modules[m.__name__] # soft unload extension module - os.remove(m.__file__) - raise ImportError,'%s is newer than %s' % (__file__, m.__file__) - except ImportError,msg: - assert str(msg)==('No module named %s' % (modulename)) \ - or str(msg).startswith('%s is newer than' % (__file__)),str(msg) - print msg, ', recompiling %s.' 
% (modulename) - if not os.path.isdir(build_dir): os.makedirs(build_dir) - fname = os.path.join(build_dir, modulename + '_source.f90') - f = open(fname,'w') - f.write(fortran_code) - f.close() - sys_argv = [] - sys_argv.extend(['--build-dir',build_dir]) - #sys_argv.extend(['-DF2PY_DEBUG_PYOBJ_TOFROM']) - from main import build_extension - sys_argv.extend(['-m',modulename, fname]) - build_extension(sys_argv) - status = os.system(' '.join([sys.executable] + sys.argv)) - sys.exit(status) - return m - -fortran_code = ''' -module test_module_module_ext2 - type rat - integer n,d - end type rat - contains - subroutine foo2() - print*,"In foo2" - end subroutine foo2 -end module -module test_module_module_ext - contains - subroutine foo - use test_module_module_ext2 - print*,"In foo" - call foo2 - end subroutine foo - subroutine bar(a) - use test_module_module_ext2 - type(rat) a - print*,"In bar,a=",a - end subroutine bar -end module test_module_module_ext -''' - -# tester note: set rebuild=True when changing fortan_code and for SVN -m = build(fortran_code, rebuild=True) - -from numpy import * - -class test_m(NumpyTestCase): - - def check_foo_simple(self, level=1): - foo = m.foo - foo() - -if __name__ == "__main__": - NumpyTest().run() Deleted: trunk/numpy/f2py/lib/test_module_scalar.py =================================================================== --- trunk/numpy/f2py/lib/test_module_scalar.py 2007-05-18 16:41:44 UTC (rev 3776) +++ trunk/numpy/f2py/lib/test_module_scalar.py 2007-05-18 16:44:43 UTC (rev 3777) @@ -1,83 +0,0 @@ -#!/usr/bin/env python -""" -Tests for module with scalar derived types and subprograms. - ------ -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. See http://scipy.org. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -Author: Pearu Peterson -Created: Oct 2006 ------ -""" - -import os -import sys -from numpy.testing import * - -def build(fortran_code, rebuild=True): - modulename = os.path.splitext(os.path.basename(__file__))[0] + '_ext' - try: - exec ('import %s as m' % (modulename)) - if rebuild and os.stat(m.__file__)[8] < os.stat(__file__)[8]: - del sys.modules[m.__name__] # soft unload extension module - os.remove(m.__file__) - raise ImportError,'%s is newer than %s' % (__file__, m.__file__) - except ImportError,msg: - assert str(msg)==('No module named %s' % (modulename)),str(msg) - print msg, ', recompiling %s.' 
% (modulename) - import tempfile - fname = tempfile.mktemp() + '.f90' - f = open(fname,'w') - f.write(fortran_code) - f.close() - sys_argv = [] - sys_argv.extend(['--build-dir','tmp']) - #sys_argv.extend(['-DF2PY_DEBUG_PYOBJ_TOFROM']) - from main import build_extension - sys_argv.extend(['-m',modulename, fname]) - build_extension(sys_argv) - os.remove(fname) - status = os.system(' '.join([sys.executable] + sys.argv)) - sys.exit(status) - return m - -fortran_code = ''' -module test_module_scalar_ext - - contains - subroutine foo(a) - integer a -!f2py intent(in,out) a - a = a + 1 - end subroutine foo - function foo2(a) - integer a - integer foo2 - foo2 = a + 2 - end function foo2 -end module test_module_scalar_ext -''' - -# tester note: set rebuild=True when changing fortan_code and for SVN -m = build(fortran_code, rebuild=True) - -from numpy import * - -class test_m(NumpyTestCase): - - def check_foo_simple(self, level=1): - foo = m.foo - r = foo(2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,3) - - def check_foo2_simple(self, level=1): - foo2 = m.foo2 - r = foo2(2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,4) - -if __name__ == "__main__": - NumpyTest().run() Deleted: trunk/numpy/f2py/lib/test_scalar_function_in.py =================================================================== --- trunk/numpy/f2py/lib/test_scalar_function_in.py 2007-05-18 16:41:44 UTC (rev 3776) +++ trunk/numpy/f2py/lib/test_scalar_function_in.py 2007-05-18 16:44:43 UTC (rev 3777) @@ -1,554 +0,0 @@ -#!/usr/bin/env python -""" -Tests for intent(in) arguments in subroutine-wrapped Fortran functions. - ------ -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. See http://scipy.org. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -Author: Pearu Peterson -Created: Oct 2006 ------ -""" - -import os -import sys -from numpy.testing import * - -def build(fortran_code, rebuild=True): - modulename = os.path.splitext(os.path.basename(__file__))[0]+'_ext' - try: - exec ('import %s as m' % (modulename)) - if rebuild and os.stat(m.__file__)[8] < os.stat(__file__)[8]: - del sys.modules[m.__name__] # soft unload extension module - os.remove(m.__file__) - raise ImportError,'%s is newer than %s' % (__file__, m.__file__) - except ImportError,msg: - assert str(msg).startswith('No module named'),str(msg) - print msg, ', recompiling %s.' 
% (modulename) - import tempfile - fname = tempfile.mktemp() + '.f' - f = open(fname,'w') - f.write(fortran_code) - f.close() - sys_argv = ['--build-dir','tmp'] - #sys_argv.extend(['-DF2PY_DEBUG_PYOBJ_TOFROM']) - from main import build_extension - sys_argv.extend(['-m',modulename, fname]) - build_extension(sys_argv) - os.remove(fname) - os.system(' '.join([sys.executable] + sys.argv)) - sys.exit(0) - return m - -fortran_code = ''' - function fooint1(a) - integer*1 a - integer*1 fooint1 - fooint1 = a + 1 - end - function fooint2(a) - integer*2 a - integer*2 fooint2 - fooint2 = a + 1 - end - function fooint4(a) - integer*4 a - integer*4 fooint4 - fooint4 = a + 1 - end - function fooint8(a) - integer*8 a - integer*8 fooint8 - fooint8 = a + 1 - end - function foofloat4(a) - real*4 a - real*4 foofloat4 - foofloat4 = a + 1.0e0 - end - function foofloat8(a) - real*8 a - real*8 foofloat8 - foofloat8 = a + 1.0d0 - end - function foocomplex8(a) - complex*8 a - complex*8 foocomplex8 - foocomplex8 = a + 1.0e0 - end - function foocomplex16(a) - complex*16 a - complex*16 foocomplex16 - foocomplex16 = a + 1.0d0 - end - function foobool1(a) - logical*1 a - logical*1 foobool1 - foobool1 = .not. a - end - function foobool2(a) - logical*2 a - logical*2 foobool2 - foobool2 = .not. a - end - function foobool4(a) - logical*4 a - logical*4 foobool4 - foobool4 = .not. a - end - function foobool8(a) - logical*8 a - logical*8 foobool8 - foobool8 = .not. a - end - function foostring1(a) - character*1 a - character*1 foostring1 - foostring1 = "1" - end - function foostring5(a) - character*5 a - character*5 foostring5 - foostring5 = a - foostring5(1:2) = "12" - end -! function foostringstar(a) -! character*(*) a -! character*(*) foostringstar -! if (len(a).gt.0) then -! foostringstar = a -! foostringstar(1:1) = "1" -! endif -! 
end -''' - -# tester note: set rebuild=True when changing fortan_code and for SVN -m = build(fortran_code, rebuild=True) - -from numpy import * - -class test_m(NumpyTestCase): - - def check_foo_integer1(self, level=1): - i = int8(2) - e = int8(3) - func = m.fooint1 - assert isinstance(i,int8),`type(i)` - r = func(i) - assert isinstance(r,int8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - for intx in [int64,int16,int32]: - r = func(intx(2)) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer2(self, level=1): - i = int16(2) - e = int16(3) - func = m.fooint2 - assert isinstance(i,int16),`type(i)` - r = func(i) - assert isinstance(r,int16),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - for intx in [int8,int64,int32]: - r = func(intx(2)) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer4(self, level=1): - i = int32(2) - e = int32(3) - func = m.fooint4 - assert isinstance(i,int32),`type(i)` - r = func(i) - assert isinstance(r,int32),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - for intx in [int8,int16,int64]: - r = func(intx(2)) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer8(self, level=1): - i = int64(2) - e = int64(3) - func = m.fooint8 - assert isinstance(i,int64),`type(i)` - r = func(i) - assert isinstance(r,int64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - for intx in [int8,int16,int32]: - r = func(intx(2)) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_real4(self, level=1): - i = float32(2) - e = float32(3) - func = m.foofloat4 - assert isinstance(i,float32),`type(i)` - r = 
func(i) - assert isinstance(r,float32),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e+float32(0.2)) - - r = func(float64(2.0)) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_real8(self, level=1): - i = float64(2) - e = float64(3) - func = m.foofloat8 - assert isinstance(i,float64),`type(i)` - r = func(i) - assert isinstance(r,float64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e+float64(0.2)) - - r = func(float32(2.0)) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_complex8(self, level=1): - i = complex64(2) - e = complex64(3) - func = m.foocomplex8 - assert isinstance(i,complex64),`type(i)` - r = func(i) - assert isinstance(r,complex64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(0.2)) - - r = func(2+1j) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(1j)) - - r = func(complex128(2.0)) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func([2,3]) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(3j)) - - self.assertRaises(TypeError,lambda :func([2,1,3])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_complex16(self, level=1): - i = complex128(2) - e = complex128(3) - func = m.foocomplex16 - assert isinstance(i,complex128),`type(i)` - r = func(i) - assert isinstance(r,complex128),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e+complex128(0.2)) - - r = func(2+1j) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e+complex128(1j)) - - r = func([2]) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func([2,3]) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e+complex128(3j)) - - r = func(complex64(2.0)) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func([2,1,3])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_bool1(self, level=1): - i = bool8(True) - e = bool8(False) - func = 
m.foobool1 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool2(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool2 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool4(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool4 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool8(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool8 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_string1(self, level=1): - i = string0('a') - e = string0('1') - func = m.foostring1 - assert isinstance(i,string0),`type(i)` - r = func(i) - assert isinstance(r,string0),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func('ab') - assert isinstance(r,string0),`type(r)` - assert_equal(r,e) - - r = func('') - assert isinstance(r,string0),`type(r)` - assert_equal(r,e) - - def check_foo_string5(self, level=1): - i = string0('abcde') - e = string0('12cde') - func = m.foostring5 - assert isinstance(i,string0),`type(i)` - r = func(i) - assert isinstance(r,string0),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func('abc') - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12c ') - - r = func('abcdefghi') - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12cde') - - r = func([1]) - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12] ') - - def _check_foo_string0(self, level=1): - i = string0('abcde') - e = string0('12cde') - func = m.foostringstar - r = func('abcde') - assert_equal(r,'1bcde') - r = func('') - assert_equal(r,'') - -if __name__ == "__main__": - NumpyTest().run() Deleted: trunk/numpy/f2py/lib/test_scalar_in_out.py =================================================================== --- trunk/numpy/f2py/lib/test_scalar_in_out.py 2007-05-18 16:41:44 UTC (rev 3776) +++ trunk/numpy/f2py/lib/test_scalar_in_out.py 2007-05-18 16:44:43 UTC (rev 3777) @@ -1,552 +0,0 @@ -#!/usr/bin/env python -""" -Tests for intent(in,out) arguments in Fortran subroutine's. - ------ -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. See http://scipy.org. 
- -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -Author: Pearu Peterson -Created: Oct 2006 ------ -""" - -import os -import sys -from numpy.testing import * - -def build(fortran_code, rebuild=True, build_dir='tmp'): - modulename = os.path.splitext(os.path.basename(__file__))[0]+'_ext' - try: - exec ('import %s as m' % (modulename)) - if rebuild and os.stat(m.__file__)[8] < os.stat(__file__)[8]: - del sys.modules[m.__name__] # soft unload extension module - os.remove(m.__file__) - raise ImportError,'%s is newer than %s' % (__file__, m.__file__) - except ImportError,msg: - assert str(msg)==('No module named %s' % (modulename)) \ - or str(msg).startswith('%s is newer than' % (__file__)),str(msg) - print msg, ', recompiling %s.' % (modulename) - if not os.path.isdir(build_dir): os.makedirs(build_dir) - fname = os.path.join(build_dir,'%s_source.f' % (modulename)) - f = open(fname,'w') - f.write(fortran_code) - f.close() - sys_argv = ['--build-dir',build_dir] - #sys_argv.extend(['-DF2PY_DEBUG_PYOBJ_TOFROM']) - from main import build_extension - sys_argv.extend(['-m',modulename, fname]) - build_extension(sys_argv) - status = os.system(' '.join([sys.executable] + sys.argv)) - sys.exit(status) - return m - -fortran_code = ''' - subroutine fooint1(a) - integer*1 a -!f2py intent(in,out) a - a = a + 1 - end - subroutine fooint2(a) - integer*2 a -!f2py intent(in,out) a - a = a + 1 - end - subroutine fooint4(a) - integer*4 a -!f2py intent(in,out) a - a = a + 1 - end - subroutine fooint8(a) - integer*8 a -!f2py intent(in,out) a - a = a + 1 - end - subroutine foofloat4(a) - real*4 a -!f2py intent(in,out) a - a = a + 1.0e0 - end - subroutine foofloat8(a) - real*8 a -!f2py intent(in,out) a - a = a + 1.0d0 - end - subroutine foocomplex8(a) - complex*8 a -!f2py intent(in,out) a - a = a + 1.0e0 - end - subroutine foocomplex16(a) - complex*16 a -!f2py intent(in,out) a - a = a + 1.0d0 - end - subroutine foobool1(a) - logical*1 a -!f2py intent(in,out) a - a = .not. a - end - subroutine foobool2(a) - logical*2 a -!f2py intent(in,out) a - a = .not. a - end - subroutine foobool4(a) - logical*4 a -!f2py intent(in,out) a - a = .not. a - end - subroutine foobool8(a) - logical*8 a -!f2py intent(in,out) a - a = .not. 
a - end - subroutine foostring1(a) - character*1 a -!f2py intent(in,out) a - a = "1" - end - subroutine foostring5(a) - character*5 a -!f2py intent(in,out) a - a(1:2) = "12" - end - subroutine foostringstar(a) - character*(*) a -!f2py intent(in,out) a - if (len(a).gt.0) then - a(1:1) = "1" - endif - end -''' - -# tester note: set rebuild=True when changing fortan_code and for SVN -m = build(fortran_code, rebuild=True) - -from numpy import * - -class test_m(NumpyTestCase): - - def check_foo_integer1(self, level=1): - i = int8(2) - e = int8(3) - func = m.fooint1 - assert isinstance(i,int8),`type(i)` - r = func(i) - assert isinstance(r,int8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - for intx in [int64,int16,int32]: - r = func(intx(2)) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer2(self, level=1): - i = int16(2) - e = int16(3) - func = m.fooint2 - assert isinstance(i,int16),`type(i)` - r = func(i) - assert isinstance(r,int16),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - for intx in [int8,int64,int32]: - r = func(intx(2)) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer4(self, level=1): - i = int32(2) - e = int32(3) - func = m.fooint4 - assert isinstance(i,int32),`type(i)` - r = func(i) - assert isinstance(r,int32),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - for intx in [int8,int16,int64]: - r = func(intx(2)) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer8(self, level=1): - i = int64(2) - e = int64(3) - func = m.fooint8 - assert isinstance(i,int64),`type(i)` - r = func(i) - assert isinstance(r,int64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - for intx in [int8,int16,int32]: - r = func(intx(2)) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int64),`type(r)` - 
assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_real4(self, level=1): - i = float32(2) - e = float32(3) - func = m.foofloat4 - assert isinstance(i,float32),`type(i)` - r = func(i) - assert isinstance(r,float32),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e+float32(0.2)) - - r = func(float64(2.0)) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_real8(self, level=1): - i = float64(2) - e = float64(3) - func = m.foofloat8 - assert isinstance(i,float64),`type(i)` - r = func(i) - assert isinstance(r,float64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e+float64(0.2)) - - r = func(float32(2.0)) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_complex8(self, level=1): - i = complex64(2) - e = complex64(3) - func = m.foocomplex8 - assert isinstance(i,complex64),`type(i)` - r = func(i) - assert isinstance(r,complex64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(0.2)) - - r = func(2+1j) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(1j)) - - r = func(complex128(2.0)) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func([2,3]) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(3j)) - - self.assertRaises(TypeError,lambda :func([2,1,3])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_complex16(self, level=1): - i = complex128(2) - e = complex128(3) - func = m.foocomplex16 - assert isinstance(i,complex128),`type(i)` - r = func(i) - assert isinstance(r,complex128),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e+complex128(0.2)) - - r = func(2+1j) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e+complex128(1j)) - - r = func([2]) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func([2,3]) - assert isinstance(r,complex128),`type(r)` - 
assert_equal(r,e+complex128(3j)) - - r = func(complex64(2.0)) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func([2,1,3])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_bool1(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool1 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool2(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool2 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool4(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool4 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool8(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool8 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_string1(self, level=1): - i = string0('a') - e = string0('1') - func = m.foostring1 - assert isinstance(i,string0),`type(i)` - r = func(i) - assert isinstance(r,string0),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func('ab') - assert isinstance(r,string0),`type(r)` - assert_equal(r,e) - - r = func('') - assert isinstance(r,string0),`type(r)` - assert_equal(r,e) - - def check_foo_string5(self, level=1): - i = string0('abcde') - e = string0('12cde') - func = m.foostring5 - assert isinstance(i,string0),`type(i)` - r = func(i) - assert isinstance(r,string0),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func('abc') - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12c ') - - r = func('abcdefghi') - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12cde') - - r = func([1]) - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12] ') - - def check_foo_string0(self, level=1): - i = string0('abcde') - e = string0('12cde') - func = m.foostringstar - r = func('abcde') - assert_equal(r,'1bcde') - r = func('') - assert_equal(r,'') - -if __name__ == "__main__": - NumpyTest().run() Added: trunk/numpy/f2py/lib/tests/test_derived_scalar.py =================================================================== --- trunk/numpy/f2py/lib/tests/test_derived_scalar.py 2007-05-18 16:41:44 UTC (rev 3776) +++ 
trunk/numpy/f2py/lib/tests/test_derived_scalar.py 2007-05-18 16:44:43 UTC (rev 3777) @@ -0,0 +1,74 @@ +#!/usr/bin/env python +""" +Tests for intent(in,out) derived type arguments in Fortran subroutine's. + +----- +Permission to use, modify, and distribute this software is given under the +terms of the NumPy License. See http://scipy.org. + +NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. +Author: Pearu Peterson +Created: Oct 2006 +----- +""" + +import os +import sys +from numpy.testing import * +set_package_path() +from lib.main import build_extension, compile +restore_path() + +fortran_code = ''' +subroutine foo(a) + type myt + integer flag + end type myt + type(myt) a +!f2py intent(in,out) a + a % flag = a % flag + 1 +end +function foo2(a) + type myt + integer flag + end type myt + type(myt) a + type(myt) foo2 + foo2 % flag = a % flag + 2 +end +''' + +m, = compile(fortran_code, 'test_derived_scalar_ext') + +from numpy import * + +class test_m(NumpyTestCase): + + def check_foo_simple(self, level=1): + a = m.myt(2) + assert_equal(a.flag,2) + assert isinstance(a,m.myt),`a` + r = m.foo(a) + assert isinstance(r,m.myt),`r` + assert r is a + assert_equal(r.flag,3) + assert_equal(a.flag,3) + + a.flag = 5 + assert_equal(r.flag,5) + + #s = m.foo((5,)) + + def check_foo2_simple(self, level=1): + a = m.myt(2) + assert_equal(a.flag,2) + assert isinstance(a,m.myt),`a` + r = m.foo2(a) + assert isinstance(r,m.myt),`r` + assert r is not a + assert_equal(a.flag,2) + assert_equal(r.flag,4) + + +if __name__ == "__main__": + NumpyTest().run() Copied: trunk/numpy/f2py/lib/tests/test_module_module.py (from rev 3764, trunk/numpy/f2py/lib/test_module_module.py) =================================================================== --- trunk/numpy/f2py/lib/test_module_module.py 2007-05-15 00:35:49 UTC (rev 3764) +++ trunk/numpy/f2py/lib/tests/test_module_module.py 2007-05-18 16:44:43 UTC (rev 3777) @@ -0,0 +1,61 @@ +#!/usr/bin/env python +""" +Tests for module with scalar derived types and subprograms. + +----- +Permission to use, modify, and distribute this software is given under the +terms of the NumPy License. See http://scipy.org. + +NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. 
+Author: Pearu Peterson +Created: Oct 2006 +----- +""" + +import os +import sys +from numpy.testing import * + +set_package_path() +from lib.main import build_extension, compile +restore_path() + +fortran_code = ''' +module test_module_module_ext2 + type rat + integer n,d + end type rat + contains + subroutine foo2() + print*,"In foo2" + end subroutine foo2 +end module +module test_module_module_ext + contains + subroutine foo + use test_module_module_ext2 + print*,"In foo" + call foo2 + end subroutine foo + subroutine bar(a) + use test_module_module_ext2 + type(rat) a + print*,"In bar,a=",a + end subroutine bar +end module test_module_module_ext +''' + +m,m2 = compile(fortran_code, modulenames=['test_module_module_ext', + 'test_module_module_ext2', + ]) + +from numpy import * + +class test_m(NumpyTestCase): + + def check_foo_simple(self, level=1): + foo = m.foo + foo() + +if __name__ == "__main__": + NumpyTest().run() Copied: trunk/numpy/f2py/lib/tests/test_module_scalar.py (from rev 3764, trunk/numpy/f2py/lib/test_module_scalar.py) =================================================================== --- trunk/numpy/f2py/lib/test_module_scalar.py 2007-05-15 00:35:49 UTC (rev 3764) +++ trunk/numpy/f2py/lib/tests/test_module_scalar.py 2007-05-18 16:44:43 UTC (rev 3777) @@ -0,0 +1,58 @@ +#!/usr/bin/env python +""" +Tests for module with scalar derived types and subprograms. + +----- +Permission to use, modify, and distribute this software is given under the +terms of the NumPy License. See http://scipy.org. + +NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. +Author: Pearu Peterson +Created: Oct 2006 +----- +""" + +import os +import sys +from numpy.testing import * +set_package_path() +from lib.main import build_extension, compile +restore_path() + +fortran_code = ''' +module test_module_scalar_ext + + contains + subroutine foo(a) + integer a +!f2py intent(in,out) a + a = a + 1 + end subroutine foo + function foo2(a) + integer a + integer foo2 + foo2 = a + 2 + end function foo2 +end module test_module_scalar_ext +''' + +m, = compile(fortran_code, modulenames = ['test_module_scalar_ext']) + +from numpy import * + +class test_m(NumpyTestCase): + + def check_foo_simple(self, level=1): + foo = m.foo + r = foo(2) + assert isinstance(r,int32),`type(r)` + assert_equal(r,3) + + def check_foo2_simple(self, level=1): + foo2 = m.foo2 + r = foo2(2) + assert isinstance(r,int32),`type(r)` + assert_equal(r,4) + +if __name__ == "__main__": + NumpyTest().run() Copied: trunk/numpy/f2py/lib/tests/test_scalar_function_in.py (from rev 3764, trunk/numpy/f2py/lib/test_scalar_function_in.py) =================================================================== --- trunk/numpy/f2py/lib/test_scalar_function_in.py 2007-05-15 00:35:49 UTC (rev 3764) +++ trunk/numpy/f2py/lib/tests/test_scalar_function_in.py 2007-05-18 16:44:43 UTC (rev 3777) @@ -0,0 +1,532 @@ +#!/usr/bin/env python +""" +Tests for intent(in) arguments in subroutine-wrapped Fortran functions. + +----- +Permission to use, modify, and distribute this software is given under the +terms of the NumPy License. See http://scipy.org. + +NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. +Author: Pearu Peterson +Created: Oct 2006 +----- +""" + +import os +import sys +from numpy.testing import * + +set_package_path() +from lib.main import build_extension, compile +restore_path() + +fortran_code = '''\ +! 
-*- f77 -*- + function fooint1(a) + integer*1 a + integer*1 fooint1 + fooint1 = a + 1 + end + function fooint2(a) + integer*2 a + integer*2 fooint2 + fooint2 = a + 1 + end + function fooint4(a) + integer*4 a + integer*4 fooint4 + fooint4 = a + 1 + end + function fooint8(a) + integer*8 a + integer*8 fooint8 + fooint8 = a + 1 + end + function foofloat4(a) + real*4 a + real*4 foofloat4 + foofloat4 = a + 1.0e0 + end + function foofloat8(a) + real*8 a + real*8 foofloat8 + foofloat8 = a + 1.0d0 + end + function foocomplex8(a) + complex*8 a + complex*8 foocomplex8 + foocomplex8 = a + 1.0e0 + end + function foocomplex16(a) + complex*16 a + complex*16 foocomplex16 + foocomplex16 = a + 1.0d0 + end + function foobool1(a) + logical*1 a + logical*1 foobool1 + foobool1 = .not. a + end + function foobool2(a) + logical*2 a + logical*2 foobool2 + foobool2 = .not. a + end + function foobool4(a) + logical*4 a + logical*4 foobool4 + foobool4 = .not. a + end + function foobool8(a) + logical*8 a + logical*8 foobool8 + foobool8 = .not. a + end + function foostring1(a) + character*1 a + character*1 foostring1 + foostring1 = "1" + end + function foostring5(a) + character*5 a + character*5 foostring5 + foostring5 = a + foostring5(1:2) = "12" + end +! function foostringstar(a) +! character*(*) a +! character*(*) foostringstar +! if (len(a).gt.0) then +! foostringstar = a +! foostringstar(1:1) = "1" +! endif +! end +''' + +m, = compile(fortran_code, 'test_scalar_function_in_ext') + +from numpy import * + +class test_m(NumpyTestCase): + + def check_foo_integer1(self, level=1): + i = int8(2) + e = int8(3) + func = m.fooint1 + assert isinstance(i,int8),`type(i)` + r = func(i) + assert isinstance(r,int8),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func(2) + assert isinstance(r,int8),`type(r)` + assert_equal(r,e) + + for intx in [int64,int16,int32]: + r = func(intx(2)) + assert isinstance(r,int8),`type(r)` + assert_equal(r,e) + + r = func(2.0) + assert isinstance(r,int8),`type(r)` + assert_equal(r,e) + + r = func(2.2) + assert isinstance(r,int8),`type(r)` + assert_equal(r,e) + + r = func([2]) + assert isinstance(r,int8),`type(r)` + assert_equal(r,e) + + self.assertRaises(TypeError,lambda :func(2.2j)) + self.assertRaises(TypeError,lambda :func([2,1])) + self.assertRaises(TypeError,lambda :func({})) + + def check_foo_integer2(self, level=1): + i = int16(2) + e = int16(3) + func = m.fooint2 + assert isinstance(i,int16),`type(i)` + r = func(i) + assert isinstance(r,int16),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func(2) + assert isinstance(r,int16),`type(r)` + assert_equal(r,e) + + for intx in [int8,int64,int32]: + r = func(intx(2)) + assert isinstance(r,int16),`type(r)` + assert_equal(r,e) + + r = func(2.0) + assert isinstance(r,int16),`type(r)` + assert_equal(r,e) + + r = func(2.2) + assert isinstance(r,int16),`type(r)` + assert_equal(r,e) + + r = func([2]) + assert isinstance(r,int16),`type(r)` + assert_equal(r,e) + + self.assertRaises(TypeError,lambda :func(2.2j)) + self.assertRaises(TypeError,lambda :func([2,1])) + self.assertRaises(TypeError,lambda :func({})) + + def check_foo_integer4(self, level=1): + i = int32(2) + e = int32(3) + func = m.fooint4 + assert isinstance(i,int32),`type(i)` + r = func(i) + assert isinstance(r,int32),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func(2) + assert isinstance(r,int32),`type(r)` + assert_equal(r,e) + + for intx in [int8,int16,int64]: + r = func(intx(2)) + assert isinstance(r,int32),`type(r)` + 
assert_equal(r,e) + + r = func(2.0) + assert isinstance(r,int32),`type(r)` + assert_equal(r,e) + + r = func(2.2) + assert isinstance(r,int32),`type(r)` + assert_equal(r,e) + + r = func([2]) + assert isinstance(r,int32),`type(r)` + assert_equal(r,e) + + self.assertRaises(TypeError,lambda :func(2.2j)) + self.assertRaises(TypeError,lambda :func([2,1])) + self.assertRaises(TypeError,lambda :func({})) + + def check_foo_integer8(self, level=1): + i = int64(2) + e = int64(3) + func = m.fooint8 + assert isinstance(i,int64),`type(i)` + r = func(i) + assert isinstance(r,int64),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func(2) + assert isinstance(r,int64),`type(r)` + assert_equal(r,e) + + r = func(2.0) + assert isinstance(r,int64),`type(r)` + assert_equal(r,e) + + r = func(2.2) + assert isinstance(r,int64),`type(r)` + assert_equal(r,e) + + for intx in [int8,int16,int32]: + r = func(intx(2)) + assert isinstance(r,int64),`type(r)` + assert_equal(r,e) + + r = func([2]) + assert isinstance(r,int64),`type(r)` + assert_equal(r,e) + + self.assertRaises(TypeError,lambda :func(2.2j)) + self.assertRaises(TypeError,lambda :func([2,1])) + self.assertRaises(TypeError,lambda :func({})) + + def check_foo_real4(self, level=1): + i = float32(2) + e = float32(3) + func = m.foofloat4 + assert isinstance(i,float32),`type(i)` + r = func(i) + assert isinstance(r,float32),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func(2) + assert isinstance(r,float32),`type(r)` + assert_equal(r,e) + + r = func(2.0) + assert isinstance(r,float32),`type(r)` + assert_equal(r,e) + + r = func(2.2) + assert isinstance(r,float32),`type(r)` + assert_equal(r,e+float32(0.2)) + + r = func(float64(2.0)) + assert isinstance(r,float32),`type(r)` + assert_equal(r,e) + + r = func([2]) + assert isinstance(r,float32),`type(r)` + assert_equal(r,e) + + self.assertRaises(TypeError,lambda :func(2.2j)) + self.assertRaises(TypeError,lambda :func([2,1])) + self.assertRaises(TypeError,lambda :func({})) + + def check_foo_real8(self, level=1): + i = float64(2) + e = float64(3) + func = m.foofloat8 + assert isinstance(i,float64),`type(i)` + r = func(i) + assert isinstance(r,float64),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func(2) + assert isinstance(r,float64),`type(r)` + assert_equal(r,e) + + r = func(2.0) + assert isinstance(r,float64),`type(r)` + assert_equal(r,e) + + r = func(2.2) + assert isinstance(r,float64),`type(r)` + assert_equal(r,e+float64(0.2)) + + r = func(float32(2.0)) + assert isinstance(r,float64),`type(r)` + assert_equal(r,e) + + r = func([2]) + assert isinstance(r,float64),`type(r)` + assert_equal(r,e) + + self.assertRaises(TypeError,lambda :func(2.2j)) + self.assertRaises(TypeError,lambda :func([2,1])) + self.assertRaises(TypeError,lambda :func({})) + + def check_foo_complex8(self, level=1): + i = complex64(2) + e = complex64(3) + func = m.foocomplex8 + assert isinstance(i,complex64),`type(i)` + r = func(i) + assert isinstance(r,complex64),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func(2) + assert isinstance(r,complex64),`type(r)` + assert_equal(r,e) + + r = func(2.0) + assert isinstance(r,complex64),`type(r)` + assert_equal(r,e) + + r = func(2.2) + assert isinstance(r,complex64),`type(r)` + assert_equal(r,e+complex64(0.2)) + + r = func(2+1j) + assert isinstance(r,complex64),`type(r)` + assert_equal(r,e+complex64(1j)) + + r = func(complex128(2.0)) + assert isinstance(r,complex64),`type(r)` + assert_equal(r,e) + + r = func([2]) + 
assert isinstance(r,complex64),`type(r)` + assert_equal(r,e) + + r = func([2,3]) + assert isinstance(r,complex64),`type(r)` + assert_equal(r,e+complex64(3j)) + + self.assertRaises(TypeError,lambda :func([2,1,3])) + self.assertRaises(TypeError,lambda :func({})) + + def check_foo_complex16(self, level=1): + i = complex128(2) + e = complex128(3) + func = m.foocomplex16 + assert isinstance(i,complex128),`type(i)` + r = func(i) + assert isinstance(r,complex128),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func(2) + assert isinstance(r,complex128),`type(r)` + assert_equal(r,e) + + r = func(2.0) + assert isinstance(r,complex128),`type(r)` + assert_equal(r,e) + + r = func(2.2) + assert isinstance(r,complex128),`type(r)` + assert_equal(r,e+complex128(0.2)) + + r = func(2+1j) + assert isinstance(r,complex128),`type(r)` + assert_equal(r,e+complex128(1j)) + + r = func([2]) + assert isinstance(r,complex128),`type(r)` + assert_equal(r,e) + + r = func([2,3]) + assert isinstance(r,complex128),`type(r)` + assert_equal(r,e+complex128(3j)) + + r = func(complex64(2.0)) + assert isinstance(r,complex128),`type(r)` + assert_equal(r,e) + + self.assertRaises(TypeError,lambda :func([2,1,3])) + self.assertRaises(TypeError,lambda :func({})) + + def check_foo_bool1(self, level=1): + i = bool8(True) + e = bool8(False) + func = m.foobool1 + assert isinstance(i,bool8),`type(i)` + r = func(i) + assert isinstance(r,bool8),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + for tv in [1,2,2.1,-1j,[0],True]: + r = func(tv) + assert isinstance(r,bool8),`type(r)` + assert_equal(r,e) + + for fv in [0,0.0,0j,False,(),{},[]]: + r = func(fv) + assert isinstance(r,bool8),`type(r)` + assert_equal(r,not e) + + def check_foo_bool2(self, level=1): + i = bool8(True) + e = bool8(False) + func = m.foobool2 + assert isinstance(i,bool8),`type(i)` + r = func(i) + assert isinstance(r,bool8),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + for tv in [1,2,2.1,-1j,[0],True]: + r = func(tv) + assert isinstance(r,bool8),`type(r)` + assert_equal(r,e) + + for fv in [0,0.0,0j,False,(),{},[]]: + r = func(fv) + assert isinstance(r,bool8),`type(r)` + assert_equal(r,not e) + + def check_foo_bool4(self, level=1): + i = bool8(True) + e = bool8(False) + func = m.foobool4 + assert isinstance(i,bool8),`type(i)` + r = func(i) + assert isinstance(r,bool8),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + for tv in [1,2,2.1,-1j,[0],True]: + r = func(tv) + assert isinstance(r,bool8),`type(r)` + assert_equal(r,e) + + for fv in [0,0.0,0j,False,(),{},[]]: + r = func(fv) + assert isinstance(r,bool8),`type(r)` + assert_equal(r,not e) + + def check_foo_bool8(self, level=1): + i = bool8(True) + e = bool8(False) + func = m.foobool8 + assert isinstance(i,bool8),`type(i)` + r = func(i) + assert isinstance(r,bool8),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + for tv in [1,2,2.1,-1j,[0],True]: + r = func(tv) + assert isinstance(r,bool8),`type(r)` + assert_equal(r,e) + + for fv in [0,0.0,0j,False,(),{},[]]: + r = func(fv) + assert isinstance(r,bool8),`type(r)` + assert_equal(r,not e) + + def check_foo_string1(self, level=1): + i = string0('a') + e = string0('1') + func = m.foostring1 + assert isinstance(i,string0),`type(i)` + r = func(i) + assert isinstance(r,string0),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func('ab') + assert isinstance(r,string0),`type(r)` + assert_equal(r,e) + + r = func('') + assert isinstance(r,string0),`type(r)` + 
assert_equal(r,e) + + def check_foo_string5(self, level=1): + i = string0('abcde') + e = string0('12cde') + func = m.foostring5 + assert isinstance(i,string0),`type(i)` + r = func(i) + assert isinstance(r,string0),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func('abc') + assert isinstance(r,string0),`type(r)` + assert_equal(r,'12c ') + + r = func('abcdefghi') + assert isinstance(r,string0),`type(r)` + assert_equal(r,'12cde') + + r = func([1]) + assert isinstance(r,string0),`type(r)` + assert_equal(r,'12] ') + + def _check_foo_string0(self, level=1): + i = string0('abcde') + e = string0('12cde') + func = m.foostringstar + r = func('abcde') + assert_equal(r,'1bcde') + r = func('') + assert_equal(r,'') + +if __name__ == "__main__": + NumpyTest().run() Copied: trunk/numpy/f2py/lib/tests/test_scalar_in_out.py (from rev 3764, trunk/numpy/f2py/lib/test_scalar_in_out.py) =================================================================== --- trunk/numpy/f2py/lib/test_scalar_in_out.py 2007-05-15 00:35:49 UTC (rev 3764) +++ trunk/numpy/f2py/lib/tests/test_scalar_in_out.py 2007-05-18 16:44:43 UTC (rev 3777) @@ -0,0 +1,529 @@ +#!/usr/bin/env python +""" +Tests for intent(in,out) arguments in Fortran subroutine's. + +----- +Permission to use, modify, and distribute this software is given under the +terms of the NumPy License. See http://scipy.org. + +NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. +Author: Pearu Peterson +Created: Oct 2006 +----- +""" + +import os +import sys +from numpy.testing import * + +set_package_path() +from lib.main import build_extension, compile +restore_path() + +fortran_code = ''' + subroutine fooint1(a) + integer*1 a +!f2py intent(in,out) a + a = a + 1 + end + subroutine fooint2(a) + integer*2 a +!f2py intent(in,out) a + a = a + 1 + end + subroutine fooint4(a) + integer*4 a +!f2py intent(in,out) a + a = a + 1 + end + subroutine fooint8(a) + integer*8 a +!f2py intent(in,out) a + a = a + 1 + end + subroutine foofloat4(a) + real*4 a +!f2py intent(in,out) a + a = a + 1.0e0 + end + subroutine foofloat8(a) + real*8 a +!f2py intent(in,out) a + a = a + 1.0d0 + end + subroutine foocomplex8(a) + complex*8 a +!f2py intent(in,out) a + a = a + 1.0e0 + end + subroutine foocomplex16(a) + complex*16 a +!f2py intent(in,out) a + a = a + 1.0d0 + end + subroutine foobool1(a) + logical*1 a +!f2py intent(in,out) a + a = .not. a + end + subroutine foobool2(a) + logical*2 a +!f2py intent(in,out) a + a = .not. a + end + subroutine foobool4(a) + logical*4 a +!f2py intent(in,out) a + a = .not. a + end + subroutine foobool8(a) + logical*8 a +!f2py intent(in,out) a + a = .not. 
a + end + subroutine foostring1(a) + character*1 a +!f2py intent(in,out) a + a = "1" + end + subroutine foostring5(a) + character*5 a +!f2py intent(in,out) a + a(1:2) = "12" + end + subroutine foostringstar(a) + character*(*) a +!f2py intent(in,out) a + if (len(a).gt.0) then + a(1:1) = "1" + endif + end +''' + +m, = compile(fortran_code, 'test_scalar_in_out_ext', source_ext = '.f') + +from numpy import * + +class test_m(NumpyTestCase): + + def check_foo_integer1(self, level=1): + i = int8(2) + e = int8(3) + func = m.fooint1 + assert isinstance(i,int8),`type(i)` + r = func(i) + assert isinstance(r,int8),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func(2) + assert isinstance(r,int8),`type(r)` + assert_equal(r,e) + + for intx in [int64,int16,int32]: + r = func(intx(2)) + assert isinstance(r,int8),`type(r)` + assert_equal(r,e) + + r = func(2.0) + assert isinstance(r,int8),`type(r)` + assert_equal(r,e) + + r = func(2.2) + assert isinstance(r,int8),`type(r)` + assert_equal(r,e) + + r = func([2]) + assert isinstance(r,int8),`type(r)` + assert_equal(r,e) + + self.assertRaises(TypeError,lambda :func(2.2j)) + self.assertRaises(TypeError,lambda :func([2,1])) + self.assertRaises(TypeError,lambda :func({})) + + def check_foo_integer2(self, level=1): + i = int16(2) + e = int16(3) + func = m.fooint2 + assert isinstance(i,int16),`type(i)` + r = func(i) + assert isinstance(r,int16),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func(2) + assert isinstance(r,int16),`type(r)` + assert_equal(r,e) + + for intx in [int8,int64,int32]: + r = func(intx(2)) + assert isinstance(r,int16),`type(r)` + assert_equal(r,e) + + r = func(2.0) + assert isinstance(r,int16),`type(r)` + assert_equal(r,e) + + r = func(2.2) + assert isinstance(r,int16),`type(r)` + assert_equal(r,e) + + r = func([2]) + assert isinstance(r,int16),`type(r)` + assert_equal(r,e) + + self.assertRaises(TypeError,lambda :func(2.2j)) + self.assertRaises(TypeError,lambda :func([2,1])) + self.assertRaises(TypeError,lambda :func({})) + + def check_foo_integer4(self, level=1): + i = int32(2) + e = int32(3) + func = m.fooint4 + assert isinstance(i,int32),`type(i)` + r = func(i) + assert isinstance(r,int32),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func(2) + assert isinstance(r,int32),`type(r)` + assert_equal(r,e) + + for intx in [int8,int16,int64]: + r = func(intx(2)) + assert isinstance(r,int32),`type(r)` + assert_equal(r,e) + + r = func(2.0) + assert isinstance(r,int32),`type(r)` + assert_equal(r,e) + + r = func(2.2) + assert isinstance(r,int32),`type(r)` + assert_equal(r,e) + + r = func([2]) + assert isinstance(r,int32),`type(r)` + assert_equal(r,e) + + self.assertRaises(TypeError,lambda :func(2.2j)) + self.assertRaises(TypeError,lambda :func([2,1])) + self.assertRaises(TypeError,lambda :func({})) + + def check_foo_integer8(self, level=1): + i = int64(2) + e = int64(3) + func = m.fooint8 + assert isinstance(i,int64),`type(i)` + r = func(i) + assert isinstance(r,int64),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func(2) + assert isinstance(r,int64),`type(r)` + assert_equal(r,e) + + r = func(2.0) + assert isinstance(r,int64),`type(r)` + assert_equal(r,e) + + r = func(2.2) + assert isinstance(r,int64),`type(r)` + assert_equal(r,e) + + for intx in [int8,int16,int32]: + r = func(intx(2)) + assert isinstance(r,int64),`type(r)` + assert_equal(r,e) + + r = func([2]) + assert isinstance(r,int64),`type(r)` + assert_equal(r,e) + + 
self.assertRaises(TypeError,lambda :func(2.2j)) + self.assertRaises(TypeError,lambda :func([2,1])) + self.assertRaises(TypeError,lambda :func({})) + + def check_foo_real4(self, level=1): + i = float32(2) + e = float32(3) + func = m.foofloat4 + assert isinstance(i,float32),`type(i)` + r = func(i) + assert isinstance(r,float32),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func(2) + assert isinstance(r,float32),`type(r)` + assert_equal(r,e) + + r = func(2.0) + assert isinstance(r,float32),`type(r)` + assert_equal(r,e) + + r = func(2.2) + assert isinstance(r,float32),`type(r)` + assert_equal(r,e+float32(0.2)) + + r = func(float64(2.0)) + assert isinstance(r,float32),`type(r)` + assert_equal(r,e) + + r = func([2]) + assert isinstance(r,float32),`type(r)` + assert_equal(r,e) + + self.assertRaises(TypeError,lambda :func(2.2j)) + self.assertRaises(TypeError,lambda :func([2,1])) + self.assertRaises(TypeError,lambda :func({})) + + def check_foo_real8(self, level=1): + i = float64(2) + e = float64(3) + func = m.foofloat8 + assert isinstance(i,float64),`type(i)` + r = func(i) + assert isinstance(r,float64),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func(2) + assert isinstance(r,float64),`type(r)` + assert_equal(r,e) + + r = func(2.0) + assert isinstance(r,float64),`type(r)` + assert_equal(r,e) + + r = func(2.2) + assert isinstance(r,float64),`type(r)` + assert_equal(r,e+float64(0.2)) + + r = func(float32(2.0)) + assert isinstance(r,float64),`type(r)` + assert_equal(r,e) + + r = func([2]) + assert isinstance(r,float64),`type(r)` + assert_equal(r,e) + + self.assertRaises(TypeError,lambda :func(2.2j)) + self.assertRaises(TypeError,lambda :func([2,1])) + self.assertRaises(TypeError,lambda :func({})) + + def check_foo_complex8(self, level=1): + i = complex64(2) + e = complex64(3) + func = m.foocomplex8 + assert isinstance(i,complex64),`type(i)` + r = func(i) + assert isinstance(r,complex64),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func(2) + assert isinstance(r,complex64),`type(r)` + assert_equal(r,e) + + r = func(2.0) + assert isinstance(r,complex64),`type(r)` + assert_equal(r,e) + + r = func(2.2) + assert isinstance(r,complex64),`type(r)` + assert_equal(r,e+complex64(0.2)) + + r = func(2+1j) + assert isinstance(r,complex64),`type(r)` + assert_equal(r,e+complex64(1j)) + + r = func(complex128(2.0)) + assert isinstance(r,complex64),`type(r)` + assert_equal(r,e) + + r = func([2]) + assert isinstance(r,complex64),`type(r)` + assert_equal(r,e) + + r = func([2,3]) + assert isinstance(r,complex64),`type(r)` + assert_equal(r,e+complex64(3j)) + + self.assertRaises(TypeError,lambda :func([2,1,3])) + self.assertRaises(TypeError,lambda :func({})) + + def check_foo_complex16(self, level=1): + i = complex128(2) + e = complex128(3) + func = m.foocomplex16 + assert isinstance(i,complex128),`type(i)` + r = func(i) + assert isinstance(r,complex128),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func(2) + assert isinstance(r,complex128),`type(r)` + assert_equal(r,e) + + r = func(2.0) + assert isinstance(r,complex128),`type(r)` + assert_equal(r,e) + + r = func(2.2) + assert isinstance(r,complex128),`type(r)` + assert_equal(r,e+complex128(0.2)) + + r = func(2+1j) + assert isinstance(r,complex128),`type(r)` + assert_equal(r,e+complex128(1j)) + + r = func([2]) + assert isinstance(r,complex128),`type(r)` + assert_equal(r,e) + + r = func([2,3]) + assert isinstance(r,complex128),`type(r)` + 
assert_equal(r,e+complex128(3j)) + + r = func(complex64(2.0)) + assert isinstance(r,complex128),`type(r)` + assert_equal(r,e) + + self.assertRaises(TypeError,lambda :func([2,1,3])) + self.assertRaises(TypeError,lambda :func({})) + + def check_foo_bool1(self, level=1): + i = bool8(True) + e = bool8(False) + func = m.foobool1 + assert isinstance(i,bool8),`type(i)` + r = func(i) + assert isinstance(r,bool8),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + for tv in [1,2,2.1,-1j,[0],True]: + r = func(tv) + assert isinstance(r,bool8),`type(r)` + assert_equal(r,e) + + for fv in [0,0.0,0j,False,(),{},[]]: + r = func(fv) + assert isinstance(r,bool8),`type(r)` + assert_equal(r,not e) + + def check_foo_bool2(self, level=1): + i = bool8(True) + e = bool8(False) + func = m.foobool2 + assert isinstance(i,bool8),`type(i)` + r = func(i) + assert isinstance(r,bool8),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + for tv in [1,2,2.1,-1j,[0],True]: + r = func(tv) + assert isinstance(r,bool8),`type(r)` + assert_equal(r,e) + + for fv in [0,0.0,0j,False,(),{},[]]: + r = func(fv) + assert isinstance(r,bool8),`type(r)` + assert_equal(r,not e) + + def check_foo_bool4(self, level=1): + i = bool8(True) + e = bool8(False) + func = m.foobool4 + assert isinstance(i,bool8),`type(i)` + r = func(i) + assert isinstance(r,bool8),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + for tv in [1,2,2.1,-1j,[0],True]: + r = func(tv) + assert isinstance(r,bool8),`type(r)` + assert_equal(r,e) + + for fv in [0,0.0,0j,False,(),{},[]]: + r = func(fv) + assert isinstance(r,bool8),`type(r)` + assert_equal(r,not e) + + def check_foo_bool8(self, level=1): + i = bool8(True) + e = bool8(False) + func = m.foobool8 + assert isinstance(i,bool8),`type(i)` + r = func(i) + assert isinstance(r,bool8),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + for tv in [1,2,2.1,-1j,[0],True]: + r = func(tv) + assert isinstance(r,bool8),`type(r)` + assert_equal(r,e) + + for fv in [0,0.0,0j,False,(),{},[]]: + r = func(fv) + assert isinstance(r,bool8),`type(r)` + assert_equal(r,not e) + + def check_foo_string1(self, level=1): + i = string0('a') + e = string0('1') + func = m.foostring1 + assert isinstance(i,string0),`type(i)` + r = func(i) + assert isinstance(r,string0),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func('ab') + assert isinstance(r,string0),`type(r)` + assert_equal(r,e) + + r = func('') + assert isinstance(r,string0),`type(r)` + assert_equal(r,e) + + def check_foo_string5(self, level=1): + i = string0('abcde') + e = string0('12cde') + func = m.foostring5 + assert isinstance(i,string0),`type(i)` + r = func(i) + assert isinstance(r,string0),`type(r)` + assert i is not r,`id(i),id(r)` + assert_equal(r,e) + + r = func('abc') + assert isinstance(r,string0),`type(r)` + assert_equal(r,'12c ') + + r = func('abcdefghi') + assert isinstance(r,string0),`type(r)` + assert_equal(r,'12cde') + + r = func([1]) + assert isinstance(r,string0),`type(r)` + assert_equal(r,'12] ') + + def check_foo_string0(self, level=1): + i = string0('abcde') + e = string0('12cde') + func = m.foostringstar + r = func('abcde') + assert_equal(r,'1bcde') + r = func('') + assert_equal(r,'') + +if __name__ == "__main__": + NumpyTest().run() From numpy-svn at scipy.org Fri May 18 12:58:33 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 18 May 2007 11:58:33 -0500 (CDT) Subject: [Numpy-svn] r3778 - trunk/numpy/f2py/lib Message-ID: 
<20070518165833.A76FE39C0AF@new.scipy.org> Author: pearu Date: 2007-05-18 11:58:30 -0500 (Fri, 18 May 2007) New Revision: 3778 Modified: trunk/numpy/f2py/lib/main.py Log: Minor for Python 2.3 support. Modified: trunk/numpy/f2py/lib/main.py =================================================================== --- trunk/numpy/f2py/lib/main.py 2007-05-18 16:44:43 UTC (rev 3777) +++ trunk/numpy/f2py/lib/main.py 2007-05-18 16:58:30 UTC (rev 3778) @@ -495,9 +495,9 @@ if modulenames is None: modulenames = jobname, - if os.path.isdir(tmpdir): + if os.path.isdir(tmpdir): + sys.path.insert(0, tmpdir) try: - sys.path.insert(0, tmpdir) modules = [] for modulename in modulenames: exec('import %s as m' % (modulename)) @@ -506,8 +506,7 @@ return modules except ImportError: pass - finally: - sys.path.pop(0) + sys.path.pop(0) else: os.mkdir(tmpdir) From numpy-svn at scipy.org Fri May 18 13:33:21 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 18 May 2007 12:33:21 -0500 (CDT) Subject: [Numpy-svn] r3779 - in trunk/numpy/distutils: . command Message-ID: <20070518173321.DD22E39C0EF@new.scipy.org> Author: pearu Date: 2007-05-18 12:33:15 -0500 (Fri, 18 May 2007) New Revision: 3779 Modified: trunk/numpy/distutils/command/build_ext.py trunk/numpy/distutils/misc_util.py Log: Fixed warnings on language changes. Modified: trunk/numpy/distutils/command/build_ext.py =================================================================== --- trunk/numpy/distutils/command/build_ext.py 2007-05-18 16:58:30 UTC (rev 3778) +++ trunk/numpy/distutils/command/build_ext.py 2007-05-18 17:33:15 UTC (rev 3779) @@ -138,7 +138,7 @@ ext_language = 'f77' else: ext_language = 'c' # default - if l and l!=ext_language: + if l and l!=ext_language and ext.language: log.warn('resetting extension %r language from %r to %r.' 
% (ext.name,l,ext_language)) ext.language = ext_language # global language Modified: trunk/numpy/distutils/misc_util.py =================================================================== --- trunk/numpy/distutils/misc_util.py 2007-05-18 16:58:30 UTC (rev 3778) +++ trunk/numpy/distutils/misc_util.py 2007-05-18 17:33:15 UTC (rev 3779) @@ -309,8 +309,9 @@ return [seq] def get_language(sources): + # not used in numpy/scipy packages, use build_ext.detect_language instead """ Determine language value (c,f77,f90) from sources """ - language = 'c' + language = None for source in sources: if isinstance(source, str): if f90_ext_match(source): @@ -1030,11 +1031,7 @@ ext_args = copy.copy(kw) ext_args['name'] = dot_join(self.name,name) ext_args['sources'] = sources - - language = ext_args.get('language',None) - if language is None: - ext_args['language'] = get_language(sources) - + if ext_args.has_key('extra_info'): extra_info = ext_args['extra_info'] del ext_args['extra_info'] @@ -1099,10 +1096,6 @@ name = name #+ '__OF__' + self.name build_info['sources'] = sources - language = build_info.get('language',None) - if language is None: - build_info['language'] = get_language(sources) - self._fix_paths_dict(build_info) self.libraries.append((name,build_info)) From numpy-svn at scipy.org Fri May 18 16:18:02 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 18 May 2007 15:18:02 -0500 (CDT) Subject: [Numpy-svn] r3780 - trunk/numpy/distutils/command Message-ID: <20070518201802.6914FC7C06D@new.scipy.org> Author: pearu Date: 2007-05-18 15:17:48 -0500 (Fri, 18 May 2007) New Revision: 3780 Modified: trunk/numpy/distutils/command/build.py trunk/numpy/distutils/command/build_clib.py trunk/numpy/distutils/command/build_ext.py trunk/numpy/distutils/command/config_compiler.py Log: unify config_fc, build_clib, build_ext commands --fcompiler options so that --fcompiler can be specified only once in a command line Modified: trunk/numpy/distutils/command/build.py =================================================================== --- trunk/numpy/distutils/command/build.py 2007-05-18 17:33:15 UTC (rev 3779) +++ trunk/numpy/distutils/command/build.py 2007-05-18 20:17:48 UTC (rev 3780) @@ -5,7 +5,7 @@ class build(old_build): - sub_commands = [('config_fc', lambda *args: 1), + sub_commands = [('config_fc', lambda *args: True), ('build_src', old_build.has_ext_modules), ] + old_build.sub_commands @@ -16,3 +16,6 @@ if build_scripts is None: self.build_scripts = os.path.join(self.build_base, 'scripts' + plat_specifier) + return + +#EOF Modified: trunk/numpy/distutils/command/build_clib.py =================================================================== --- trunk/numpy/distutils/command/build_clib.py 2007-05-18 17:33:15 UTC (rev 3779) +++ trunk/numpy/distutils/command/build_clib.py 2007-05-18 20:17:48 UTC (rev 3780) @@ -29,12 +29,8 @@ def initialize_options(self): old_build_clib.initialize_options(self) self.fcompiler = None + return - def finalize_options(self): - old_build_clib.finalize_options(self) - self.set_undefined_options('build_ext', - ('fcompiler', 'fcompiler')) - def have_f_sources(self): for (lib_name, build_info) in self.libraries: if has_f_sources(build_info.get('sources',[])): Modified: trunk/numpy/distutils/command/build_ext.py =================================================================== --- trunk/numpy/distutils/command/build_ext.py 2007-05-18 17:33:15 UTC (rev 3779) +++ trunk/numpy/distutils/command/build_ext.py 2007-05-18 20:17:48 UTC (rev 3780) @@ -38,9 +38,7 @@ incl_dirs = 
self.include_dirs old_build_ext.finalize_options(self) if incl_dirs is not None: - self.include_dirs.extend(self.distribution.include_dirs or []) - self.set_undefined_options('config_fc', - ('fcompiler', 'fcompiler')) + self.include_dirs.extend(self.distribution.include_dirs or []) return def run(self): @@ -224,7 +222,6 @@ modpath = string.split(fullname, '.') package = string.join(modpath[0:-1], '.') base = modpath[-1] - build_py = self.get_finalized_command('build_py') package_dir = build_py.get_package_dir(package) ext_filename = os.path.join(package_dir, Modified: trunk/numpy/distutils/command/config_compiler.py =================================================================== --- trunk/numpy/distutils/command/config_compiler.py 2007-05-18 17:33:15 UTC (rev 3779) +++ trunk/numpy/distutils/command/config_compiler.py 2007-05-18 20:17:48 UTC (rev 3780) @@ -1,5 +1,6 @@ import sys from distutils.core import Command +from numpy.distutils import log #XXX: Implement confic_cc for enhancing C/C++ compiler options. #XXX: Linker flags @@ -56,7 +57,22 @@ return def finalize_options(self): - # Do nothing. + log.info('unifing config_fc, build_ext, build_clib commands fcompiler options') + build_clib = self.get_finalized_command('build_clib') + build_ext = self.get_finalized_command('build_ext') + for a in ['fcompiler']: + l = [] + for c in [self, build_clib, build_ext]: + v = getattr(c,a) + if v is not None and v not in l: l.append(v) + if not l: v1 = None + else: v1 = l[0] + if len(l)>1: + log.warn(' commands have different --%s options: %s'\ + ', using first in list as default' % (a, l)) + if v1: + for c in [self, build_clib, build_ext]: + if getattr(c,a) is None: setattr(c, a, v1) return def run(self): From numpy-svn at scipy.org Fri May 18 16:41:19 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 18 May 2007 15:41:19 -0500 (CDT) Subject: [Numpy-svn] r3781 - in trunk/numpy/distutils: . command Message-ID: <20070518204119.B53FD39C0A8@new.scipy.org> Author: pearu Date: 2007-05-18 15:41:10 -0500 (Fri, 18 May 2007) New Revision: 3781 Modified: trunk/numpy/distutils/command/build.py trunk/numpy/distutils/command/config.py trunk/numpy/distutils/command/config_compiler.py trunk/numpy/distutils/core.py Log: added config to --fcompiler option unification method. introduced config_cc for unifying --compiler options. 
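[The unification introduced in r3780 and extended here follows a single pattern, repeated for --fcompiler (config_fc) and --compiler (config_cc): read one attribute from every relevant finalized command, warn if the commands disagree, and push the first value seen back onto any command that left it unset. The standalone sketch below only illustrates that pattern; the names are invented and are not part of numpy.distutils.]

def unify_option(commands, attr):
    # collect the distinct non-None values of `attr` across the command objects
    seen = []
    for cmd in commands:
        value = getattr(cmd, attr)
        if value is not None and value not in seen:
            seen.append(value)
    if not seen:
        return None
    if len(seen) > 1:
        print('commands have different --%s options: %s, '
              'using first in list as default' % (attr, seen))
    # propagate the chosen default to every command that left the option unset
    default = seen[0]
    for cmd in commands:
        if getattr(cmd, attr) is None:
            setattr(cmd, attr, default)
    return default

class _Cmd:
    def __init__(self, fcompiler=None):
        self.fcompiler = fcompiler

cmds = [_Cmd(), _Cmd('gnu'), _Cmd()]
unify_option(cmds, 'fcompiler')
assert [c.fcompiler for c in cmds] == ['gnu', 'gnu', 'gnu']
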
Modified: trunk/numpy/distutils/command/build.py =================================================================== --- trunk/numpy/distutils/command/build.py 2007-05-18 20:17:48 UTC (rev 3780) +++ trunk/numpy/distutils/command/build.py 2007-05-18 20:41:10 UTC (rev 3781) @@ -5,7 +5,8 @@ class build(old_build): - sub_commands = [('config_fc', lambda *args: True), + sub_commands = [('config_cc', lambda *args: True), + ('config_fc', lambda *args: True), ('build_src', old_build.has_ext_modules), ] + old_build.sub_commands Modified: trunk/numpy/distutils/command/config.py =================================================================== --- trunk/numpy/distutils/command/config.py 2007-05-18 20:17:48 UTC (rev 3780) +++ trunk/numpy/distutils/command/config.py 2007-05-18 20:41:10 UTC (rev 3781) @@ -14,8 +14,7 @@ class config(old_config): old_config.user_options += [ - ('fcompiler=', None, - "specify the Fortran compiler type"), + ('fcompiler=', None, "specify the Fortran compiler type"), ] def initialize_options(self): @@ -23,13 +22,6 @@ old_config.initialize_options(self) return - def finalize_options(self): - old_config.finalize_options(self) - f = self.distribution.get_command_obj('config_fc') - self.set_undefined_options('config_fc', - ('fcompiler', 'fcompiler')) - return - def _check_compiler (self): old_config._check_compiler(self) from numpy.distutils.fcompiler import FCompiler, new_fcompiler Modified: trunk/numpy/distutils/command/config_compiler.py =================================================================== --- trunk/numpy/distutils/command/config_compiler.py 2007-05-18 20:17:48 UTC (rev 3780) +++ trunk/numpy/distutils/command/config_compiler.py 2007-05-18 20:41:10 UTC (rev 3781) @@ -57,12 +57,14 @@ return def finalize_options(self): - log.info('unifing config_fc, build_ext, build_clib commands fcompiler options') + log.info('unifing config_fc, config, build_clib, build_ext commands --fcompiler options') build_clib = self.get_finalized_command('build_clib') build_ext = self.get_finalized_command('build_ext') + config = self.get_finalized_command('config') + cmd_list = [self, config, build_clib, build_ext] for a in ['fcompiler']: l = [] - for c in [self, build_clib, build_ext]: + for c in cmd_list: v = getattr(c,a) if v is not None and v not in l: l.append(v) if not l: v1 = None @@ -71,10 +73,48 @@ log.warn(' commands have different --%s options: %s'\ ', using first in list as default' % (a, l)) if v1: - for c in [self, build_clib, build_ext]: + for c in cmd_list: if getattr(c,a) is None: setattr(c, a, v1) return def run(self): # Do nothing. return + +class config_cc(Command): + """ Distutils command to hold user specified options + to C/C++ compilers. 
+ """ + + user_options = [ + ('compiler=',None,"specify C/C++ compiler type"), + ] + + def initialize_options(self): + self.compiler = None + return + + def finalize_options(self): + log.info('unifing config_cc, config, build_clib, build_ext commands --compiler options') + build_clib = self.get_finalized_command('build_clib') + build_ext = self.get_finalized_command('build_ext') + config = self.get_finalized_command('config') + cmd_list = [self, config, build_clib, build_ext] + for a in ['compiler']: + l = [] + for c in cmd_list: + v = getattr(c,a) + if v is not None and v not in l: l.append(v) + if not l: v1 = None + else: v1 = l[0] + if len(l)>1: + log.warn(' commands have different --%s options: %s'\ + ', using first in list as default' % (a, l)) + if v1: + for c in cmd_list: + if getattr(c,a) is None: setattr(c, a, v1) + return + + def run(self): + # Do nothing. + return Modified: trunk/numpy/distutils/core.py =================================================================== --- trunk/numpy/distutils/core.py 2007-05-18 20:17:48 UTC (rev 3780) +++ trunk/numpy/distutils/core.py 2007-05-18 20:41:10 UTC (rev 3781) @@ -35,6 +35,7 @@ numpy_cmdclass = {'build': build.build, 'build_src': build_src.build_src, 'build_scripts': build_scripts.build_scripts, + 'config_cc': config_compiler.config_cc, 'config_fc': config_compiler.config_fc, 'config': config.config, 'build_ext': build_ext.build_ext, From numpy-svn at scipy.org Fri May 18 16:49:15 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 18 May 2007 15:49:15 -0500 (CDT) Subject: [Numpy-svn] r3782 - trunk/numpy/distutils/command Message-ID: <20070518204915.B481C39C0A8@new.scipy.org> Author: pearu Date: 2007-05-18 15:49:09 -0500 (Fri, 18 May 2007) New Revision: 3782 Modified: trunk/numpy/distutils/command/build_ext.py Log: Added --help-fcompiler option to build_ext command. 
Modified: trunk/numpy/distutils/command/build_ext.py =================================================================== --- trunk/numpy/distutils/command/build_ext.py 2007-05-18 20:41:10 UTC (rev 3781) +++ trunk/numpy/distutils/command/build_ext.py 2007-05-18 20:49:09 UTC (rev 3782) @@ -18,8 +18,8 @@ from numpy.distutils.misc_util import filter_sources, has_f_sources, \ has_cxx_sources, get_ext_source_files, all_strings, \ get_numpy_include_dirs, is_sequence +from numpy.distutils.command.config_compiler import show_fortran_compilers - class build_ext (old_build_ext): description = "build C/C++/F extensions (compile/link to build directory)" @@ -29,6 +29,11 @@ "specify the Fortran compiler type"), ] + help_options = old_build_ext.help_options + [ + ('help-fcompiler',None, "list available Fortran compilers", + show_fortran_compilers), + ] + def initialize_options(self): old_build_ext.initialize_options(self) self.fcompiler = None From numpy-svn at scipy.org Fri May 18 17:00:22 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 18 May 2007 16:00:22 -0500 (CDT) Subject: [Numpy-svn] r3783 - trunk/numpy/distutils/command Message-ID: <20070518210022.61C0F39C0A8@new.scipy.org> Author: pearu Date: 2007-05-18 16:00:17 -0500 (Fri, 18 May 2007) New Revision: 3783 Modified: trunk/numpy/distutils/command/config_compiler.py Log: show less messages in --help-fcompiler Modified: trunk/numpy/distutils/command/config_compiler.py =================================================================== --- trunk/numpy/distutils/command/config_compiler.py 2007-05-18 20:49:09 UTC (rev 3782) +++ trunk/numpy/distutils/command/config_compiler.py 2007-05-18 21:00:17 UTC (rev 3783) @@ -9,7 +9,7 @@ # Using cache to prevent infinite recursion if _cache: return _cache.append(1) - + log.set_verbosity(-2) from numpy.distutils.fcompiler import show_fcompilers import distutils.core dist = distutils.core._setup_distribution From numpy-svn at scipy.org Fri May 18 17:25:33 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 18 May 2007 16:25:33 -0500 (CDT) Subject: [Numpy-svn] r3784 - in trunk/numpy/distutils: command fcompiler Message-ID: <20070518212533.545BF39C0A8@new.scipy.org> Author: pearu Date: 2007-05-18 16:25:23 -0500 (Fri, 18 May 2007) New Revision: 3784 Modified: trunk/numpy/distutils/command/build.py trunk/numpy/distutils/command/config_compiler.py trunk/numpy/distutils/fcompiler/__init__.py Log: Added --fcompiler,--help-fcompiler options to build command parallel to --compiler,--help-compiler options. 
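[The mechanism behind --help-fcompiler, like distutils' own --help-compiler, is the help_options hook: each entry is a (long option, short option, help text, callback) tuple, and distutils invokes the callback when that option appears on the command line. The toy sketch below extends a command the same way; the 'widget' option is purely illustrative.]

from distutils.command.build import build as old_build

def show_widgets():
    # stand-in for numpy.distutils.command.config_compiler.show_fortran_compilers
    print('no widgets configured')

class build(old_build):
    user_options = old_build.user_options + [
        ('widget=', None, "specify the widget type"),
    ]
    help_options = old_build.help_options + [
        ('help-widget', None, "list available widgets", show_widgets),
    ]

    def initialize_options(self):
        old_build.initialize_options(self)
        self.widget = None
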
Modified: trunk/numpy/distutils/command/build.py =================================================================== --- trunk/numpy/distutils/command/build.py 2007-05-18 21:00:17 UTC (rev 3783) +++ trunk/numpy/distutils/command/build.py 2007-05-18 21:25:23 UTC (rev 3784) @@ -2,6 +2,7 @@ import sys from distutils.command.build import build as old_build from distutils.util import get_platform +from numpy.distutils.command.config_compiler import show_fortran_compilers class build(old_build): @@ -10,6 +11,21 @@ ('build_src', old_build.has_ext_modules), ] + old_build.sub_commands + user_options = old_build.user_options + [ + ('fcompiler=', None, + "specify the Fortran compiler type"), + ] + + help_options = old_build.help_options + [ + ('help-fcompiler',None, "list available Fortran compilers", + show_fortran_compilers), + ] + + def initialize_options(self): + old_build.initialize_options(self) + self.fcompiler = None + return + def finalize_options(self): build_scripts = self.build_scripts old_build.finalize_options(self) Modified: trunk/numpy/distutils/command/config_compiler.py =================================================================== --- trunk/numpy/distutils/command/config_compiler.py 2007-05-18 21:00:17 UTC (rev 3783) +++ trunk/numpy/distutils/command/config_compiler.py 2007-05-18 21:25:23 UTC (rev 3784) @@ -9,7 +9,6 @@ # Using cache to prevent infinite recursion if _cache: return _cache.append(1) - log.set_verbosity(-2) from numpy.distutils.fcompiler import show_fcompilers import distutils.core dist = distutils.core._setup_distribution @@ -57,11 +56,12 @@ return def finalize_options(self): - log.info('unifing config_fc, config, build_clib, build_ext commands --fcompiler options') + log.info('unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options') build_clib = self.get_finalized_command('build_clib') build_ext = self.get_finalized_command('build_ext') config = self.get_finalized_command('config') - cmd_list = [self, config, build_clib, build_ext] + build = self.get_finalized_command('build') + cmd_list = [self, config, build_clib, build_ext, build] for a in ['fcompiler']: l = [] for c in cmd_list: @@ -95,11 +95,12 @@ return def finalize_options(self): - log.info('unifing config_cc, config, build_clib, build_ext commands --compiler options') + log.info('unifing config_cc, config, build_clib, build_ext, build commands --compiler options') build_clib = self.get_finalized_command('build_clib') build_ext = self.get_finalized_command('build_ext') config = self.get_finalized_command('config') - cmd_list = [self, config, build_clib, build_ext] + build = self.get_finalized_command('build') + cmd_list = [self, config, build_clib, build_ext, build] for a in ['compiler']: l = [] for c in cmd_list: Modified: trunk/numpy/distutils/fcompiler/__init__.py =================================================================== --- trunk/numpy/distutils/fcompiler/__init__.py 2007-05-18 21:00:17 UTC (rev 3783) +++ trunk/numpy/distutils/fcompiler/__init__.py 2007-05-18 21:25:23 UTC (rev 3784) @@ -710,12 +710,12 @@ dist.cmdclass['config_fc'] = config_fc dist.parse_config_files() dist.parse_command_line() - compilers = [] compilers_na = [] compilers_ni = [] for compiler in fcompiler_class.keys(): v = 'N/A' + log.set_verbosity(-2) try: c = new_fcompiler(compiler=compiler) c.customize(dist) @@ -724,6 +724,7 @@ pass except Exception, msg: log.warn(msg) + if v is None: compilers_na.append(("fcompiler="+compiler, None, fcompiler_class[compiler][2])) From numpy-svn at scipy.org 
Fri May 18 17:33:13 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 18 May 2007 16:33:13 -0500 (CDT) Subject: [Numpy-svn] r3785 - trunk/numpy/distutils/command Message-ID: <20070518213313.1923339C0A8@new.scipy.org> Author: pearu Date: 2007-05-18 16:33:07 -0500 (Fri, 18 May 2007) New Revision: 3785 Modified: trunk/numpy/distutils/command/config_compiler.py Log: Add descriptions to config_fc and config_cc commands. Modified: trunk/numpy/distutils/command/config_compiler.py =================================================================== --- trunk/numpy/distutils/command/config_compiler.py 2007-05-18 21:25:23 UTC (rev 3784) +++ trunk/numpy/distutils/command/config_compiler.py 2007-05-18 21:33:07 UTC (rev 3785) @@ -2,7 +2,6 @@ from distutils.core import Command from numpy.distutils import log -#XXX: Implement confic_cc for enhancing C/C++ compiler options. #XXX: Linker flags def show_fortran_compilers(_cache=[]): @@ -22,6 +21,8 @@ config_fc command is used by the FCompiler.customize() method. """ + description = "specify Fortran 77/Fortran 90 compiler information" + user_options = [ ('fcompiler=',None,"specify Fortran compiler type"), ('f77exec=', None, "specify F77 compiler command"), @@ -86,6 +87,8 @@ to C/C++ compilers. """ + description = "specify C/C++ compiler information" + user_options = [ ('compiler=',None,"specify C/C++ compiler type"), ] From numpy-svn at scipy.org Sat May 19 05:54:14 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 19 May 2007 04:54:14 -0500 (CDT) Subject: [Numpy-svn] r3786 - trunk/numpy/distutils/command Message-ID: <20070519095414.D2EE639C0BA@new.scipy.org> Author: pearu Date: 2007-05-19 04:54:00 -0500 (Sat, 19 May 2007) New Revision: 3786 Modified: trunk/numpy/distutils/command/build_ext.py Log: Fix for win32 platform. Modified: trunk/numpy/distutils/command/build_ext.py =================================================================== --- trunk/numpy/distutils/command/build_ext.py 2007-05-18 21:33:07 UTC (rev 3785) +++ trunk/numpy/distutils/command/build_ext.py 2007-05-19 09:54:00 UTC (rev 3786) @@ -390,6 +390,7 @@ return def _libs_with_msvc_and_fortran(self, fcompiler, c_libraries, c_library_dirs): + if fcompiler is None: return # Always use system linker when using MSVC compiler. f_lib_dirs = [] for dir in fcompiler.library_dirs: From numpy-svn at scipy.org Sat May 19 06:23:31 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 19 May 2007 05:23:31 -0500 (CDT) Subject: [Numpy-svn] r3787 - trunk/numpy/distutils/command Message-ID: <20070519102331.6EE0439C089@new.scipy.org> Author: pearu Date: 2007-05-19 05:23:16 -0500 (Sat, 19 May 2007) New Revision: 3787 Modified: trunk/numpy/distutils/command/config_compiler.py Log: Fix fcompiler/compiler unification warning. 
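[The fix below addresses a subtle interaction: by the time config_fc/config_cc unify the options, some commands may already hold a compiler instance rather than its type string, and comparing the two produced a spurious "commands have different options" warning. The self-contained illustration that follows shows the normalization; the fake compiler class exists only for this example.]

class _FakeCompiler:
    # stands in for a distutils compiler instance such as MSVCCompiler
    compiler_type = 'msvc'

def normalize(value):
    # reduce compiler instances to their type string; leave strings and None alone
    if value is not None and not isinstance(value, str):
        value = value.compiler_type
    return value

seen = []
for value in ['msvc', _FakeCompiler(), None]:
    value = normalize(value)
    if value is not None and value not in seen:
        seen.append(value)
assert seen == ['msvc']   # without normalize() the instance would count as a second value
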
Modified: trunk/numpy/distutils/command/config_compiler.py =================================================================== --- trunk/numpy/distutils/command/config_compiler.py 2007-05-19 09:54:00 UTC (rev 3786) +++ trunk/numpy/distutils/command/config_compiler.py 2007-05-19 10:23:16 UTC (rev 3787) @@ -67,7 +67,9 @@ l = [] for c in cmd_list: v = getattr(c,a) - if v is not None and v not in l: l.append(v) + if v is not None: + if not isinstance(v, str): v = v.compiler_type + if v not in l: l.append(v) if not l: v1 = None else: v1 = l[0] if len(l)>1: @@ -108,7 +110,9 @@ l = [] for c in cmd_list: v = getattr(c,a) - if v is not None and v not in l: l.append(v) + if v is not None: + if not isinstance(v, str): v = v.compiler_type + if v not in l: l.append(v) if not l: v1 = None else: v1 = l[0] if len(l)>1: From numpy-svn at scipy.org Sat May 19 11:20:55 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 19 May 2007 10:20:55 -0500 (CDT) Subject: [Numpy-svn] r3788 - in trunk/numpy/distutils: . command Message-ID: <20070519152055.85E6C39C13C@new.scipy.org> Author: pearu Date: 2007-05-19 10:20:48 -0500 (Sat, 19 May 2007) New Revision: 3788 Modified: trunk/numpy/distutils/command/config.py trunk/numpy/distutils/system_info.py Log: Fix atlas version detection when using MSVC compiler Modified: trunk/numpy/distutils/command/config.py =================================================================== --- trunk/numpy/distutils/command/config.py 2007-05-19 10:23:16 UTC (rev 3787) +++ trunk/numpy/distutils/command/config.py 2007-05-19 15:20:48 UTC (rev 3788) @@ -7,6 +7,7 @@ from distutils.command.config import config as old_config from distutils.command.config import LANG_EXT from distutils import log +from distutils.file_util import copy_file from numpy.distutils.exec_command import exec_command LANG_EXT['f77'] = '.f' @@ -54,9 +55,46 @@ def _link (self, body, headers, include_dirs, libraries, library_dirs, lang): + if self.compiler.compiler_type=='msvc': + libraries = libraries[:] + library_dirs = library_dirs[:] + if lang in ['f77','f90']: + lang = 'c' # always use system linker when using MSVC compiler + if self.fcompiler: + f_lib_dirs = [] + for d in self.fcompiler.library_dirs or []: + # correct path when compiling in Cygwin but with normal Win + # Python + if dir.startswith('/usr/lib'): + s,o = exec_command(['cygpath', '-w', d], use_tee=False) + if not s: d = o + f_lib_dirs.append(d) + library_dirs.extend(f_lib_dirs) + for libname in self.fcompiler.libraries or []: + if libname not in libraries: + libraries.append(libname) + for libname in libraries or []: + if libname.startswith('msvcr'): continue + fileexists = False + for libdir in library_dirs or []: + libfile = os.path.join(libdir,'%s.lib' % (libname)) + if os.path.isfile(libfile): + fileexists = True + break + if fileexists: + continue + # make g77-compiled static libs available to MSVC + for libdir in library_dirs or []: + libfile = os.path.join(libdir,'lib%s.a' % (libname)) + if os.path.isfile(libfile): + # copy libname.a file to name.lib so that MSVC linker + # can find it + copy_file(libfile, os.path.join(libdir,'%s.lib' % (libname))) + break return self._wrap_method(old_config._link,lang, (body, headers, include_dirs, libraries, library_dirs, lang)) + def check_func(self, func, headers=None, include_dirs=None, Modified: trunk/numpy/distutils/system_info.py =================================================================== --- trunk/numpy/distutils/system_info.py 2007-05-19 10:23:16 UTC (rev 3787) +++ 
trunk/numpy/distutils/system_info.py 2007-05-19 15:20:48 UTC (rev 3788) @@ -1107,7 +1107,7 @@ self.set_info(**info) atlas_version_c_text = r''' -/* This file is generated from numpy_distutils/system_info.py */ +/* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); From numpy-svn at scipy.org Sat May 19 11:21:45 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 19 May 2007 10:21:45 -0500 (CDT) Subject: [Numpy-svn] r3789 - trunk/numpy/distutils/command Message-ID: <20070519152145.731E839C13C@new.scipy.org> Author: pearu Date: 2007-05-19 10:21:41 -0500 (Sat, 19 May 2007) New Revision: 3789 Modified: trunk/numpy/distutils/command/config.py Log: Fix typo. Modified: trunk/numpy/distutils/command/config.py =================================================================== --- trunk/numpy/distutils/command/config.py 2007-05-19 15:20:48 UTC (rev 3788) +++ trunk/numpy/distutils/command/config.py 2007-05-19 15:21:41 UTC (rev 3789) @@ -65,7 +65,7 @@ for d in self.fcompiler.library_dirs or []: # correct path when compiling in Cygwin but with normal Win # Python - if dir.startswith('/usr/lib'): + if d.startswith('/usr/lib'): s,o = exec_command(['cygpath', '-w', d], use_tee=False) if not s: d = o f_lib_dirs.append(d) From numpy-svn at scipy.org Sat May 19 11:24:27 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 19 May 2007 10:24:27 -0500 (CDT) Subject: [Numpy-svn] r3790 - trunk/numpy/distutils/command Message-ID: <20070519152427.9D5A439C13C@new.scipy.org> Author: pearu Date: 2007-05-19 10:24:20 -0500 (Sat, 19 May 2007) New Revision: 3790 Modified: trunk/numpy/distutils/command/config.py Log: More typo fixes. Modified: trunk/numpy/distutils/command/config.py =================================================================== --- trunk/numpy/distutils/command/config.py 2007-05-19 15:21:41 UTC (rev 3789) +++ trunk/numpy/distutils/command/config.py 2007-05-19 15:24:20 UTC (rev 3790) @@ -61,15 +61,13 @@ if lang in ['f77','f90']: lang = 'c' # always use system linker when using MSVC compiler if self.fcompiler: - f_lib_dirs = [] for d in self.fcompiler.library_dirs or []: # correct path when compiling in Cygwin but with normal Win # Python if d.startswith('/usr/lib'): s,o = exec_command(['cygpath', '-w', d], use_tee=False) if not s: d = o - f_lib_dirs.append(d) - library_dirs.extend(f_lib_dirs) + library_dirs.append(d) for libname in self.fcompiler.libraries or []: if libname not in libraries: libraries.append(libname) From numpy-svn at scipy.org Sat May 19 13:01:45 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 19 May 2007 12:01:45 -0500 (CDT) Subject: [Numpy-svn] r3791 - trunk/numpy/distutils/command Message-ID: <20070519170145.9ACA639C143@new.scipy.org> Author: pearu Date: 2007-05-19 12:01:39 -0500 (Sat, 19 May 2007) New Revision: 3791 Modified: trunk/numpy/distutils/command/config.py Log: win32: fix install when build has been carried out earlier. 
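[This and the surrounding MSVC changes (r3788 through r3792) rely on one trick: the MSVC linker only looks for files named <name>.lib, so g77-compiled static archives named lib<name>.a are copied under an MSVC-friendly name into a directory the linker already searches. The standalone sketch below illustrates the idea; the function name and arguments are invented.]

import os
import shutil

def expose_gnu_lib_to_msvc(libname, library_dirs, build_temp):
    # nothing to do if an MSVC-style .lib file is already present
    for libdir in library_dirs:
        if os.path.isfile(os.path.join(libdir, '%s.lib' % libname)):
            return True
    # otherwise copy lib<name>.a to <name>.lib where the MSVC linker will look
    for libdir in library_dirs:
        src = os.path.join(libdir, 'lib%s.a' % libname)
        if os.path.isfile(src):
            dst = os.path.join(build_temp, '%s.lib' % libname)
            if not os.path.isfile(dst):
                shutil.copy(src, dst)
            if build_temp not in library_dirs:
                library_dirs.append(build_temp)
            return True
    return False   # caller can warn that the library was not found, as build_ext/config do
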
Modified: trunk/numpy/distutils/command/config.py =================================================================== --- trunk/numpy/distutils/command/config.py 2007-05-19 15:24:20 UTC (rev 3790) +++ trunk/numpy/distutils/command/config.py 2007-05-19 17:01:39 UTC (rev 3791) @@ -56,8 +56,8 @@ headers, include_dirs, libraries, library_dirs, lang): if self.compiler.compiler_type=='msvc': - libraries = libraries[:] - library_dirs = library_dirs[:] + libraries = (libraries or [])[:] + library_dirs = (library_dirs or [])[:] if lang in ['f77','f90']: lang = 'c' # always use system linker when using MSVC compiler if self.fcompiler: @@ -71,7 +71,7 @@ for libname in self.fcompiler.libraries or []: if libname not in libraries: libraries.append(libname) - for libname in libraries or []: + for libname in libraries: if libname.startswith('msvcr'): continue fileexists = False for libdir in library_dirs or []: @@ -82,7 +82,7 @@ if fileexists: continue # make g77-compiled static libs available to MSVC - for libdir in library_dirs or []: + for libdir in library_dirs: libfile = os.path.join(libdir,'lib%s.a' % (libname)) if os.path.isfile(libfile): # copy libname.a file to name.lib so that MSVC linker From numpy-svn at scipy.org Sat May 19 15:44:49 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 19 May 2007 14:44:49 -0500 (CDT) Subject: [Numpy-svn] r3792 - in trunk/numpy/distutils: . command Message-ID: <20070519194449.D6D0839C17E@new.scipy.org> Author: pearu Date: 2007-05-19 14:44:42 -0500 (Sat, 19 May 2007) New Revision: 3792 Modified: trunk/numpy/distutils/ccompiler.py trunk/numpy/distutils/command/build_ext.py trunk/numpy/distutils/command/config.py Log: Clean up and completed (hopefully) MSVC support. Modified: trunk/numpy/distutils/ccompiler.py =================================================================== --- trunk/numpy/distutils/ccompiler.py 2007-05-19 17:01:39 UTC (rev 3791) +++ trunk/numpy/distutils/ccompiler.py 2007-05-19 19:44:42 UTC (rev 3792) @@ -288,6 +288,7 @@ replace_method(CCompiler, 'get_version', CCompiler_get_version) def CCompiler_cxx_compiler(self): + if self.compiler_type=='msvc': return self cxx = copy(self) cxx.compiler_so = [cxx.compiler_cxx[0]] + cxx.compiler_so[1:] if sys.platform.startswith('aix') and 'ld_so_aix' in cxx.linker_so[0]: Modified: trunk/numpy/distutils/command/build_ext.py =================================================================== --- trunk/numpy/distutils/command/build_ext.py 2007-05-19 17:01:39 UTC (rev 3791) +++ trunk/numpy/distutils/command/build_ext.py 2007-05-19 19:44:42 UTC (rev 3792) @@ -391,6 +391,33 @@ def _libs_with_msvc_and_fortran(self, fcompiler, c_libraries, c_library_dirs): if fcompiler is None: return + + for libname in c_libraries: + if libname.startswith('msvc'): continue + fileexists = False + for libdir in c_library_dirs or []: + libfile = os.path.join(libdir,'%s.lib' % (libname)) + if os.path.isfile(libfile): + fileexists = True + break + if fileexists: continue + # make g77-compiled static libs available to MSVC + fileexists = False + for libdir in c_library_dirs: + libfile = os.path.join(libdir,'lib%s.a' % (libname)) + if os.path.isfile(libfile): + # copy libname.a file to name.lib so that MSVC linker + # can find it + libfile2 = os.path.join(self.build_temp, libname + '.lib') + copy_file(libfile, libfile2) + if self.build_temp not in c_library_dirs: + c_library_dirs.append(self.build_temp) + fileexists = True + break + if fileexists: continue + log.warn('could not find library %r in directories %s' \ + % 
(libname, c_library_dirs)) + # Always use system linker when using MSVC compiler. f_lib_dirs = [] for dir in fcompiler.library_dirs: @@ -404,17 +431,16 @@ c_library_dirs.extend(f_lib_dirs) # make g77-compiled static libs available to MSVC - lib_added = False for lib in fcompiler.libraries: - if not lib.startswith('msvcr'): + if not lib.startswith('msvc'): c_libraries.append(lib) p = combine_paths(f_lib_dirs, 'lib' + lib + '.a') if p: dst_name = os.path.join(self.build_temp, lib + '.lib') - copy_file(p[0], dst_name) - if not lib_added: + if not os.path.isfile(dst_name): + copy_file(p[0], dst_name) + if self.build_temp not in c_library_dirs: c_library_dirs.append(self.build_temp) - lib_added = True return def get_source_files (self): Modified: trunk/numpy/distutils/command/config.py =================================================================== --- trunk/numpy/distutils/command/config.py 2007-05-19 17:01:39 UTC (rev 3791) +++ trunk/numpy/distutils/command/config.py 2007-05-19 19:44:42 UTC (rev 3792) @@ -72,23 +72,29 @@ if libname not in libraries: libraries.append(libname) for libname in libraries: - if libname.startswith('msvcr'): continue + if libname.startswith('msvc'): continue fileexists = False for libdir in library_dirs or []: libfile = os.path.join(libdir,'%s.lib' % (libname)) if os.path.isfile(libfile): fileexists = True break - if fileexists: - continue + if fileexists: continue # make g77-compiled static libs available to MSVC + fileexists = False for libdir in library_dirs: libfile = os.path.join(libdir,'lib%s.a' % (libname)) if os.path.isfile(libfile): # copy libname.a file to name.lib so that MSVC linker # can find it - copy_file(libfile, os.path.join(libdir,'%s.lib' % (libname))) + libfile2 = os.path.join(libdir,'%s.lib' % (libname)) + copy_file(libfile, libfile2) + self.temp_files.append(libfile2) + fileexists = True break + if fileexists: continue + log.warn('could not find library %r in directories %s' \ + % (libname, library_dirs)) return self._wrap_method(old_config._link,lang, (body, headers, include_dirs, libraries, library_dirs, lang)) From numpy-svn at scipy.org Mon May 21 08:51:35 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 21 May 2007 07:51:35 -0500 (CDT) Subject: [Numpy-svn] r3793 - branches/distutils-revamp/fcompiler Message-ID: <20070521125135.BF22E39C00C@new.scipy.org> Author: cookedm Date: 2007-05-21 07:51:32 -0500 (Mon, 21 May 2007) New Revision: 3793 Modified: branches/distutils-revamp/fcompiler/__init__.py Log: [distutils-rework] fix getting ar flags Modified: branches/distutils-revamp/fcompiler/__init__.py =================================================================== --- branches/distutils-revamp/fcompiler/__init__.py 2007-05-19 19:44:42 UTC (rev 3792) +++ branches/distutils-revamp/fcompiler/__init__.py 2007-05-21 12:51:32 UTC (rev 3793) @@ -454,7 +454,7 @@ ar = self.command_vars.archiver if ar: - arflags = to_list(self.flag_vars.arflags) + arflags = to_list(self.flag_vars.ar) self.set_executables(archiver=[ar]+arflags) ranlib = self.command_vars.ranlib From numpy-svn at scipy.org Mon May 21 09:01:23 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 21 May 2007 08:01:23 -0500 (CDT) Subject: [Numpy-svn] r3794 - trunk/numpy/distutils/command Message-ID: <20070521130123.AC5C039C00C@new.scipy.org> Author: cookedm Date: 2007-05-21 08:01:20 -0500 (Mon, 21 May 2007) New Revision: 3794 Modified: trunk/numpy/distutils/command/build.py trunk/numpy/distutils/command/build_clib.py 
trunk/numpy/distutils/command/build_ext.py trunk/numpy/distutils/command/config.py trunk/numpy/distutils/command/config_compiler.py Log: minor cleanups in numpy.distutils (style mostly) Modified: trunk/numpy/distutils/command/build.py =================================================================== --- trunk/numpy/distutils/command/build.py 2007-05-21 12:51:32 UTC (rev 3793) +++ trunk/numpy/distutils/command/build.py 2007-05-21 13:01:20 UTC (rev 3794) @@ -24,7 +24,6 @@ def initialize_options(self): old_build.initialize_options(self) self.fcompiler = None - return def finalize_options(self): build_scripts = self.build_scripts @@ -33,6 +32,3 @@ if build_scripts is None: self.build_scripts = os.path.join(self.build_base, 'scripts' + plat_specifier) - return - -#EOF Modified: trunk/numpy/distutils/command/build_clib.py =================================================================== --- trunk/numpy/distutils/command/build_clib.py 2007-05-21 12:51:32 UTC (rev 3793) +++ trunk/numpy/distutils/command/build_clib.py 2007-05-21 13:01:20 UTC (rev 3794) @@ -86,7 +86,6 @@ self.fcompiler.show_customization() self.build_libraries(self.libraries) - return def get_source_files(self): self.check_library_list(self.libraries) @@ -113,7 +112,7 @@ = filter_sources(sources) requiref90 = not not fmodule_sources or \ build_info.get('language','c')=='f90' - + # save source type information so that build_ext can use it. source_languages = [] if c_sources: source_languages.append('c') @@ -160,7 +159,7 @@ module_dirs = build_info.get('module_dirs') or [] module_build_dir = os.path.dirname(lib_file) if requiref90: self.mkpath(module_build_dir) - + if compiler.compiler_type=='msvc': # this hack works around the msvc compiler attributes # problem, msvc uses its own convention :( @@ -187,7 +186,7 @@ debug=self.debug, extra_postargs=extra_postargs) objects.extend(cxx_objects) - + if f_sources or fmodule_sources: extra_postargs = [] f_objects = [] @@ -249,5 +248,3 @@ clib_libraries.extend(binfo[1].get('libraries',[])) if clib_libraries: build_info['libraries'] = clib_libraries - return -#EOF Modified: trunk/numpy/distutils/command/build_ext.py =================================================================== --- trunk/numpy/distutils/command/build_ext.py 2007-05-21 12:51:32 UTC (rev 3793) +++ trunk/numpy/distutils/command/build_ext.py 2007-05-21 13:01:20 UTC (rev 3794) @@ -2,7 +2,6 @@ """ import os -import string import sys from glob import glob @@ -16,10 +15,15 @@ from numpy.distutils.exec_command import exec_command from numpy.distutils.system_info import combine_paths from numpy.distutils.misc_util import filter_sources, has_f_sources, \ - has_cxx_sources, get_ext_source_files, all_strings, \ + has_cxx_sources, get_ext_source_files, \ get_numpy_include_dirs, is_sequence from numpy.distutils.command.config_compiler import show_fortran_compilers +try: + set +except NameError: + from sets import Set as set + class build_ext (old_build_ext): description = "build C/C++/F extensions (compile/link to build directory)" @@ -37,14 +41,12 @@ def initialize_options(self): old_build_ext.initialize_options(self) self.fcompiler = None - return def finalize_options(self): incl_dirs = self.include_dirs old_build_ext.finalize_options(self) if incl_dirs is not None: - self.include_dirs.extend(self.distribution.include_dirs or []) - return + self.include_dirs.extend(self.distribution.include_dirs or []) def run(self): if not self.extensions: @@ -56,7 +58,7 @@ if self.distribution.has_c_libraries(): self.run_command('build_clib') 
build_clib = self.get_finalized_command('build_clib') - self.library_dirs.append(build_clib.build_clib) + self.library_dirs.append(build_clib.build_clib) else: build_clib = None @@ -96,9 +98,9 @@ # Determine if C++/Fortran 77/Fortran 90 compilers are needed. # Update extension libraries, library_dirs, and macros. - all_languages = [] + all_languages = set() for ext in self.extensions: - ext_languages = [] + ext_languages = set() c_libs = [] c_lib_dirs = [] macros = [] @@ -107,31 +109,30 @@ binfo = clibs[libname] c_libs += binfo.get('libraries',[]) c_lib_dirs += binfo.get('library_dirs',[]) - for m in binfo.get('macros',[]): - if m not in macros: macros.append(m) + for m in binfo.get('macros',[]): + if m not in macros: + macros.append(m) for l in clibs.get(libname,{}).get('source_languages',[]): - if l not in ext_languages: ext_languages.append(l) + ext_languages.add(l) if c_libs: new_c_libs = ext.libraries + c_libs - log.info('updating extension %r libraries from %r to %r' \ + log.info('updating extension %r libraries from %r to %r' % (ext.name, ext.libraries, new_c_libs)) ext.libraries = new_c_libs ext.library_dirs = ext.library_dirs + c_lib_dirs if macros: - log.info('extending extension %r defined_macros with %r' \ + log.info('extending extension %r defined_macros with %r' % (ext.name, macros)) ext.define_macros = ext.define_macros + macros - + # determine extension languages - if 'f77' not in ext_languages and has_f_sources(ext.sources): - ext_languages.append('f77') - if 'c++' not in ext_languages and has_cxx_sources(ext.sources): - ext_languages.append('c++') - if sys.version[:3]>='2.3': - l = ext.language or self.compiler.detect_language(ext.sources) - else: - l = ext.language - if l and l not in ext_languages: ext_languages.append(l) + if has_f_sources(ext.sources): + ext_languages.add('f77') + if has_cxx_sources(ext.sources): + ext_languages.add('c++') + l = ext.language or self.compiler.detect_language(ext.sources) + if l: + ext_languages.add(l) # reset language attribute for choosing proper linker if 'c++' in ext_languages: ext_language = 'c++' @@ -141,12 +142,12 @@ ext_language = 'f77' else: ext_language = 'c' # default - if l and l!=ext_language and ext.language: - log.warn('resetting extension %r language from %r to %r.' % (ext.name,l,ext_language)) + if l and l != ext_language and ext.language: + log.warn('resetting extension %r language from %r to %r.' % + (ext.name,l,ext_language)) ext.language = ext_language # global language - for l in ext_languages: - if l not in all_languages: all_languages.append(l) + all_languages.update(ext_languages) need_f90_compiler = 'f90' in all_languages need_f77_compiler = 'f77' in all_languages @@ -179,7 +180,8 @@ fcompiler.customize_cmd(self) fcompiler.show_customization() else: - self.warn('f77_compiler=%s is not available.' % (fcompiler.compiler_type)) + self.warn('f77_compiler=%s is not available.' % + (fcompiler.compiler_type)) self._f77_compiler = None else: self._f77_compiler = None @@ -197,14 +199,14 @@ fcompiler.customize_cmd(self) fcompiler.show_customization() else: - self.warn('f90_compiler=%s is not available.' % (fcompiler.compiler_type)) + self.warn('f90_compiler=%s is not available.' % + (fcompiler.compiler_type)) self._f90_compiler = None else: self._f90_compiler = None # Build extensions self.build_extensions() - return def swig_sources(self, sources): # Do nothing. Swig sources have beed handled in build_src command. 
@@ -213,10 +215,10 @@ def build_extension(self, ext): sources = ext.sources if sources is None or not is_sequence(sources): - raise DistutilsSetupError, \ - ("in 'ext_modules' option (extension '%s'), " + - "'sources' must be present and must be " + - "a list of source filenames") % ext.name + raise DistutilsSetupError( + ("in 'ext_modules' option (extension '%s'), " + + "'sources' must be present and must be " + + "a list of source filenames") % ext.name) sources = list(sources) if not sources: @@ -224,8 +226,8 @@ fullname = self.get_ext_fullname(ext.name) if self.inplace: - modpath = string.split(fullname, '.') - package = string.join(modpath[0:-1], '.') + modpath = fullname.split('.') + package = '.'.join(modpath[0:-1]) base = modpath[-1] build_py = self.get_finalized_command('build_py') package_dir = build_py.get_package_dir(package) @@ -269,7 +271,7 @@ else: # in case ext.language is c++, for instance fcompiler = self._f90_compiler or self._f77_compiler cxx_compiler = self._cxx_compiler - + # check for the availability of required compilers if cxx_sources and cxx_compiler is None: raise DistutilsError, "extension %r has C++ sources" \ @@ -301,7 +303,7 @@ **kws) if cxx_sources: - log.info("compiling C++ sources") + log.info("compiling C++ sources") c_objects += cxx_compiler.compile(cxx_sources, output_dir=output_dir, macros=macros, @@ -315,15 +317,15 @@ if fmodule_sources: log.info("compiling Fortran 90 module sources") module_dirs = ext.module_dirs[:] - module_build_dir = os.path.join(\ - self.build_temp,os.path.dirname(\ + module_build_dir = os.path.join( + self.build_temp,os.path.dirname( self.get_ext_filename(fullname))) - + self.mkpath(module_build_dir) if fcompiler.module_dir_switch is None: existing_modules = glob('*.mod') - extra_postargs += fcompiler.module_options(\ - module_dirs,module_build_dir) + extra_postargs += fcompiler.module_options( + module_dirs,module_build_dir) f_objects += fcompiler.compile(fmodule_sources, output_dir=self.build_temp, macros=macros, @@ -344,7 +346,8 @@ try: self.move_file(f, module_build_dir) except DistutilsFileError: - log.warn('failed to move %r to %r' % (f, module_build_dir)) + log.warn('failed to move %r to %r' % + (f, module_build_dir)) if f_sources: log.info("compiling Fortran sources") f_objects += fcompiler.compile(f_sources, @@ -362,7 +365,7 @@ extra_args = ext.extra_link_args or [] libraries = self.get_libraries(ext)[:] library_dirs = ext.library_dirs[:] - + linker = self.compiler.link_shared_object # Always use system linker when using MSVC compiler. if self.compiler.compiler_type=='msvc': @@ -387,11 +390,11 @@ export_symbols=self.get_export_symbols(ext), debug=self.debug, build_temp=self.build_temp,**kws) - return - def _libs_with_msvc_and_fortran(self, fcompiler, c_libraries, c_library_dirs): + def _libs_with_msvc_and_fortran(self, fcompiler, c_libraries, + c_library_dirs): if fcompiler is None: return - + for libname in c_libraries: if libname.startswith('msvc'): continue fileexists = False @@ -415,9 +418,9 @@ fileexists = True break if fileexists: continue - log.warn('could not find library %r in directories %s' \ + log.warn('could not find library %r in directories %s' % (libname, c_library_dirs)) - + # Always use system linker when using MSVC compiler. 
f_lib_dirs = [] for dir in fcompiler.library_dirs: @@ -441,7 +444,6 @@ copy_file(p[0], dst_name) if self.build_temp not in c_library_dirs: c_library_dirs.append(self.build_temp) - return def get_source_files (self): self.check_extensions_list(self.extensions) Modified: trunk/numpy/distutils/command/config.py =================================================================== --- trunk/numpy/distutils/command/config.py 2007-05-21 12:51:32 UTC (rev 3793) +++ trunk/numpy/distutils/command/config.py 2007-05-21 13:01:20 UTC (rev 3794) @@ -21,7 +21,6 @@ def initialize_options(self): self.fcompiler = None old_config.initialize_options(self) - return def _check_compiler (self): old_config._check_compiler(self) @@ -32,7 +31,6 @@ self.fcompiler.customize(self.distribution) self.fcompiler.customize_cmd(self) self.fcompiler.show_customization() - return def _wrap_method(self,mth,lang,args): from distutils.ccompiler import CompileError @@ -62,10 +60,11 @@ lang = 'c' # always use system linker when using MSVC compiler if self.fcompiler: for d in self.fcompiler.library_dirs or []: - # correct path when compiling in Cygwin but with normal Win - # Python + # correct path when compiling in Cygwin but with + # normal Win Python if d.startswith('/usr/lib'): - s,o = exec_command(['cygpath', '-w', d], use_tee=False) + s,o = exec_command(['cygpath', '-w', d], + use_tee=False) if not s: d = o library_dirs.append(d) for libname in self.fcompiler.libraries or []: @@ -98,7 +97,6 @@ return self._wrap_method(old_config._link,lang, (body, headers, include_dirs, libraries, library_dirs, lang)) - def check_func(self, func, headers=None, include_dirs=None, Modified: trunk/numpy/distutils/command/config_compiler.py =================================================================== --- trunk/numpy/distutils/command/config_compiler.py 2007-05-21 12:51:32 UTC (rev 3793) +++ trunk/numpy/distutils/command/config_compiler.py 2007-05-21 13:01:20 UTC (rev 3794) @@ -12,7 +12,6 @@ import distutils.core dist = distutils.core._setup_distribution show_fcompilers(dist) - return class config_fc(Command): """ Distutils command to hold user specified options @@ -54,7 +53,6 @@ self.debug = None self.noopt = None self.noarch = None - return def finalize_options(self): log.info('unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options') @@ -78,7 +76,6 @@ if v1: for c in cmd_list: if getattr(c,a) is None: setattr(c, a, v1) - return def run(self): # Do nothing. @@ -97,7 +94,6 @@ def initialize_options(self): self.compiler = None - return def finalize_options(self): log.info('unifing config_cc, config, build_clib, build_ext, build commands --compiler options') From numpy-svn at scipy.org Mon May 21 09:15:52 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 21 May 2007 08:15:52 -0500 (CDT) Subject: [Numpy-svn] r3795 - in branches/distutils-revamp: . 
command fcompiler Message-ID: <20070521131552.E607139C061@new.scipy.org> Author: cookedm Date: 2007-05-21 08:15:45 -0500 (Mon, 21 May 2007) New Revision: 3795 Modified: branches/distutils-revamp/ branches/distutils-revamp/ccompiler.py branches/distutils-revamp/command/build.py branches/distutils-revamp/command/build_clib.py branches/distutils-revamp/command/build_ext.py branches/distutils-revamp/command/build_src.py branches/distutils-revamp/command/config.py branches/distutils-revamp/command/config_compiler.py branches/distutils-revamp/core.py branches/distutils-revamp/fcompiler/__init__.py branches/distutils-revamp/misc_util.py branches/distutils-revamp/system_info.py Log: [distutils-revamp] Merged revisions 3769-3794 via svnmerge from http://svn.scipy.org/svn/numpy/trunk/numpy/distutils ........ r3775 | pearu | 2007-05-18 10:32:33 -0400 (Fri, 18 May 2007) | 1 line build_src: introduced --swig and other related options (as in std distutils build_ext command), use --f2py-opts instead of --f2pyflags, improved error messages. ........ r3776 | pearu | 2007-05-18 12:41:44 -0400 (Fri, 18 May 2007) | 1 line Extension modules and libraries are built with suitable compilers/linkers. Improved failure handling. ........ r3779 | pearu | 2007-05-18 13:33:15 -0400 (Fri, 18 May 2007) | 1 line Fixed warnings on language changes. ........ r3780 | pearu | 2007-05-18 16:17:48 -0400 (Fri, 18 May 2007) | 1 line unify config_fc, build_clib, build_ext commands --fcompiler options so that --fcompiler can be specified only once in a command line ........ r3781 | pearu | 2007-05-18 16:41:10 -0400 (Fri, 18 May 2007) | 1 line added config to --fcompiler option unification method. introduced config_cc for unifying --compiler options. ........ r3782 | pearu | 2007-05-18 16:49:09 -0400 (Fri, 18 May 2007) | 1 line Added --help-fcompiler option to build_ext command. ........ r3783 | pearu | 2007-05-18 17:00:17 -0400 (Fri, 18 May 2007) | 1 line show less messages in --help-fcompiler ........ r3784 | pearu | 2007-05-18 17:25:23 -0400 (Fri, 18 May 2007) | 1 line Added --fcompiler,--help-fcompiler options to build command parallel to --compiler,--help-compiler options. ........ r3785 | pearu | 2007-05-18 17:33:07 -0400 (Fri, 18 May 2007) | 1 line Add descriptions to config_fc and config_cc commands. ........ r3786 | pearu | 2007-05-19 05:54:00 -0400 (Sat, 19 May 2007) | 1 line Fix for win32 platform. ........ r3787 | pearu | 2007-05-19 06:23:16 -0400 (Sat, 19 May 2007) | 1 line Fix fcompiler/compiler unification warning. ........ r3788 | pearu | 2007-05-19 11:20:48 -0400 (Sat, 19 May 2007) | 1 line Fix atlas version detection when using MSVC compiler ........ r3789 | pearu | 2007-05-19 11:21:41 -0400 (Sat, 19 May 2007) | 1 line Fix typo. ........ r3790 | pearu | 2007-05-19 11:24:20 -0400 (Sat, 19 May 2007) | 1 line More typo fixes. ........ r3791 | pearu | 2007-05-19 13:01:39 -0400 (Sat, 19 May 2007) | 1 line win32: fix install when build has been carried out earlier. ........ r3792 | pearu | 2007-05-19 15:44:42 -0400 (Sat, 19 May 2007) | 1 line Clean up and completed (hopefully) MSVC support. ........ r3794 | cookedm | 2007-05-21 09:01:20 -0400 (Mon, 21 May 2007) | 1 line minor cleanups in numpy.distutils (style mostly) ........ 
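[Among the merged cleanups, r3794 replaces list-plus-membership bookkeeping with sets, guarded by a compatibility shim because the built-in set type only arrived in Python 2.4. The sketch below shows the same idiom on an invented language-detection helper; it is illustrative only, not the actual misc_util/build_ext code.]

import os

try:
    set
except NameError:
    from sets import Set as set        # Python 2.3 fallback, as in r3794

_EXT_LANG = {'.f90': 'f90', '.f95': 'f90', '.f': 'f77',
             '.cpp': 'c++', '.cxx': 'c++', '.c': 'c'}

def detect_languages(sources):
    # each language is recorded once, no matter how many sources use it
    languages = set()
    for src in sources:
        ext = os.path.splitext(src)[1].lower()
        if ext in _EXT_LANG:
            languages.add(_EXT_LANG[ext])
    return languages

assert detect_languages(['a.c', 'b.f', 'm.f90', 'x.c']) == set(['c', 'f77', 'f90'])
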
Property changes on: branches/distutils-revamp ___________________________________________________________________ Name: svnmerge-integrated - /branches/distutils-revamp:1-2756 /trunk/numpy/distutils:1-3768 + /branches/distutils-revamp:1-2756 /trunk/numpy/distutils:1-3794 Modified: branches/distutils-revamp/ccompiler.py =================================================================== --- branches/distutils-revamp/ccompiler.py 2007-05-21 13:01:20 UTC (rev 3794) +++ branches/distutils-revamp/ccompiler.py 2007-05-21 13:15:45 UTC (rev 3795) @@ -288,6 +288,7 @@ replace_method(CCompiler, 'get_version', CCompiler_get_version) def CCompiler_cxx_compiler(self): + if self.compiler_type=='msvc': return self cxx = copy(self) cxx.compiler_so = [cxx.compiler_cxx[0]] + cxx.compiler_so[1:] if sys.platform.startswith('aix') and 'ld_so_aix' in cxx.linker_so[0]: Modified: branches/distutils-revamp/command/build.py =================================================================== --- branches/distutils-revamp/command/build.py 2007-05-21 13:01:20 UTC (rev 3794) +++ branches/distutils-revamp/command/build.py 2007-05-21 13:15:45 UTC (rev 3795) @@ -2,13 +2,29 @@ import sys from distutils.command.build import build as old_build from distutils.util import get_platform +from numpy.distutils.command.config_compiler import show_fortran_compilers class build(old_build): - sub_commands = [('config_fc', lambda *args: 1), + sub_commands = [('config_cc', lambda *args: True), + ('config_fc', lambda *args: True), ('build_src', old_build.has_ext_modules), ] + old_build.sub_commands + user_options = old_build.user_options + [ + ('fcompiler=', None, + "specify the Fortran compiler type"), + ] + + help_options = old_build.help_options + [ + ('help-fcompiler',None, "list available Fortran compilers", + show_fortran_compilers), + ] + + def initialize_options(self): + old_build.initialize_options(self) + self.fcompiler = None + def finalize_options(self): build_scripts = self.build_scripts old_build.finalize_options(self) Modified: branches/distutils-revamp/command/build_clib.py =================================================================== --- branches/distutils-revamp/command/build_clib.py 2007-05-21 13:01:20 UTC (rev 3794) +++ branches/distutils-revamp/command/build_clib.py 2007-05-21 13:15:45 UTC (rev 3795) @@ -1,19 +1,15 @@ """ Modified version of build_clib that handles fortran source files. """ +import os from distutils.command.build_clib import build_clib as old_build_clib -from distutils.errors import DistutilsSetupError +from distutils.errors import DistutilsSetupError, DistutilsError from numpy.distutils import log from distutils.dep_util import newer_group from numpy.distutils.misc_util import filter_sources, has_f_sources,\ has_cxx_sources, all_strings, get_lib_source_files, is_sequence -try: - set -except NameError: - from sets import Set as set - # Fix Python distutils bug sf #1718574: _l = old_build_clib.user_options for _i in range(len(_l)): @@ -33,46 +29,32 @@ def initialize_options(self): old_build_clib.initialize_options(self) self.fcompiler = None + return - def finalize_options(self): - old_build_clib.finalize_options(self) - self._languages = None - self.set_undefined_options('config_fc', - ('fcompiler', 'fcompiler')) - # we set this to the appropiate Fortran compiler object - # (f77 or f90) in the .run() method - self._fcompiler = None - - def languages(self): - """Return a set of language names used in this library. - Valid language names are 'c', 'f77', and 'f90'. 
- """ - if self._languages is None: - languages = set() - for (lib_name, build_info) in self.libraries: - l = build_info.get('language',None) - if l: - languages.add(l) - self._languages = languages - return self._languages - def have_f_sources(self): - l = self.languages() - return 'f90' in l or 'f77' in l + for (lib_name, build_info) in self.libraries: + if has_f_sources(build_info.get('sources',[])): + return True + return False def have_cxx_sources(self): - l = self.languages() - return 'c++' in l + for (lib_name, build_info) in self.libraries: + if has_cxx_sources(build_info.get('sources',[])): + return True + return False def run(self): if not self.libraries: return # Make sure that library sources are complete. - self.run_command('build_src') + languages = [] + for (lib_name, build_info) in self.libraries: + if not all_strings(build_info.get('sources',[])): + self.run_command('build_src') + l = build_info.get('language',None) + if l and l not in languages: languages.append(l) - languages = self.languages() - from distutils.ccompiler import new_compiler self.compiler = new_compiler(compiler=self.compiler, dry_run=self.dry_run, @@ -88,17 +70,20 @@ self.compiler.show_customization() if self.have_f_sources(): - if 'f90' in languages: - fc = self.fcompiler.f90() - else: - fc = self.fcompiler.f77() + from numpy.distutils.fcompiler import new_fcompiler + self.fcompiler = new_fcompiler(compiler=self.fcompiler, + verbose=self.verbose, + dry_run=self.dry_run, + force=self.force, + requiref90='f90' in languages) + self.fcompiler.customize(self.distribution) + libraries = self.libraries self.libraries = None - fc.customize_cmd(self) + self.fcompiler.customize_cmd(self) self.libraries = libraries - fc.show_customization() - self._fcompiler = fc + self.fcompiler.show_customization() self.build_libraries(self.libraries) @@ -110,10 +95,10 @@ return filenames def build_libraries(self, libraries): - fcompiler = self._fcompiler - compiler = self.compiler - for (lib_name, build_info) in libraries: + # default compilers + compiler = self.compiler + fcompiler = self.fcompiler sources = build_info.get('sources') if sources is None or not is_sequence(sources): @@ -123,9 +108,21 @@ "a list of source filenames") % lib_name sources = list(sources) + c_sources, cxx_sources, f_sources, fmodule_sources \ + = filter_sources(sources) + requiref90 = not not fmodule_sources or \ + build_info.get('language','c')=='f90' + + # save source type information so that build_ext can use it. 
+ source_languages = [] + if c_sources: source_languages.append('c') + if cxx_sources: source_languages.append('c++') + if requiref90: source_languages.append('f90') + elif f_sources: source_languages.append('f77') + build_info['source_languages'] = source_languages + lib_file = compiler.library_filename(lib_name, output_dir=self.build_clib) - depends = sources + build_info.get('depends',[]) if not (self.force or newer_group(depends, lib_file, 'newer')): log.debug("skipping '%s' library (up-to-date)", lib_name) @@ -133,34 +130,42 @@ else: log.info("building '%s' library", lib_name) - config_fc = build_info.get('config_fc',{}) if fcompiler is not None and config_fc: log.info('using additional config_fc from setup script '\ 'for fortran compiler: %s' \ % (config_fc,)) + from numpy.distutils.fcompiler import new_fcompiler + fcompiler = new_fcompiler(compiler=fcompiler.compiler_type, + verbose=self.verbose, + dry_run=self.dry_run, + force=self.force, + requiref90=requiref90) dist = self.distribution base_config_fc = dist.get_option_dict('config_fc').copy() base_config_fc.update(config_fc) fcompiler.customize(base_config_fc) + # check availability of Fortran compilers + if (f_sources or fmodule_sources) and fcompiler is None: + raise DistutilsError, "library %s has Fortran sources"\ + " but no Fortran compiler found" % (lib_name) + macros = build_info.get('macros') include_dirs = build_info.get('include_dirs') extra_postargs = build_info.get('extra_compiler_args') or [] - c_sources, cxx_sources, f_sources, fmodule_sources \ - = filter_sources(sources) + # where compiled F90 module files are: + module_dirs = build_info.get('module_dirs') or [] + module_build_dir = os.path.dirname(lib_file) + if requiref90: self.mkpath(module_build_dir) - if self.compiler.compiler_type=='msvc': + if compiler.compiler_type=='msvc': # this hack works around the msvc compiler attributes # problem, msvc uses its own convention :( c_sources += cxx_sources cxx_sources = [] - if fmodule_sources: - print 'XXX: Fortran 90 module support not implemented or tested' - f_sources.extend(fmodule_sources) - objects = [] if c_sources: log.info("compiling C sources") @@ -182,20 +187,61 @@ extra_postargs=extra_postargs) objects.extend(cxx_objects) - if f_sources: - log.info("compiling Fortran sources") - f_objects = fcompiler.compile(f_sources, - output_dir=self.build_temp, - macros=macros, - include_dirs=include_dirs, - debug=self.debug, - extra_postargs=[]) - objects.extend(f_objects) + if f_sources or fmodule_sources: + extra_postargs = [] + f_objects = [] - self.compiler.create_static_lib(objects, lib_name, - output_dir=self.build_clib, - debug=self.debug) + if requiref90: + if fcompiler.module_dir_switch is None: + existing_modules = glob('*.mod') + extra_postargs += fcompiler.module_options(\ + module_dirs,module_build_dir) + if fmodule_sources: + log.info("compiling Fortran 90 module sources") + f_objects += fcompiler.compile(fmodule_sources, + output_dir=self.build_temp, + macros=macros, + include_dirs=include_dirs, + debug=self.debug, + extra_postargs=extra_postargs) + + if requiref90 and self.fcompiler.module_dir_switch is None: + # move new compiled F90 module files to module_build_dir + for f in glob('*.mod'): + if f in existing_modules: + continue + t = os.path.join(module_build_dir, f) + if os.path.abspath(f)==os.path.abspath(t): + continue + if os.path.isfile(t): + os.remove(t) + try: + self.move_file(f, module_build_dir) + except DistutilsFileError: + log.warn('failed to move %r to %r' \ + % (f, module_build_dir)) + 
+ if f_sources: + log.info("compiling Fortran sources") + f_objects += fcompiler.compile(f_sources, + output_dir=self.build_temp, + macros=macros, + include_dirs=include_dirs, + debug=self.debug, + extra_postargs=extra_postargs) + else: + f_objects = [] + + objects.extend(f_objects) + + # assume that default linker is suitable for + # linking Fortran object files + compiler.create_static_lib(objects, lib_name, + output_dir=self.build_clib, + debug=self.debug) + + # fix library dependencies clib_libraries = build_info.get('libraries',[]) for lname, binfo in libraries: if lname in clib_libraries: Modified: branches/distutils-revamp/command/build_ext.py =================================================================== --- branches/distutils-revamp/command/build_ext.py 2007-05-21 13:01:20 UTC (rev 3794) +++ branches/distutils-revamp/command/build_ext.py 2007-05-21 13:15:45 UTC (rev 3795) @@ -2,26 +2,28 @@ """ import os -import string import sys from glob import glob from distutils.dep_util import newer_group from distutils.command.build_ext import build_ext as old_build_ext -from distutils.errors import DistutilsFileError, DistutilsSetupError +from distutils.errors import DistutilsFileError, DistutilsSetupError,\ + DistutilsError from distutils.file_util import copy_file from numpy.distutils import log from numpy.distutils.exec_command import exec_command from numpy.distutils.system_info import combine_paths from numpy.distutils.misc_util import filter_sources, has_f_sources, \ - has_cxx_sources, get_ext_source_files, all_strings, \ + has_cxx_sources, get_ext_source_files, \ get_numpy_include_dirs, is_sequence +from numpy.distutils.command.config_compiler import show_fortran_compilers +try: + set +except NameError: + from sets import Set as set -def ext_language(ext): - return getattr(ext, 'language', 'c') - class build_ext (old_build_ext): description = "build C/C++/F extensions (compile/link to build directory)" @@ -31,6 +33,11 @@ "specify the Fortran compiler type"), ] + help_options = old_build_ext.help_options + [ + ('help-fcompiler',None, "list available Fortran compilers", + show_fortran_compilers), + ] + def initialize_options(self): old_build_ext.initialize_options(self) self.fcompiler = None @@ -40,45 +47,7 @@ old_build_ext.finalize_options(self) if incl_dirs is not None: self.include_dirs.extend(self.distribution.include_dirs or []) - self.set_undefined_options('config_fc', - ('fcompiler', 'fcompiler')) - self._fcompiler = None - def initialize_fcompiler(self, build_clib): - # Determine if Fortran compiler is needed. - requiref77 = requiref90 = False - if build_clib: - lang = build_clib.languages() - requiref77 = 'f77' in lang - requiref90 = 'f90' in lang - else: - for ext in self.extensions: - language = ext_language(ext) - if language == 'f77': - requiref77 = True - elif language == 'f90': - requiref90 = True - elif has_f_sources(ext.sources): - # because we don't know any better, assume F77 - requiref77 = True - - if not (requiref77 or requiref90): - return - - if requiref90: - self.fcompiler.need_f90() - if requiref77: - self.fcompiler.need_f77() - - fc = self.fcompiler.fortran(requiref90) - if fc.get_version(): - fc.customize_cmd(self) - fc.show_customization() - else: - self.warn('fcompiler=%s is not available.' % ( - fc.compiler_type,)) - self._fcompiler = fc - def run(self): if not self.extensions: return @@ -86,10 +55,6 @@ # Make sure that extension sources are complete. 
self.run_command('build_src') - # Not including C libraries to the list of - # extension libraries automatically to prevent - # bogus linking commands. Extensions must - # explicitly specify the C libraries that they use. if self.distribution.has_c_libraries(): self.run_command('build_clib') build_clib = self.get_finalized_command('build_clib') @@ -97,96 +62,163 @@ else: build_clib = None - # Determine if C++ compiler is needed. - need_cxx_compiler = False - for ext in self.extensions: - if has_cxx_sources(ext.sources): - need_cxx_compiler = True - break - if getattr(ext,'language','c') == 'c++': - need_cxx_compiler = True - break + # Not including C libraries to the list of + # extension libraries automatically to prevent + # bogus linking commands. Extensions must + # explicitly specify the C libraries that they use. from distutils.ccompiler import new_compiler - self.compiler = new_compiler(compiler=self.compiler, + from numpy.distutils.fcompiler import new_fcompiler + + compiler_type = self.compiler + # Initialize C compiler: + self.compiler = new_compiler(compiler=compiler_type, verbose=self.verbose, dry_run=self.dry_run, force=self.force) - self.compiler.customize(self.distribution,need_cxx=need_cxx_compiler) + self.compiler.customize(self.distribution) self.compiler.customize_cmd(self) self.compiler.show_customization() - self.initialize_fcompiler(build_clib) + # Create mapping of libraries built by build_clib: + clibs = {} + if build_clib is not None: + for libname,build_info in build_clib.libraries or []: + if clibs.has_key(libname): + log.warn('library %r defined more than once,'\ + ' overwriting build_info %r with %r.' \ + % (libname, clibs[libname], build_info)) + clibs[libname] = build_info + # .. and distribution libraries: + for libname,build_info in self.distribution.libraries or []: + if clibs.has_key(libname): + # build_clib libraries have a precedence before distribution ones + continue + clibs[libname] = build_info - # Build extensions - self.build_extensions() + # Determine if C++/Fortran 77/Fortran 90 compilers are needed. + # Update extension libraries, library_dirs, and macros. + all_languages = set() + for ext in self.extensions: + ext_languages = set() + c_libs = [] + c_lib_dirs = [] + macros = [] + for libname in ext.libraries: + if clibs.has_key(libname): + binfo = clibs[libname] + c_libs += binfo.get('libraries',[]) + c_lib_dirs += binfo.get('library_dirs',[]) + for m in binfo.get('macros',[]): + if m not in macros: + macros.append(m) + for l in clibs.get(libname,{}).get('source_languages',[]): + ext_languages.add(l) + if c_libs: + new_c_libs = ext.libraries + c_libs + log.info('updating extension %r libraries from %r to %r' + % (ext.name, ext.libraries, new_c_libs)) + ext.libraries = new_c_libs + ext.library_dirs = ext.library_dirs + c_lib_dirs + if macros: + log.info('extending extension %r defined_macros with %r' + % (ext.name, macros)) + ext.define_macros = ext.define_macros + macros - def swig_sources(self, sources): - # Do nothing. Swig sources have beed handled in build_src command. 
- return sources + # determine extension languages + if has_f_sources(ext.sources): + ext_languages.add('f77') + if has_cxx_sources(ext.sources): + ext_languages.add('c++') + l = ext.language or self.compiler.detect_language(ext.sources) + if l: + ext_languages.add(l) + # reset language attribute for choosing proper linker + if 'c++' in ext_languages: + ext_language = 'c++' + elif 'f90' in ext_languages: + ext_language = 'f90' + elif 'f77' in ext_languages: + ext_language = 'f77' + else: + ext_language = 'c' # default + if l and l != ext_language and ext.language: + log.warn('resetting extension %r language from %r to %r.' % + (ext.name,l,ext_language)) + ext.language = ext_language + # global language + all_languages.update(ext_languages) - def get_fortran_objects(self, ext, f_sources, fmodule_sources, - macros, include_dirs): - if not f_sources and not fmodule_sources: - return None, [] + need_f90_compiler = 'f90' in all_languages + need_f77_compiler = 'f77' in all_languages + need_cxx_compiler = 'c++' in all_languages - fcompiler = self._fcompiler + # Initialize C++ compiler: + if need_cxx_compiler: + self._cxx_compiler = new_compiler(compiler=compiler_type, + verbose=self.verbose, + dry_run=self.dry_run, + force=self.force) + compiler = self._cxx_compiler + compiler.customize(self.distribution,need_cxx=need_cxx_compiler) + compiler.customize_cmd(self) + compiler.show_customization() + self._cxx_compiler = compiler.cxx_compiler() + else: + self._cxx_compiler = None - extra_postargs = [] - module_dirs = ext.module_dirs[:] + # Initialize Fortran 77 compiler: + if need_f77_compiler: + self._f77_compiler = new_fcompiler(compiler=self.fcompiler, + verbose=self.verbose, + dry_run=self.dry_run, + force=self.force, + requiref90=False) + fcompiler = self._f77_compiler + if fcompiler.get_version(): + fcompiler.customize(self.distribution) + fcompiler.customize_cmd(self) + fcompiler.show_customization() + else: + self.warn('f77_compiler=%s is not available.' % + (fcompiler.compiler_type)) + self._f77_compiler = None + else: + self._f77_compiler = None - macros = [] + # Initialize Fortran 90 compiler: + if need_f90_compiler: + self._f90_compiler = new_fcompiler(compiler=self.fcompiler, + verbose=self.verbose, + dry_run=self.dry_run, + force=self.force, + requiref90=True) + fcompiler = self._f90_compiler + if fcompiler.get_version(): + fcompiler.customize(self.distribution) + fcompiler.customize_cmd(self) + fcompiler.show_customization() + else: + self.warn('f90_compiler=%s is not available.' % + (fcompiler.compiler_type)) + self._f90_compiler = None + else: + self._f90_compiler = None - if fmodule_sources: - module_build_dir = os.path.join( - self.build_temp,os.path.dirname( - self.get_ext_filename(fullname))) + # Build extensions + self.build_extensions() - self.mkpath(module_build_dir) - if fcompiler.module_dir_switch is None: - existing_modules = glob('*.mod') - extra_postargs += fcompiler.module_options(\ - module_dirs,module_build_dir) + def swig_sources(self, sources): + # Do nothing. Swig sources have beed handled in build_src command. 
+ return sources - f_objects = [] - if fmodule_sources: - log.info("compiling Fortran 90 module sources") - f_objects = fcompiler.compile(fmodule_sources, - output_dir=self.build_temp, - macros=macros, - include_dirs=include_dirs, - debug=self.debug, - extra_postargs=extra_postargs, - depends=ext.depends) - - if fmodule_sources and fcompiler.module_dir_switch is None: - for f in glob('*.mod'): - if f in existing_modules: - continue - try: - self.move_file(f, module_build_dir) - except DistutilsFileError: # already exists in destination - os.remove(f) - - if f_sources: - log.info("compiling Fortran sources") - f_objects += fcompiler.compile(f_sources, - output_dir=self.build_temp, - macros=macros, - include_dirs=include_dirs, - debug=self.debug, - extra_postargs=extra_postargs, - depends=ext.depends) - - return fcompiler, f_objects - def build_extension(self, ext): sources = ext.sources if sources is None or not is_sequence(sources): - raise DistutilsSetupError, \ - ("in 'ext_modules' option (extension '%s'), " + - "'sources' must be present and must be " + - "a list of source filenames") % ext.name + raise DistutilsSetupError( + ("in 'ext_modules' option (extension '%s'), " + + "'sources' must be present and must be " + + "a list of source filenames") % ext.name) sources = list(sources) if not sources: @@ -194,10 +226,9 @@ fullname = self.get_ext_fullname(ext.name) if self.inplace: - modpath = string.split(fullname, '.') - package = string.join(modpath[0:-1], '.') + modpath = fullname.split('.') + package = '.'.join(modpath[0:-1]) base = modpath[-1] - build_py = self.get_finalized_command('build_py') package_dir = build_py.get_package_dir(package) ext_filename = os.path.join(package_dir, @@ -218,17 +249,11 @@ for undef in ext.undef_macros: macros.append((undef,)) - clib_libraries = [] - clib_library_dirs = [] - if self.distribution.libraries: - for libname,build_info in self.distribution.libraries: - if libname in ext.libraries: - macros.extend(build_info.get('macros',[])) - clib_libraries.extend(build_info.get('libraries',[])) - clib_library_dirs.extend(build_info.get('library_dirs',[])) - c_sources, cxx_sources, f_sources, fmodule_sources = \ filter_sources(ext.sources) + + + if self.compiler.compiler_type=='msvc': if cxx_sources: # Needed to compile kiva.agg._agg extension. @@ -238,7 +263,29 @@ c_sources += cxx_sources cxx_sources = [] + # Set Fortran/C++ compilers for compilation and linking. 
+ if ext.language=='f90': + fcompiler = self._f90_compiler + elif ext.language=='f77': + fcompiler = self._f77_compiler + else: # in case ext.language is c++, for instance + fcompiler = self._f90_compiler or self._f77_compiler + cxx_compiler = self._cxx_compiler + # check for the availability of required compilers + if cxx_sources and cxx_compiler is None: + raise DistutilsError, "extension %r has C++ sources" \ + "but no C++ compiler found" % (ext.name) + if (f_sources or fmodule_sources) and fcompiler is None: + raise DistutilsError, "extension %r has Fortran sources " \ + "but no Fortran compiler found" % (ext.name) + if ext.language in ['f77','f90'] and fcompiler is None: + self.warn("extension %r has Fortran libraries " \ + "but no Fortran linker found, using default linker" % (ext.name)) + if ext.language=='c++' and cxx_compiler is None: + self.warn("extension %r has C++ libraries " \ + "but no C++ linker found, using default linker" % (ext.name)) + kws = {'depends':ext.depends} output_dir = self.build_temp @@ -254,11 +301,9 @@ debug=self.debug, extra_postargs=extra_args, **kws) + if cxx_sources: log.info("compiling C++ sources") - - cxx_compiler = self.compiler.cxx_compiler() - c_objects += cxx_compiler.compile(cxx_sources, output_dir=output_dir, macros=macros, @@ -267,80 +312,117 @@ extra_postargs=extra_args, **kws) - fcompiler, f_objects = self.get_fortran_objects(ext, - f_sources, - fmodule_sources, - macros, include_dirs) + extra_postargs = [] + f_objects = [] + if fmodule_sources: + log.info("compiling Fortran 90 module sources") + module_dirs = ext.module_dirs[:] + module_build_dir = os.path.join( + self.build_temp,os.path.dirname( + self.get_ext_filename(fullname))) + self.mkpath(module_build_dir) + if fcompiler.module_dir_switch is None: + existing_modules = glob('*.mod') + extra_postargs += fcompiler.module_options( + module_dirs,module_build_dir) + f_objects += fcompiler.compile(fmodule_sources, + output_dir=self.build_temp, + macros=macros, + include_dirs=include_dirs, + debug=self.debug, + extra_postargs=extra_postargs, + depends=ext.depends) + + if fcompiler.module_dir_switch is None: + for f in glob('*.mod'): + if f in existing_modules: + continue + t = os.path.join(module_build_dir, f) + if os.path.abspath(f)==os.path.abspath(t): + continue + if os.path.isfile(t): + os.remove(t) + try: + self.move_file(f, module_build_dir) + except DistutilsFileError: + log.warn('failed to move %r to %r' % + (f, module_build_dir)) + if f_sources: + log.info("compiling Fortran sources") + f_objects += fcompiler.compile(f_sources, + output_dir=self.build_temp, + macros=macros, + include_dirs=include_dirs, + debug=self.debug, + extra_postargs=extra_postargs, + depends=ext.depends) + objects = c_objects + f_objects if ext.extra_objects: objects.extend(ext.extra_objects) extra_args = ext.extra_link_args or [] + libraries = self.get_libraries(ext)[:] + library_dirs = ext.library_dirs[:] linker = self.compiler.link_shared_object - - use_fortran_linker = getattr(ext,'language','c') in ['f77','f90'] - c_libraries = [] - c_library_dirs = [] - if not use_fortran_linker and self.distribution.has_c_libraries(): - build_clib = self.get_finalized_command('build_clib') - f_libs = [] - for (lib_name, build_info) in build_clib.libraries: - if has_f_sources(build_info.get('sources',[])): - f_libs.append(lib_name) - if lib_name in ext.libraries: - # XXX: how to determine if c_libraries contain - # fortran compiled sources? 
- c_libraries.extend(build_info.get('libraries',[])) - c_library_dirs.extend(build_info.get('library_dirs',[])) - for l in ext.libraries: - if l in f_libs: - use_fortran_linker = True - fcompiler = self.fcompiler.fortran() - break - - if use_fortran_linker and not fcompiler: - fcompiler = self.fcompiler.fortran() - # Always use system linker when using MSVC compiler. - if self.compiler.compiler_type=='msvc' and use_fortran_linker: - self._libs_with_msvc_and_fortran(c_libraries, c_library_dirs) - use_fortran_linker = False - - if use_fortran_linker: - if cxx_sources: - # XXX: Which linker should be used, Fortran or C++? - log.warn('mixing Fortran and C++ is untested') + if self.compiler.compiler_type=='msvc': + # expand libraries with fcompiler libraries as we are + # not using fcompiler linker + self._libs_with_msvc_and_fortran(fcompiler, libraries, library_dirs) + elif ext.language in ['f77','f90'] and fcompiler is not None: linker = fcompiler.link_shared_object - language = ext.language or fcompiler.detect_language(f_sources) - else: - linker = self.compiler.link_shared_object - if sys.version[:3]>='2.3': - language = ext.language or self.compiler.detect_language(sources) - else: - language = ext.language - if cxx_sources: - linker = self.compiler.cxx_compiler().link_shared_object + if ext.language=='c++' and cxx_compiler is not None: + linker = cxx_compiler.link_shared_object if sys.version[:3]>='2.3': - kws = {'target_lang':language} + kws = {'target_lang':ext.language} else: kws = {} linker(objects, ext_filename, - libraries=self.get_libraries(ext) + c_libraries + clib_libraries, - library_dirs=ext.library_dirs+c_library_dirs+clib_library_dirs, + libraries=libraries, + library_dirs=library_dirs, runtime_library_dirs=ext.runtime_library_dirs, extra_postargs=extra_args, export_symbols=self.get_export_symbols(ext), debug=self.debug, build_temp=self.build_temp,**kws) - def _libs_with_msvc_and_fortran(self, c_libraries, c_library_dirs): + def _libs_with_msvc_and_fortran(self, fcompiler, c_libraries, + c_library_dirs): + if fcompiler is None: return + + for libname in c_libraries: + if libname.startswith('msvc'): continue + fileexists = False + for libdir in c_library_dirs or []: + libfile = os.path.join(libdir,'%s.lib' % (libname)) + if os.path.isfile(libfile): + fileexists = True + break + if fileexists: continue + # make g77-compiled static libs available to MSVC + fileexists = False + for libdir in c_library_dirs: + libfile = os.path.join(libdir,'lib%s.a' % (libname)) + if os.path.isfile(libfile): + # copy libname.a file to name.lib so that MSVC linker + # can find it + libfile2 = os.path.join(self.build_temp, libname + '.lib') + copy_file(libfile, libfile2) + if self.build_temp not in c_library_dirs: + c_library_dirs.append(self.build_temp) + fileexists = True + break + if fileexists: continue + log.warn('could not find library %r in directories %s' + % (libname, c_library_dirs)) + # Always use system linker when using MSVC compiler. 
f_lib_dirs = [] - fcompiler = self.fcompiler.fortran() for dir in fcompiler.library_dirs: # correct path when compiling in Cygwin but with normal Win # Python @@ -352,17 +434,16 @@ c_library_dirs.extend(f_lib_dirs) # make g77-compiled static libs available to MSVC - lib_added = False for lib in fcompiler.libraries: - if not lib.startswith('msvcr'): + if not lib.startswith('msvc'): c_libraries.append(lib) p = combine_paths(f_lib_dirs, 'lib' + lib + '.a') if p: dst_name = os.path.join(self.build_temp, lib + '.lib') - copy_file(p[0], dst_name) - if not lib_added: + if not os.path.isfile(dst_name): + copy_file(p[0], dst_name) + if self.build_temp not in c_library_dirs: c_library_dirs.append(self.build_temp) - lib_added = True def get_source_files (self): self.check_extensions_list(self.extensions) Modified: branches/distutils-revamp/command/build_src.py =================================================================== --- branches/distutils-revamp/command/build_src.py 2007-05-21 13:01:20 UTC (rev 3794) +++ branches/distutils-revamp/command/build_src.py 2007-05-21 13:15:45 UTC (rev 3795) @@ -8,6 +8,7 @@ from distutils.command import build_ext from distutils.dep_util import newer_group, newer from distutils.util import get_platform +from distutils.errors import DistutilsError, DistutilsSetupError try: from Pyrex.Compiler import Main @@ -23,6 +24,7 @@ appendpath, is_string, is_sequence from numpy.distutils.from_template import process_file as process_f_file from numpy.distutils.conv_template import process_file as process_c_file +from numpy.distutils.exec_command import splitcmdline class build_src(build_ext.build_ext): @@ -30,8 +32,12 @@ user_options = [ ('build-src=', 'd', "directory to \"build\" sources to"), - ('f2pyflags=', None, "additonal flags to f2py"), - ('swigflags=', None, "additional flags to swig"), + ('f2py-opts=', None, "list of f2py command line options"), + ('swig=', None, "path to the SWIG executable"), + ('swig-opts=', None, "list of SWIG command line options"), + ('swig-cpp', None, "make SWIG create C++ files (default is autodetected from sources)"), + ('f2pyflags=', None, "additional flags to f2py (use --f2py-opts= instead)"), # obsolete + ('swigflags=', None, "additional flags to swig (use --swig-opts= instead)"), # obsolete ('force', 'f', "forcibly build everything (ignore file timestamps)"), ('inplace', 'i', "ignore build-lib and put compiled extensions into the source " + @@ -53,8 +59,12 @@ self.force = None self.inplace = None self.package_dir = None - self.f2pyflags = None - self.swigflags = None + self.f2pyflags = None # obsolete + self.f2py_opts = None + self.swigflags = None # obsolete + self.swig_opts = None + self.swig_cpp = None + self.swig = None def finalize_options(self): self.set_undefined_options('build', @@ -71,23 +81,49 @@ if self.build_src is None: plat_specifier = ".%s-%s" % (get_platform(), sys.version[0:3]) self.build_src = os.path.join(self.build_base, 'src'+plat_specifier) - if self.inplace is None: - build_ext = self.get_finalized_command('build_ext') - self.inplace = build_ext.inplace # py_modules_dict is used in build_py.find_package_modules self.py_modules_dict = {} - if self.f2pyflags is None: - self.f2pyflags = [] + if self.f2pyflags: + if self.f2py_opts: + log.warn('ignoring --f2pyflags as --f2py-opts already used') + else: + self.f2py_opts = self.f2pyflags + self.f2pyflags = None + if self.f2py_opts is None: + self.f2py_opts = [] else: - self.f2pyflags = self.f2pyflags.split() # XXX spaces?? 
+ self.f2py_opts = splitcmdline(self.f2py_opts) - if self.swigflags is None: - self.swigflags = [] + if self.swigflags: + if self.swig_opts: + log.warn('ignoring --swigflags as --swig-opts already used') + else: + self.swig_opts = self.swigflags + self.swigflags = None + + if self.swig_opts is None: + self.swig_opts = [] else: - self.swigflags = self.swigflags.split() # XXX spaces?? + self.swig_opts = splitcmdline(self.swig_opts) + # use options from build_ext command + build_ext = self.get_finalized_command('build_ext') + if self.inplace is None: + self.inplace = build_ext.inplace + if self.swig_cpp is None: + self.swig_cpp = build_ext.swig_cpp + for c in ['swig','swig_opt']: + o = '--'+c.replace('_','-') + v = getattr(build_ext,c,None) + if v: + if getattr(self,c): + log.warn('both build_src and build_ext define %s option' % (o)) + else: + log.info('using "%s=%s" option from build_ext command' % (o,v)) + setattr(self, c, v) + def run(self): if not (self.extensions or self.libraries): return @@ -144,7 +180,7 @@ filenames = get_data_files((d,files)) new_data_files.append((d, filenames)) else: - raise + raise TypeError(repr(data)) self.data_files[:] = new_data_files def build_py_modules_sources(self): @@ -354,16 +390,14 @@ output_file=target_file) pyrex_result = Main.compile(source, options=options) if pyrex_result.num_errors != 0: - raise RuntimeError("%d errors in Pyrex compile" % - pyrex_result.num_errors) + raise DistutilsError,"%d errors while compiling %r with Pyrex" \ + % (pyrex_result.num_errors, source) elif os.path.isfile(target_file): - log.warn("Pyrex needed to compile %s but not available."\ - " Using old target %s"\ + log.warn("Pyrex required for compiling %r but not available,"\ + " using old target %r"\ % (source, target_file)) else: - raise SystemError,"Non-existing target %r. "\ - "Perhaps you need to install Pyrex."\ - % (target_file) + raise DistutilsError,"Pyrex required for compiling %r but not available" % (source) new_sources.append(target_file) else: new_sources.append(source) @@ -388,9 +422,9 @@ if os.path.isfile(source): name = get_f2py_modulename(source) if name != ext_name: - raise ValueError('mismatch of extension names: %s ' - 'provides %r but expected %r' % ( - source, name, ext_name)) + raise DistutilsSetupError('mismatch of extension names: %s ' + 'provides %r but expected %r' % ( + source, name, ext_name)) target_file = os.path.join(target_dir,name+'module.c') else: log.debug(' source %s does not exist: skipping f2py\'ing.' \ @@ -399,16 +433,16 @@ skip_f2py = 1 target_file = os.path.join(target_dir,name+'module.c') if not os.path.isfile(target_file): - log.debug(' target %s does not exist:\n '\ - 'Assuming %smodule.c was generated with '\ - '"build_src --inplace" command.' \ - % (target_file, name)) + log.warn(' target %s does not exist:\n '\ + 'Assuming %smodule.c was generated with '\ + '"build_src --inplace" command.' \ + % (target_file, name)) target_dir = os.path.dirname(base) target_file = os.path.join(target_dir,name+'module.c') if not os.path.isfile(target_file): - raise ValueError("%r missing" % (target_file,)) - log.debug(' Yes! Using %s as up-to-date target.' \ - % (target_file)) + raise DistutilsSetupError("%r missing" % (target_file,)) + log.info(' Yes! Using %r as up-to-date target.' 
\ + % (target_file)) target_dirs.append(target_dir) f2py_sources.append(source) f2py_targets[source] = target_file @@ -423,7 +457,7 @@ map(self.mkpath, target_dirs) - f2py_options = extension.f2py_options + self.f2pyflags + f2py_options = extension.f2py_options + self.f2py_opts if self.distribution.libraries: for name,build_info in self.distribution.libraries: @@ -434,7 +468,7 @@ if f2py_sources: if len(f2py_sources) != 1: - raise ValueError( + raise DistutilsSetupError( 'only one .pyf file is allowed per extension module but got'\ ' more: %r' % (f2py_sources,)) source = f2py_sources[0] @@ -472,7 +506,7 @@ % (target_file)) if not os.path.isfile(target_file): - raise ValueError("%r missing" % (target_file,)) + raise DistutilsError("f2py target file %r not generated" % (target_file,)) target_c = os.path.join(self.build_src,'fortranobject.c') target_h = os.path.join(self.build_src,'fortranobject.h') @@ -494,9 +528,9 @@ self.copy_file(source_h,target_h) else: if not os.path.isfile(target_c): - raise ValueError("%r missing" % (target_c,)) + raise DistutilsSetupError("f2py target_c file %r not found" % (target_c,)) if not os.path.isfile(target_h): - raise ValueError("%r missing" % (target_h,)) + raise DistutilsSetupError("f2py target_h file %r not found" % (target_h,)) for name_ext in ['-f2pywrappers.f','-f2pywrappers2.f90']: filename = os.path.join(target_dir,ext_name + name_ext) @@ -516,8 +550,12 @@ target_dirs = [] py_files = [] # swig generated .py files target_ext = '.c' - typ = None - is_cpp = 0 + if self.swig_cpp: + typ = 'c++' + is_cpp = True + else: + typ = None + is_cpp = False skip_swig = 0 ext_name = extension.name.split('.')[-1] @@ -533,35 +571,43 @@ if os.path.isfile(source): name = get_swig_modulename(source) if name != ext_name[1:]: - raise ValueError( + raise DistutilsSetupError( 'mismatch of extension names: %s provides %r' ' but expected %r' % (source, name, ext_name[1:])) if typ is None: typ = get_swig_target(source) is_cpp = typ=='c++' - if is_cpp: - target_ext = '.cpp' + if is_cpp: target_ext = '.cpp' else: - assert typ == get_swig_target(source), repr(typ) + typ2 = get_swig_target(source) + if typ!=typ2: + log.warn('expected %r but source %r defines %r swig target' \ + % (typ, source, typ2)) + if typ2=='c++': + log.warn('resetting swig target to c++ (some targets may have .c extension)') + is_cpp = True + target_ext = '.cpp' + else: + log.warn('assuming that %r has c++ swig target' % (source)) target_file = os.path.join(target_dir,'%s_wrap%s' \ % (name, target_ext)) else: - log.debug(' source %s does not exist: skipping swig\'ing.' \ + log.warn(' source %s does not exist: skipping swig\'ing.' \ % (source)) name = ext_name[1:] skip_swig = 1 target_file = _find_swig_target(target_dir, name) if not os.path.isfile(target_file): - log.debug(' target %s does not exist:\n '\ - 'Assuming %s_wrap.{c,cpp} was generated with '\ - '"build_src --inplace" command.' \ + log.warn(' target %s does not exist:\n '\ + 'Assuming %s_wrap.{c,cpp} was generated with '\ + '"build_src --inplace" command.' \ % (target_file, name)) target_dir = os.path.dirname(base) target_file = _find_swig_target(target_dir, name) if not os.path.isfile(target_file): - raise ValueError("%r missing" % (target_file,)) - log.debug(' Yes! Using %s as up-to-date target.' \ - % (target_file)) + raise DistutilsSetupError("%r missing" % (target_file,)) + log.warn(' Yes! Using %r as up-to-date target.' 
\ + % (target_file)) target_dirs.append(target_dir) new_sources.append(target_file) py_files.append(os.path.join(py_target_dir, name+'.py')) @@ -577,7 +623,7 @@ return new_sources + py_files map(self.mkpath, target_dirs) - swig = self.find_swig() + swig = self.swig or self.find_swig() swig_cmd = [swig, "-python"] if is_cpp: swig_cmd.append('-c++') @@ -589,7 +635,7 @@ if self.force or newer_group(depends, target, 'newer'): log.info("%s: %s" % (os.path.basename(swig) \ + (is_cpp and '++' or ''), source)) - self.spawn(swig_cmd + self.swigflags \ + self.spawn(swig_cmd + self.swig_opts \ + ["-o", target, '-outdir', py_target_dir, source]) else: log.debug(" skipping '%s' swig interface (up-to-date)" \ Modified: branches/distutils-revamp/command/config.py =================================================================== --- branches/distutils-revamp/command/config.py 2007-05-21 13:01:20 UTC (rev 3794) +++ branches/distutils-revamp/command/config.py 2007-05-21 13:15:45 UTC (rev 3795) @@ -3,56 +3,41 @@ # compilers (they must define linker_exe first). # Pearu Peterson -import os, signal, copy +import os, signal from distutils.command.config import config as old_config from distutils.command.config import LANG_EXT from distutils import log +from distutils.file_util import copy_file from numpy.distutils.exec_command import exec_command -from numpy.distutils.fcompiler import FCompiler, new_fcompiler LANG_EXT['f77'] = '.f' LANG_EXT['f90'] = '.f90' class config(old_config): old_config.user_options += [ - ('fcompiler=', None, - "specify the Fortran compiler type"), + ('fcompiler=', None, "specify the Fortran compiler type"), ] def initialize_options(self): self.fcompiler = None old_config.initialize_options(self) - def finalize_options(self): - old_config.finalize_options(self) - f = self.distribution.get_command_obj('config_fc') - self.set_undefined_options('config_fc', - ('fcompiler', 'fcompiler')) - self._fcompiler = None - - def run(self): - self._check_compiler() - - def _check_compiler(self): + def _check_compiler (self): old_config._check_compiler(self) + from numpy.distutils.fcompiler import FCompiler, new_fcompiler + if not isinstance(self.fcompiler, FCompiler): + self.fcompiler = new_fcompiler(compiler=self.fcompiler, + dry_run=self.dry_run, force=1) + self.fcompiler.customize(self.distribution) + self.fcompiler.customize_cmd(self) + self.fcompiler.show_customization() - def get_fcompiler(self): - if self._fcompiler is None: - fc = self.fcompiler.fortran() - fc.force = 1 - fc.dry_run = self.dry_run - fc.customize(self.distribution) - fc.customize_cmd(self) - fc.show_customization() - self._fcompiler = fc - return self._fcompiler - - def _wrap_method(self, mth, lang, args): + def _wrap_method(self,mth,lang,args): from distutils.ccompiler import CompileError from distutils.errors import DistutilsExecError save_compiler = self.compiler - if lang in ('f77', 'f90'): - self.compiler = self.get_fcompiler() + if lang in ['f77','f90']: + self.compiler = self.fcompiler try: ret = mth(*((self,)+args)) except (DistutilsExecError,CompileError),msg: @@ -68,6 +53,47 @@ def _link (self, body, headers, include_dirs, libraries, library_dirs, lang): + if self.compiler.compiler_type=='msvc': + libraries = (libraries or [])[:] + library_dirs = (library_dirs or [])[:] + if lang in ['f77','f90']: + lang = 'c' # always use system linker when using MSVC compiler + if self.fcompiler: + for d in self.fcompiler.library_dirs or []: + # correct path when compiling in Cygwin but with + # normal Win Python + if 
d.startswith('/usr/lib'): + s,o = exec_command(['cygpath', '-w', d], + use_tee=False) + if not s: d = o + library_dirs.append(d) + for libname in self.fcompiler.libraries or []: + if libname not in libraries: + libraries.append(libname) + for libname in libraries: + if libname.startswith('msvc'): continue + fileexists = False + for libdir in library_dirs or []: + libfile = os.path.join(libdir,'%s.lib' % (libname)) + if os.path.isfile(libfile): + fileexists = True + break + if fileexists: continue + # make g77-compiled static libs available to MSVC + fileexists = False + for libdir in library_dirs: + libfile = os.path.join(libdir,'lib%s.a' % (libname)) + if os.path.isfile(libfile): + # copy libname.a file to name.lib so that MSVC linker + # can find it + libfile2 = os.path.join(libdir,'%s.lib' % (libname)) + copy_file(libfile, libfile2) + self.temp_files.append(libfile2) + fileexists = True + break + if fileexists: continue + log.warn('could not find library %r in directories %s' \ + % (libname, library_dirs)) return self._wrap_method(old_config._link,lang, (body, headers, include_dirs, libraries, library_dirs, lang)) Modified: branches/distutils-revamp/command/config_compiler.py =================================================================== --- branches/distutils-revamp/command/config_compiler.py 2007-05-21 13:01:20 UTC (rev 3794) +++ branches/distutils-revamp/command/config_compiler.py 2007-05-21 13:15:45 UTC (rev 3795) @@ -1,81 +1,18 @@ import sys -import copy -import distutils.core from distutils.core import Command -from distutils.errors import DistutilsSetupError -from distutils import log -from numpy.distutils.fcompiler import show_fcompilers, new_fcompiler +from numpy.distutils import log -#XXX: Implement confic_cc for enhancing C/C++ compiler options. #XXX: Linker flags def show_fortran_compilers(_cache=[]): # Using cache to prevent infinite recursion - if _cache: - return + if _cache: return _cache.append(1) - show_fcompilers() + from numpy.distutils.fcompiler import show_fcompilers + import distutils.core + dist = distutils.core._setup_distribution + show_fcompilers(dist) -class FCompilerProxy(object): - """ - A layer of indirection to simplify choosing the correct Fortran compiler. - - If need_f90(), f90(), or fortran(requiref90=True) is called at any time, - a Fortran 90 compiler is found and used for *all* Fortran sources, - including Fortran 77 sources. - """ - #XXX The ability to use a separate F77 compiler is likely not - # necessary: of all the compilers we support, only the 'gnu' - # compiler (g77) doesn't support F90, and everything else supports - # both. 
- - def __init__(self, compiler_type, distribution): - self._fcompiler = None - self._have_f77 = None - self._have_f90 = None - self._compiler_type = compiler_type - self.distribution = distribution - - def _set_fcompiler(self, requiref90=False): - fc = new_fcompiler(compiler=self._compiler_type, - dry_run=self.distribution.dry_run, - verbose=self.distribution.verbose, - requiref90=requiref90) - if fc is None: - raise DistutilsSetupError("could not find a Fortran compiler") - fc.customize(self.distribution) - self._fcompiler = fc - self._have_f77 = fc.compiler_f77 is not None - if requiref90: - self._have_f90 = fc.compiler_f90 is not None - log.info('%s (%s)' % (fc.description, fc.get_version())) - - def need_f77(self): - if self._fcompiler is None: - self._set_fcompiler(requiref90=False) - if not self._have_f77: - raise DistutilsSetupError("could not find a Fortran 77 compiler") - - def need_f90(self): - if self._fcompiler is None or self._have_f90 is None: - self._set_fcompiler(requiref90=True) - if not self._have_f90: - raise DistutilsSetupError("could not find a Fortran 90 compiler") - - def f77(self): - self.need_f77() - return copy.copy(self._fcompiler) - - def f90(self): - self.need_f90() - return copy.copy(self._fcompiler) - - def fortran(self, requiref90=False): - if requiref90: - return self.f90() - else: - return self.f77() - class config_fc(Command): """ Distutils command to hold user specified options to Fortran compilers. @@ -83,24 +20,19 @@ config_fc command is used by the FCompiler.customize() method. """ + description = "specify Fortran 77/Fortran 90 compiler information" + user_options = [ ('fcompiler=',None,"specify Fortran compiler type"), ('f77exec=', None, "specify F77 compiler command"), ('f90exec=', None, "specify F90 compiler command"), ('f77flags=',None,"specify F77 compiler flags"), ('f90flags=',None,"specify F90 compiler flags"), - ('ldshared=',None,"shared-library linker command"), - ('ld=',None,"static library linker command"), - ('ar=',None,"archiver command (ar)"), - ('ranlib=',None,"ranlib command"), ('opt=',None,"specify optimization flags"), ('arch=',None,"specify architecture specific optimization flags"), ('debug','g',"compile with debugging information"), ('noopt',None,"compile without optimization"), ('noarch',None,"compile without arch-dependent optimization"), - ('fflags=',None,"extra flags for Fortran compiler"), - ('ldflags=',None,"linker flags"), - ('arflags=',None,"flags for ar"), ] help_options = [ @@ -116,21 +48,77 @@ self.f90exec = None self.f77flags = None self.f90flags = None - self.ldshared = None - self.ld = None - self.ar = None - self.ranlib = None self.opt = None self.arch = None self.debug = None self.noopt = None self.noarch = None - self.fflags = None - self.ldflags = None - self.arflags = None def finalize_options(self): - self.fcompiler = FCompilerProxy(self.fcompiler, self.distribution) + log.info('unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options') + build_clib = self.get_finalized_command('build_clib') + build_ext = self.get_finalized_command('build_ext') + config = self.get_finalized_command('config') + build = self.get_finalized_command('build') + cmd_list = [self, config, build_clib, build_ext, build] + for a in ['fcompiler']: + l = [] + for c in cmd_list: + v = getattr(c,a) + if v is not None: + if not isinstance(v, str): v = v.compiler_type + if v not in l: l.append(v) + if not l: v1 = None + else: v1 = l[0] + if len(l)>1: + log.warn(' commands have different --%s options: %s'\ + ', using 
first in list as default' % (a, l)) + if v1: + for c in cmd_list: + if getattr(c,a) is None: setattr(c, a, v1) def run(self): - pass + # Do nothing. + return + +class config_cc(Command): + """ Distutils command to hold user specified options + to C/C++ compilers. + """ + + description = "specify C/C++ compiler information" + + user_options = [ + ('compiler=',None,"specify C/C++ compiler type"), + ] + + def initialize_options(self): + self.compiler = None + + def finalize_options(self): + log.info('unifing config_cc, config, build_clib, build_ext, build commands --compiler options') + build_clib = self.get_finalized_command('build_clib') + build_ext = self.get_finalized_command('build_ext') + config = self.get_finalized_command('config') + build = self.get_finalized_command('build') + cmd_list = [self, config, build_clib, build_ext, build] + for a in ['compiler']: + l = [] + for c in cmd_list: + v = getattr(c,a) + if v is not None: + if not isinstance(v, str): v = v.compiler_type + if v not in l: l.append(v) + if not l: v1 = None + else: v1 = l[0] + if len(l)>1: + log.warn(' commands have different --%s options: %s'\ + ', using first in list as default' % (a, l)) + if v1: + for c in cmd_list: + if getattr(c,a) is None: setattr(c, a, v1) + return + + def run(self): + # Do nothing. + return Modified: branches/distutils-revamp/core.py =================================================================== --- branches/distutils-revamp/core.py 2007-05-21 13:01:20 UTC (rev 3794) +++ branches/distutils-revamp/core.py 2007-05-21 13:15:45 UTC (rev 3795) @@ -39,6 +39,7 @@ numpy_cmdclass = {'build': build.build, 'build_src': build_src.build_src, 'build_scripts': build_scripts.build_scripts, + 'config_cc': config_compiler.config_cc, 'config_fc': config_compiler.config_fc, 'config': config.config, 'build_ext': build_ext.build_ext, Modified: branches/distutils-revamp/fcompiler/__init__.py =================================================================== --- branches/distutils-revamp/fcompiler/__init__.py 2007-05-21 13:01:20 UTC (rev 3794) +++ branches/distutils-revamp/fcompiler/__init__.py 2007-05-21 13:15:45 UTC (rev 3795) @@ -770,7 +770,6 @@ dist.cmdclass['config_fc'] = config_fc dist.parse_config_files() dist.parse_command_line() - compilers = [] compilers_na = [] compilers_ni = [] @@ -779,12 +778,14 @@ platform_compilers = available_fcompilers_for_platform() for compiler in platform_compilers: v = None + log.set_verbosity(-2) try: c = new_fcompiler(compiler=compiler, verbose=dist.verbose) c.customize(dist) v = c.get_version() except (DistutilsModuleError, CompilerNotFound): pass + if v is None: compilers_na.append(("fcompiler="+compiler, None, fcompiler_class[compiler][2])) Modified: branches/distutils-revamp/misc_util.py =================================================================== --- branches/distutils-revamp/misc_util.py 2007-05-21 13:01:20 UTC (rev 3794) +++ branches/distutils-revamp/misc_util.py 2007-05-21 13:15:45 UTC (rev 3795) @@ -309,8 +309,9 @@ return [seq] def get_language(sources): + # not used in numpy/scipy packages, use build_ext.detect_language instead """ Determine language value (c,f77,f90) from sources """ - language = 'c' + language = None for source in sources: if isinstance(source, str): if f90_ext_match(source): @@ -1020,11 +1021,7 @@ ext_args = copy.copy(kw) ext_args['name'] = dot_join(self.name,name) ext_args['sources'] = sources - - language = ext_args.get('language',None) - if language is None: - ext_args['language'] = get_language(sources) - + if 
ext_args.has_key('extra_info'): extra_info = ext_args['extra_info'] del ext_args['extra_info'] @@ -1089,10 +1086,6 @@ name = name #+ '__OF__' + self.name build_info['sources'] = sources - language = build_info.get('language',None) - if language is None: - build_info['language'] = get_language(sources) - self._fix_paths_dict(build_info) self.libraries.append((name,build_info)) Modified: branches/distutils-revamp/system_info.py =================================================================== --- branches/distutils-revamp/system_info.py 2007-05-21 13:01:20 UTC (rev 3794) +++ branches/distutils-revamp/system_info.py 2007-05-21 13:15:45 UTC (rev 3795) @@ -1107,7 +1107,7 @@ self.set_info(**info) atlas_version_c_text = r''' -/* This file is generated from numpy_distutils/system_info.py */ +/* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); From numpy-svn at scipy.org Mon May 21 19:34:48 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 21 May 2007 18:34:48 -0500 (CDT) Subject: [Numpy-svn] r3796 - trunk/numpy/distutils Message-ID: <20070521233448.D451839C01F@new.scipy.org> Author: rkern Date: 2007-05-21 18:34:44 -0500 (Mon, 21 May 2007) New Revision: 3796 Modified: trunk/numpy/distutils/misc_util.py Log: Use a more robust method for finding the directory of the setup.py file calling Configuration(). easy_install spoofs __name__, thus confusing the old method. Modified: trunk/numpy/distutils/misc_util.py =================================================================== --- trunk/numpy/distutils/misc_util.py 2007-05-21 13:15:45 UTC (rev 3795) +++ trunk/numpy/distutils/misc_util.py 2007-05-21 23:34:44 UTC (rev 3796) @@ -40,29 +40,35 @@ path = apath[len(pd)+1:] return path -def get_path(mod_name, parent_path=None): - """ Return path of the module. +def get_path_from_frame(frame, parent_path=None): + """ Return path of the module given a frame object from the call stack. Returned path is relative to parent_path when given, otherwise it is absolute path. """ - if mod_name == '__builtin__': - #builtin if/then added by Pearu for use in core.run_setup. - d = os.path.dirname(os.path.abspath(sys.argv[0])) - else: - __import__(mod_name) - mod = sys.modules[mod_name] - if hasattr(mod,'__file__'): - filename = mod.__file__ + + # First, try to find if the file name is in the frame. + try: + caller_file = eval('__file__', frame.f_globals, frame.f_locals) + d = os.path.dirname(os.path.abspath(caller_file)) + except NameError: + # __file__ is not defined, so let's try __name__. We try this second + # because setuptools spoofs __name__ to be '__main__' even though + # sys.modules['__main__'] might be something else, like easy_install(1). + caller_name = eval('__name__', frame.f_globals, frame.f_locals) + __import__(caller_name) + mod = sys.modules[caller_name] + if hasattr(mod, '__file__'): d = os.path.dirname(os.path.abspath(mod.__file__)) else: # we're probably running setup.py as execfile("setup.py") # (likely we're building an egg) d = os.path.abspath('.') # hmm, should we use sys.argv[0] like in __builtin__ case? - + if parent_path is not None: d = rel_path(d, parent_path) + return d or '.' 
def njoin(*path): @@ -529,8 +535,7 @@ self.version = None caller_frame = get_frame(caller_level) - caller_name = eval('__name__',caller_frame.f_globals,caller_frame.f_locals) - self.local_path = get_path(caller_name, top_path) + self.local_path = get_path_from_frame(caller_frame, top_path) # local_path -- directory of a file (usually setup.py) that # defines a configuration() function. if top_path is None: From numpy-svn at scipy.org Mon May 21 20:54:43 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 21 May 2007 19:54:43 -0500 (CDT) Subject: [Numpy-svn] r3797 - trunk/numpy/distutils Message-ID: <20070522005443.8F2BA39C04B@new.scipy.org> Author: rkern Date: 2007-05-21 19:54:42 -0500 (Mon, 21 May 2007) New Revision: 3797 Modified: trunk/numpy/distutils/misc_util.py Log: Be robust when the shared distribution object exists does not have data_files set. This can happen when easy_install automatically builds dependencies. Modified: trunk/numpy/distutils/misc_util.py =================================================================== --- trunk/numpy/distutils/misc_util.py 2007-05-21 23:34:44 UTC (rev 3796) +++ trunk/numpy/distutils/misc_util.py 2007-05-22 00:54:42 UTC (rev 3797) @@ -860,7 +860,7 @@ assert not is_glob_pattern(d),`d` dist = self.get_distribution() - if dist is not None: + if dist is not None and dist.data_files is not None: data_files = dist.data_files else: data_files = self.data_files @@ -957,7 +957,7 @@ assert not is_glob_pattern(d),`d,filepat` dist = self.get_distribution() - if dist is not None: + if dist is not None and dist.data_files is not None: data_files = dist.data_files else: data_files = self.data_files From numpy-svn at scipy.org Tue May 22 02:26:05 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 22 May 2007 01:26:05 -0500 (CDT) Subject: [Numpy-svn] r3798 - trunk/numpy/core/include/numpy Message-ID: <20070522062605.67BFF39C10A@new.scipy.org> Author: stefan Date: 2007-05-22 01:25:44 -0500 (Tue, 22 May 2007) New Revision: 3798 Modified: trunk/numpy/core/include/numpy/ndarrayobject.h Log: Fix array interface url. Modified: trunk/numpy/core/include/numpy/ndarrayobject.h =================================================================== --- trunk/numpy/core/include/numpy/ndarrayobject.h 2007-05-22 00:54:42 UTC (rev 3797) +++ trunk/numpy/core/include/numpy/ndarrayobject.h 2007-05-22 06:25:44 UTC (rev 3798) @@ -1773,7 +1773,7 @@ /* This is the form of the struct that's returned pointed by the PyCObject attribute of an array __array_struct__. See - http://numeric.scipy.org/array_interface.html for the full + http://numpy.scipy.org/array_interface.shtml for the full documentation. */ typedef struct { int two; /* contains the integer 2 as a sanity check */ From numpy-svn at scipy.org Tue May 22 05:18:45 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 22 May 2007 04:18:45 -0500 (CDT) Subject: [Numpy-svn] r3799 - trunk/numpy/core Message-ID: <20070522091845.CDC7739C02A@new.scipy.org> Author: oliphant Date: 2007-05-22 04:18:38 -0500 (Tue, 22 May 2007) New Revision: 3799 Modified: trunk/numpy/core/numeric.py Log: Fix scalar inf comparison in allclose. 
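The allclose() change in the diff that follows is easier to read with a small behavioural sketch. The snippet below is illustrative only and is not part of the committed patch; it simply shows, assuming the post-fix semantics, what comparisons involving scalar infinities are expected to return:

    import numpy as np

    # Scalar infinities: equal signs compare close, opposite signs do not.
    print(np.allclose(np.inf, np.inf))     # True
    print(np.allclose(np.inf, -np.inf))    # False
    print(np.allclose(np.inf, 1.0))        # False

    # Mixed finite/infinite arrays are still compared element-wise.
    print(np.allclose([np.inf, 1.0], [np.inf, 1.0 + 1e-9]))   # True

The `d3.size < 2` branch added in the diff covers exactly the degenerate case where the infinite entries reduce to zero or one element, so that the single boolean result can be returned directly.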
Modified: trunk/numpy/core/numeric.py =================================================================== --- trunk/numpy/core/numeric.py 2007-05-22 06:25:44 UTC (rev 3798) +++ trunk/numpy/core/numeric.py 2007-05-22 09:18:38 UTC (rev 3799) @@ -818,10 +818,10 @@ a = array([1]+n*[0],dtype=dtype) b = empty((n,n),dtype=dtype) - # Note that this assignment depends on the convention that since the a array - # is shorter than the flattened b array, then the a array will be repeated - # until it is the appropriate size. Given a's construction, this nicely sets - # the diagonal to all ones. + # Note that this assignment depends on the convention that since the a + # array is shorter than the flattened b array, then the a array will + # be repeated until it is the appropriate size. Given a's construction, + # this nicely sets the diagonal to all ones. b.flat = a return b @@ -840,11 +840,12 @@ yinf = isinf(y) if (not xinf.any() and not yinf.any()): return d1.all() - d2 = (xinf != yinf) d3 = (x[xinf] == y[yinf]) d4 = (~xinf & ~yinf) - if d3.size == 0: - return False + if d3.size < 2: + if d3.size==0: + return False + return d3 if d3.all(): return d1[d4].all() else: From numpy-svn at scipy.org Tue May 22 18:36:13 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 22 May 2007 17:36:13 -0500 (CDT) Subject: [Numpy-svn] r3800 - trunk/numpy/core/src Message-ID: <20070522223613.A314239C071@new.scipy.org> Author: oliphant Date: 2007-05-22 17:36:10 -0500 (Tue, 22 May 2007) New Revision: 3800 Modified: trunk/numpy/core/src/arrayobject.c trunk/numpy/core/src/scalartypes.inc.src Log: Add a few more checks to make sure that numpy unicode scalars report correctly on narrow builds. Fix a long-standing seg-fault that arose when calling u.imag on an object with numpy.unicode_ type. Modified: trunk/numpy/core/src/arrayobject.c =================================================================== --- trunk/numpy/core/src/arrayobject.c 2007-05-22 09:18:38 UTC (rev 3799) +++ trunk/numpy/core/src/arrayobject.c 2007-05-22 22:36:10 UTC (rev 3800) @@ -1378,7 +1378,7 @@ byte_swap_vector(destptr, length, 4); #else /* need aligned data buffer */ - if (!PyArray_ISBEHAVED(base)) { + if ((((intp)data) % descr->alignment) != 0) { buffer = _pya_malloc(itemsize); if (buffer == NULL) return PyErr_NoMemory(); Modified: trunk/numpy/core/src/scalartypes.inc.src =================================================================== --- trunk/numpy/core/src/scalartypes.inc.src 2007-05-22 09:18:38 UTC (rev 3799) +++ trunk/numpy/core/src/scalartypes.inc.src 2007-05-22 22:36:10 UTC (rev 3800) @@ -158,6 +158,9 @@ where only a reference for flexible types is returned */ +/* This may not work right on narrow builds for NumPy unicode scalars. 
+ */ + /*OBJECT_API Cast Scalar to c-type */ @@ -752,13 +755,7 @@ static PyObject * gentype_data_get(PyObject *self) { - PyArray_Descr *typecode; - PyObject *ret; - - typecode = PyArray_DescrFromScalar(self); - ret = PyBuffer_FromObject(self, 0, typecode->elsize); - Py_DECREF(typecode); - return ret; + return PyBuffer_FromObject(self, 0, Py_END_OF_BUFFER); } @@ -767,9 +764,16 @@ { PyArray_Descr *typecode; PyObject *ret; + int elsize; typecode = PyArray_DescrFromScalar(self); - ret = PyInt_FromLong((long) typecode->elsize); + elsize = typecode->elsize; +#ifndef Py_UNICODE_WIDE + if (typecode->type_num == NPY_UNICODE) { + elsize >>= 1; + } +#endif + ret = PyInt_FromLong((long) elsize); Py_DECREF(typecode); return ret; } @@ -928,9 +932,11 @@ } else { char *temp; + int elsize; typecode = PyArray_DescrFromScalar(self); - temp = PyDataMem_NEW(typecode->elsize); - memset(temp, '\0', typecode->elsize); + elsize = typecode->elsize; + temp = PyDataMem_NEW(elsize); + memset(temp, '\0', elsize); ret = PyArray_Scalar(temp, typecode, NULL); PyDataMem_FREE(temp); } @@ -1633,6 +1639,11 @@ numbytes = outcode->elsize; *ptrptr = (void *)scalar_value(self, outcode); +#ifndef Py_UNICODE_WIDE + if (outcode->type_num == NPY_UNICODE) { + numbytes >>= 1; + } +#endif Py_DECREF(outcode); return numbytes; } @@ -1643,8 +1654,14 @@ PyArray_Descr *outcode; outcode = PyArray_DescrFromScalar(self); - if (lenp) + if (lenp) { *lenp = outcode->elsize; +#ifndef Py_UNICODE_WIDE + if (outcode->type_num == NPY_UNICODE) { + *lenp >>= 1; + } +#endif + } Py_DECREF(outcode); return 1; } From numpy-svn at scipy.org Tue May 22 18:55:16 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 22 May 2007 17:55:16 -0500 (CDT) Subject: [Numpy-svn] r3801 - trunk/numpy/core Message-ID: <20070522225516.3F2F439C074@new.scipy.org> Author: oliphant Date: 2007-05-22 17:55:13 -0500 (Tue, 22 May 2007) New Revision: 3801 Modified: trunk/numpy/core/ma.py Log: Added most of patch from #422. Modified: trunk/numpy/core/ma.py =================================================================== --- trunk/numpy/core/ma.py 2007-05-22 22:36:10 UTC (rev 3800) +++ trunk/numpy/core/ma.py 2007-05-22 22:55:13 UTC (rev 3801) @@ -797,7 +797,10 @@ m = self._mask dout = self._data[i] if m is nomask: - return dout + if dout.size == 1: + return dout + else: + return masked_array(dout, fill_value=self._fill_value, copy=False) mi = m[i] if mi.size == 1: if mi: @@ -807,16 +810,6 @@ else: return masked_array(dout, mi, fill_value=self._fill_value) - def __getslice__(self, i, j): - "Get slice described by i, j" - self.unshare_mask() - m = self._mask - dout = self._data[i:j] - if m is nomask: - return masked_array(dout, fill_value=self._fill_value) - else: - return masked_array(dout, mask = m[i:j], fill_value=self._fill_value) - # -------- # setitem and setslice notes # note that if value is masked, it means to mask those locations. @@ -826,7 +819,7 @@ "Set item described by index. If value is masked, mask those locations." d = self._data if self is masked: - raise MAError, 'Cannot alter the masked element.' + raise MAError, 'Cannot alter masked elements.' if value is masked: if self._mask is nomask: self._mask = make_mask_none(d.shape) @@ -850,30 +843,6 @@ self.unshare_mask() self._mask[index] = m - def __setslice__(self, i, j, value): - "Set slice i:j; if value is masked, mask those locations." - d = self._data - if self is masked: - raise MAError, "Cannot alter the 'masked' object." 
- if value is masked: - if self._mask is nomask: - self._mask = make_mask_none(d.shape) - self._shared_mask = False - self._mask[i:j] = True - return - m = getmask(value) - value = filled(value).astype(d.dtype) - d[i:j] = value - if m is nomask: - if self._mask is not nomask: - self.unshare_mask() - self._mask[i:j] = False - else: - if self._mask is nomask: - self._mask = make_mask_none(self._data.shape) - self._shared_mask = False - self._mask[i:j] = m - def __nonzero__(self): """returns true if any element is non-zero or masked From numpy-svn at scipy.org Tue May 22 19:12:05 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 22 May 2007 18:12:05 -0500 (CDT) Subject: [Numpy-svn] r3802 - trunk/numpy/core Message-ID: <20070522231205.AF8E639C074@new.scipy.org> Author: oliphant Date: 2007-05-22 18:12:03 -0500 (Tue, 22 May 2007) New Revision: 3802 Modified: trunk/numpy/core/arrayprint.py Log: Fix ticket #501 which caused some array printing problems Modified: trunk/numpy/core/arrayprint.py =================================================================== --- trunk/numpy/core/arrayprint.py 2007-05-22 22:55:13 UTC (rev 3801) +++ trunk/numpy/core/arrayprint.py 2007-05-22 23:12:03 UTC (rev 3802) @@ -344,6 +344,7 @@ def _floatFormat(data, precision, suppress_small, sign = 0): exp_format = 0 + errstate = _gen.seterr(all='ignore') non_zero = _uf.absolute(data.compress(_uf.not_equal(data, 0))) ##non_zero = _numeric_compress(data) ## if len(non_zero) == 0: @@ -357,6 +358,7 @@ if not suppress_small and (min_val < 0.0001 or max_val/min_val > 1000.): exp_format = 1 + _gen.seterr(**errstate) if exp_format: large_exponent = 0 < min_val < 1e-99 or max_val >= 1e100 max_str_len = 8 + precision + large_exponent From numpy-svn at scipy.org Tue May 22 19:33:06 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 22 May 2007 18:33:06 -0500 (CDT) Subject: [Numpy-svn] r3803 - in trunk/numpy/core: . src tests Message-ID: <20070522233306.4C8FC39C06A@new.scipy.org> Author: oliphant Date: 2007-05-22 18:33:01 -0500 (Tue, 22 May 2007) New Revision: 3803 Modified: trunk/numpy/core/ma.py trunk/numpy/core/src/arraymethods.c trunk/numpy/core/tests/test_unicode.py Log: Remove tests for inequality on unicode scalars --- not sure why they were there in the first place. Fix bug in masked_array. 
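A minimal sketch of the masked-array indexing behaviour being adjusted here, using the modern numpy.ma entry point (the module patched below lived at numpy/core/ma.py in this era); treat it as an illustration, not the exact 2007 semantics:

    import numpy.ma as ma

    x = ma.masked_array([1, 2, 3])             # no mask set, i.e. mask is nomask
    print(x[1])                                # a single element comes back as a plain scalar: 2
    print(type(x[0:2]).__name__)               # MaskedArray: slices keep the masked wrapper
    print(x[0:2].fill_value == x.fill_value)   # True: the patch forwards fill_value explicitly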
Modified: trunk/numpy/core/ma.py =================================================================== --- trunk/numpy/core/ma.py 2007-05-22 23:12:03 UTC (rev 3802) +++ trunk/numpy/core/ma.py 2007-05-22 23:33:01 UTC (rev 3803) @@ -797,10 +797,13 @@ m = self._mask dout = self._data[i] if m is nomask: - if dout.size == 1: + try: + if dout.size == 1: + return dout + else: + return masked_array(dout, fill_value=self._fill_value) + except AttributeError: return dout - else: - return masked_array(dout, fill_value=self._fill_value, copy=False) mi = m[i] if mi.size == 1: if mi: Modified: trunk/numpy/core/src/arraymethods.c =================================================================== --- trunk/numpy/core/src/arraymethods.c 2007-05-22 23:12:03 UTC (rev 3802) +++ trunk/numpy/core/src/arraymethods.c 2007-05-22 23:33:01 UTC (rev 3803) @@ -653,14 +653,19 @@ &descr)) return NULL; if (descr == self->descr) { - obj = _ARET(PyArray_NewCopy(self,0)); + obj = _ARET(PyArray_NewCopy(self,NPY_ANYORDER)); Py_XDECREF(descr); return obj; } if (descr->names != NULL) { - return PyArray_FromArray(self, descr, NPY_FORCECAST); + int flags; + flags = NPY_FORCECAST; + if (PyArray_ISFORTRAN(self)) { + flags |= NPY_FORTRAN; + } + return PyArray_FromArray(self, descr, flags); } - return PyArray_CastToType(self, descr, 0); + return PyArray_CastToType(self, descr, PyArray_ISFORTRAN(self)); } /* default sub-type implementation */ Modified: trunk/numpy/core/tests/test_unicode.py =================================================================== --- trunk/numpy/core/tests/test_unicode.py 2007-05-22 23:12:03 UTC (rev 3802) +++ trunk/numpy/core/tests/test_unicode.py 2007-05-22 23:33:01 UTC (rev 3803) @@ -240,9 +240,7 @@ """Check byteorder of 0-dimensional objects""" ua = array(self.ucs_value*self.ulen, dtype='U%s' % self.ulen) ua2 = ua.newbyteorder() - # Scalars must be different - # Problems here because it seems that ua.view() != ua (!) - self.assert_(ua[()] != ua2[()]) + self.assert_(ua[()] == ua2[()]) ua3 = ua2.newbyteorder() # Arrays must be equal after the round-trip assert_equal(ua, ua3) @@ -251,9 +249,8 @@ """Check byteorder of single-dimensional objects""" ua = array([self.ucs_value*self.ulen]*2, dtype='U%s' % self.ulen) ua2 = ua.newbyteorder() - # Scalars must be different - self.assert_(ua[0] != ua2[0]) - self.assert_(ua[-1] != ua2[-1]) + self.assert_(ua[0] == ua2[0]) + self.assert_(ua[-1] == ua2[-1]) ua3 = ua2.newbyteorder() # Arrays must be equal after the round-trip assert_equal(ua, ua3) @@ -263,9 +260,8 @@ ua = array([[[self.ucs_value*self.ulen]*2]*3]*4, dtype='U%s' % self.ulen) ua2 = ua.newbyteorder() - # Scalars must be different - self.assert_(ua[0,0,0] != ua2[0,0,0]) - self.assert_(ua[-1,-1,-1] != ua2[-1,-1,-1]) + self.assert_(ua[0,0,0] == ua2[0,0,0]) + self.assert_(ua[-1,-1,-1] == ua2[-1,-1,-1]) ua3 = ua2.newbyteorder() # Arrays must be equal after the round-trip assert_equal(ua, ua3) From numpy-svn at scipy.org Tue May 22 22:50:01 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 22 May 2007 21:50:01 -0500 (CDT) Subject: [Numpy-svn] r3804 - in trunk/numpy/core: src tests Message-ID: <20070523025001.A0CDE39C01A@new.scipy.org> Author: oliphant Date: 2007-05-22 21:49:54 -0500 (Tue, 22 May 2007) New Revision: 3804 Modified: trunk/numpy/core/src/arrayobject.c trunk/numpy/core/tests/test_unicode.py Log: Re-think the byte-swapping unicode tests. They were correct to begin with. Try to fix the new bug on narrow builds. 
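A short sketch of the byte-order behaviour the reverted tests below assert, assuming the era's ndarray.newbyteorder() method (newer NumPy spells this arr.view(arr.dtype.newbyteorder())):

    import numpy as np

    ua = np.array([u"abcd"], dtype="U4")
    ua2 = ua.newbyteorder()            # same bytes, reinterpreted with swapped byte order
    print(ua[0] == ua2[0])             # False: the scalars are byte-swapped views of each other
    ua3 = ua2.newbyteorder()           # swap the interpretation back
    print(np.array_equal(ua, ua3))     # True: the round trip restores the original array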
Modified: trunk/numpy/core/src/arrayobject.c =================================================================== --- trunk/numpy/core/src/arrayobject.c 2007-05-22 23:33:01 UTC (rev 3803) +++ trunk/numpy/core/src/arrayobject.c 2007-05-23 02:49:54 UTC (rev 3804) @@ -1378,13 +1378,13 @@ byte_swap_vector(destptr, length, 4); #else /* need aligned data buffer */ - if ((((intp)data) % descr->alignment) != 0) { + if ((swap) || ((((intp)data) % descr->alignment) != 0)) { buffer = _pya_malloc(itemsize); if (buffer == NULL) return PyErr_NoMemory(); alloc = 1; memcpy(buffer, data, itemsize); - if (!PyArray_ISNOTSWAPPED(base)) { + if (swap) { byte_swap_vector(buffer, itemsize >> 2, 4); } Modified: trunk/numpy/core/tests/test_unicode.py =================================================================== --- trunk/numpy/core/tests/test_unicode.py 2007-05-22 23:33:01 UTC (rev 3803) +++ trunk/numpy/core/tests/test_unicode.py 2007-05-23 02:49:54 UTC (rev 3804) @@ -240,7 +240,10 @@ """Check byteorder of 0-dimensional objects""" ua = array(self.ucs_value*self.ulen, dtype='U%s' % self.ulen) ua2 = ua.newbyteorder() - self.assert_(ua[()] == ua2[()]) + # This changes the interpretation of the data region (but not the + # actual data), therefore the returned scalars are not + # the same (they are byte-swapped versions of each other). + self.assert_(ua[()] != ua2[()]) ua3 = ua2.newbyteorder() # Arrays must be equal after the round-trip assert_equal(ua, ua3) @@ -249,8 +252,8 @@ """Check byteorder of single-dimensional objects""" ua = array([self.ucs_value*self.ulen]*2, dtype='U%s' % self.ulen) ua2 = ua.newbyteorder() - self.assert_(ua[0] == ua2[0]) - self.assert_(ua[-1] == ua2[-1]) + self.assert_(ua[0] != ua2[0]) + self.assert_(ua[-1] != ua2[-1]) ua3 = ua2.newbyteorder() # Arrays must be equal after the round-trip assert_equal(ua, ua3) @@ -260,8 +263,8 @@ ua = array([[[self.ucs_value*self.ulen]*2]*3]*4, dtype='U%s' % self.ulen) ua2 = ua.newbyteorder() - self.assert_(ua[0,0,0] == ua2[0,0,0]) - self.assert_(ua[-1,-1,-1] == ua2[-1,-1,-1]) + self.assert_(ua[0,0,0] != ua2[0,0,0]) + self.assert_(ua[-1,-1,-1] != ua2[-1,-1,-1]) ua3 = ua2.newbyteorder() # Arrays must be equal after the round-trip assert_equal(ua, ua3) From numpy-svn at scipy.org Wed May 23 08:43:24 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 23 May 2007 07:43:24 -0500 (CDT) Subject: [Numpy-svn] r3805 - trunk/numpy/core/tests Message-ID: <20070523124324.D5A5039C152@new.scipy.org> Author: stefan Date: 2007-05-23 07:43:09 -0500 (Wed, 23 May 2007) New Revision: 3805 Modified: trunk/numpy/core/tests/test_regression.py Log: Add regression test for ticket #501 [patch by Andrew Straw]. Modified: trunk/numpy/core/tests/test_regression.py =================================================================== --- trunk/numpy/core/tests/test_regression.py 2007-05-23 02:49:54 UTC (rev 3804) +++ trunk/numpy/core/tests/test_regression.py 2007-05-23 12:43:09 UTC (rev 3805) @@ -658,6 +658,15 @@ N.take(x,[0,2],axis=1,out=b) assert_array_equal(a,b) + def check_array_str_64bit(self, level=rlevel): + """Ticket #501""" + s = N.array([1, N.nan],dtype=N.float64) + errstate = N.seterr(all='raise') + try: + sstr = N.array_str(s) + finally: + N.seterr(**errstate) + def check_frompyfunc_endian(self, level=rlevel): """Ticket #503""" from math import radians From numpy-svn at scipy.org Wed May 23 14:07:33 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 23 May 2007 13:07:33 -0500 (CDT) Subject: [Numpy-svn] r3806 - in trunk/numpy/core: . 
src Message-ID: <20070523180733.F24B039C197@new.scipy.org> Author: oliphant Date: 2007-05-23 13:07:27 -0500 (Wed, 23 May 2007) New Revision: 3806 Modified: trunk/numpy/core/_internal.py trunk/numpy/core/src/multiarraymodule.c Log: Remove import multiarray from top of _internal.py Modified: trunk/numpy/core/_internal.py =================================================================== --- trunk/numpy/core/_internal.py 2007-05-23 12:43:09 UTC (rev 3805) +++ trunk/numpy/core/_internal.py 2007-05-23 18:07:27 UTC (rev 3806) @@ -2,7 +2,6 @@ # that implements more complicated stuff. import re -from multiarray import dtype, ndarray import sys if (sys.byteorder == 'little'): @@ -11,6 +10,7 @@ _nbo = '>' def _makenames_list(adict): + from multiarray import dtype allfields = [] fnames = adict.keys() for fname in fnames: @@ -44,6 +44,7 @@ # a dictionary without "names" and "formats" # fields is used as a data-type descriptor. def _usefields(adict, align): + from multiarray import dtype try: names = adict[-1] except KeyError: @@ -109,6 +110,7 @@ # so don't remove the name here, or you'll # break backward compatibilty. def _reconstruct(subtype, shape, dtype): + from multiarray import ndarray return ndarray.__new__(subtype, shape, dtype) @@ -193,6 +195,7 @@ return result def _getintp_ctype(): + from multiarray import dtype val = _getintp_ctype.cache if val is not None: return val Modified: trunk/numpy/core/src/multiarraymodule.c =================================================================== --- trunk/numpy/core/src/multiarraymodule.c 2007-05-23 12:43:09 UTC (rev 3805) +++ trunk/numpy/core/src/multiarraymodule.c 2007-05-23 18:07:27 UTC (rev 3806) @@ -7584,10 +7584,8 @@ if (set_typeinfo(d) != 0) goto err; - if (_numpy_internal == NULL) { - _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal != NULL) return; - } + _numpy_internal = PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal != NULL) return; err: if (!PyErr_Occurred()) { From numpy-svn at scipy.org Wed May 23 14:12:57 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 23 May 2007 13:12:57 -0500 (CDT) Subject: [Numpy-svn] r3807 - trunk/numpy/core Message-ID: <20070523181257.12A5239C1A7@new.scipy.org> Author: oliphant Date: 2007-05-23 13:12:52 -0500 (Wed, 23 May 2007) New Revision: 3807 Modified: trunk/numpy/core/__init__.py Log: Add an dummy import statement so that freeze programs pick up _internal.p Modified: trunk/numpy/core/__init__.py =================================================================== --- trunk/numpy/core/__init__.py 2007-05-23 18:07:27 UTC (rev 3806) +++ trunk/numpy/core/__init__.py 2007-05-23 18:12:52 UTC (rev 3807) @@ -4,6 +4,7 @@ import multiarray import umath +import _internal # for freeze programs import numerictypes as nt multiarray.set_typeDict(nt.sctypeDict) import _sort From numpy-svn at scipy.org Wed May 23 14:47:15 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 23 May 2007 13:47:15 -0500 (CDT) Subject: [Numpy-svn] r3808 - in trunk/numpy/core: include/numpy src Message-ID: <20070523184715.D92AB39C0BF@new.scipy.org> Author: oliphant Date: 2007-05-23 13:47:08 -0500 (Wed, 23 May 2007) New Revision: 3808 Modified: trunk/numpy/core/include/numpy/ndarrayobject.h trunk/numpy/core/src/arraymethods.c trunk/numpy/core/src/arrayobject.c trunk/numpy/core/src/multiarraymodule.c trunk/numpy/core/src/scalartypes.inc.src Log: Fix so that _internal.py gets imported when it is needed. 
Perhaps this will fix the problem with multiple-interpreters not working correctly. Modified: trunk/numpy/core/include/numpy/ndarrayobject.h =================================================================== --- trunk/numpy/core/include/numpy/ndarrayobject.h 2007-05-23 18:12:52 UTC (rev 3807) +++ trunk/numpy/core/include/numpy/ndarrayobject.h 2007-05-23 18:47:08 UTC (rev 3808) @@ -1230,6 +1230,13 @@ #define fortran fortran_ /* For some compilers */ +/* Array Flags Object */ +typedef struct PyArrayFlagsObject { + PyObject_HEAD + PyObject *arr; + int flags; +} PyArrayFlagsObject; + /* Mirrors buffer object to ptr */ typedef struct { Modified: trunk/numpy/core/src/arraymethods.c =================================================================== --- trunk/numpy/core/src/arraymethods.c 2007-05-23 18:12:52 UTC (rev 3807) +++ trunk/numpy/core/src/arraymethods.c 2007-05-23 18:47:08 UTC (rev 3808) @@ -872,12 +872,15 @@ if (order == Py_None) order = NULL; if (order != NULL) { PyObject *new_name; + PyObject *_numpy_internal; saved = self->descr; if (saved->names == NULL) { PyErr_SetString(PyExc_ValueError, "Cannot specify " \ "order when the array has no fields."); return NULL; } + _numpy_internal = PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal == NULL) return NULL; new_name = PyObject_CallMethod(_numpy_internal, "_newnames", "OO", saved, order); if (new_name == NULL) return NULL; @@ -914,12 +917,15 @@ if (order == Py_None) order = NULL; if (order != NULL) { PyObject *new_name; + PyObject *_numpy_internal; saved = self->descr; if (saved->names == NULL) { PyErr_SetString(PyExc_ValueError, "Cannot specify " \ "order when the array has no fields."); return NULL; } + _numpy_internal = PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal == NULL) return NULL; new_name = PyObject_CallMethod(_numpy_internal, "_newnames", "OO", saved, order); if (new_name == NULL) return NULL; Modified: trunk/numpy/core/src/arrayobject.c =================================================================== --- trunk/numpy/core/src/arrayobject.c 2007-05-23 18:12:52 UTC (rev 3807) +++ trunk/numpy/core/src/arrayobject.c 2007-05-23 18:47:08 UTC (rev 3808) @@ -6131,6 +6131,9 @@ static PyObject * array_ctypes_get(PyArrayObject *self) { + PyObject *_numpy_internal; + _numpy_internal = PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal == NULL) return NULL; return PyObject_CallMethod(_numpy_internal, "_ctypes", "ON", self, PyLong_FromVoidPtr(self->data)); @@ -10844,6 +10847,7 @@ arraydescr_protocol_descr_get(PyArray_Descr *self) { PyObject *dobj, *res; + PyObject *_numpy_internal; if (self->names == NULL) { /* get default */ @@ -10858,6 +10862,8 @@ return res; } + _numpy_internal = PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal == NULL) return NULL; return PyObject_CallMethod(_numpy_internal, "_array_descr", "O", self); } @@ -11637,12 +11643,6 @@ /** Array Flags Object **/ -typedef struct PyArrayFlagsObject { - PyObject_HEAD - PyObject *arr; - int flags; -} PyArrayFlagsObject; - /*OBJECT_API Get New ArrayFlagsObject */ Modified: trunk/numpy/core/src/multiarraymodule.c =================================================================== --- trunk/numpy/core/src/multiarraymodule.c 2007-05-23 18:12:52 UTC (rev 3807) +++ trunk/numpy/core/src/multiarraymodule.c 2007-05-23 18:47:08 UTC (rev 3808) @@ -23,9 +23,9 @@ #include "numpy/arrayobject.h" #define PyAO PyArrayObject + static PyObject *typeDict=NULL; /* Must be explicitly loaded */ -static PyObject 
*_numpy_internal=NULL; /* A Python module for callbacks */ static PyArray_Descr * _arraydescr_fromobj(PyObject *obj) @@ -4859,8 +4859,11 @@ { PyObject *listobj; PyArray_Descr *res; + PyObject *_numpy_internal; if (!PyString_Check(obj)) return NULL; + _numpy_internal = PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal == NULL) return NULL; listobj = PyObject_CallMethod(_numpy_internal, "_commastring", "O", obj); if (!listobj) return NULL; @@ -4926,6 +4929,9 @@ static PyArray_Descr * _use_fields_dict(PyObject *obj, int align) { + PyObject *_numpy_internal; + _numpy_internal = PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal == NULL) return NULL; return (PyArray_Descr *)PyObject_CallMethod(_numpy_internal, "_usefields", "Oi", obj, align); @@ -7583,10 +7589,8 @@ set_flaginfo(d); if (set_typeinfo(d) != 0) goto err; + return; - _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal != NULL) return; - err: if (!PyErr_Occurred()) { PyErr_SetString(PyExc_RuntimeError, Modified: trunk/numpy/core/src/scalartypes.inc.src =================================================================== --- trunk/numpy/core/src/scalartypes.inc.src 2007-05-23 18:12:52 UTC (rev 3807) +++ trunk/numpy/core/src/scalartypes.inc.src 2007-05-23 18:47:08 UTC (rev 3808) @@ -740,8 +740,12 @@ static PyObject * voidtype_flags_get(PyVoidScalarObject *self) { - return PyObject_CallMethod(_numpy_internal, "flagsobj", "Oii", - self, self->flags, 1); + PyObject *flagobj; + flagobj = PyArrayFlags_Type.tp_alloc(&PyArrayFlags_Type, 0); + if (flagobj == NULL) return NULL; + ((PyArrayFlagsObject *)flagobj)->arr = NULL; + ((PyArrayFlagsObject *)flagobj)->flags = self->flags; + return flagobj; } static PyObject * @@ -2657,11 +2661,15 @@ { PyObject *tup; PyObject *ret; + PyObject *_numpy_internal; + if (!PyDict_Check(fields)) { PyErr_SetString(PyExc_TypeError, "Fields must be a dictionary"); return NULL; } + _numpy_internal = PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal == NULL) return NULL; tup = PyObject_CallMethod(_numpy_internal, "_makenames_list", "O", fields); if (tup == NULL) return NULL; ret = PyTuple_GET_ITEM(tup, 0); From numpy-svn at scipy.org Wed May 23 15:30:21 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 23 May 2007 14:30:21 -0500 (CDT) Subject: [Numpy-svn] r3809 - trunk/numpy/lib Message-ID: <20070523193021.60DCB39C0AC@new.scipy.org> Author: oliphant Date: 2007-05-23 14:30:18 -0500 (Wed, 23 May 2007) New Revision: 3809 Modified: trunk/numpy/lib/getlimits.py Log: Expose numpy.iinfo and re-implement so it supports big-endian as well. Modified: trunk/numpy/lib/getlimits.py =================================================================== --- trunk/numpy/lib/getlimits.py 2007-05-23 18:47:08 UTC (rev 3808) +++ trunk/numpy/lib/getlimits.py 2007-05-23 19:30:18 UTC (rev 3809) @@ -1,7 +1,7 @@ """ Machine limits for Float32 and Float64 and (long double) if available... 
""" -__all__ = ['finfo'] +__all__ = ['finfo','iinfo'] from machar import MachAr import numpy.core.numeric as numeric @@ -124,39 +124,29 @@ """ - # Should be using dtypes as keys, but hash-function isn't yet implemented - _min_values = {'int8': -2**7, - 'int16': -2**15, - 'int32': -2**31, - 'int64': -2**63, - 'uint8': 0, - 'uint16': 0, - 'uint32': 0, - 'uint64': 0} - - _max_values = {'int8': 2**7 - 1, - 'int16': 2**15 - 1, - 'int32': 2**31 - 1, - 'int64': 2**63 - 1, - 'uint8': 2**8 - 1, - 'uint16': 2**16 - 1, - 'uint32': 2**32 - 1, - 'uint64': 2**64 - 1} - def __init__(self, type): - self.dtype = str(N.dtype(type)) - if not (self.dtype in self._min_values and \ - self.dtype in self._max_values): + self.dtype = N.dtype(type) + self.kind = self.dtype.kind + self.bits = self.dtype.itemsize * 8 + if not self.kind in 'iu': raise ValueError("Invalid integer data type.") def min(self): """Minimum value of given dtype.""" - return self._min_values[self.dtype] + if self.kind == 'u': + return 0 + else: + return -(1 << (self.bits-1)) + min = property(min) def max(self): """Maximum value of given dtype.""" - return self._max_values[self.dtype] + if self.kind == 'u': + return (1 << self.bits) - 1 + else: + return (1 << (self.bits-1)) - 1 + max = property(max) if __name__ == '__main__': From numpy-svn at scipy.org Wed May 23 16:25:36 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 23 May 2007 15:25:36 -0500 (CDT) Subject: [Numpy-svn] r3810 - trunk/numpy/core/src Message-ID: <20070523202536.8D5FB39C0FE@new.scipy.org> Author: oliphant Date: 2007-05-23 15:25:31 -0500 (Wed, 23 May 2007) New Revision: 3810 Modified: trunk/numpy/core/src/arraymethods.c trunk/numpy/core/src/arrayobject.c trunk/numpy/core/src/multiarraymodule.c trunk/numpy/core/src/scalartypes.inc.src Log: Properly decrement references for _internal.py imports Modified: trunk/numpy/core/src/arraymethods.c =================================================================== --- trunk/numpy/core/src/arraymethods.c 2007-05-23 19:30:18 UTC (rev 3809) +++ trunk/numpy/core/src/arraymethods.c 2007-05-23 20:25:31 UTC (rev 3810) @@ -883,6 +883,7 @@ if (_numpy_internal == NULL) return NULL; new_name = PyObject_CallMethod(_numpy_internal, "_newnames", "OO", saved, order); + Py_DECREF(_numpy_internal); if (new_name == NULL) return NULL; newd = PyArray_DescrNew(saved); newd->names = new_name; @@ -928,6 +929,7 @@ if (_numpy_internal == NULL) return NULL; new_name = PyObject_CallMethod(_numpy_internal, "_newnames", "OO", saved, order); + Py_DECREF(_numpy_internal); if (new_name == NULL) return NULL; newd = PyArray_DescrNew(saved); newd->names = new_name; Modified: trunk/numpy/core/src/arrayobject.c =================================================================== --- trunk/numpy/core/src/arrayobject.c 2007-05-23 19:30:18 UTC (rev 3809) +++ trunk/numpy/core/src/arrayobject.c 2007-05-23 20:25:31 UTC (rev 3810) @@ -6132,11 +6132,14 @@ array_ctypes_get(PyArrayObject *self) { PyObject *_numpy_internal; + PyObject *ret; _numpy_internal = PyImport_ImportModule("numpy.core._internal"); if (_numpy_internal == NULL) return NULL; - return PyObject_CallMethod(_numpy_internal, "_ctypes", - "ON", self, - PyLong_FromVoidPtr(self->data)); + ret = PyObject_CallMethod(_numpy_internal, "_ctypes", + "ON", self, + PyLong_FromVoidPtr(self->data)); + Py_DECREF(_numpy_internal); + return ret; } static PyObject * @@ -10864,8 +10867,10 @@ _numpy_internal = PyImport_ImportModule("numpy.core._internal"); if (_numpy_internal == NULL) return NULL; - return 
PyObject_CallMethod(_numpy_internal, "_array_descr", - "O", self); + res = PyObject_CallMethod(_numpy_internal, "_array_descr", + "O", self); + Py_DECREF(_numpy_internal); + return res; } /* returns 1 for a builtin type Modified: trunk/numpy/core/src/multiarraymodule.c =================================================================== --- trunk/numpy/core/src/multiarraymodule.c 2007-05-23 19:30:18 UTC (rev 3809) +++ trunk/numpy/core/src/multiarraymodule.c 2007-05-23 20:25:31 UTC (rev 3810) @@ -26,6 +26,7 @@ static PyObject *typeDict=NULL; /* Must be explicitly loaded */ +static PyObject *_internal_pname=NULL; static PyArray_Descr * _arraydescr_fromobj(PyObject *obj) @@ -4866,6 +4867,7 @@ if (_numpy_internal == NULL) return NULL; listobj = PyObject_CallMethod(_numpy_internal, "_commastring", "O", obj); + Py_DECREF(_numpy_internal); if (!listobj) return NULL; if (!PyList_Check(listobj) || PyList_GET_SIZE(listobj)<1) { PyErr_SetString(PyExc_RuntimeError, "_commastring is " \ @@ -4929,12 +4931,14 @@ static PyArray_Descr * _use_fields_dict(PyObject *obj, int align) { - PyObject *_numpy_internal; + PyObject *_numpy_internal, *res; _numpy_internal = PyImport_ImportModule("numpy.core._internal"); if (_numpy_internal == NULL) return NULL; - return (PyArray_Descr *)PyObject_CallMethod(_numpy_internal, - "_usefields", - "Oi", obj, align); + res = (PyArray_Descr *)PyObject_CallMethod(_numpy_internal, + "_usefields", + "Oi", obj, align); + Py_DECREF(_numpy_internal); + return res; } static PyArray_Descr * Modified: trunk/numpy/core/src/scalartypes.inc.src =================================================================== --- trunk/numpy/core/src/scalartypes.inc.src 2007-05-23 19:30:18 UTC (rev 3809) +++ trunk/numpy/core/src/scalartypes.inc.src 2007-05-23 20:25:31 UTC (rev 3810) @@ -2671,6 +2671,7 @@ _numpy_internal = PyImport_ImportModule("numpy.core._internal"); if (_numpy_internal == NULL) return NULL; tup = PyObject_CallMethod(_numpy_internal, "_makenames_list", "O", fields); + Py_DECREF(_numpy_internal); if (tup == NULL) return NULL; ret = PyTuple_GET_ITEM(tup, 0); ret = PySequence_Tuple(ret); From numpy-svn at scipy.org Wed May 23 16:32:20 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 23 May 2007 15:32:20 -0500 (CDT) Subject: [Numpy-svn] r3811 - trunk/numpy/core/src Message-ID: <20070523203220.8C71139C0AC@new.scipy.org> Author: oliphant Date: 2007-05-23 15:32:17 -0500 (Wed, 23 May 2007) New Revision: 3811 Modified: trunk/numpy/core/src/multiarraymodule.c Log: Fix some compiler warnings. 
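The r3806-r3810 series above replaces a cached module-level _numpy_internal pointer with an import at each call site plus a matching Py_DECREF. A rough Python-level sketch of why per-call imports stay cheap (the module and function names below are placeholders, not part of the patches):

    import importlib
    import sys

    def call_helper(module_name, func_name, *args):
        # After the first import the module object is cached in sys.modules,
        # so re-importing here is just a dict lookup followed by getattr.
        mod = importlib.import_module(module_name)
        return getattr(mod, func_name)(*args)

    print(call_helper("json", "dumps", {"a": 1}))   # '{"a": 1}'
    print("json" in sys.modules)                    # True: cached for later calls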
Modified: trunk/numpy/core/src/multiarraymodule.c =================================================================== --- trunk/numpy/core/src/multiarraymodule.c 2007-05-23 20:25:31 UTC (rev 3810) +++ trunk/numpy/core/src/multiarraymodule.c 2007-05-23 20:32:17 UTC (rev 3811) @@ -26,7 +26,6 @@ static PyObject *typeDict=NULL; /* Must be explicitly loaded */ -static PyObject *_internal_pname=NULL; static PyArray_Descr * _arraydescr_fromobj(PyObject *obj) @@ -4931,7 +4930,8 @@ static PyArray_Descr * _use_fields_dict(PyObject *obj, int align) { - PyObject *_numpy_internal, *res; + PyObject *_numpy_internal; + PyArray_Descr *res; _numpy_internal = PyImport_ImportModule("numpy.core._internal"); if (_numpy_internal == NULL) return NULL; res = (PyArray_Descr *)PyObject_CallMethod(_numpy_internal, From numpy-svn at scipy.org Wed May 23 18:03:48 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 23 May 2007 17:03:48 -0500 (CDT) Subject: [Numpy-svn] r3812 - trunk/numpy/lib Message-ID: <20070523220348.B695739C18C@new.scipy.org> Author: oliphant Date: 2007-05-23 17:03:42 -0500 (Wed, 23 May 2007) New Revision: 3812 Modified: trunk/numpy/lib/getlimits.py Log: Fix up getlimits to work with Python2.3 Modified: trunk/numpy/lib/getlimits.py =================================================================== --- trunk/numpy/lib/getlimits.py 2007-05-23 20:32:17 UTC (rev 3811) +++ trunk/numpy/lib/getlimits.py 2007-05-23 22:03:42 UTC (rev 3812) @@ -124,10 +124,14 @@ """ + _min_vals = {} + _max_vals = {} + def __init__(self, type): self.dtype = N.dtype(type) self.kind = self.dtype.kind self.bits = self.dtype.itemsize * 8 + self.key = "%s%d" % (self.kind, self.bits) if not self.kind in 'iu': raise ValueError("Invalid integer data type.") @@ -136,16 +140,26 @@ if self.kind == 'u': return 0 else: - return -(1 << (self.bits-1)) + try: + val = iinfo._min_vals[self.key] + except KeyError: + val = int(-(1L << (self.bits-1))) + iinfo._min_vals[self.key] = val + return val min = property(min) def max(self): """Maximum value of given dtype.""" - if self.kind == 'u': - return (1 << self.bits) - 1 - else: - return (1 << (self.bits-1)) - 1 + try: + val = iinfo._max_vals[self.key] + except KeyError: + if self.kind == 'u': + val = int((1L << self.bits) - 1) + else: + val = int((1L << (self.bits-1)) - 1) + iinfo._max_vals[self.key] = val + return val max = property(max) From numpy-svn at scipy.org Wed May 23 18:04:54 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 23 May 2007 17:04:54 -0500 (CDT) Subject: [Numpy-svn] r3813 - trunk/numpy/lib Message-ID: <20070523220454.2AC6739C166@new.scipy.org> Author: oliphant Date: 2007-05-23 17:04:51 -0500 (Wed, 23 May 2007) New Revision: 3813 Modified: trunk/numpy/lib/getlimits.py Log: Fix tab/space. 
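A quick usage sketch of the iinfo class introduced in r3809 and made Python 2.3-safe in r3812, assuming the numpy.iinfo name it exposes; the values follow directly from the kind/bit-width computation in the patch:

    import numpy as np

    print(np.iinfo(np.int32).min)    # -2147483648, i.e. -(1 << 31)
    print(np.iinfo(np.int32).max)    #  2147483647, i.e. (1 << 31) - 1
    print(np.iinfo(np.uint8).max)    #  255, i.e. (1 << 8) - 1
    # np.iinfo(np.float64) would raise ValueError: only integer kinds are accepted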
Modified: trunk/numpy/lib/getlimits.py =================================================================== --- trunk/numpy/lib/getlimits.py 2007-05-23 22:03:42 UTC (rev 3812) +++ trunk/numpy/lib/getlimits.py 2007-05-23 22:04:51 UTC (rev 3813) @@ -131,7 +131,7 @@ self.dtype = N.dtype(type) self.kind = self.dtype.kind self.bits = self.dtype.itemsize * 8 - self.key = "%s%d" % (self.kind, self.bits) + self.key = "%s%d" % (self.kind, self.bits) if not self.kind in 'iu': raise ValueError("Invalid integer data type.") From numpy-svn at scipy.org Wed May 23 18:18:21 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 23 May 2007 17:18:21 -0500 (CDT) Subject: [Numpy-svn] r3814 - tags Message-ID: <20070523221821.DEAA839C0DF@new.scipy.org> Author: oliphant Date: 2007-05-23 17:18:18 -0500 (Wed, 23 May 2007) New Revision: 3814 Added: tags/1.0.3/ Log: Tag tree for 1.0.3 release Copied: tags/1.0.3 (from rev 3813, trunk) From numpy-svn at scipy.org Wed May 23 18:18:56 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 23 May 2007 17:18:56 -0500 (CDT) Subject: [Numpy-svn] r3815 - trunk/numpy Message-ID: <20070523221856.716F039C0DF@new.scipy.org> Author: oliphant Date: 2007-05-23 17:18:54 -0500 (Wed, 23 May 2007) New Revision: 3815 Modified: trunk/numpy/version.py Log: Update version number on trunk. Modified: trunk/numpy/version.py =================================================================== --- trunk/numpy/version.py 2007-05-23 22:18:18 UTC (rev 3814) +++ trunk/numpy/version.py 2007-05-23 22:18:54 UTC (rev 3815) @@ -1,4 +1,4 @@ -version='1.0.3' +version='1.0.4' release=False if not release: From numpy-svn at scipy.org Wed May 23 18:19:57 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 23 May 2007 17:19:57 -0500 (CDT) Subject: [Numpy-svn] r3816 - tags/1.0.3/numpy Message-ID: <20070523221957.3B02339C0DF@new.scipy.org> Author: oliphant Date: 2007-05-23 17:19:54 -0500 (Wed, 23 May 2007) New Revision: 3816 Modified: tags/1.0.3/numpy/version.py Log: Make 1.0.3 tag a version release. Modified: tags/1.0.3/numpy/version.py =================================================================== --- tags/1.0.3/numpy/version.py 2007-05-23 22:18:54 UTC (rev 3815) +++ tags/1.0.3/numpy/version.py 2007-05-23 22:19:54 UTC (rev 3816) @@ -1,5 +1,5 @@ version='1.0.3' -release=False +release=True if not release: version += '.dev' From numpy-svn at scipy.org Thu May 24 14:31:41 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 24 May 2007 13:31:41 -0500 (CDT) Subject: [Numpy-svn] r3817 - trunk/numpy/lib Message-ID: <20070524183141.36A5E39C0FC@new.scipy.org> Author: edschofield Date: 2007-05-24 13:31:28 -0500 (Thu, 24 May 2007) New Revision: 3817 Modified: trunk/numpy/lib/shape_base.py Log: Fix docstring typo for vstack() Modified: trunk/numpy/lib/shape_base.py =================================================================== --- trunk/numpy/lib/shape_base.py 2007-05-23 22:19:54 UTC (rev 3816) +++ trunk/numpy/lib/shape_base.py 2007-05-24 18:31:28 UTC (rev 3817) @@ -184,7 +184,7 @@ """ Stack arrays in sequence vertically (row wise) Description: - Take a sequence of arrays and stack them veritcally + Take a sequence of arrays and stack them vertically to make a single array. All arrays in the sequence must have the same shape along all but the first axis. vstack will rebuild arrays divided by vsplit. 
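A small example of the vstack/vsplit round trip that the corrected docstring above describes; nothing here is specific to this patch:

    import numpy as np

    a = np.array([[1, 2], [3, 4]])
    b = np.array([[5, 6]])
    stacked = np.vstack((a, b))           # shape (3, 2); inputs agree on all but the first axis
    parts = np.vsplit(stacked, [2])       # split back into the row blocks
    print(np.array_equal(np.vstack(parts), stacked))   # True: vstack rebuilds what vsplit divides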
From numpy-svn at scipy.org Thu May 24 14:41:00 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 24 May 2007 13:41:00 -0500 (CDT) Subject: [Numpy-svn] r3818 - trunk/numpy Message-ID: <20070524184100.AE55C39C0FC@new.scipy.org> Author: edschofield Date: 2007-05-24 13:40:58 -0500 (Thu, 24 May 2007) New Revision: 3818 Modified: trunk/numpy/_import_tools.py Log: Fix docstring formatting for PackageLoader class Modified: trunk/numpy/_import_tools.py =================================================================== --- trunk/numpy/_import_tools.py 2007-05-24 18:31:28 UTC (rev 3817) +++ trunk/numpy/_import_tools.py 2007-05-24 18:40:58 UTC (rev 3818) @@ -133,10 +133,10 @@ Usage: - This function is intended to shorten the need to import many of + This function is intended to shorten the need to import many subpackages, say of scipy, constantly with statements such as - import scipy.linalg, scipy.fftpack, scipy.etc... + import scipy.linalg, scipy.fftpack, scipy.etc... Instead, you can say: @@ -154,18 +154,19 @@ Inputs: - - the names (one or more strings) of all the numpy modules one wishes to - load into the top-level namespace. + - the names (one or more strings) of all the numpy modules one + wishes to load into the top-level namespace. Optional keyword inputs: - verbose - integer specifying verbosity level [default: -1]. verbose=-1 will suspend also warnings. - - force - when True, force reloading loaded packages [default: False]. + - force - when True, force reloading loaded packages + [default: False]. - postpone - when True, don't load packages [default: False] - If no input arguments are given, then all of scipy's subpackages are - imported. + If no input arguments are given, then all of scipy's subpackages + are imported. """ frame = self.parent_frame From numpy-svn at scipy.org Thu May 24 14:48:50 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 24 May 2007 13:48:50 -0500 (CDT) Subject: [Numpy-svn] r3819 - trunk/numpy/testing Message-ID: <20070524184850.E4EA539C0FC@new.scipy.org> Author: edschofield Date: 2007-05-24 13:48:47 -0500 (Thu, 24 May 2007) New Revision: 3819 Modified: trunk/numpy/testing/numpytest.py Log: Improve docstring formatting for NumpyTest Modified: trunk/numpy/testing/numpytest.py =================================================================== --- trunk/numpy/testing/numpytest.py 2007-05-24 18:40:58 UTC (rev 3818) +++ trunk/numpy/testing/numpytest.py 2007-05-24 18:48:47 UTC (rev 3819) @@ -239,22 +239,22 @@ is package name or its module object. - Package is supposed to contain a directory tests/ - with test_*.py files where * refers to the names of submodules. - See .rename() method to redefine name mapping between test_*.py files - and names of submodules. Pattern test_*.py can be overwritten by - redefining .get_testfile() method. + Package is supposed to contain a directory tests/ with test_*.py + files where * refers to the names of submodules. See .rename() + method to redefine name mapping between test_*.py files and names of + submodules. Pattern test_*.py can be overwritten by redefining + .get_testfile() method. - test_*.py files are supposed to define a classes, derived - from NumpyTestCase or unittest.TestCase, with methods having - names starting with test or bench or check. The names of TestCase - classes must have a prefix test. This can be overwritten by - redefining .check_testcase_name() method. 
+ test_*.py files are supposed to define a classes, derived from + NumpyTestCase or unittest.TestCase, with methods having names + starting with test or bench or check. The names of TestCase classes + must have a prefix test. This can be overwritten by redefining + .check_testcase_name() method. And that is it! No need to implement test or test_suite functions in each .py file. - Also old styled test_suite(level=1) hooks are supported. + Old-style test_suite(level=1) hooks are also supported. """ _check_testcase_name = re.compile(r'test.*').match def check_testcase_name(self, name): @@ -293,9 +293,14 @@ self._rename_map = {} def rename(self, **kws): - """ Apply renaming submodule test file test_.py to test_.py. - Usage: self.rename(name='newname') before calling self.test() method. - If 'newname' is None, then no tests will be executed for a given module. + """Apply renaming submodule test file test_.py to + test_.py. + + Usage: self.rename(name='newname') before calling the + self.test() method. + + If 'newname' is None, then no tests will be executed for a given + module. """ for k,v in kws.items(): self._rename_map[k] = v @@ -533,12 +538,12 @@ True --- run all test files (like self.testall()) False (default) --- only run test files associated with a module - It is assumed (when all=False) that package tests suite follows the - following convention: for each package module, there exists file - /tests/test_.py that defines TestCase classes - (with names having prefix 'test_') with methods (with names having - prefixes 'check_' or 'bench_'); each of these methods are called when - running unit tests. + It is assumed (when all=False) that package tests suite follows + the following convention: for each package module, there exists + file /tests/test_.py that defines + TestCase classes (with names having prefix 'test_') with methods + (with names having prefixes 'check_' or 'bench_'); each of these + methods are called when running unit tests. """ if level is None: # Do nothing. return From numpy-svn at scipy.org Thu May 24 14:52:28 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 24 May 2007 13:52:28 -0500 (CDT) Subject: [Numpy-svn] r3820 - trunk/numpy/lib Message-ID: <20070524185228.3739A39C11D@new.scipy.org> Author: edschofield Date: 2007-05-24 13:52:25 -0500 (Thu, 24 May 2007) New Revision: 3820 Modified: trunk/numpy/lib/utils.py Log: Change scipy -> numpy in who() docstring Modified: trunk/numpy/lib/utils.py =================================================================== --- trunk/numpy/lib/utils.py 2007-05-24 18:48:47 UTC (rev 3819) +++ trunk/numpy/lib/utils.py 2007-05-24 18:52:25 UTC (rev 3820) @@ -163,7 +163,7 @@ def who(vardict=None): - """Print the scipy arrays in the given dictionary (or globals() if None). + """Print the Numpy arrays in the given dictionary (or globals() if None). """ if vardict is None: frame = sys._getframe().f_back From numpy-svn at scipy.org Thu May 24 15:17:21 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 24 May 2007 14:17:21 -0500 (CDT) Subject: [Numpy-svn] r3821 - trunk/numpy/core Message-ID: <20070524191721.6CFFF39C05E@new.scipy.org> Author: edschofield Date: 2007-05-24 14:17:17 -0500 (Thu, 24 May 2007) New Revision: 3821 Modified: trunk/numpy/core/fromnumeric.py Log: Fix the formatting of docstrings for all functions in fromnumeric.py so they don't wrap when using help() from an 80-character terminal. 
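A throwaway check for the 80-column constraint this commit enforces (illustrative only; numpy.take stands in for any of the reformatted functions):

    import numpy as np

    doc = np.take.__doc__ or ""
    too_wide = [line for line in doc.splitlines() if len(line) > 79]
    print(len(too_wide))   # ideally 0, so help(np.take) fits an 80-character terminal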
Modified: trunk/numpy/core/fromnumeric.py =================================================================== --- trunk/numpy/core/fromnumeric.py 2007-05-24 18:52:25 UTC (rev 3820) +++ trunk/numpy/core/fromnumeric.py 2007-05-24 19:17:17 UTC (rev 3821) @@ -46,8 +46,8 @@ """Return an array with values pulled from the given array at the given indices. - This function does the same thing as "fancy" indexing; however, it can be - easier to use if you need to specify a given axis. + This function does the same thing as "fancy" indexing; however, it can + be easier to use if you need to specify a given axis. :Parameters: - `a` : array @@ -55,12 +55,13 @@ - `indices` : int array The indices of the values to extract. - `axis` : None or int, optional (default=None) - The axis over which to select values. None signifies that the operation - should be performed over the flattened array. + The axis over which to select values. None signifies that the + operation should be performed over the flattened array. - `out` : array, optional - If provided, the result will be inserted into this array. It should be - of the appropriate shape and dtype. - - `mode` : one of 'raise', 'wrap', or 'clip', optional (default='raise') + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype. + - `mode` : one of 'raise', 'wrap', or 'clip', optional + (default='raise') Specifies how out-of-bounds indices will behave. - 'raise' : raise an error - 'wrap' : wrap around @@ -95,8 +96,8 @@ :Returns: - `reshaped_array` : array - This will be a new view object if possible; otherwise, it will return - a copy. + This will be a new view object if possible; otherwise, it will + return a copy. :See also: numpy.ndarray.reshape() is the equivalent method. @@ -111,20 +112,21 @@ def choose(a, choices, out=None, mode='raise'): """Use an index array to construct a new array from a set of choices. - Given an array of integers in {0, 1, ..., n-1} and a set of n choice arrays, - this function will create a new array that merges each of the choice arrays. - Where a value in `a` is i, then the new array will have the value that - choices[i] contains in the same place. + Given an array of integers in {0, 1, ..., n-1} and a set of n choice + arrays, this function will create a new array that merges each of the + choice arrays. Where a value in `a` is i, then the new array will have + the value that choices[i] contains in the same place. :Parameters: - `a` : int array - This array must contain integers in [0, n-1], where n is the number of - choices. + This array must contain integers in [0, n-1], where n is the number + of choices. - `choices` : sequence of arrays - Each of the choice arrays should have the same shape as the index array. + Each of the choice arrays should have the same shape as the index + array. - `out` : array, optional - If provided, the result will be inserted into this array. It should be - of the appropriate shape and dtype + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype - `mode` : one of 'raise', 'wrap', or 'clip', optional (default='raise') Specifies how out-of-bounds indices will behave. - 'raise' : raise an error @@ -161,12 +163,13 @@ :Parameters: - `a` : array - `repeats` : int or int array - The number of repetitions for each element. If a plain integer, then it - is applied to all elements. If an array, it needs to be of the same - length as the chosen axis. + The number of repetitions for each element. 
If a plain integer, then + it is applied to all elements. If an array, it needs to be of the + same length as the chosen axis. - `axis` : None or int, optional (default=None) - The axis along which to repeat values. If None, then this function will - operated on the flattened array `a` and return a similarly flat result. + The axis along which to repeat values. If None, then this function + will operated on the flattened array `a` and return a similarly flat + result. :Returns: - `repeated_array` : array @@ -189,16 +192,15 @@ def put (a, ind, v, mode='raise'): - """put(a, ind, v) results in a[n] = v[n] for all n in ind - If v is shorter than mask it will be repeated as necessary. - In particular v can be a scalar or length 1 array. - The routine put is the equivalent of the following (although the loop - is in C for speed): + """put(a, ind, v) results in a[n] = v[n] for all n in ind. If v is + shorter than mask it will be repeated as necessary. In particular v can + be a scalar or length 1 array. The routine put is the equivalent of the + following (although the loop is in C for speed): - ind = array(indices, copy=False) - v = array(values, copy=False).astype(a.dtype) - for i in ind: a.flat[i] = v[i] - a must be a contiguous numpy array. + ind = array(indices, copy=False) + v = array(values, copy=False).astype(a.dtype) + for i in ind: a.flat[i] = v[i] + a must be a contiguous numpy array. """ return a.put(ind, v, mode) @@ -215,9 +217,9 @@ def transpose(a, axes=None): - """transpose(a, axes=None) returns a view of the array with - dimensions permuted according to axes. If axes is None - (default) returns array with dimensions reversed. + """transpose(a, axes=None) returns a view of the array with dimensions + permuted according to axes. If axes is None (default) returns array + with dimensions reversed. """ try: transpose = a.transpose @@ -231,8 +233,8 @@ *Description* - Perform an inplace sort along the given axis using the algorithm specified - by the kind keyword. + Perform an inplace sort along the given axis using the algorithm + specified by the kind keyword. *Parameters*: @@ -240,17 +242,17 @@ Array to be sorted. axis : integer - Axis to be sorted along. None indicates that the flattened array - should be used. Default is -1. + Axis to be sorted along. None indicates that the flattened + array should be used. Default is -1. kind : string Sorting algorithm to use. Possible values are 'quicksort', 'mergesort', or 'heapsort'. Default is 'quicksort'. order : list type or None - When a is an array with fields defined, this argument specifies - which fields to compare first, second, etc. Not all fields need be - specified. + When a is an array with fields defined, this argument + specifies which fields to compare first, second, etc. Not + all fields need be specified. *Returns*: @@ -268,9 +270,10 @@ *Notes* The various sorts are characterized by average speed, worst case - performance, need for work space, and whether they are stable. A stable - sort keeps items with the same key in the same relative order. The - three available algorithms have the following properties: + performance, need for work space, and whether they are stable. A + stable sort keeps items with the same key in the same relative + order. 
The three available algorithms have the following + properties: +-----------+-------+-------------+------------+-------+ | kind | speed | worst case | work space | stable| @@ -282,9 +285,10 @@ | heapsort | 3 | O(n*log(n)) | 0 | no | +-----------+-------+-------------+------------+-------+ - All the sort algorithms make temporary copies of the data when the sort - is not along the last axis. Consequently, sorts along the last axis are - faster and use less space than sorts along other axis. + All the sort algorithms make temporary copies of the data when + the sort is not along the last axis. Consequently, sorts along + the last axis are faster and use less space than sorts along + other axis. """ if axis is None: @@ -301,27 +305,28 @@ *Description* - Perform an indirect sort along the given axis using the algorithm specified - by the kind keyword. It returns an array of indices of the same shape as - a that index data along the given axis in sorted order. + Perform an indirect sort along the given axis using the algorithm + specified by the kind keyword. It returns an array of indices of the + same shape as a that index data along the given axis in sorted order. *Parameters*: a : array type - Array containing values that the returned indices should sort. + Array containing values that the returned indices should + sort. axis : integer - Axis to be indirectly sorted. None indicates that the flattened - array should be used. Default is -1. + Axis to be indirectly sorted. None indicates that the + flattened array should be used. Default is -1. kind : string Sorting algorithm to use. Possible values are 'quicksort', 'mergesort', or 'heapsort'. Default is 'quicksort'. order : list type or None - When a is an array with fields defined, this argument specifies - which fields to compare first, second, etc. Not all fields need be - specified. + When a is an array with fields defined, this argument + specifies which fields to compare first, second, etc. Not + all fields need be specified. *Returns*: @@ -338,9 +343,10 @@ *Notes* The various sorts are characterized by average speed, worst case - performance, need for work space, and whether they are stable. A stable - sort keeps items with the same key in the same relative order. The - three available algorithms have the following properties: + performance, need for work space, and whether they are stable. A + stable sort keeps items with the same key in the same relative + order. The three available algorithms have the following + properties: +-----------+-------+-------------+------------+-------+ | kind | speed | worst case | work space | stable| @@ -352,9 +358,10 @@ | heapsort | 3 | O(n*log(n)) | 0 | no | +-----------+-------+-------------+------------+-------+ - All the sort algorithms make temporary copies of the data when the sort - is not along the last axis. Consequently, sorts along the last axis are - faster and use less space than sorts along other axis. + All the sort algorithms make temporary copies of the data when + the sort is not along the last axis. Consequently, sorts along + the last axis are faster and use less space than sorts along + other axis. """ try: @@ -391,13 +398,14 @@ *Description* - Find the indices into a sorted array such that if the corresponding - keys in v were inserted before the indices the order of a would be - preserved. If side='left', then the first such index is returned. If - side='right', then the last such index is returned. 
If there is no such - index because the key is out of bounds, then the length of a is - returned, i.e., the key would need to be appended. The returned index - array has the same shape as v. + Find the indices into a sorted array such that if the + corresponding keys in v were inserted before the indices the + order of a would be preserved. If side='left', then the first + such index is returned. If side='right', then the last such index + is returned. If there is no such index because the key is out of + bounds, then the length of a is returned, i.e., the key would + need to be appended. The returned index array has the same shape + as v. *Parameters*: @@ -408,8 +416,9 @@ Array of keys to be searched for in a. side : string - Possible values are : 'left', 'right'. Default is 'left'. Return - the first or last index where the key could be inserted. + Possible values are : 'left', 'right'. Default is 'left'. + Return the first or last index where the key could be + inserted. *Returns*: @@ -426,8 +435,9 @@ *Notes* - The array a must be 1-d and is assumed to be sorted in ascending order. - Searchsorted uses binary search to find the required insertion points. + The array a must be 1-d and is assumed to be sorted in ascending + order. Searchsorted uses binary search to find the required + insertion points. """ try: @@ -439,11 +449,11 @@ def resize(a, new_shape): """resize(a,new_shape) returns a new array with the specified shape. - The original array's total size can be any size. It - fills the new array with repeated copies of a. + The original array's total size can be any size. It fills the new + array with repeated copies of a. - Note that a.resize(new_shape) will fill array with 0's - beyond current definition of a. + Note that a.resize(new_shape) will fill array with 0's beyond current + definition of a. """ if isinstance(new_shape, (int, nt.integer)): @@ -483,37 +493,39 @@ *Description* - If a is 2-d, returns the diagonal of self with the given offset, i.e., the - collection of elements of the form a[i,i+offset]. If a is n-d with n > 2, - then the axes specified by axis1 and axis2 are used to determine the 2-d - subarray whose diagonal is returned. The shape of the resulting array can be - determined by removing axis1 and axis2 and appending an index to the right - equal to the size of the resulting diagonals. + If a is 2-d, returns the diagonal of self with the given offset, + i.e., the collection of elements of the form a[i,i+offset]. If a is + n-d with n > 2, then the axes specified by axis1 and axis2 are used + to determine the 2-d subarray whose diagonal is returned. The shape + of the resulting array can be determined by removing axis1 and axis2 + and appending an index to the right equal to the size of the + resulting diagonals. *Parameters*: offset : integer - Offset of the diagonal from the main diagonal. Can be both positive - and negative. Defaults to main diagonal. + Offset of the diagonal from the main diagonal. Can be both + positive and negative. Defaults to main diagonal. axis1 : integer - Axis to be used as the first axis of the 2-d subarrays from which - the diagonals should be taken. Defaults to first axis. + Axis to be used as the first axis of the 2-d subarrays from + which the diagonals should be taken. Defaults to first axis. axis2 : integer - Axis to be used as the second axis of the 2-d subarrays from which - the diagonals should be taken. Defaults to second axis. 
+ Axis to be used as the second axis of the 2-d subarrays from + which the diagonals should be taken. Defaults to second axis. *Returns*: array_of_diagonals : type of original array - If a is 2-d, then a 1-d array containing the diagonal is returned. + If a is 2-d, then a 1-d array containing the diagonal is + returned. If a is n-d, n > 2, then an array of diagonals is returned. *SeeAlso*: diag : - matlab workalike for 1-d and 2-d arrays + Matlab workalike for 1-d and 2-d arrays diagflat : creates diagonal arrays trace : @@ -551,9 +563,9 @@ return asarray(a).trace(offset, axis1, axis2, dtype, out) def ravel(m,order='C'): - """ravel(m) returns a 1d array corresponding to all the elements of it's - argument. The new array is a view of m if possible, otherwise it is - a copy. + """ravel(m) returns a 1d array corresponding to all the elements of + its argument. The new array is a view of m if possible, otherwise it + is a copy. """ a = asarray(m) return a.ravel(order) @@ -570,8 +582,8 @@ return res def shape(a): - """shape(a) returns the shape of a (as a function call which - also works on nested sequences). + """shape(a) returns the shape of a (as a function call which also + works on nested sequences). """ try: result = a.shape @@ -781,23 +793,26 @@ out -- existing array to use for output (default copy of a). Returns: - Reference to out, where None specifies a copy of the original array a. + Reference to out, where None specifies a copy of the original + array a. - Round to the specified number of decimals. When 'decimals' is negative it - specifies the number of positions to the left of the decimal point. The - real and imaginary parts of complex numbers are rounded separately. - Nothing is done if the array is not of float type and 'decimals' is greater - than or equal to 0. + Round to the specified number of decimals. When 'decimals' is + negative it specifies the number of positions to the left of the + decimal point. The real and imaginary parts of complex numbers are + rounded separately. Nothing is done if the array is not of float + type and 'decimals' is greater than or equal to 0. - The keyword 'out' may be used to specify a different array to hold the - result rather than the default 'a'. If the type of the array specified by - 'out' differs from that of 'a', the result is cast to the new type, - otherwise the original type is kept. Floats round to floats by default. + The keyword 'out' may be used to specify a different array to hold + the result rather than the default 'a'. If the type of the array + specified by 'out' differs from that of 'a', the result is cast to + the new type, otherwise the original type is kept. Floats round to + floats by default. - Numpy rounds to even. Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 round to - 0.0, etc. Results may also be surprising due to the inexact representation - of decimal fractions in IEEE floating point and the errors introduced in - scaling the numbers when 'decimals' is something other than 0. + Numpy rounds to even. Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 + round to 0.0, etc. Results may also be surprising due to the inexact + representation of decimal fractions in IEEE floating point and the + errors introduced in scaling the numbers when 'decimals' is something + other than 0. The function around is an alias for round_. @@ -815,8 +830,9 @@ *Description* - Returns the average of the array elements. The average is taken over - the flattened array by default, otherwise over the specified axis. 
+ Returns the average of the array elements. The average is taken + over the flattened array by default, otherwise over the specified + axis. *Parameters*: @@ -825,20 +841,20 @@ to compute the standard deviation of the flattened array. dtype : type - Type to use in computing the means. For arrays of - integer type the default is float32, for arrays of float types it - is the same as the array type. + Type to use in computing the means. For arrays of integer + type the default is float32, for arrays of float types it is + the same as the array type. out : ndarray - Alternative output array in which to place the result. It must have - the same shape as the expected output but the type will be cast if - necessary. + Alternative output array in which to place the result. It + must have the same shape as the expected output but the type + will be cast if necessary. *Returns*: mean : The return type varies, see above. - A new array holding the result is returned unless out is specified, - in which case a reference to out is returned. + A new array holding the result is returned unless out is + specified, in which case a reference to out is returned. *SeeAlso*: @@ -865,31 +881,33 @@ *Description* - Returns the standard deviation of the array elements, a measure of the - spread of a distribution. The standard deviation is computed for the - flattened array by default, otherwise over the specified axis. + Returns the standard deviation of the array elements, a measure + of the spread of a distribution. The standard deviation is + computed for the flattened array by default, otherwise over the + specified axis. *Parameters*: axis : integer - Axis along which the standard deviation is computed. The default is - to compute the standard deviation of the flattened array. + Axis along which the standard deviation is computed. The + default is to compute the standard deviation of the flattened + array. dtype : type - Type to use in computing the standard deviation. For arrays of - integer type the default is float32, for arrays of float types it - is the same as the array type. + Type to use in computing the standard deviation. For arrays + of integer type the default is float32, for arrays of float + types it is the same as the array type. out : ndarray - Alternative output array in which to place the result. It must have - the same shape as the expected output but the type will be cast if - necessary. + Alternative output array in which to place the result. It + must have the same shape as the expected output but the type + will be cast if necessary. *Returns*: standard_deviation : The return type varies, see above. - A new array holding the result is returned unless out is specified, - in which case a reference to out is returned. + A new array holding the result is returned unless out is + specified, in which case a reference to out is returned. *SeeAlso*: @@ -900,10 +918,11 @@ *Notes* - The standard deviation is the square root of the average of the squared - deviations from the mean, i.e. var = sqrt(mean((x - x.mean())**2)). - The computed standard deviation is biased, i.e., the mean is computed - by dividing by the number of elements, N, rather than by N-1. + The standard deviation is the square root of the average of the + squared deviations from the mean, i.e. var = sqrt(mean((x - + x.mean())**2)). The computed standard deviation is biased, i.e., + the mean is computed by dividing by the number of elements, N, + rather than by N-1. 
""" try: @@ -918,9 +937,9 @@ *Description* - Returns the variance of the array elements, a measure of the spread of - a distribution. The variance is computed for the flattened array by - default, otherwise over the specified axis. + Returns the variance of the array elements, a measure of the + spread of a distribution. The variance is computed for the + flattened array by default, otherwise over the specified axis. *Parameters*: @@ -929,20 +948,20 @@ compute the variance of the flattened array. dtype : type - Type to use in computing the variance. For arrays of integer type - the default is float32, for arrays of float types it is the same as - the array type. + Type to use in computing the variance. For arrays of integer + type the default is float32, for arrays of float types it is + the same as the array type. out : ndarray - Alternative output array in which to place the result. It must have - the same shape as the expected output but the type will be cast if - necessary. + Alternative output array in which to place the result. It + must have the same shape as the expected output but the type + will be cast if necessary. *Returns*: variance : depends, see above - A new array holding the result is returned unless out is specified, - in which case a reference to out is returned. + A new array holding the result is returned unless out is + specified, in which case a reference to out is returned. *SeeAlso*: @@ -953,10 +972,10 @@ *Notes* - The variance is the average of the squared deviations from the mean, - i.e. var = mean((x - x.mean())**2). The computed variance is biased, - i.e., the mean is computed by dividing by the number of elements, N, - rather than by N-1. + The variance is the average of the squared deviations from the + mean, i.e. var = mean((x - x.mean())**2). The computed variance + is biased, i.e., the mean is computed by dividing by the number + of elements, N, rather than by N-1. """ try: From numpy-svn at scipy.org Fri May 25 04:16:39 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 25 May 2007 03:16:39 -0500 (CDT) Subject: [Numpy-svn] r3822 - trunk/numpy/distutils/fcompiler Message-ID: <20070525081639.55D9639C0BB@new.scipy.org> Author: pearu Date: 2007-05-25 03:16:26 -0500 (Fri, 25 May 2007) New Revision: 3822 Modified: trunk/numpy/distutils/fcompiler/__init__.py Log: fix ticket 526 Modified: trunk/numpy/distutils/fcompiler/__init__.py =================================================================== --- trunk/numpy/distutils/fcompiler/__init__.py 2007-05-24 19:17:17 UTC (rev 3821) +++ trunk/numpy/distutils/fcompiler/__init__.py 2007-05-25 08:16:26 UTC (rev 3822) @@ -120,16 +120,16 @@ def get_version_cmd(self): """ Compiler command to print out version information. 
""" - f77 = self.executables['compiler_f77'] + f77 = self.executables.get('compiler_f77') if f77 is not None: f77 = f77[0] - cmd = self.executables['version_cmd'] + cmd = self.executables.get('version_cmd') if cmd is not None: cmd = cmd[0] if cmd==f77: cmd = self.compiler_f77[0] else: - f90 = self.executables['compiler_f90'] + f90 = self.executables.get('compiler_f90') if f90 is not None: f90 = f90[0] if cmd==f90: @@ -141,13 +141,13 @@ f77 = self.executables['compiler_f77'] if f77 is not None: f77 = f77[0] - ln = self.executables['linker_so'] + ln = self.executables.get('linker_so') if ln is not None: ln = ln[0] if ln==f77: ln = self.compiler_f77[0] else: - f90 = self.executables['compiler_f90'] + f90 = self.executables.get('compiler_f90') if f90 is not None: f90 = f90[0] if ln==f90: @@ -165,7 +165,7 @@ if ln==f77: ln = self.compiler_f77[0] else: - f90 = self.executables['compiler_f90'] + f90 = self.executables.get('compiler_f90') if f90 is not None: f90 = f90[0] if ln==f90: @@ -177,17 +177,17 @@ return [] + self.pic_flags def get_flags_version(self): """ List of compiler flags to print out version information. """ - if self.executables['version_cmd']: + if self.executables.get('version_cmd'): return self.executables['version_cmd'][1:] return [] def get_flags_f77(self): """ List of Fortran 77 specific flags. """ - if self.executables['compiler_f77']: + if self.executables.get('compiler_f77'): return self.executables['compiler_f77'][1:] return [] def get_flags_f90(self): """ List of Fortran 90 specific flags. """ - if self.executables['compiler_f90']: + if self.executables.get('compiler_f90'): return self.executables['compiler_f90'][1:] return [] def get_flags_free(self): @@ -195,22 +195,22 @@ return [] def get_flags_fix(self): """ List of Fortran 90 fixed format specific flags. """ - if self.executables['compiler_fix']: + if self.executables.get('compiler_fix'): return self.executables['compiler_fix'][1:] return [] def get_flags_linker_so(self): """ List of linker flags to build a shared library. """ - if self.executables['linker_so']: + if self.executables.get('linker_so'): return self.executables['linker_so'][1:] return [] def get_flags_linker_exe(self): """ List of linker flags to build an executable. """ - if self.executables['linker_exe']: + if self.executables.get('linker_exe'): return self.executables['linker_exe'][1:] return [] def get_flags_ar(self): """ List of archiver flags. 
""" - if self.executables['archiver']: + if self.executables.get('archiver'): return self.executables['archiver'][1:] return [] def get_flags_opt(self): From numpy-svn at scipy.org Fri May 25 07:20:12 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 25 May 2007 06:20:12 -0500 (CDT) Subject: [Numpy-svn] r3823 - trunk/numpy/distutils Message-ID: <20070525112012.797D739C03E@new.scipy.org> Author: cookedm Date: 2007-05-25 06:20:02 -0500 (Fri, 25 May 2007) New Revision: 3823 Modified: trunk/numpy/distutils/ccompiler.py trunk/numpy/distutils/conv_template.py trunk/numpy/distutils/core.py trunk/numpy/distutils/exec_command.py trunk/numpy/distutils/from_template.py trunk/numpy/distutils/intelccompiler.py trunk/numpy/distutils/misc_util.py trunk/numpy/distutils/system_info.py Log: merge from distutils-revamp branch (step 1) - minor cleanups - find_executable returns None when no file found (instead of having to check with os.path.isfile) Modified: trunk/numpy/distutils/ccompiler.py =================================================================== --- trunk/numpy/distutils/ccompiler.py 2007-05-25 08:16:26 UTC (rev 3822) +++ trunk/numpy/distutils/ccompiler.py 2007-05-25 11:20:02 UTC (rev 3823) @@ -259,6 +259,8 @@ version_cmd = self.version_cmd except AttributeError: return None + if not version_cmd or not version_cmd[0]: + return None cmd = ' '.join(version_cmd) try: matcher = self.version_match @@ -278,9 +280,7 @@ version = None if status in ok_status: version = matcher(output) - if not version: - log.warn("Couldn't match compiler version for %r" % (output,)) - else: + if version: version = LooseVersion(version) self.version = version return version @@ -341,7 +341,8 @@ try: __import__ (module_name) except ImportError, msg: - print msg,'in numpy.distutils, trying from distutils..' 
+ log.info('%s in numpy.distutils; trying from distutils', + str(msg)) module_name = module_name[6:] try: __import__(module_name) Modified: trunk/numpy/distutils/conv_template.py =================================================================== --- trunk/numpy/distutils/conv_template.py 2007-05-25 08:16:26 UTC (rev 3822) +++ trunk/numpy/distutils/conv_template.py 2007-05-25 11:20:02 UTC (rev 3823) @@ -18,15 +18,10 @@ __all__ = ['process_str', 'process_file'] -import string,os,sys -if sys.version[:3]>='2.3': - import re -else: - import pre as re - False = 0 - True = 1 +import os +import sys +import re - def parse_structure(astr): spanlist = [] # subroutines @@ -66,7 +61,8 @@ # with 'a,b,c,a,b,c,a,b,c,a,b,c' astr = parenrep.sub(paren_repl,astr) # replaces occurences of xxx*3 with xxx, xxx, xxx - astr = ','.join([plainrep.sub(paren_repl,x.strip()) for x in astr.split(',')]) + astr = ','.join([plainrep.sub(paren_repl,x.strip()) + for x in astr.split(',')]) return astr def unique_key(adict): @@ -85,40 +81,38 @@ done = True return newkey -def namerepl(match): - global _names, _thissub - name = match.group(1) - return _names[name][_thissub] - def expand_sub(substr, namestr, line): - global _names, _thissub # find all named replacements reps = named_re.findall(namestr) - _names = {} - _names.update(_special_names) + names = {} + names.update(_special_names) numsubs = None for rep in reps: name = rep[0].strip() thelist = conv(rep[1]) - _names[name] = thelist + names[name] = thelist # make lists out of string entries in name dictionary - for name in _names.keys(): - entry = _names[name] + for name in names.keys(): + entry = names[name] entrylist = entry.split(',') - _names[name] = entrylist + names[name] = entrylist num = len(entrylist) if numsubs is None: numsubs = num - elif (numsubs != num): + elif numsubs != num: print namestr print substr raise ValueError, "Mismatch in number to replace" # now replace all keys for each of the lists mystr = '' + thissub = [None] + def namerepl(match): + name = match.group(1) + return names[name][thissub[0]] for k in range(numsubs): - _thissub = k + thissub[0] = k mystr += ("#line %d\n%s\n\n" % (line, template_re.sub(namerepl, substr))) return mystr Modified: trunk/numpy/distutils/core.py =================================================================== --- trunk/numpy/distutils/core.py 2007-05-25 08:16:26 UTC (rev 3822) +++ trunk/numpy/distutils/core.py 2007-05-25 11:20:02 UTC (rev 3823) @@ -16,20 +16,14 @@ from distutils.core import setup as old_setup have_setuptools = False +import warnings +import distutils.core +import distutils.dist + from numpy.distutils.extension import Extension -from numpy.distutils.command import config -from numpy.distutils.command import build -from numpy.distutils.command import build_py -from numpy.distutils.command import config_compiler -from numpy.distutils.command import build_ext -from numpy.distutils.command import build_clib -from numpy.distutils.command import build_src -from numpy.distutils.command import build_scripts -from numpy.distutils.command import sdist -from numpy.distutils.command import install_data -from numpy.distutils.command import install_headers -from numpy.distutils.command import install -from numpy.distutils.command import bdist_rpm +from numpy.distutils.command import config, config_compiler, \ + build, build_py, build_ext, build_clib, build_src, build_scripts, \ + sdist, install_data, install_headers, install, bdist_rpm from numpy.distutils.misc_util import get_data_files, is_sequence, is_string 
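The conv_template.py hunk above drops the module-level _names/_thissub globals in favour of a closure over a one-element list (Python 2 has no nonlocal). A minimal sketch of the same substitution pattern; the @name@ placeholder syntax and the values here are made up, not the module's real template grammar:

    import re

    template_re = re.compile(r'@(\w+)@')         # hypothetical placeholder syntax
    names = {'type': ['float', 'double']}
    thissub = [None]                             # mutable cell visible to the closure

    def namerepl(match):
        return names[match.group(1)][thissub[0]]

    out = []
    for k in range(len(names['type'])):
        thissub[0] = k
        out.append(template_re.sub(namerepl, 'ctype = @type@;'))
    assert out == ['ctype = float;', 'ctype = double;']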
numpy_cmdclass = {'build': build.build, @@ -61,20 +55,15 @@ continue dv = d[k] if isinstance(dv, tuple): - dv += tuple(v) - continue - if isinstance(dv, list): - dv += list(v) - continue - if isinstance(dv, dict): + d[k] = dv + tuple(v) + elif isinstance(dv, list): + d[k] = dv + list(v) + elif isinstance(dv, dict): _dict_append(dv, **v) - continue - if isinstance(dv, str): - assert isinstance(v,str),`type(v)` - d[k] = v - continue - raise TypeError,`type(dv)` - return + elif is_string(dv): + d[k] = dv + v + else: + raise TypeError, repr(type(dv)) def _command_line_ok(_cache=[]): """ Return True if command line does not contain any @@ -102,6 +91,21 @@ raw_input('Press ENTER to close the interactive session..') print '='*72 +def get_distribution(always=False): + dist = distutils.core._setup_distribution + # XXX Hack to get numpy installable with easy_install. + # The problem is easy_install runs it's own setup(), which + # sets up distutils.core._setup_distribution. However, + # when our setup() runs, that gets overwritten and lost. + # We can't use isinstance, as the DistributionWithoutHelpCommands + # class is local to a function in setuptools.command.easy_install + if dist is not None and \ + repr(dist).find('DistributionWithoutHelpCommands') != -1: + dist = None + if always and dist is None: + dist = distutils.dist.Distribution() + return dist + def setup(**attr): if len(sys.argv)<=1 and not attr.get('script_args',[]): @@ -124,7 +128,6 @@ # or help request in command in the line. configuration = new_attr.pop('configuration') - import distutils.core old_dist = distutils.core._setup_distribution old_stop = distutils.core._setup_stop_after distutils.core._setup_distribution = None @@ -173,7 +176,6 @@ return old_setup(**new_attr) def _check_append_library(libraries, item): - import warnings for libitem in libraries: if is_sequence(libitem): if is_sequence(item): @@ -198,10 +200,8 @@ if item==libitem: return libraries.append(item) - return def _check_append_ext_library(libraries, (lib_name,build_info)): - import warnings for item in libraries: if is_sequence(item): if item[0]==lib_name: @@ -215,4 +215,3 @@ " no build_info" % (lib_name,)) break libraries.append((lib_name,build_info)) - return Modified: trunk/numpy/distutils/exec_command.py =================================================================== --- trunk/numpy/distutils/exec_command.py 2007-05-25 08:16:26 UTC (rev 3822) +++ trunk/numpy/distutils/exec_command.py 2007-05-25 11:20:02 UTC (rev 3823) @@ -123,60 +123,48 @@ ############################################################ def find_executable(exe, path=None): - """ Return full path of a executable. + """Return full path of a executable. + + Symbolic links are not followed. 
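Per the r3823 log above, find_executable now returns None on failure instead of echoing back the requested name, so callers can test the result directly rather than calling os.path.isfile. A hedged sketch of the caller-side pattern; the candidate names are only examples:

    from numpy.distutils.exec_command import find_executable

    cc_exe = None
    for candidate in ['icc', 'ecc']:       # example candidates, as in intelccompiler.py
        cc_exe = find_executable(candidate)
        if cc_exe:                         # full path, or None if not on PATH
            break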
""" log.debug('find_executable(%r)' % exe) orig_exe = exe + if path is None: path = os.environ.get('PATH',os.defpath) if os.name=='posix' and sys.version[:3]>'2.1': realpath = os.path.realpath else: realpath = lambda a:a - if exe[0]=='"': + + if exe.startswith('"'): exe = exe[1:-1] - suffices = [''] + + suffixes = [''] if os.name in ['nt','dos','os2']: fn,ext = os.path.splitext(exe) - extra_suffices = ['.exe','.com','.bat'] - if ext.lower() not in extra_suffices: - suffices = extra_suffices + extra_suffixes = ['.exe','.com','.bat'] + if ext.lower() not in extra_suffixes: + suffixes = extra_suffixes + if os.path.isabs(exe): paths = [''] else: - paths = map(os.path.abspath, path.split(os.pathsep)) - if 0 and os.name == 'nt': - new_paths = [] - cygwin_paths = [] - for path in paths: - d,p = os.path.splitdrive(path) - if p.lower().find('cygwin') >= 0: - cygwin_paths.append(path) - else: - new_paths.append(path) - paths = new_paths + cygwin_paths + paths = [ os.path.abspath(p) for p in path.split(os.pathsep) ] + for path in paths: - fn = os.path.join(path,exe) - for s in suffices: + fn = os.path.join(path, exe) + for s in suffixes: f_ext = fn+s if not os.path.islink(f_ext): - # see comment below. f_ext = realpath(f_ext) - if os.path.isfile(f_ext) and os.access(f_ext,os.X_OK): + if os.path.isfile(f_ext) and os.access(f_ext, os.X_OK): log.debug('Found executable %s' % f_ext) return f_ext - if os.path.islink(exe): - # Don't follow symbolic links. E.g. when using colorgcc then - # gcc -> /usr/bin/colorgcc - # g77 -> /usr/bin/colorgcc - pass - else: - exe = realpath(exe) - if not os.path.isfile(exe) or os.access(exe,os.X_OK): - log.warn('Could not locate executable %s' % orig_exe) - return orig_exe - return exe + log.warn('Could not locate executable %s' % orig_exe) + return None + ############################################################ def _preserve_environment( names ): Modified: trunk/numpy/distutils/from_template.py =================================================================== --- trunk/numpy/distutils/from_template.py 2007-05-25 08:16:26 UTC (rev 3822) +++ trunk/numpy/distutils/from_template.py 2007-05-25 11:20:02 UTC (rev 3823) @@ -48,15 +48,9 @@ __all__ = ['process_str','process_file'] -import string,os,sys -if sys.version[:3]>='2.3': - import re -else: - import pre as re - False = 0 - True = 1 -if sys.version[:5]=='2.2.1': - import re +import os +import sys +import re routine_start_re = re.compile(r'(\n|\A)(( (\$|\*))|)\s*(subroutine|function)\b',re.I) routine_end_re = re.compile(r'\n\s*end\s*(subroutine|function)\b.*(\n|\Z)',re.I) Modified: trunk/numpy/distutils/intelccompiler.py =================================================================== --- trunk/numpy/distutils/intelccompiler.py 2007-05-25 08:16:26 UTC (rev 3822) +++ trunk/numpy/distutils/intelccompiler.py 2007-05-25 11:20:02 UTC (rev 3823) @@ -26,5 +26,5 @@ # On Itanium, the Intel Compiler used to be called ecc, let's search for # it (now it's also icc, so ecc is last in the search). for cc_exe in map(find_executable,['icc','ecc']): - if os.path.isfile(cc_exe): + if cc_exe: break Modified: trunk/numpy/distutils/misc_util.py =================================================================== --- trunk/numpy/distutils/misc_util.py 2007-05-25 08:16:26 UTC (rev 3822) +++ trunk/numpy/distutils/misc_util.py 2007-05-25 11:20:02 UTC (rev 3823) @@ -538,6 +538,8 @@ self.local_path = get_path_from_frame(caller_frame, top_path) # local_path -- directory of a file (usually setup.py) that # defines a configuration() function. 
+ # local_path -- directory of a file (usually setup.py) that + # defines a configuration() function. if top_path is None: top_path = self.local_path if package_path is None: @@ -638,18 +640,8 @@ raise ValueError,'Unknown option: '+key def get_distribution(self): - import distutils.core - dist = distutils.core._setup_distribution - # XXX Hack to get numpy installable with easy_install. - # The problem is easy_install runs it's own setup(), which - # sets up distutils.core._setup_distribution. However, - # when our setup() runs, that gets overwritten and lost. - # We can't use isinstance, as the DistributionWithoutHelpCommands - # class is local to a function in setuptools.command.easy_install - if dist is not None and \ - repr(dist).find('DistributionWithoutHelpCommands') != -1: - return None - return dist + from numpy.distutils.core import get_distribution + return get_distribution() def _wildcard_get_subpackage(self, subpackage_name, parent_name, Modified: trunk/numpy/distutils/system_info.py =================================================================== --- trunk/numpy/distutils/system_info.py 2007-05-25 08:16:26 UTC (rev 3822) +++ trunk/numpy/distutils/system_info.py 2007-05-25 11:20:02 UTC (rev 3823) @@ -1639,7 +1639,7 @@ def calc_info(self): config_exe = find_executable(self.get_config_exe()) - if not os.path.isfile(config_exe): + if not config_exe: log.warn('File not found: %s. Cannot determine %s info.' \ % (config_exe, self.section)) return From numpy-svn at scipy.org Fri May 25 07:41:27 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 25 May 2007 06:41:27 -0500 (CDT) Subject: [Numpy-svn] r3824 - in trunk/numpy/distutils: . fcompiler Message-ID: <20070525114127.ED3E539C03E@new.scipy.org> Author: cookedm Date: 2007-05-25 06:41:16 -0500 (Fri, 25 May 2007) New Revision: 3824 Added: trunk/numpy/distutils/environment.py Modified: trunk/numpy/distutils/fcompiler/__init__.py trunk/numpy/distutils/fcompiler/absoft.py trunk/numpy/distutils/fcompiler/compaq.py trunk/numpy/distutils/fcompiler/g95.py trunk/numpy/distutils/fcompiler/gnu.py trunk/numpy/distutils/fcompiler/hpux.py trunk/numpy/distutils/fcompiler/ibm.py trunk/numpy/distutils/fcompiler/intel.py trunk/numpy/distutils/fcompiler/lahey.py trunk/numpy/distutils/fcompiler/mips.py trunk/numpy/distutils/fcompiler/nag.py trunk/numpy/distutils/fcompiler/none.py trunk/numpy/distutils/fcompiler/pg.py trunk/numpy/distutils/fcompiler/sun.py trunk/numpy/distutils/fcompiler/vast.py trunk/numpy/distutils/interactive.py Log: merge from distutils-revamp branch (step 2) - fcompiler changes. 
All flags, executables, etc., should be overridable by the user with config_fc (either command line or setup.cfg) or by environment variables Added: trunk/numpy/distutils/environment.py =================================================================== --- trunk/numpy/distutils/environment.py 2007-05-25 11:20:02 UTC (rev 3823) +++ trunk/numpy/distutils/environment.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -0,0 +1,49 @@ +import os +from distutils.dist import Distribution + +__metaclass__ = type + +class EnvironmentConfig: + def __init__(self, distutils_section='DEFAULT', **kw): + self._distutils_section = distutils_section + self._conf_keys = kw + self._conf = None + self._hook_handler = None + + def __getattr__(self, name): + try: + conf_desc = self._conf_keys[name] + except KeyError: + raise AttributeError(name) + return self._get_var(name, conf_desc) + + def get(self, name, default=None): + try: + conf_desc = self._conf_keys[name] + except KeyError: + return default + var = self._get_var(name, conf_desc) + if var is None: + var = default + return var + + def _get_var(self, name, conf_desc): + hook, envvar, confvar = conf_desc + var = self._hook_handler(name, hook) + if envvar is not None: + var = os.environ.get(envvar, var) + if confvar is not None and self._conf: + var = self._conf.get(confvar, (None, var))[1] + return var + + def clone(self, hook_handler): + ec = self.__class__(distutils_section=self._distutils_section, + **self._conf_keys) + ec._hook_handler = hook_handler + return ec + + def use_distribution(self, dist): + if isinstance(dist, Distribution): + self._conf = dist.get_option_dict(self._distutils_section) + else: + self._conf = dist Modified: trunk/numpy/distutils/fcompiler/__init__.py =================================================================== --- trunk/numpy/distutils/fcompiler/__init__.py 2007-05-25 11:20:02 UTC (rev 3823) +++ trunk/numpy/distutils/fcompiler/__init__.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -10,6 +10,12 @@ import os import sys import re +import new +try: + set +except NameError: + from sets import Set as set + from distutils.sysconfig import get_config_var, get_python_lib from distutils.fancy_getopt import FancyGetopt from distutils.errors import DistutilsModuleError,DistutilsArgError,\ @@ -18,12 +24,18 @@ from numpy.distutils.ccompiler import CCompiler, gen_lib_options from numpy.distutils import log -from numpy.distutils.command.config_compiler import config_fc from numpy.distutils.misc_util import is_string, is_sequence +from numpy.distutils.environment import EnvironmentConfig +from numpy.distutils.exec_command import find_executable from distutils.spawn import _nt_quote_args +__metaclass__ = type + +class CompilerNotFound(Exception): + pass + class FCompiler(CCompiler): - """ Abstract base class to define the interface that must be implemented + """Abstract base class to define the interface that must be implemented by real Fortran compiler classes. Methods that subclasses may redefine: @@ -38,7 +50,7 @@ DON'T call these methods (except get_version) after constructing a compiler instance or inside any other method. All methods, except get_version_cmd() and get_flags_version(), may - call get_version() method. + call the get_version() method. After constructing a compiler instance, always call customize(dist=None) method that finalizes compiler construction and makes the following @@ -53,7 +65,55 @@ library_dirs """ + # These are the environment variables and distutils keys used. 
+ # Each configuration descripition is + # (, , ) + # The hook names are handled by the self._environment_hook method. + # - names starting with 'self.' call methods in this class + # - names starting with 'exe.' return the key in the executables dict + # - names like'flags.YYY' return self.get_flag_YYY() + distutils_vars = EnvironmentConfig( + noopt = (None, None, 'noopt'), + noarch = (None, None, 'noarch'), + debug = (None, None, 'debug'), + verbose = (None, None, 'verbose'), + ) + + command_vars = EnvironmentConfig( + distutils_section='config_fc', + compiler_f77 = ('exe.compiler_f77', 'F77', 'f77exec'), + compiler_f90 = ('exe.compiler_f90', 'F90', 'f90exec'), + compiler_fix = ('exe.compiler_fix', 'F90', 'f90exec'), + version_cmd = ('self.get_version_cmd', None, None), + linker_so = ('self.get_linker_so', 'LDSHARED', 'ldshared'), + linker_exe = ('self.get_linker_exe', 'LD', 'ld'), + archiver = (None, 'AR', 'ar'), + ranlib = (None, 'RANLIB', 'ranlib'), + ) + + flag_vars = EnvironmentConfig( + distutils_section='config_fc', + version = ('flags.version', None, None), + f77 = ('flags.f77', 'F77FLAGS', 'f77flags'), + f90 = ('flags.f90', 'F90FLAGS', 'f90flags'), + free = ('flags.free', 'FREEFLAGS', 'freeflags'), + fix = ('flags.fix', None, None), + opt = ('flags.opt', 'FOPT', 'opt'), + opt_f77 = ('flags.opt_f77', None, None), + opt_f90 = ('flags.opt_f90', None, None), + arch = ('flags.arch', 'FARCH', 'arch'), + arch_f77 = ('flags.arch_f77', None, None), + arch_f90 = ('flags.arch_f90', None, None), + debug = ('flags.debug', 'FDEBUG', None, None), + debug_f77 = ('flags.debug_f77', None, None), + debug_f90 = ('flags.debug_f90', None, None), + flags = ('self.get_flags', 'FFLAGS', 'fflags'), + linker_so = ('flags.linker_so', 'LDFLAGS', 'ldflags'), + linker_exe = ('flags.linker_exe', 'LDFLAGS', 'ldflags'), + ar = ('flags.ar', 'ARFLAGS', 'arflags'), + ) + language_map = {'.f':'f77', '.for':'f77', '.F':'f77', # XXX: needs preprocessor @@ -67,14 +127,15 @@ version_pattern = None + possible_executables = [] executables = { - 'version_cmd' : ["f77","-v"], + 'version_cmd' : ["f77", "-v"], 'compiler_f77' : ["f77"], 'compiler_f90' : ["f90"], - 'compiler_fix' : ["f90","-fixed"], - 'linker_so' : ["f90","-shared"], + 'compiler_fix' : ["f90", "-fixed"], + 'linker_so' : ["f90", "-shared"], 'linker_exe' : ["f90"], - 'archiver' : ["ar","-cr"], + 'archiver' : ["ar", "-cr"], 'ranlib' : None, } @@ -103,6 +164,26 @@ shared_lib_format = "%s%s" exe_extension = "" + def __init__(self, *args, **kw): + CCompiler.__init__(self, *args, **kw) + self.distutils_vars = self.distutils_vars.clone(self._environment_hook) + self.command_vars = self.command_vars.clone(self._environment_hook) + self.flag_vars = self.flag_vars.clone(self._environment_hook) + self.executables = self.executables.copy() + for e in ['version_cmd', 'compiler_f77', 'compiler_f90', + 'compiler_fix', 'linker_so', 'linker_exe', 'archiver', + 'ranlib']: + if e not in self.executables: + self.executables[e] = None + + def __copy__(self): + obj = new.instance(self.__class__, self.__dict__) + obj.distutils_vars = obj.distutils_vars.clone(obj._environment_hook) + obj.command_vars = obj.command_vars.clone(obj._environment_hook) + obj.flag_vars = obj.flag_vars.clone(obj._environment_hook) + obj.executables = obj.executables.copy() + return obj + # If compiler does not support compiling Fortran 90 then it can # suggest using another compiler. For example, gnu would suggest # gnu95 compiler type when there are F90 sources. 
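The new EnvironmentConfig class (environment.py, added above) resolves each entry in three steps: the hook supplies the class default, an environment variable can override it, and a config_fc/setup.cfg option overrides both. A small sketch, assuming numpy.distutils.environment is importable and using a stub lambda in place of FCompiler._environment_hook:

    import os
    from numpy.distutils.environment import EnvironmentConfig

    env = EnvironmentConfig(
        distutils_section='config_fc',
        compiler_f77=('exe.compiler_f77', 'F77', 'f77exec'),
    )
    env = env.clone(lambda name, hook: 'g77')    # stub hook handler: the class default
    os.environ['F77'] = 'gfortran'
    assert env.compiler_f77 == 'gfortran'        # environment variable wins
    del os.environ['F77']
    assert env.compiler_f77 == 'g77'             # falls back to the hook's default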
@@ -114,12 +195,72 @@ ## results if used elsewhere. So, you have been warned.. def find_executables(self): - """Modify self.executables to hold found executables, instead of - searching for them at class creation time.""" - pass + """Go through the self.executables dictionary, and attempt to + find and assign appropiate executables. + Executable names are looked for in the environment (environment + variables, the distutils.cfg, and command line), the 0th-element of + the command list, and the self.possible_executables list. + + Also, if the 0th element is "" or "", the Fortran 77 + or the Fortran 90 compiler executable is used, unless overridden + by an environment setting. + """ + exe_cache = {} + def cached_find_executable(exe): + if exe in exe_cache: + return exe_cache[exe] + fc_exe = find_executable(exe) + exe_cache[exe] = fc_exe + return fc_exe + def set_exe(exe_key, f77=None, f90=None): + cmd = self.executables.get(exe_key, None) + if not cmd: + return None + # Note that we get cmd[0] here if the environment doesn't + # have anything set + exe_from_environ = getattr(self.command_vars, exe_key) + if not exe_from_environ: + possibles = [f90, f77] + self.possible_executables + else: + possibles = [exe_from_environ] + self.possible_executables + + seen = set() + unique_possibles = [] + for e in possibles: + if e == '': + e = f77 + elif e == '': + e = f90 + if not e or e in seen: + continue + seen.add(e) + unique_possibles.append(e) + + for exe in unique_possibles: + fc_exe = cached_find_executable(exe) + if fc_exe: + cmd[0] = fc_exe + return fc_exe + return None + + f90 = set_exe('compiler_f90') + if not f90: + raise CompilerNotFound('f90') + f77 = set_exe('compiler_f77', f90=f90) + if not f77: + raise CompilerNotFound('f90') + set_exe('compiler_fix', f90=f90) + + set_exe('linker_so', f77=f77, f90=f90) + set_exe('linker_exe', f77=f77, f90=f90) + set_exe('version_cmd', f77=f77, f90=f90) + + set_exe('archiver') + set_exe('ranlib') + def get_version_cmd(self): - """ Compiler command to print out version information. """ + """Compiler command to print out version information.""" f77 = self.executables.get('compiler_f77') if f77 is not None: f77 = f77[0] @@ -137,8 +278,8 @@ return cmd def get_linker_so(self): - """ Linker command to build shared libraries. """ - f77 = self.executables['compiler_f77'] + """Linker command to build shared libraries.""" + f77 = self.executables.get('compiler_f77') if f77 is not None: f77 = f77[0] ln = self.executables.get('linker_so') @@ -155,8 +296,8 @@ return ln def get_linker_exe(self): - """ Linker command to build shared libraries. """ - f77 = self.executables['compiler_f77'] + """Linker command to build shared libraries.""" + f77 = self.executables.get('compiler_f77') if f77 is not None: f77 = f77[0] ln = self.executables.get('linker_exe') @@ -173,54 +314,47 @@ return ln def get_flags(self): - """ List of flags common to all compiler types. """ + """List of flags common to all compiler types.""" return [] + self.pic_flags + + def _get_executable_flags(self, key): + cmd = self.executables.get(key, None) + if cmd is None: + return [] + return cmd[1:] + def get_flags_version(self): - """ List of compiler flags to print out version information. """ - if self.executables.get('version_cmd'): - return self.executables['version_cmd'][1:] - return [] + """List of compiler flags to print out version information.""" + return self._get_executable_flags('version_cmd') def get_flags_f77(self): - """ List of Fortran 77 specific flags. 
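Note that the list archive has stripped angle brackets, so the placeholder strings in the find_executables() docstring above and in the executables dicts below appear as empty quotes; in the source they are written '<F77>' and '<F90>'. A simplified sketch of the substitution find_executables() applies to the head of each command (the real method then also searches PATH for the result; compiler names here are hypothetical):

    f77, f90 = 'g77', 'gfortran'             # hypothetical resolved compilers
    cmd = ['<F90>', '-shared']               # e.g. a linker_so entry
    head = {'<F77>': f77, '<F90>': f90}.get(cmd[0], cmd[0])
    cmd = [head] + cmd[1:]
    assert cmd == ['gfortran', '-shared']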
""" - if self.executables.get('compiler_f77'): - return self.executables['compiler_f77'][1:] - return [] + """List of Fortran 77 specific flags.""" + return self._get_executable_flags('compiler_f77') def get_flags_f90(self): - """ List of Fortran 90 specific flags. """ - if self.executables.get('compiler_f90'): - return self.executables['compiler_f90'][1:] - return [] + """List of Fortran 90 specific flags.""" + return self._get_executable_flags('compiler_f90') def get_flags_free(self): - """ List of Fortran 90 free format specific flags. """ + """List of Fortran 90 free format specific flags.""" return [] def get_flags_fix(self): - """ List of Fortran 90 fixed format specific flags. """ - if self.executables.get('compiler_fix'): - return self.executables['compiler_fix'][1:] - return [] + """List of Fortran 90 fixed format specific flags.""" + return self._get_executable_flags('compiler_fix') def get_flags_linker_so(self): - """ List of linker flags to build a shared library. """ - if self.executables.get('linker_so'): - return self.executables['linker_so'][1:] - return [] + """List of linker flags to build a shared library.""" + return self._get_executable_flags('linker_so') def get_flags_linker_exe(self): - """ List of linker flags to build an executable. """ - if self.executables.get('linker_exe'): - return self.executables['linker_exe'][1:] - return [] + """List of linker flags to build an executable.""" + return self._get_executable_flags('linker_exe') def get_flags_ar(self): - """ List of archiver flags. """ - if self.executables.get('archiver'): - return self.executables['archiver'][1:] - return [] + """List of archiver flags. """ + return self._get_executable_flags('archiver') def get_flags_opt(self): - """ List of architecture independent compiler flags. """ + """List of architecture independent compiler flags.""" return [] def get_flags_arch(self): - """ List of architecture dependent compiler flags. """ + """List of architecture dependent compiler flags.""" return [] def get_flags_debug(self): - """ List of compiler flags to compile with debugging information. """ + """List of compiler flags to compile with debugging information.""" return [] get_flags_opt_f77 = get_flags_opt_f90 = get_flags_opt @@ -228,18 +362,18 @@ get_flags_debug_f77 = get_flags_debug_f90 = get_flags_debug def get_libraries(self): - """ List of compiler libraries. """ + """List of compiler libraries.""" return self.libraries[:] def get_library_dirs(self): - """ List of compiler library directories. """ + """List of compiler library directories.""" return self.library_dirs[:] ############################################################ ## Public methods: - def customize(self, dist=None): - """ Customize Fortran compiler. + def customize(self, dist): + """Customize Fortran compiler. This method gets Fortran compiler specific information from (i) class definition, (ii) environment, (iii) distutils config @@ -250,88 +384,71 @@ instance is needed for (iii) and (iv). """ log.info('customize %s' % (self.__class__.__name__)) - from distutils.dist import Distribution - if dist is None: - # These hooks are for testing only! 
- dist = Distribution() - dist.script_name = os.path.basename(sys.argv[0]) - dist.script_args = ['config_fc'] + sys.argv[1:] - dist.cmdclass['config_fc'] = config_fc - dist.parse_config_files() - dist.parse_command_line() - if isinstance(dist,Distribution): - conf = dist.get_option_dict('config_fc') - else: - assert isinstance(dist,dict) - conf = dist - noopt = conf.get('noopt',[None,0])[1] + self.distutils_vars.use_distribution(dist) + self.command_vars.use_distribution(dist) + self.flag_vars.use_distribution(dist) + + self.find_executables() + + noopt = self.distutils_vars.get('noopt', False) if 0: # change to `if 1:` when making release. # Don't use architecture dependent compiler flags: - noarch = 1 + noarch = True else: - noarch = conf.get('noarch',[None,noopt])[1] - debug = conf.get('debug',[None,0])[1] + noarch = self.distutils_vars.get('noarch', noopt) + debug = self.distutils_vars.get('debug', False) - self.find_executables() + f77 = self.command_vars.compiler_f77 + f90 = self.command_vars.compiler_f90 - f77 = self.__get_cmd('compiler_f77','F77',(conf,'f77exec')) - f90 = self.__get_cmd('compiler_f90','F90',(conf,'f90exec')) - # Temporarily setting f77,f90 compilers so that - # version_cmd can use their executables. - if f77: - self.set_executables(compiler_f77=[f77]) - if f90: - self.set_executables(compiler_f90=[f90]) - # Must set version_cmd before others as self.get_flags* # methods may call self.get_version. - vers_cmd = self.__get_cmd(self.get_version_cmd) + vers_cmd = self.command_vars.version_cmd if vers_cmd: - vflags = self.__get_flags(self.get_flags_version) + vflags = self.flag_vars.version self.set_executables(version_cmd=[vers_cmd]+vflags) + f77flags = [] + f90flags = [] + freeflags = [] + fixflags = [] + if f77: - f77flags = self.__get_flags(self.get_flags_f77,'F77FLAGS', - (conf,'f77flags')) + f77flags = self.flag_vars.f77 if f90: - f90flags = self.__get_flags(self.get_flags_f90,'F90FLAGS', - (conf,'f90flags')) - freeflags = self.__get_flags(self.get_flags_free,'FREEFLAGS', - (conf,'freeflags')) + f90flags = self.flag_vars.f90 + freeflags = self.flag_vars.free # XXX Assuming that free format is default for f90 compiler. - fix = self.__get_cmd('compiler_fix','F90',(conf,'f90exec')) + fix = self.command_vars.compiler_fix if fix: - fixflags = self.__get_flags(self.get_flags_fix) + f90flags + fixflags = self.flag_vars.fix + f90flags - oflags,aflags,dflags = [],[],[] + oflags, aflags, dflags = [], [], [] + def to_list(flags): + if is_string(flags): + return [flags] + return flags + # examine get_flags__ for extra flags + # only add them if the method is different from get_flags_ + def get_flags(tag, flags): + # note that self.flag_vars. 
calls self.get_flags_() + flags.extend(to_list(getattr(self.flag_vars, tag))) + this_get = getattr(self, 'get_flags_' + tag) + for name, c, flagvar in [('f77', f77, f77flags), + ('f90', f90, f90flags), + ('f90', fix, fixflags)]: + t = '%s_%s' % (tag, name) + if c and this_get is not getattr(self, 'get_flags_' + t): + flagvar.extend(to_list(getattr(self.flag_vars, t))) + return oflags if not noopt: - oflags = self.__get_flags(self.get_flags_opt,'FOPT',(conf,'opt')) - if f77 and self.get_flags_opt is not self.get_flags_opt_f77: - f77flags += self.__get_flags(self.get_flags_opt_f77) - if f90 and self.get_flags_opt is not self.get_flags_opt_f90: - f90flags += self.__get_flags(self.get_flags_opt_f90) - if fix and self.get_flags_opt is not self.get_flags_opt_f90: - fixflags += self.__get_flags(self.get_flags_opt_f90) + get_flags('opt', oflags) if not noarch: - aflags = self.__get_flags(self.get_flags_arch,'FARCH', - (conf,'arch')) - if f77 and self.get_flags_arch is not self.get_flags_arch_f77: - f77flags += self.__get_flags(self.get_flags_arch_f77) - if f90 and self.get_flags_arch is not self.get_flags_arch_f90: - f90flags += self.__get_flags(self.get_flags_arch_f90) - if fix and self.get_flags_arch is not self.get_flags_arch_f90: - fixflags += self.__get_flags(self.get_flags_arch_f90) + get_flags('arch', aflags) if debug: - dflags = self.__get_flags(self.get_flags_debug,'FDEBUG') - if f77 and self.get_flags_debug is not self.get_flags_debug_f77: - f77flags += self.__get_flags(self.get_flags_debug_f77) - if f90 and self.get_flags_debug is not self.get_flags_debug_f90: - f90flags += self.__get_flags(self.get_flags_debug_f90) - if fix and self.get_flags_debug is not self.get_flags_debug_f90: - fixflags += self.__get_flags(self.get_flags_debug_f90) + get_flags('debug', dflags) - fflags = self.__get_flags(self.get_flags,'FFLAGS') \ - + dflags + oflags + aflags + fflags = to_list(self.flag_vars.flags) + dflags + oflags + aflags if f77: self.set_executables(compiler_f77=[f77]+f77flags+fflags) @@ -339,10 +456,11 @@ self.set_executables(compiler_f90=[f90]+freeflags+f90flags+fflags) if fix: self.set_executables(compiler_fix=[fix]+fixflags+fflags) + #XXX: Do we need LDSHARED->SOSHARED, LDFLAGS->SOFLAGS - linker_so = self.__get_cmd(self.get_linker_so,'LDSHARED') + linker_so = self.command_vars.linker_so if linker_so: - linker_so_flags = self.__get_flags(self.get_flags_linker_so,'LDFLAGS') + linker_so_flags = to_list(self.flag_vars.linker_so) if sys.platform.startswith('aix'): python_lib = get_python_lib(standard_lib=1) ld_so_aix = os.path.join(python_lib, 'config', 'ld_so_aix') @@ -352,37 +470,32 @@ linker_so = [linker_so] self.set_executables(linker_so=linker_so+linker_so_flags) - linker_exe = self.__get_cmd(self.get_linker_exe,'LD') + linker_exe = self.command_vars.linker_exe if linker_exe: - linker_exe_flags = self.__get_flags(self.get_flags_linker_exe,'LDFLAGS') + linker_exe_flags = to_list(self.flag_vars.linker_exe) self.set_executables(linker_exe=[linker_exe]+linker_exe_flags) - ar = self.__get_cmd('archiver','AR') + + ar = self.command_vars.archiver if ar: - arflags = self.__get_flags(self.get_flags_ar,'ARFLAGS') + arflags = to_list(self.flag_vars.ar) self.set_executables(archiver=[ar]+arflags) - ranlib = self.__get_cmd('ranlib','RANLIB') + ranlib = self.command_vars.ranlib if ranlib: self.set_executables(ranlib=[ranlib]) self.set_library_dirs(self.get_library_dirs()) self.set_libraries(self.get_libraries()) - - verbose = conf.get('verbose',[None,0])[1] - if verbose: - self.dump_properties() - 
return - def dump_properties(self): - """ Print out the attributes of a compiler instance. """ + """Print out the attributes of a compiler instance.""" props = [] for key in self.executables.keys() + \ ['version','libraries','library_dirs', 'object_switch','compile_switch']: if hasattr(self,key): v = getattr(self,key) - props.append((key, None, '= '+`v`)) + props.append((key, None, '= '+repr(v))) props.sort() pretty_printer = FancyGetopt(props) @@ -429,7 +542,8 @@ if os.name == 'nt': compiler = _nt_quote_args(compiler) - command = compiler + cc_args + extra_flags + s_args + o_args + extra_postargs + command = compiler + cc_args + extra_flags + s_args + o_args \ + + extra_postargs display = '%s: %s' % (os.path.basename(compiler[0]) + flavor, src) @@ -512,167 +626,145 @@ log.debug("skipping %s (up-to-date)", output_filename) return - - ## Private methods: - - def __get_cmd(self, command, envvar=None, confvar=None): - if command is None: - var = None - elif is_string(command): - var = self.executables[command] - if var is not None: - var = var[0] + def _environment_hook(self, name, hook_name): + if hook_name is None: + return None + if is_string(hook_name): + if hook_name.startswith('self.'): + hook_name = hook_name[5:] + hook = getattr(self, hook_name) + return hook() + elif hook_name.startswith('exe.'): + hook_name = hook_name[4:] + var = self.executables[hook_name] + if var: + return var[0] + else: + return None + elif hook_name.startswith('flags.'): + hook_name = hook_name[6:] + hook = getattr(self, 'get_flags_' + hook_name) + return hook() else: - var = command() - if envvar is not None: - var = os.environ.get(envvar, var) - if confvar is not None: - var = confvar[0].get(confvar[1], [None,var])[1] - return var + return hook_name() - def __get_flags(self, command, envvar=None, confvar=None): - if command is None: - var = [] - elif is_string(command): - var = self.executables[command][1:] - else: - var = command() - if envvar is not None: - var = os.environ.get(envvar, var) - if confvar is not None: - var = confvar[0].get(confvar[1], [None,var])[1] - if is_string(var): - var = split_quoted(var) - return var - ## class FCompiler -fcompiler_class = {'gnu':('gnu','GnuFCompiler', - "GNU Fortran Compiler"), - 'gnu95':('gnu','Gnu95FCompiler', - "GNU 95 Fortran Compiler"), - 'g95':('g95','G95FCompiler', - "G95 Fortran Compiler"), - 'pg':('pg','PGroupFCompiler', - "Portland Group Fortran Compiler"), - 'absoft':('absoft','AbsoftFCompiler', - "Absoft Corp Fortran Compiler"), - 'mips':('mips','MipsFCompiler', - "MIPSpro Fortran Compiler"), - 'sun':('sun','SunFCompiler', - "Sun|Forte Fortran 95 Compiler"), - 'intel':('intel','IntelFCompiler', - "Intel Fortran Compiler for 32-bit apps"), - 'intelv':('intel','IntelVisualFCompiler', - "Intel Visual Fortran Compiler for 32-bit apps"), - 'intele':('intel','IntelItaniumFCompiler', - "Intel Fortran Compiler for Itanium apps"), - 'intelev':('intel','IntelItaniumVisualFCompiler', - "Intel Visual Fortran Compiler for Itanium apps"), - 'intelem':('intel','IntelEM64TFCompiler', - "Intel Fortran Compiler for EM64T-based apps"), - 'nag':('nag','NAGFCompiler', - "NAGWare Fortran 95 Compiler"), - 'compaq':('compaq','CompaqFCompiler', - "Compaq Fortran Compiler"), - 'compaqv':('compaq','CompaqVisualFCompiler', - "DIGITAL|Compaq Visual Fortran Compiler"), - 'vast':('vast','VastFCompiler', - "Pacific-Sierra Research Fortran 90 Compiler"), - 'hpux':('hpux','HPUXFCompiler', - "HP Fortran 90 Compiler"), - 'lahey':('lahey','LaheyFCompiler', - "Lahey/Fujitsu Fortran 95 
Compiler"), - 'ibm':('ibm','IbmFCompiler', - "IBM XL Fortran Compiler"), - 'f':('f','FFCompiler', - "Fortran Company/NAG F Compiler"), - 'none':('none','NoneFCompiler',"Fake Fortran compiler") - } - _default_compilers = ( # Platform mappings - ('win32',('gnu','intelv','absoft','compaqv','intelev','gnu95','g95')), - ('cygwin.*',('gnu','intelv','absoft','compaqv','intelev','gnu95','g95')), - ('linux.*',('gnu','intel','lahey','pg','absoft','nag','vast','compaq', + ('win32', ('gnu','intelv','absoft','compaqv','intelev','gnu95','g95')), + ('cygwin.*', ('gnu','intelv','absoft','compaqv','intelev','gnu95','g95')), + ('linux.*', ('gnu','intel','lahey','pg','absoft','nag','vast','compaq', 'intele','intelem','gnu95','g95')), - ('darwin.*',('nag','absoft','ibm','gnu','gnu95','g95')), - ('sunos.*',('sun','gnu','gnu95','g95')), - ('irix.*',('mips','gnu','gnu95',)), - ('aix.*',('ibm','gnu','gnu95',)), + ('darwin.*', ('nag', 'absoft', 'ibm', 'intel', 'gnu', 'gnu95', 'g95')), + ('sunos.*', ('sun','gnu','gnu95','g95')), + ('irix.*', ('mips','gnu','gnu95',)), + ('aix.*', ('ibm','gnu','gnu95',)), # OS mappings - ('posix',('gnu','gnu95',)), - ('nt',('gnu','gnu95',)), - ('mac',('gnu','gnu95',)), + ('posix', ('gnu','gnu95',)), + ('nt', ('gnu','gnu95',)), + ('mac', ('gnu','gnu95',)), ) -def _find_existing_fcompiler(compilers, osname=None, platform=None, requiref90=None): - for compiler in compilers: +fcompiler_class = None + +def load_all_fcompiler_classes(): + """Cache all the FCompiler classes found in modules in the + numpy.distutils.fcompiler package. + """ + from glob import glob + global fcompiler_class + if fcompiler_class is not None: + return + pys = os.path.join(os.path.dirname(__file__), '*.py') + fcompiler_class = {} + for fname in glob(pys): + module_name, ext = os.path.splitext(os.path.basename(fname)) + module_name = 'numpy.distutils.fcompiler.' + module_name + __import__ (module_name) + module = sys.modules[module_name] + if hasattr(module, 'compilers'): + for cname in module.compilers: + klass = getattr(module, cname) + fcompiler_class[klass.compiler_type] = (klass.compiler_type, + klass, + klass.description) + +def _find_existing_fcompiler(compiler_types, osname=None, platform=None, + requiref90=False): + from numpy.distutils.core import get_distribution + dist = get_distribution(always=True) + for compiler_type in compiler_types: v = None try: - c = new_fcompiler(plat=platform, compiler=compiler) - c.customize() + c = new_fcompiler(plat=platform, compiler=compiler_type) + c.customize(dist) v = c.get_version() if requiref90 and c.compiler_f90 is None: v = None new_compiler = c.suggested_f90_compiler if new_compiler: - log.warn('Trying %r compiler as suggested by %r compiler for f90 support.' % (compiler, new_compiler)) + log.warn('Trying %r compiler as suggested by %r ' + 'compiler for f90 support.' % (compiler, + new_compiler)) c = new_fcompiler(plat=platform, compiler=new_compiler) - c.customize() + c.customize(dist) v = c.get_version() if v is not None: - compiler = new_compiler + compiler_type = new_compiler if requiref90 and c.compiler_f90 is None: - raise ValueError,'%s does not support compiling f90 codes, skipping.' \ - % (c.__class__.__name__) + raise ValueError('%s does not support compiling f90 codes, ' + 'skipping.' 
% (c.__class__.__name__)) except DistutilsModuleError: pass - except Exception, msg: - log.warn(msg) + except CompilerNotFound: + pass if v is not None: - return compiler - return + return compiler_type + return None -def get_default_fcompiler(osname=None, platform=None, requiref90=None): - """ Determine the default Fortran compiler to use for the given platform. """ +def available_fcompilers_for_platform(osname=None, platform=None): if osname is None: osname = os.name if platform is None: platform = sys.platform - matching_compilers = [] - for pattern, compiler in _default_compilers: - if re.match(pattern, platform) is not None or \ - re.match(pattern, osname) is not None: - if is_sequence(compiler): - matching_compilers.extend(list(compiler)) - else: - matching_compilers.append(compiler) - if not matching_compilers: - matching_compilers.append('gnu') - compiler = _find_existing_fcompiler(matching_compilers, - osname=osname, - platform=platform, - requiref90=requiref90) - if compiler is not None: - return compiler - return matching_compilers[0] + matching_compiler_types = [] + for pattern, compiler_type in _default_compilers: + if re.match(pattern, platform) or re.match(pattern, osname): + for ct in compiler_type: + if ct not in matching_compiler_types: + matching_compiler_types.append(ct) + if not matching_compiler_types: + matching_compiler_types.append('gnu') + return matching_compiler_types +def get_default_fcompiler(osname=None, platform=None, requiref90=False): + """Determine the default Fortran compiler to use for the given + platform.""" + matching_compiler_types = available_fcompilers_for_platform(osname, + platform) + compiler_type = _find_existing_fcompiler(matching_compiler_types, + osname=osname, + platform=platform, + requiref90=requiref90) + return compiler_type + def new_fcompiler(plat=None, compiler=None, verbose=0, dry_run=0, force=0, - requiref90=0): - """ Generate an instance of some FCompiler subclass for the supplied + requiref90=False): + """Generate an instance of some FCompiler subclass for the supplied platform/compiler combination. """ + load_all_fcompiler_classes() if plat is None: plat = os.name + if compiler is None: + compiler = get_default_fcompiler(plat, requiref90=requiref90) try: - if compiler is None: - compiler = get_default_fcompiler(plat,requiref90=requiref90) - (module_name, class_name, long_description) = fcompiler_class[compiler] + module_name, klass, long_description = fcompiler_class[compiler] except KeyError: msg = "don't know how to compile Fortran code on platform '%s'" % plat if compiler is not None: @@ -681,69 +773,67 @@ % (','.join(fcompiler_class.keys())) raise DistutilsPlatformError, msg - try: - module_name = 'numpy.distutils.fcompiler.'+module_name - __import__ (module_name) - module = sys.modules[module_name] - klass = vars(module)[class_name] - except ImportError: - raise DistutilsModuleError, \ - "can't compile Fortran code: unable to load module '%s'" % \ - module_name - except KeyError: - raise DistutilsModuleError, \ - ("can't compile Fortran code: unable to find class '%s' " + - "in module '%s'") % (class_name, module_name) - compiler = klass(None, dry_run, force) - log.debug('new_fcompiler returns %s' % (klass)) + compiler = klass(verbose=verbose, dry_run=dry_run, force=force) return compiler -def show_fcompilers(dist = None): - """ Print list of available compilers (used by the "--help-fcompiler" +def show_fcompilers(dist=None): + """Print list of available compilers (used by the "--help-fcompiler" option to "config_fc"). 
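With this refactor, the available compiler classes are discovered lazily by load_all_fcompiler_classes() and instantiated through new_fcompiler(). A hedged sketch of driving it by hand, reflecting the API at this revision; output depends on what is installed:

    from numpy.distutils import fcompiler

    fcompiler.load_all_fcompiler_classes()             # fills the fcompiler_class cache
    print(sorted(fcompiler.fcompiler_class.keys()))    # e.g. ['absoft', 'compaq', ...]
    fc = fcompiler.new_fcompiler(compiler='gnu95')     # explicit type; None would probe
    print(fc.description)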
""" if dist is None: from distutils.dist import Distribution + from numpy.distutils.command.config_compiler import config_fc dist = Distribution() dist.script_name = os.path.basename(sys.argv[0]) dist.script_args = ['config_fc'] + sys.argv[1:] + try: + dist.script_args.remove('--help-fcompiler') + except ValueError: + pass dist.cmdclass['config_fc'] = config_fc dist.parse_config_files() dist.parse_command_line() compilers = [] compilers_na = [] compilers_ni = [] - for compiler in fcompiler_class.keys(): - v = 'N/A' + if not fcompiler_class: + load_all_fcompiler_classes() + platform_compilers = available_fcompilers_for_platform() + for compiler in platform_compilers: + v = None log.set_verbosity(-2) + log.set_verbosity(-2) try: - c = new_fcompiler(compiler=compiler) + c = new_fcompiler(compiler=compiler, verbose=dist.verbose) c.customize(dist) v = c.get_version() - except DistutilsModuleError: + except (DistutilsModuleError, CompilerNotFound): pass - except Exception, msg: - log.warn(msg) + if v is None: compilers_na.append(("fcompiler="+compiler, None, fcompiler_class[compiler][2])) - elif v=='N/A': - compilers_ni.append(("fcompiler="+compiler, None, - fcompiler_class[compiler][2])) else: + c.dump_properties() compilers.append(("fcompiler="+compiler, None, fcompiler_class[compiler][2] + ' (%s)' % v)) + compilers_ni = list(set(fcompiler_class.keys()) - set(platform_compilers)) + compilers_ni = [("fcompiler="+fc, None, fcompiler_class[fc][2]) + for fc in compilers_ni] + compilers.sort() compilers_na.sort() + compilers_ni.sort() pretty_printer = FancyGetopt(compilers) - pretty_printer.print_help("List of available Fortran compilers:") + pretty_printer.print_help("Fortran compilers found:") pretty_printer = FancyGetopt(compilers_na) - pretty_printer.print_help("List of unavailable Fortran compilers:") + pretty_printer.print_help("Compilers available for this " + "platform, but not found:") if compilers_ni: pretty_printer = FancyGetopt(compilers_ni) - pretty_printer.print_help("List of unimplemented Fortran compilers:") + pretty_printer.print_help("Compilers not available on this platform:") print "For compiler details, run 'config_fc --verbose' setup command." def dummy_fortran_file(): Modified: trunk/numpy/distutils/fcompiler/absoft.py =================================================================== --- trunk/numpy/distutils/fcompiler/absoft.py 2007-05-25 11:20:02 UTC (rev 3823) +++ trunk/numpy/distutils/fcompiler/absoft.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -13,9 +13,12 @@ from numpy.distutils.fcompiler import FCompiler, dummy_fortran_file from numpy.distutils.misc_util import cyg2win32 +compilers = ['AbsoftFCompiler'] + class AbsoftFCompiler(FCompiler): compiler_type = 'absoft' + description = 'Absoft Corp Fortran Compiler' #version_pattern = r'FORTRAN 77 Compiler (?P[^\s*,]*).*?Absoft Corp' version_pattern = r'(f90:.*?(Absoft Pro FORTRAN Version|FORTRAN 77 Compiler|Absoft Fortran Compiler Version|Copyright Absoft Corporation.*?Version))'+\ r' (?P[^\s*,]*)(.*?Absoft Corp|)' @@ -28,12 +31,12 @@ # Note that fink installs g77 as f77, so need to use f90 for detection. 
executables = { - 'version_cmd' : ["f90", "-V -c %(fname)s.f -o %(fname)s.o" \ + 'version_cmd' : ["", "-V -c %(fname)s.f -o %(fname)s.o" \ % {'fname':cyg2win32(dummy_fortran_file())}], 'compiler_f77' : ["f77"], 'compiler_fix' : ["f90"], 'compiler_f90' : ["f90"], - 'linker_so' : ["f90"], + 'linker_so' : [""], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } Modified: trunk/numpy/distutils/fcompiler/compaq.py =================================================================== --- trunk/numpy/distutils/fcompiler/compaq.py 2007-05-25 11:20:02 UTC (rev 3823) +++ trunk/numpy/distutils/fcompiler/compaq.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -7,9 +7,17 @@ from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler +compilers = ['CompaqFCompiler'] +if os.name != 'posix': + # Otherwise we'd get a false positive on posix systems with + # case-insensitive filesystems (like darwin), because we'll pick + # up /bin/df + compilers.append('CompaqVisualFCompiler') + class CompaqFCompiler(FCompiler): compiler_type = 'compaq' + description = 'Compaq Fortran Compiler' version_pattern = r'Compaq Fortran (?P[^\s]*).*' if sys.platform[:5]=='linux': @@ -18,11 +26,11 @@ fc_exe = 'f90' executables = { - 'version_cmd' : [fc_exe, "-version"], + 'version_cmd' : ['', "-version"], 'compiler_f77' : [fc_exe, "-f77rtl","-fixed"], 'compiler_fix' : [fc_exe, "-fixed"], 'compiler_f90' : [fc_exe], - 'linker_so' : [fc_exe], + 'linker_so' : [''], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } @@ -47,6 +55,7 @@ class CompaqVisualFCompiler(FCompiler): compiler_type = 'compaqv' + description = 'DIGITAL or Compaq Visual Fortran Compiler' version_pattern = r'(DIGITAL|Compaq) Visual Fortran Optimizing Compiler'\ ' Version (?P[^\s]*).*' @@ -68,11 +77,11 @@ ar_exe = m.lib executables = { - 'version_cmd' : ['DF', "/what"], - 'compiler_f77' : ['DF', "/f77rtl","/fixed"], - 'compiler_fix' : ['DF', "/fixed"], - 'compiler_f90' : ['DF'], - 'linker_so' : ['DF'], + 'version_cmd' : ['', "/what"], + 'compiler_f77' : [fc_exe, "/f77rtl","/fixed"], + 'compiler_fix' : [fc_exe, "/fixed"], + 'compiler_f90' : [fc_exe], + 'linker_so' : [''], 'archiver' : [ar_exe, "/OUT:"], 'ranlib' : None } Modified: trunk/numpy/distutils/fcompiler/g95.py =================================================================== --- trunk/numpy/distutils/fcompiler/g95.py 2007-05-25 11:20:02 UTC (rev 3823) +++ trunk/numpy/distutils/fcompiler/g95.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -6,9 +6,12 @@ from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler +compilers = ['G95FCompiler'] + class G95FCompiler(FCompiler): + compiler_type = 'g95' + description = 'G95 Fortran Compiler' - compiler_type = 'g95' # version_pattern = r'G95 \((GCC (?P[\d.]+)|.*?) \(g95!\) (?P.*)\).*' # $ g95 --version # G95 (GCC 4.0.3 (g95!) May 22 2006) @@ -17,13 +20,12 @@ # $ g95 --version # G95 (GCC 4.0.3 (g95 0.90!) 
Aug 22 2006) - executables = { - 'version_cmd' : ["g95", "--version"], + 'version_cmd' : ["", "--version"], 'compiler_f77' : ["g95", "-ffixed-form"], 'compiler_fix' : ["g95", "-ffixed-form"], 'compiler_f90' : ["g95"], - 'linker_so' : ["g95","-shared"], + 'linker_so' : ["","-shared"], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } Modified: trunk/numpy/distutils/fcompiler/gnu.py =================================================================== --- trunk/numpy/distutils/fcompiler/gnu.py 2007-05-25 11:20:02 UTC (rev 3823) +++ trunk/numpy/distutils/fcompiler/gnu.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -5,12 +5,14 @@ from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler -from numpy.distutils.exec_command import exec_command, find_executable +from numpy.distutils.exec_command import exec_command from numpy.distutils.misc_util import mingw32, msvc_runtime_library -class GnuFCompiler(FCompiler): +compilers = ['GnuFCompiler', 'Gnu95FCompiler'] +class GnuFCompiler(FCompiler): compiler_type = 'gnu' + description = 'GNU Fortran 77 compiler' def gnu_version_match(self, version_string): """Handle the different versions of GNU fortran compilers""" @@ -44,22 +46,23 @@ # GNU Fortran 0.5.25 20010319 (prerelease) # Redhat: GNU Fortran (GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)) 3.2.2 20030222 (Red Hat Linux 3.2.2-5) + possible_executables = ['g77', 'f77'] executables = { - 'version_cmd' : ["g77", "--version"], - 'compiler_f77' : ["g77", "-g", "-Wall","-fno-second-underscore"], + 'version_cmd' : [None, "--version"], + 'compiler_f77' : [None, "-g", "-Wall", "-fno-second-underscore"], 'compiler_f90' : None, # Use --fcompiler=gnu95 for f90 codes 'compiler_fix' : None, - 'linker_so' : ["g77", "-g", "-Wall"], + 'linker_so' : [None, "-g", "-Wall"], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"], - 'linker_exe' : ["g77", "-g", "-Wall"] + 'linker_exe' : [None, "-g", "-Wall"] } module_dir_switch = None module_include_switch = None # Cygwin: f771: warning: -fPIC ignored for target (all code is # position independent) - if os.name != 'nt' and sys.platform!='cygwin': + if os.name != 'nt' and sys.platform != 'cygwin': pic_flags = ['-fPIC'] # use -mno-cygwin for g77 when Python is not Cygwin-Python @@ -71,13 +74,6 @@ suggested_f90_compiler = 'gnu95' - def find_executables(self): - for fc_exe in [find_executable(c) for c in ['g77','f77']]: - if os.path.isfile(fc_exe): - break - for key in ['version_cmd', 'compiler_f77', 'linker_so', 'linker_exe']: - self.executables[key][0] = fc_exe - #def get_linker_so(self): # # win32 linking should be handled by standard linker # # Darwin g77 cannot be used as a linker. @@ -106,7 +102,7 @@ opt.extend(['-undefined', 'dynamic_lookup', '-bundle']) else: opt.append("-shared") - if sys.platform[:5]=='sunos': + if sys.platform.startswith('sunos'): # SunOS often has dynamically loaded symbols defined in the # static library libg2c.a The linker doesn't like this. To # ignore the problem, use the -mimpure-text flag. It isn't @@ -178,13 +174,10 @@ def get_flags_arch(self): opt = [] - if sys.platform=='darwin': - if os.name != 'posix': - # this should presumably correspond to Apple - if cpu.is_ppc(): - opt.append('-arch ppc') - elif cpu.is_i386(): - opt.append('-arch i386') + if sys.platform == 'darwin': + # Since Apple doesn't distribute a GNU Fortran compiler, we + # can't add -arch ppc or -arch i386, as only their version + # of the GNU compilers accepts those. 
for a in '601 602 603 603e 604 604e 620 630 740 7400 7450 750'\ '403 505 801 821 823 860'.split(): if getattr(cpu,'is_ppc%s'%a)(): @@ -276,8 +269,8 @@ return opt class Gnu95FCompiler(GnuFCompiler): - compiler_type = 'gnu95' + description = 'GNU Fortran 95 compiler' def version_match(self, version_string): v = self.gnu_version_match(version_string) @@ -293,17 +286,18 @@ # GNU Fortran 95 (GCC) 4.2.0 20060218 (experimental) # GNU Fortran (GCC) 4.3.0 20070316 (experimental) + possible_executables = ['gfortran', 'f95'] executables = { - 'version_cmd' : ["gfortran", "--version"], - 'compiler_f77' : ["gfortran", "-Wall", "-ffixed-form", + 'version_cmd' : ["", "--version"], + 'compiler_f77' : [None, "-Wall", "-ffixed-form", "-fno-second-underscore"], - 'compiler_f90' : ["gfortran", "-Wall", "-fno-second-underscore"], - 'compiler_fix' : ["gfortran", "-Wall", "-ffixed-form", + 'compiler_f90' : [None, "-Wall", "-fno-second-underscore"], + 'compiler_fix' : [None, "-Wall", "-ffixed-form", "-fno-second-underscore"], - 'linker_so' : ["gfortran", "-Wall"], + 'linker_so' : ["", "-Wall"], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"], - 'linker_exe' : ["gfortran", "-Wall"] + 'linker_exe' : [None,"-Wall"] } # use -mno-cygwin flag for g77 when Python is not Cygwin-Python @@ -317,14 +311,6 @@ g2c = 'gfortran' - def find_executables(self): - for fc_exe in [find_executable(c) for c in ['gfortran','f95']]: - if os.path.isfile(fc_exe): - break - for key in ['version_cmd', 'compiler_f77', 'compiler_f90', - 'compiler_fix', 'linker_so', 'linker_exe']: - self.executables[key][0] = fc_exe - def get_libraries(self): opt = GnuFCompiler.get_libraries(self) if sys.platform == 'darwin': @@ -339,3 +325,6 @@ compiler = GnuFCompiler() compiler.customize() print compiler.get_version() + compiler = Gnu95FCompiler() + compiler.customize() + print compiler.get_version() Modified: trunk/numpy/distutils/fcompiler/hpux.py =================================================================== --- trunk/numpy/distutils/fcompiler/hpux.py 2007-05-25 11:20:02 UTC (rev 3823) +++ trunk/numpy/distutils/fcompiler/hpux.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -4,13 +4,16 @@ from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler +compilers = ['HPUXFCompiler'] + class HPUXFCompiler(FCompiler): compiler_type = 'hpux' + description = 'HP Fortran 90 Compiler' version_pattern = r'HP F90 (?P[^\s*,]*)' executables = { - 'version_cmd' : ["f90", "+version"], + 'version_cmd' : ["", "+version"], 'compiler_f77' : ["f90"], 'compiler_fix' : ["f90"], 'compiler_f90' : ["f90"], Modified: trunk/numpy/distutils/fcompiler/ibm.py =================================================================== --- trunk/numpy/distutils/fcompiler/ibm.py 2007-05-25 11:20:02 UTC (rev 3823) +++ trunk/numpy/distutils/fcompiler/ibm.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -7,13 +7,16 @@ from distutils import log from distutils.sysconfig import get_python_lib -class IbmFCompiler(FCompiler): +compilers = ['IBMFCompiler'] +class IBMFCompiler(FCompiler): compiler_type = 'ibm' + description = 'IBM XL Fortran Compiler' version_pattern = r'(xlf\(1\)\s*|)IBM XL Fortran ((Advanced Edition |)Version |Enterprise Edition V)(?P[^\s*]*)' #IBM XL Fortran Enterprise Edition V10.1 for AIX \nVersion: 10.01.0000.0004 + executables = { - 'version_cmd' : ["xlf","-qversion"], + 'version_cmd' : ["", "-qversion"], 'compiler_f77' : ["xlf"], 'compiler_fix' : ["xlf90", "-qfixed"], 'compiler_f90' : ["xlf90"], Modified: trunk/numpy/distutils/fcompiler/intel.py 
=================================================================== --- trunk/numpy/distutils/fcompiler/intel.py 2007-05-25 11:20:02 UTC (rev 3823) +++ trunk/numpy/distutils/fcompiler/intel.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -9,8 +9,11 @@ from numpy.distutils.cpuinfo import cpu from numpy.distutils.ccompiler import simple_version_match from numpy.distutils.fcompiler import FCompiler, dummy_fortran_file -from numpy.distutils.exec_command import find_executable +compilers = ['IntelFCompiler', 'IntelVisualFCompiler', + 'IntelItaniumFCompiler', 'IntelItaniumVisualFCompiler', + 'IntelEM64TFCompiler'] + def intel_version_match(type): # Match against the important stuff in the version string return simple_version_match(start=r'Intel.*?Fortran.*?%s.*?Version' % (type,)) @@ -18,19 +21,18 @@ class IntelFCompiler(FCompiler): compiler_type = 'intel' + description = 'Intel Fortran Compiler for 32-bit apps' version_match = intel_version_match('32-bit') - for fc_exe in map(find_executable,['ifort','ifc']): - if os.path.isfile(fc_exe): - break + possible_executables = ['ifort', 'ifc'] executables = { - 'version_cmd' : [fc_exe, "-FI -V -c %(fname)s.f -o %(fname)s.o" \ + 'version_cmd' : ["", "-FI -V -c %(fname)s.f -o %(fname)s.o" \ % {'fname':dummy_fortran_file()}], - 'compiler_f77' : [fc_exe,"-72","-w90","-w95"], - 'compiler_fix' : [fc_exe,"-FI"], - 'compiler_f90' : [fc_exe], - 'linker_so' : [fc_exe,"-shared"], + 'compiler_f77' : [None,"-72","-w90","-w95"], + 'compiler_f90' : [None], + 'compiler_fix' : [None,"-FI"], + 'linker_so' : ["","-shared"], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } @@ -80,6 +82,7 @@ class IntelItaniumFCompiler(IntelFCompiler): compiler_type = 'intele' + description = 'Intel Fortran Compiler for Itanium apps' version_match = intel_version_match('Itanium') @@ -88,37 +91,34 @@ #Copyright (C) 1985-2006 Intel Corporation.? All rights reserved. #30 DAY EVALUATION LICENSE - for fc_exe in map(find_executable,['ifort','efort','efc']): - if os.path.isfile(fc_exe): - break + possible_executables = ['ifort', 'efort', 'efc'] executables = { - 'version_cmd' : [fc_exe, "-FI -V -c %(fname)s.f -o %(fname)s.o" \ + 'version_cmd' : ['', "-FI -V -c %(fname)s.f -o %(fname)s.o" \ % {'fname':dummy_fortran_file()}], - 'compiler_f77' : [fc_exe,"-FI","-w90","-w95"], - 'compiler_fix' : [fc_exe,"-FI"], - 'compiler_f90' : [fc_exe], - 'linker_so' : [fc_exe,"-shared"], + 'compiler_f77' : [None,"-FI","-w90","-w95"], + 'compiler_fix' : [None,"-FI"], + 'compiler_f90' : [None], + 'linker_so' : ['', "-shared"], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } class IntelEM64TFCompiler(IntelFCompiler): compiler_type = 'intelem' + description = 'Intel Fortran Compiler for EM64T-based apps' version_match = intel_version_match('EM64T-based') - for fc_exe in map(find_executable,['ifort','efort','efc']): - if os.path.isfile(fc_exe): - break + possible_executables = ['ifort', 'efort', 'efc'] executables = { - 'version_cmd' : [fc_exe, "-FI -V -c %(fname)s.f -o %(fname)s.o" \ + 'version_cmd' : ['', "-FI -V -c %(fname)s.f -o %(fname)s.o" \ % {'fname':dummy_fortran_file()}], - 'compiler_f77' : [fc_exe,"-FI","-w90","-w95"], - 'compiler_fix' : [fc_exe,"-FI"], - 'compiler_f90' : [fc_exe], - 'linker_so' : [fc_exe,"-shared"], + 'compiler_f77' : [None, "-FI", "-w90", "-w95"], + 'compiler_fix' : [None, "-FI"], + 'compiler_f90' : [None], + 'linker_so' : ['', "-shared"], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } @@ -133,20 +133,20 @@ # and the Visual compilers? 
class IntelVisualFCompiler(FCompiler): - compiler_type = 'intelv' + description = 'Intel Visual Fortran Compiler for 32-bit apps' version_match = intel_version_match('32-bit') ar_exe = 'lib.exe' - fc_exe = 'ifl' + possible_executables = ['ifl'] executables = { - 'version_cmd' : [fc_exe, "-FI -V -c %(fname)s.f -o %(fname)s.o" \ + 'version_cmd' : ['', "-FI -V -c %(fname)s.f -o %(fname)s.o" \ % {'fname':dummy_fortran_file()}], - 'compiler_f77' : [fc_exe,"-FI","-w90","-w95"], - 'compiler_fix' : [fc_exe,"-FI","-4L72","-w"], - 'compiler_f90' : [fc_exe], - 'linker_so' : [fc_exe,"-shared"], + 'compiler_f77' : [None,"-FI","-w90","-w95"], + 'compiler_fix' : [None,"-FI","-4L72","-w"], + 'compiler_f90' : [None], + 'linker_so' : ['', "-shared"], 'archiver' : [ar_exe, "/verbose", "/OUT:"], 'ranlib' : None } @@ -185,20 +185,21 @@ return opt class IntelItaniumVisualFCompiler(IntelVisualFCompiler): + compiler_type = 'intelev' + description = 'Intel Visual Fortran Compiler for Itanium apps' - compiler_type = 'intelev' version_match = intel_version_match('Itanium') - fc_exe = 'efl' # XXX this is a wild guess + possible_executables = ['efl'] # XXX this is a wild guess ar_exe = IntelVisualFCompiler.ar_exe executables = { - 'version_cmd' : [fc_exe, "-FI -V -c %(fname)s.f -o %(fname)s.o" \ + 'version_cmd' : ['', "-FI -V -c %(fname)s.f -o %(fname)s.o" \ % {'fname':dummy_fortran_file()}], - 'compiler_f77' : [fc_exe,"-FI","-w90","-w95"], - 'compiler_fix' : [fc_exe,"-FI","-4L72","-w"], - 'compiler_f90' : [fc_exe], - 'linker_so' : [fc_exe,"-shared"], + 'compiler_f77' : [None,"-FI","-w90","-w95"], + 'compiler_fix' : [None,"-FI","-4L72","-w"], + 'compiler_f90' : [None], + 'linker_so' : ['',"-shared"], 'archiver' : [ar_exe, "/verbose", "/OUT:"], 'ranlib' : None } Modified: trunk/numpy/distutils/fcompiler/lahey.py =================================================================== --- trunk/numpy/distutils/fcompiler/lahey.py 2007-05-25 11:20:02 UTC (rev 3823) +++ trunk/numpy/distutils/fcompiler/lahey.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -4,13 +4,16 @@ from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler +compilers = ['LaheyFCompiler'] + class LaheyFCompiler(FCompiler): compiler_type = 'lahey' + description = 'Lahey/Fujitsu Fortran 95 Compiler' version_pattern = r'Lahey/Fujitsu Fortran 95 Compiler Release (?P[^\s*]*)' executables = { - 'version_cmd' : ["lf95", "--version"], + 'version_cmd' : ["", "--version"], 'compiler_f77' : ["lf95", "--fix"], 'compiler_fix' : ["lf95", "--fix"], 'compiler_f90' : ["lf95"], Modified: trunk/numpy/distutils/fcompiler/mips.py =================================================================== --- trunk/numpy/distutils/fcompiler/mips.py 2007-05-25 11:20:02 UTC (rev 3823) +++ trunk/numpy/distutils/fcompiler/mips.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -4,13 +4,16 @@ from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler -class MipsFCompiler(FCompiler): +compilers = ['MIPSFCompiler'] +class MIPSFCompiler(FCompiler): + compiler_type = 'mips' + description = 'MIPSpro Fortran Compiler' version_pattern = r'MIPSpro Compilers: Version (?P[^\s*,]*)' executables = { - 'version_cmd' : ["f90", "-version"], + 'version_cmd' : ["", "-version"], 'compiler_f77' : ["f77", "-f77"], 'compiler_fix' : ["f90", "-fixedform"], 'compiler_f90' : ["f90"], Modified: trunk/numpy/distutils/fcompiler/nag.py =================================================================== --- trunk/numpy/distutils/fcompiler/nag.py 2007-05-25 11:20:02 UTC (rev 3823) 
+++ trunk/numpy/distutils/fcompiler/nag.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -4,17 +4,20 @@ from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler +compilers = ['NAGFCompiler'] + class NAGFCompiler(FCompiler): compiler_type = 'nag' + description = 'NAGWare Fortran 95 Compiler' version_pattern = r'NAGWare Fortran 95 compiler Release (?P[^\s]*)' executables = { - 'version_cmd' : ["f95", "-V"], + 'version_cmd' : ["", "-V"], 'compiler_f77' : ["f95", "-fixed"], 'compiler_fix' : ["f95", "-fixed"], 'compiler_f90' : ["f95"], - 'linker_so' : ["f95"], + 'linker_so' : [""], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } Modified: trunk/numpy/distutils/fcompiler/none.py =================================================================== --- trunk/numpy/distutils/fcompiler/none.py 2007-05-25 11:20:02 UTC (rev 3823) +++ trunk/numpy/distutils/fcompiler/none.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -1,20 +1,27 @@ from numpy.distutils.fcompiler import FCompiler +compilers = ['NoneFCompiler'] + class NoneFCompiler(FCompiler): compiler_type = 'none' + description = 'Fake Fortran compiler' - executables = {'compiler_f77':['/path/to/nowhere/none'], - 'compiler_f90':['/path/to/nowhere/none'], - 'compiler_fix':['/path/to/nowhere/none'], - 'linker_so':['/path/to/nowhere/none'], - 'archiver':['/path/to/nowhere/none'], - 'ranlib':['/path/to/nowhere/none'], - 'version_cmd':['/path/to/nowhere/none'], + executables = {'compiler_f77' : None, + 'compiler_f90' : None, + 'compiler_fix' : None, + 'linker_so' : None, + 'linker_exe' : None, + 'archiver' : None, + 'ranlib' : None, + 'version_cmd' : None, } + def find_executables(self): + pass + if __name__ == '__main__': from distutils import log log.set_verbosity(2) Modified: trunk/numpy/distutils/fcompiler/pg.py =================================================================== --- trunk/numpy/distutils/fcompiler/pg.py 2007-05-25 11:20:02 UTC (rev 3823) +++ trunk/numpy/distutils/fcompiler/pg.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -7,13 +7,16 @@ from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler +compilers = ['PGroupFCompiler'] + class PGroupFCompiler(FCompiler): compiler_type = 'pg' + description = 'Portland Group Fortran Compiler' version_pattern = r'\s*pg(f77|f90|hpf) (?P[\d.-]+).*' executables = { - 'version_cmd' : ["pgf77", "-V 2>/dev/null"], + 'version_cmd' : ["", "-V 2>/dev/null"], 'compiler_f77' : ["pgf77"], 'compiler_fix' : ["pgf90", "-Mfixed"], 'compiler_f90' : ["pgf90"], Modified: trunk/numpy/distutils/fcompiler/sun.py =================================================================== --- trunk/numpy/distutils/fcompiler/sun.py 2007-05-25 11:20:02 UTC (rev 3823) +++ trunk/numpy/distutils/fcompiler/sun.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -5,20 +5,23 @@ from numpy.distutils.ccompiler import simple_version_match from numpy.distutils.fcompiler import FCompiler +compilers = ['SunFCompiler'] + class SunFCompiler(FCompiler): compiler_type = 'sun' + description = 'Sun or Forte Fortran 95 Compiler' # ex: # f90: Sun WorkShop 6 update 2 Fortran 95 6.2 Patch 111690-10 2003/08/28 version_match = simple_version_match( start=r'f9[05]: (Sun|Forte|WorkShop).*Fortran 95') executables = { - 'version_cmd' : ["f90", "-V"], + 'version_cmd' : ["", "-V"], 'compiler_f77' : ["f90"], 'compiler_fix' : ["f90", "-fixed"], 'compiler_f90' : ["f90"], - 'linker_so' : ["f90","-Bdynamic","-G"], + 'linker_so' : ["","-Bdynamic","-G"], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } Modified: 
trunk/numpy/distutils/fcompiler/vast.py =================================================================== --- trunk/numpy/distutils/fcompiler/vast.py 2007-05-25 11:20:02 UTC (rev 3823) +++ trunk/numpy/distutils/fcompiler/vast.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -4,9 +4,12 @@ from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler.gnu import GnuFCompiler +compilers = ['VastFCompiler'] + class VastFCompiler(GnuFCompiler): compiler_type = 'vast' + description = 'Pacific-Sierra Research Fortran 90 Compiler' version_pattern = r'\s*Pacific-Sierra Research vf90 '\ '(Personal|Professional)\s+(?P[^\s]*)' @@ -19,7 +22,7 @@ 'compiler_f77' : ["g77"], 'compiler_fix' : ["f90", "-Wv,-ya"], 'compiler_f90' : ["f90"], - 'linker_so' : ["f90"], + 'linker_so' : [""], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } @@ -31,8 +34,8 @@ def get_version_cmd(self): f90 = self.compiler_f90[0] - d,b = os.path.split(f90) - vf90 = os.path.join(d,'v'+b) + d, b = os.path.split(f90) + vf90 = os.path.join(d, 'v'+b) return vf90 def get_flags_arch(self): Modified: trunk/numpy/distutils/interactive.py =================================================================== --- trunk/numpy/distutils/interactive.py 2007-05-25 11:20:02 UTC (rev 3823) +++ trunk/numpy/distutils/interactive.py 2007-05-25 11:41:16 UTC (rev 3824) @@ -19,7 +19,7 @@ def show_fortran_compilers(*args): from fcompiler import show_fcompilers - show_fcompilers({}) + show_fcompilers() def show_compilers(*args): from distutils.ccompiler import show_compilers From numpy-svn at scipy.org Fri May 25 07:41:56 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 25 May 2007 06:41:56 -0500 (CDT) Subject: [Numpy-svn] r3825 - trunk/numpy/distutils/command Message-ID: <20070525114156.AC76D39C03E@new.scipy.org> Author: cookedm Date: 2007-05-25 06:41:55 -0500 (Fri, 25 May 2007) New Revision: 3825 Modified: trunk/numpy/distutils/command/build_src.py Log: merge from distutils-revamp branch (step 3) - minor command/build_src cleanup Modified: trunk/numpy/distutils/command/build_src.py =================================================================== --- trunk/numpy/distutils/command/build_src.py 2007-05-25 11:41:16 UTC (rev 3824) +++ trunk/numpy/distutils/command/build_src.py 2007-05-25 11:41:55 UTC (rev 3825) @@ -10,6 +10,15 @@ from distutils.util import get_platform from distutils.errors import DistutilsError, DistutilsSetupError +try: + from Pyrex.Compiler import Main + have_pyrex = True +except ImportError: + have_pyrex = False + +# this import can't be done here, as it uses numpy stuff only available +# after it's installed +#import numpy.f2py from numpy.distutils import log from numpy.distutils.misc_util import fortran_ext_match, \ appendpath, is_string, is_sequence @@ -56,7 +65,6 @@ self.swig_opts = None self.swig_cpp = None self.swig = None - return def finalize_options(self): self.set_undefined_options('build', @@ -95,7 +103,7 @@ self.swig_opts = self.swigflags self.swigflags = None - if self.swig_opts is None: + if self.swig_opts is None: self.swig_opts = [] else: self.swig_opts = splitcmdline(self.swig_opts) @@ -115,20 +123,17 @@ else: log.info('using "%s=%s" option from build_ext command' % (o,v)) setattr(self, c, v) - return def run(self): if not (self.extensions or self.libraries): return self.build_sources() - return - def build_sources(self): if self.inplace: - self.get_package_dir = self.get_finalized_command('build_py')\ - .get_package_dir + self.get_package_dir = \ + 
self.get_finalized_command('build_py').get_package_dir self.build_py_modules_sources() @@ -143,8 +148,6 @@ self.build_data_files_sources() - return - def build_data_files_sources(self): if not self.data_files: return @@ -179,7 +182,6 @@ else: raise TypeError(repr(data)) self.data_files[:] = new_data_files - return def build_py_modules_sources(self): if not self.py_modules: @@ -206,7 +208,6 @@ else: new_py_modules.append(source) self.py_modules[:] = new_py_modules - return def build_library_sources(self, lib_name, build_info): sources = list(build_info.get('sources',[])) @@ -274,8 +275,6 @@ ext.sources = sources - return - def generate_sources(self, sources, extension): new_sources = [] func_sources = [] @@ -370,12 +369,6 @@ return new_sources def pyrex_sources(self, sources, extension): - have_pyrex = False - try: - import Pyrex - have_pyrex = True - except ImportError: - pass new_sources = [] ext_name = extension.name.split('.')[-1] for source in sources: @@ -485,8 +478,9 @@ if (self.force or newer_group(depends, target_file,'newer')) \ and not skip_f2py: log.info("f2py: %s" % (source)) - import numpy.f2py as f2py2e - f2py2e.run_main(f2py_options + ['--build-dir',target_dir,source]) + import numpy.f2py + numpy.f2py.run_main(f2py_options + + ['--build-dir',target_dir,source]) else: log.debug(" skipping '%s' f2py interface (up-to-date)" % (source)) else: @@ -501,10 +495,10 @@ depends = f_sources + extension.depends if (self.force or newer_group(depends, target_file, 'newer')) \ and not skip_f2py: - import numpy.f2py as f2py2e log.info("f2py:> %s" % (target_file)) self.mkpath(target_dir) - f2py2e.run_main(f2py_options + ['--lower', + import numpy.f2py + numpy.f2py.run_main(f2py_options + ['--lower', '--build-dir',target_dir]+\ ['-m',ext_name]+f_sources) else: @@ -524,8 +518,8 @@ extension.include_dirs.append(self.build_src) if not skip_f2py: - import numpy.f2py as f2py2e - d = os.path.dirname(f2py2e.__file__) + import numpy.f2py + d = os.path.dirname(numpy.f2py.__file__) source_c = os.path.join(d,'src','fortranobject.c') source_h = os.path.join(d,'src','fortranobject.h') if newer(source_c,target_c) or newer(source_h,target_h): From numpy-svn at scipy.org Fri May 25 10:47:45 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 25 May 2007 09:47:45 -0500 (CDT) Subject: [Numpy-svn] r3826 - trunk/numpy/distutils Message-ID: <20070525144745.D0C0339C030@new.scipy.org> Author: cookedm Date: 2007-05-25 09:47:42 -0500 (Fri, 25 May 2007) New Revision: 3826 Modified: trunk/numpy/distutils/exec_command.py trunk/numpy/distutils/log.py trunk/numpy/distutils/misc_util.py Log: Add a numpy.distutils.log.good function, which when WARN messages would be logged, logs a "nice" anti-warn version. Use this for finding executables to report when we do actually find one. 
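For reference, a minimal sketch of how the new log.good helper is meant to be called, pieced together from the hunks below (log.good, log.warn and the green/WARN behaviour come from this patch; the gfortran path and the g77 name are hypothetical examples, not part of the commit):

    from numpy.distutils import log

    exe = '/usr/bin/gfortran'   # hypothetical path, e.g. what find_executable() located
    # printed in green, and only when WARN-level messages would be shown
    log.good('Found executable %s' % exe)
    # the ordinary warning path, for contrast
    log.warn('Could not locate executable %s' % 'g77')
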
Modified: trunk/numpy/distutils/exec_command.py =================================================================== --- trunk/numpy/distutils/exec_command.py 2007-05-25 11:41:55 UTC (rev 3825) +++ trunk/numpy/distutils/exec_command.py 2007-05-25 14:47:42 UTC (rev 3826) @@ -159,7 +159,7 @@ if not os.path.islink(f_ext): f_ext = realpath(f_ext) if os.path.isfile(f_ext) and os.access(f_ext, os.X_OK): - log.debug('Found executable %s' % f_ext) + log.good('Found executable %s' % f_ext) return f_ext log.warn('Could not locate executable %s' % orig_exe) Modified: trunk/numpy/distutils/log.py =================================================================== --- trunk/numpy/distutils/log.py 2007-05-25 11:41:55 UTC (rev 3825) +++ trunk/numpy/distutils/log.py 2007-05-25 14:47:42 UTC (rev 3826) @@ -4,7 +4,7 @@ from distutils.log import * from distutils.log import Log as old_Log from distutils.log import _global_log -from misc_util import red_text, yellow_text, cyan_text, is_sequence, is_string +from misc_util import red_text, yellow_text, cyan_text, green_text, is_sequence, is_string def _fix_args(args,flag=1): @@ -22,8 +22,29 @@ else: print _global_color_map[level](msg) sys.stdout.flush() + + def good(self, msg, *args): + """If we'd log WARN messages, log this message as a 'nice' anti-warn + message. + """ + if WARN >= self.threshold: + if args: + print green_text(msg % _fix_args(args)) + else: + print green_text(msg) + sys.stdout.flush() _global_log.__class__ = Log +good = _global_log.good + +def set_threshold(level): + prev_level = _global_log.threshold + if prev_level > DEBUG: + # If we're running at DEBUG, don't change the threshold, as there's + # likely a good reason why we're running at this level. + _global_log.threshold = level + return prev_level + def set_verbosity(v): prev_level = _global_log.threshold if v < 0: @@ -44,4 +65,4 @@ FATAL:red_text } -set_verbosity(1) +set_verbosity(INFO) Modified: trunk/numpy/distutils/misc_util.py =================================================================== --- trunk/numpy/distutils/misc_util.py 2007-05-25 11:41:55 UTC (rev 3825) +++ trunk/numpy/distutils/misc_util.py 2007-05-25 14:47:42 UTC (rev 3826) @@ -65,7 +65,7 @@ # (likely we're building an egg) d = os.path.abspath('.') # hmm, should we use sys.argv[0] like in __builtin__ case? 
- + if parent_path is not None: d = rel_path(d, parent_path) @@ -219,18 +219,37 @@ return 0 if terminal_has_colors(): - def red_text(s): return '\x1b[31m%s\x1b[0m'%s - def green_text(s): return '\x1b[32m%s\x1b[0m'%s - def yellow_text(s): return '\x1b[33m%s\x1b[0m'%s - def blue_text(s): return '\x1b[34m%s\x1b[0m'%s - def cyan_text(s): return '\x1b[35m%s\x1b[0m'%s + _colour_codes = dict(black=0, red=1, green=2, yellow=3, + blue=4, magenta=5, cyan=6, white=7) + def colour_text(s, fg=None, bg=None, bold=False): + seq = [] + if bold: + seq.append('1') + if fg: + fgcode = 30 + _colour_codes.get(fg.lower(), 0) + seq.append(str(fgcode)) + if bg: + bgcode = 40 + _colour_codes.get(fg.lower(), 7) + seq.append(str(bgcode)) + if seq: + return '\x1b[%sm%s\x1b[0m' % (';'.join(seq), s) + else: + return s else: - def red_text(s): return s - def green_text(s): return s - def yellow_text(s): return s - def cyan_text(s): return s - def blue_text(s): return s + def colour_text(s, fg=None, bg=None): + return s +def red_text(s): + return colour_text(s, 'red') +def green_text(s): + return colour_text(s, 'green') +def yellow_text(s): + return colour_text(s, 'yellow') +def cyan_text(s): + return colour_text(s, 'cyan') +def blue_text(s): + return colour_text(s, 'blue') + ######################### def cyg2win32(path): From numpy-svn at scipy.org Fri May 25 11:05:57 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 25 May 2007 10:05:57 -0500 (CDT) Subject: [Numpy-svn] r3827 - trunk/numpy/distutils/fcompiler Message-ID: <20070525150557.9879839C030@new.scipy.org> Author: cookedm Date: 2007-05-25 10:05:55 -0500 (Fri, 25 May 2007) New Revision: 3827 Modified: trunk/numpy/distutils/fcompiler/__init__.py Log: Add a few more log.debug's in fcompiler Modified: trunk/numpy/distutils/fcompiler/__init__.py =================================================================== --- trunk/numpy/distutils/fcompiler/__init__.py 2007-05-25 14:47:42 UTC (rev 3826) +++ trunk/numpy/distutils/fcompiler/__init__.py 2007-05-25 15:05:55 UTC (rev 3827) @@ -651,7 +651,7 @@ ## class FCompiler _default_compilers = ( - # Platform mappings + # sys.platform mappings ('win32', ('gnu','intelv','absoft','compaqv','intelev','gnu95','g95')), ('cygwin.*', ('gnu','intelv','absoft','compaqv','intelev','gnu95','g95')), ('linux.*', ('gnu','intel','lahey','pg','absoft','nag','vast','compaq', @@ -660,7 +660,7 @@ ('sunos.*', ('sun','gnu','gnu95','g95')), ('irix.*', ('mips','gnu','gnu95',)), ('aix.*', ('ibm','gnu','gnu95',)), - # OS mappings + # os.name mappings ('posix', ('gnu','gnu95',)), ('nt', ('gnu','gnu95',)), ('mac', ('gnu','gnu95',)), @@ -690,7 +690,8 @@ klass, klass.description) -def _find_existing_fcompiler(compiler_types, osname=None, platform=None, +def _find_existing_fcompiler(compiler_types, + osname=None, platform=None, requiref90=False): from numpy.distutils.core import get_distribution dist = get_distribution(always=True) @@ -716,9 +717,9 @@ raise ValueError('%s does not support compiling f90 codes, ' 'skipping.' 
% (c.__class__.__name__)) except DistutilsModuleError: - pass + log.debug("_find_existing_fcompiler: compiler_type='%s' raised DistutilsModuleError", compiler_type) except CompilerNotFound: - pass + log.debug("_find_existing_fcompiler: compiler_type='%s' not found", compiler_type) if v is not None: return compiler_type return None @@ -802,13 +803,12 @@ for compiler in platform_compilers: v = None log.set_verbosity(-2) - log.set_verbosity(-2) try: c = new_fcompiler(compiler=compiler, verbose=dist.verbose) c.customize(dist) v = c.get_version() except (DistutilsModuleError, CompilerNotFound): - pass + log.debug("show_fcompilers: %s not found" % (compiler,)) if v is None: From numpy-svn at scipy.org Fri May 25 11:17:21 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 25 May 2007 10:17:21 -0500 (CDT) Subject: [Numpy-svn] r3828 - trunk/numpy/distutils/fcompiler Message-ID: <20070525151721.D961D39C030@new.scipy.org> Author: cookedm Date: 2007-05-25 10:17:20 -0500 (Fri, 25 May 2007) New Revision: 3828 Modified: trunk/numpy/distutils/fcompiler/__init__.py Log: Fix at least one bug in fcompiler introduced by me Modified: trunk/numpy/distutils/fcompiler/__init__.py =================================================================== --- trunk/numpy/distutils/fcompiler/__init__.py 2007-05-25 15:05:55 UTC (rev 3827) +++ trunk/numpy/distutils/fcompiler/__init__.py 2007-05-25 15:17:20 UTC (rev 3828) @@ -245,11 +245,11 @@ return None f90 = set_exe('compiler_f90') - if not f90: - raise CompilerNotFound('f90') +# if not f90: +# raise CompilerNotFound('f90') f77 = set_exe('compiler_f77', f90=f90) if not f77: - raise CompilerNotFound('f90') + raise CompilerNotFound('f77') set_exe('compiler_fix', f90=f90) set_exe('linker_so', f77=f77, f90=f90) @@ -273,8 +273,8 @@ f90 = self.executables.get('compiler_f90') if f90 is not None: f90 = f90[0] - if cmd==f90: - cmd = self.compiler_f90[0] + if cmd == f90: + cmd = self.compiler_f90[0] return cmd def get_linker_so(self): @@ -285,14 +285,14 @@ ln = self.executables.get('linker_so') if ln is not None: ln = ln[0] - if ln==f77: + if ln == f77: ln = self.compiler_f77[0] else: f90 = self.executables.get('compiler_f90') if f90 is not None: f90 = f90[0] - if ln==f90: - ln = self.compiler_f90[0] + if ln == f90: + ln = self.compiler_f90[0] return ln def get_linker_exe(self): @@ -303,14 +303,14 @@ ln = self.executables.get('linker_exe') if ln is not None: ln = ln[0] - if ln==f77: + if ln == f77: ln = self.compiler_f77[0] else: f90 = self.executables.get('compiler_f90') if f90 is not None: f90 = f90[0] - if ln==f90: - ln = self.compiler_f90[0] + if ln == f90: + ln = self.compiler_f90[0] return ln def get_flags(self): @@ -807,8 +807,9 @@ c = new_fcompiler(compiler=compiler, verbose=dist.verbose) c.customize(dist) v = c.get_version() - except (DistutilsModuleError, CompilerNotFound): + except (DistutilsModuleError, CompilerNotFound), e: log.debug("show_fcompilers: %s not found" % (compiler,)) + log.debug(repr(e)) if v is None: From numpy-svn at scipy.org Fri May 25 15:38:51 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 25 May 2007 14:38:51 -0500 (CDT) Subject: [Numpy-svn] r3829 - in trunk/numpy/distutils: . 
command fcompiler Message-ID: <20070525193851.F41C239C09C@new.scipy.org> Author: cookedm Date: 2007-05-25 14:38:39 -0500 (Fri, 25 May 2007) New Revision: 3829 Modified: trunk/numpy/distutils/command/build_clib.py trunk/numpy/distutils/command/build_src.py trunk/numpy/distutils/command/config_compiler.py trunk/numpy/distutils/exec_command.py trunk/numpy/distutils/fcompiler/__init__.py trunk/numpy/distutils/fcompiler/absoft.py trunk/numpy/distutils/fcompiler/compaq.py trunk/numpy/distutils/fcompiler/g95.py trunk/numpy/distutils/fcompiler/gnu.py trunk/numpy/distutils/fcompiler/hpux.py trunk/numpy/distutils/fcompiler/ibm.py trunk/numpy/distutils/fcompiler/intel.py trunk/numpy/distutils/fcompiler/lahey.py trunk/numpy/distutils/fcompiler/mips.py trunk/numpy/distutils/fcompiler/nag.py trunk/numpy/distutils/fcompiler/none.py trunk/numpy/distutils/fcompiler/pg.py trunk/numpy/distutils/fcompiler/sun.py trunk/numpy/distutils/fcompiler/vast.py trunk/numpy/distutils/intelccompiler.py trunk/numpy/distutils/lib2def.py trunk/numpy/distutils/line_endings.py trunk/numpy/distutils/unixccompiler.py Log: distutils: clean up imports (found by running pyflakes) Modified: trunk/numpy/distutils/command/build_clib.py =================================================================== --- trunk/numpy/distutils/command/build_clib.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/command/build_clib.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -2,8 +2,10 @@ """ import os +from glob import glob from distutils.command.build_clib import build_clib as old_build_clib -from distutils.errors import DistutilsSetupError, DistutilsError +from distutils.errors import DistutilsSetupError, DistutilsError, \ + DistutilsFileError from numpy.distutils import log from distutils.dep_util import newer_group Modified: trunk/numpy/distutils/command/build_src.py =================================================================== --- trunk/numpy/distutils/command/build_src.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/command/build_src.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -11,7 +11,7 @@ from distutils.errors import DistutilsError, DistutilsSetupError try: - from Pyrex.Compiler import Main + import Pyrex.Compiler have_pyrex = True except ImportError: have_pyrex = False @@ -384,11 +384,11 @@ if have_pyrex: log.info("pyrexc:> %s" % (target_file)) self.mkpath(target_dir) - from Pyrex.Compiler import Main - options = Main.CompilationOptions( - defaults=Main.default_options, + options = Pyrex.Compiler.Main.CompilationOptions( + defaults=Pyrex.Compiler.Main.default_options, output_file=target_file) - pyrex_result = Main.compile(source, options=options) + pyrex_result = Pyrex.Compiler.Main.compile(source, + options=options) if pyrex_result.num_errors != 0: raise DistutilsError,"%d errors while compiling %r with Pyrex" \ % (pyrex_result.num_errors, source) Modified: trunk/numpy/distutils/command/config_compiler.py =================================================================== --- trunk/numpy/distutils/command/config_compiler.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/command/config_compiler.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -1,4 +1,3 @@ -import sys from distutils.core import Command from numpy.distutils import log Modified: trunk/numpy/distutils/exec_command.py =================================================================== --- trunk/numpy/distutils/exec_command.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/exec_command.py 2007-05-25 19:38:39 UTC (rev 3829) @@ 
-49,7 +49,6 @@ __all__ = ['exec_command','find_executable'] import os -import re import sys import tempfile Modified: trunk/numpy/distutils/fcompiler/__init__.py =================================================================== --- trunk/numpy/distutils/fcompiler/__init__.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/fcompiler/__init__.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -18,13 +18,13 @@ from distutils.sysconfig import get_config_var, get_python_lib from distutils.fancy_getopt import FancyGetopt -from distutils.errors import DistutilsModuleError,DistutilsArgError,\ - DistutilsExecError,CompileError,LinkError,DistutilsPlatformError +from distutils.errors import DistutilsModuleError, \ + DistutilsExecError, CompileError, LinkError, DistutilsPlatformError from distutils.util import split_quoted from numpy.distutils.ccompiler import CCompiler, gen_lib_options from numpy.distutils import log -from numpy.distutils.misc_util import is_string, is_sequence +from numpy.distutils.misc_util import is_string from numpy.distutils.environment import EnvironmentConfig from numpy.distutils.exec_command import find_executable from distutils.spawn import _nt_quote_args @@ -706,7 +706,7 @@ new_compiler = c.suggested_f90_compiler if new_compiler: log.warn('Trying %r compiler as suggested by %r ' - 'compiler for f90 support.' % (compiler, + 'compiler for f90 support.' % (compiler_type, new_compiler)) c = new_fcompiler(plat=platform, compiler=new_compiler) c.customize(dist) @@ -811,7 +811,6 @@ log.debug("show_fcompilers: %s not found" % (compiler,)) log.debug(repr(e)) - if v is None: compilers_na.append(("fcompiler="+compiler, None, fcompiler_class[compiler][2])) Modified: trunk/numpy/distutils/fcompiler/absoft.py =================================================================== --- trunk/numpy/distutils/fcompiler/absoft.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/fcompiler/absoft.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -7,7 +7,6 @@ # generated extension modules (works for f2py v2.45.241_1936 and up) import os -import sys from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler, dummy_fortran_file Modified: trunk/numpy/distutils/fcompiler/compaq.py =================================================================== --- trunk/numpy/distutils/fcompiler/compaq.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/fcompiler/compaq.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -4,7 +4,6 @@ import os import sys -from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler compilers = ['CompaqFCompiler'] Modified: trunk/numpy/distutils/fcompiler/g95.py =================================================================== --- trunk/numpy/distutils/fcompiler/g95.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/fcompiler/g95.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -1,9 +1,5 @@ # http://g95.sourceforge.net/ -import os -import sys - -from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler compilers = ['G95FCompiler'] @@ -43,8 +39,6 @@ if __name__ == '__main__': from distutils import log log.set_verbosity(2) - from numpy.distutils.fcompiler import new_fcompiler - #compiler = new_fcompiler(compiler='g95') compiler = G95FCompiler() compiler.customize() print compiler.get_version() Modified: trunk/numpy/distutils/fcompiler/gnu.py =================================================================== --- trunk/numpy/distutils/fcompiler/gnu.py 2007-05-25 15:17:20 UTC (rev 3828) 
+++ trunk/numpy/distutils/fcompiler/gnu.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -6,7 +6,7 @@ from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler from numpy.distutils.exec_command import exec_command -from numpy.distutils.misc_util import mingw32, msvc_runtime_library +from numpy.distutils.misc_util import msvc_runtime_library compilers = ['GnuFCompiler', 'Gnu95FCompiler'] @@ -320,8 +320,6 @@ if __name__ == '__main__': from distutils import log log.set_verbosity(2) - from numpy.distutils.fcompiler import new_fcompiler - #compiler = new_fcompiler(compiler='gnu') compiler = GnuFCompiler() compiler.customize() print compiler.get_version() Modified: trunk/numpy/distutils/fcompiler/hpux.py =================================================================== --- trunk/numpy/distutils/fcompiler/hpux.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/fcompiler/hpux.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -1,7 +1,3 @@ -import os -import sys - -from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler compilers = ['HPUXFCompiler'] Modified: trunk/numpy/distutils/fcompiler/ibm.py =================================================================== --- trunk/numpy/distutils/fcompiler/ibm.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/fcompiler/ibm.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -5,7 +5,6 @@ from numpy.distutils.fcompiler import FCompiler from numpy.distutils.exec_command import exec_command, find_executable from distutils import log -from distutils.sysconfig import get_python_lib compilers = ['IBMFCompiler'] @@ -91,10 +90,7 @@ return ['-O5'] if __name__ == '__main__': - from distutils import log log.set_verbosity(2) - from numpy.distutils.fcompiler import new_fcompiler - #compiler = new_fcompiler(compiler='ibm') - compiler = IbmFCompiler() + compiler = IBMFCompiler() compiler.customize() print compiler.get_version() Modified: trunk/numpy/distutils/fcompiler/intel.py =================================================================== --- trunk/numpy/distutils/fcompiler/intel.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/fcompiler/intel.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -3,9 +3,6 @@ # of intele # http://developer.intel.com/software/products/compilers/flin/ -import os -import sys - from numpy.distutils.cpuinfo import cpu from numpy.distutils.ccompiler import simple_version_match from numpy.distutils.fcompiler import FCompiler, dummy_fortran_file Modified: trunk/numpy/distutils/fcompiler/lahey.py =================================================================== --- trunk/numpy/distutils/fcompiler/lahey.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/fcompiler/lahey.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -1,7 +1,5 @@ import os -import sys -from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler compilers = ['LaheyFCompiler'] Modified: trunk/numpy/distutils/fcompiler/mips.py =================================================================== --- trunk/numpy/distutils/fcompiler/mips.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/fcompiler/mips.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -1,6 +1,3 @@ -import os -import sys - from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler Modified: trunk/numpy/distutils/fcompiler/nag.py =================================================================== --- trunk/numpy/distutils/fcompiler/nag.py 2007-05-25 15:17:20 UTC (rev 3828) +++ 
trunk/numpy/distutils/fcompiler/nag.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -1,7 +1,4 @@ -import os import sys - -from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler compilers = ['NAGFCompiler'] Modified: trunk/numpy/distutils/fcompiler/none.py =================================================================== --- trunk/numpy/distutils/fcompiler/none.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/fcompiler/none.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -25,7 +25,6 @@ if __name__ == '__main__': from distutils import log log.set_verbosity(2) - from numpy.distutils.fcompiler import new_fcompiler compiler = NoneFCompiler() compiler.customize() print compiler.get_version() Modified: trunk/numpy/distutils/fcompiler/pg.py =================================================================== --- trunk/numpy/distutils/fcompiler/pg.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/fcompiler/pg.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -1,10 +1,6 @@ # http://www.pgroup.com -import os -import sys - -from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler compilers = ['PGroupFCompiler'] Modified: trunk/numpy/distutils/fcompiler/sun.py =================================================================== --- trunk/numpy/distutils/fcompiler/sun.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/fcompiler/sun.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -1,7 +1,3 @@ -import os -import sys - -from numpy.distutils.cpuinfo import cpu from numpy.distutils.ccompiler import simple_version_match from numpy.distutils.fcompiler import FCompiler Modified: trunk/numpy/distutils/fcompiler/vast.py =================================================================== --- trunk/numpy/distutils/fcompiler/vast.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/fcompiler/vast.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -1,7 +1,5 @@ import os -import sys -from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler.gnu import GnuFCompiler compilers = ['VastFCompiler'] Modified: trunk/numpy/distutils/intelccompiler.py =================================================================== --- trunk/numpy/distutils/intelccompiler.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/intelccompiler.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -1,5 +1,4 @@ -import os from distutils.unixccompiler import UnixCCompiler from numpy.distutils.exec_command import find_executable Modified: trunk/numpy/distutils/lib2def.py =================================================================== --- trunk/numpy/distutils/lib2def.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/lib2def.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -1,7 +1,6 @@ import re import sys import os -import string __doc__ = """This module generates a DEF file from the symbols in an MSVC-compiled DLL import library. 
It correctly discriminates between @@ -21,8 +20,6 @@ __version__ = '0.1a' -import sys - py_ver = "%d%d" % tuple(sys.version_info[:2]) DEFAULT_NM = 'nm -Cs' Modified: trunk/numpy/distutils/line_endings.py =================================================================== --- trunk/numpy/distutils/line_endings.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/line_endings.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -71,5 +71,4 @@ return modified_files if __name__ == "__main__": - import sys dos2unix_dir(sys.argv[1]) Modified: trunk/numpy/distutils/unixccompiler.py =================================================================== --- trunk/numpy/distutils/unixccompiler.py 2007-05-25 15:17:20 UTC (rev 3828) +++ trunk/numpy/distutils/unixccompiler.py 2007-05-25 19:38:39 UTC (rev 3829) @@ -3,13 +3,11 @@ """ import os -import sys import new -from distutils.errors import DistutilsExecError, LinkError, CompileError +from distutils.errors import DistutilsExecError, CompileError from distutils.unixccompiler import * - import log # Note that UnixCCompiler._compile appeared in Python 2.3 From numpy-svn at scipy.org Sun May 27 07:18:37 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sun, 27 May 2007 06:18:37 -0500 (CDT) Subject: [Numpy-svn] r3830 - tags/1.0.3/numpy/core/include/numpy Message-ID: <20070527111837.63E6639C00B@new.scipy.org> Author: oliphant Date: 2007-05-27 06:18:34 -0500 (Sun, 27 May 2007) New Revision: 3830 Modified: tags/1.0.3/numpy/core/include/numpy/ndarrayobject.h Log: Add parenthesis around GETPTR macros. Modified: tags/1.0.3/numpy/core/include/numpy/ndarrayobject.h =================================================================== --- tags/1.0.3/numpy/core/include/numpy/ndarrayobject.h 2007-05-25 19:38:39 UTC (rev 3829) +++ tags/1.0.3/numpy/core/include/numpy/ndarrayobject.h 2007-05-27 11:18:34 UTC (rev 3830) @@ -1923,23 +1923,23 @@ inline the constants inside a for loop making it a moot point */ -#define PyArray_GETPTR1(obj, i) (void *)(PyArray_BYTES(obj) + \ - (i)*PyArray_STRIDES(obj)[0]) +#define PyArray_GETPTR1(obj, i) ((void *)(PyArray_BYTES(obj) + \ + (i)*PyArray_STRIDES(obj)[0])) -#define PyArray_GETPTR2(obj, i, j) (void *)(PyArray_BYTES(obj) + \ +#define PyArray_GETPTR2(obj, i, j) ((void *)(PyArray_BYTES(obj) + \ (i)*PyArray_STRIDES(obj)[0] + \ - (j)*PyArray_STRIDES(obj)[1]) + (j)*PyArray_STRIDES(obj)[1])) -#define PyArray_GETPTR3(obj, i, j, k) (void *)(PyArray_BYTES(obj) + \ +#define PyArray_GETPTR3(obj, i, j, k) ((void *)(PyArray_BYTES(obj) + \ (i)*PyArray_STRIDES(obj)[0] + \ (j)*PyArray_STRIDES(obj)[1] + \ - (k)*PyArray_STRIDES(obj)[2]) \ + (k)*PyArray_STRIDES(obj)[2])) \ -#define PyArray_GETPTR4(obj, i, j, k, l) (void *)(PyArray_BYTES(obj) + \ +#define PyArray_GETPTR4(obj, i, j, k, l) ((void *)(PyArray_BYTES(obj) + \ (i)*PyArray_STRIDES(obj)[0] + \ (j)*PyArray_STRIDES(obj)[1] + \ (k)*PyArray_STRIDES(obj)[2] + \ - (l)*PyArray_STRIDES(obj)[3]) + (l)*PyArray_STRIDES(obj)[3])) #define PyArray_XDECREF_ERR(obj) \ if (obj && (PyArray_FLAGS(obj) & NPY_UPDATEIFCOPY)) { \ From numpy-svn at scipy.org Mon May 28 07:52:15 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 28 May 2007 06:52:15 -0500 (CDT) Subject: [Numpy-svn] r3831 - in trunk/numpy/lib: . tests Message-ID: <20070528115215.2FD2639C099@new.scipy.org> Author: stefan Date: 2007-05-28 06:51:55 -0500 (Mon, 28 May 2007) New Revision: 3831 Modified: trunk/numpy/lib/function_base.py trunk/numpy/lib/tests/test_function_base.py Log: Select should not modify output arguments. 
Add test for basic select functionality. Modified: trunk/numpy/lib/function_base.py =================================================================== --- trunk/numpy/lib/function_base.py 2007-05-27 11:18:34 UTC (rev 3830) +++ trunk/numpy/lib/function_base.py 2007-05-28 11:51:55 UTC (rev 3831) @@ -286,9 +286,7 @@ def average(a, axis=None, weights=None, returned=False): - """average(a, axis=None weights=None, returned=False) - - Average the array over the given axis. If the axis is None, + """Average the array over the given axis. If the axis is None, average over all dimensions of the array. Equivalent to a.mean(axis) and to @@ -452,7 +450,7 @@ n2 = len(choicelist) if n2 != n: raise ValueError, "list of cases must be same length as list of conditions" - choicelist.insert(0, default) + choicelist = [default] + choicelist S = 0 pfac = 1 for k in range(1, n+1): Modified: trunk/numpy/lib/tests/test_function_base.py =================================================================== --- trunk/numpy/lib/tests/test_function_base.py 2007-05-27 11:18:34 UTC (rev 3830) +++ trunk/numpy/lib/tests/test_function_base.py 2007-05-28 11:51:55 UTC (rev 3831) @@ -64,6 +64,20 @@ desired = array([3.,4.,5.]) assert_array_equal(actual, desired) +class test_select(NumpyTestCase): + def check_basic(self): + choices = [array([1,2,3]), + array([4,5,6]), + array([7,8,9])] + conditions = [array([0,0,0]), + array([0,1,0]), + array([0,0,1])] + assert_array_equal(select(conditions,choices,default=15), + [15,5,9]) + + assert_equal(len(choices),3) + assert_equal(len(conditions),3) + class test_logspace(NumpyTestCase): def check_basic(self): y = logspace(0,6) @@ -431,4 +445,4 @@ assert_array_equal(res[i],desired[i]) if __name__ == "__main__": - NumpyTest('numpy.lib.function_base').run() + NumpyTest().run() From numpy-svn at scipy.org Mon May 28 08:55:53 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 28 May 2007 07:55:53 -0500 (CDT) Subject: [Numpy-svn] r3832 - in trunk/numpy/lib: . tests Message-ID: <20070528125553.6080539C05B@new.scipy.org> Author: stefan Date: 2007-05-28 07:55:25 -0500 (Mon, 28 May 2007) New Revision: 3832 Modified: trunk/numpy/lib/function_base.py trunk/numpy/lib/tests/test_function_base.py Log: Clean up select docstring. Modified: trunk/numpy/lib/function_base.py =================================================================== --- trunk/numpy/lib/function_base.py 2007-05-28 11:51:55 UTC (rev 3831) +++ trunk/numpy/lib/function_base.py 2007-05-28 12:55:25 UTC (rev 3832) @@ -418,33 +418,31 @@ return y def select(condlist, choicelist, default=0): - """ Return an array composed of different elements of choicelist + """Return an array composed of different elements in choicelist, depending on the list of conditions. - condlist is a list of condition arrays containing ones or zeros + :Parameters: + condlist : list of N boolean arrays of length M + The conditions C_0 through C_(N-1) which determine + from which vector the output elements are taken. + choicelist : list of N arrays of length M + Th vectors V_0 through V_(N-1), from which the output + elements are chosen. - choicelist is a list of choice arrays (of the "same" size as the - arrays in condlist). The result array has the "same" size as the - arrays in choicelist. If condlist is [c0, ..., cN-1] then choicelist - must be of length N. The elements of the choicelist can then be - represented as [v0, ..., vN-1]. The default choice if none of the - conditions are met is given as the default argument. 
+ :Returns: + output : 1-dimensional array of length M + The output at position m is the m-th element of the first + vector V_n for which C_n[m] is non-zero. Note that the + output depends on the order of conditions, since the + first satisfied condition is used. - The conditions are tested in order and the first one statisfied is - used to select the choice. In other words, the elements of the - output array are found from the following tree (notice the order of - the conditions matters): + Equivalent to: - if c0: v0 - elif c1: v1 - elif c2: v2 - ... - elif cN-1: vN-1 - else: default + output = [] + for m in range(M): + output += [V[m] for V,C in zip(values,cond) if C[m]] + or [default] - Note that one of the condition arrays must be large enough to handle - the largest array in the choice list. - """ n = len(condlist) n2 = len(choicelist) Modified: trunk/numpy/lib/tests/test_function_base.py =================================================================== --- trunk/numpy/lib/tests/test_function_base.py 2007-05-28 11:51:55 UTC (rev 3831) +++ trunk/numpy/lib/tests/test_function_base.py 2007-05-28 12:55:25 UTC (rev 3832) @@ -65,6 +65,12 @@ assert_array_equal(actual, desired) class test_select(NumpyTestCase): + def _select(self,cond,values,default=0): + output = [] + for m in range(len(cond)): + output += [V[m] for V,C in zip(values,cond) if C[m]] or [default] + return output + def check_basic(self): choices = [array([1,2,3]), array([4,5,6]), @@ -73,7 +79,7 @@ array([0,1,0]), array([0,0,1])] assert_array_equal(select(conditions,choices,default=15), - [15,5,9]) + self._select(conditions,choices,default=15)) assert_equal(len(choices),3) assert_equal(len(conditions),3) From numpy-svn at scipy.org Mon May 28 13:24:28 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 28 May 2007 12:24:28 -0500 (CDT) Subject: [Numpy-svn] r3833 - trunk/numpy/random Message-ID: <20070528172428.F39D939C080@new.scipy.org> Author: cookedm Date: 2007-05-28 12:24:26 -0500 (Mon, 28 May 2007) New Revision: 3833 Modified: trunk/numpy/random/setup.py Log: When checking for the _WIN32 preprocessor symbol, don't #error on failure Modified: trunk/numpy/random/setup.py =================================================================== --- trunk/numpy/random/setup.py 2007-05-28 12:55:25 UTC (rev 3832) +++ trunk/numpy/random/setup.py 2007-05-28 17:24:26 UTC (rev 3833) @@ -42,9 +42,8 @@ #ifdef _WIN32 return 0; #else -#error No _WIN32 + return 1; #endif - return -1; } """ From numpy-svn at scipy.org Mon May 28 13:47:50 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 28 May 2007 12:47:50 -0500 (CDT) Subject: [Numpy-svn] r3834 - in trunk/numpy: core distutils/command Message-ID: <20070528174750.7AEA439C065@new.scipy.org> Author: cookedm Date: 2007-05-28 12:47:47 -0500 (Mon, 28 May 2007) New Revision: 3834 Modified: trunk/numpy/core/setup.py trunk/numpy/distutils/command/build_src.py Log: Use log.info instead of print in setup.py's Modified: trunk/numpy/core/setup.py =================================================================== --- trunk/numpy/core/setup.py 2007-05-28 17:24:26 UTC (rev 3833) +++ trunk/numpy/core/setup.py 2007-05-28 17:47:47 UTC (rev 3834) @@ -2,6 +2,7 @@ import os import sys from os.path import join +from numpy.distutils import log from distutils.dep_util import newer FUNCTIONS_TO_CHECK = [ @@ -37,8 +38,7 @@ target = join(build_dir,'config.h') if newer(__file__,target): config_cmd = config.get_config_cmd() - print 'Generating',target - # + log.info('Generating 
%s',target) tc = generate_testcode(target) from distutils import sysconfig python_include = sysconfig.get_python_inc() @@ -145,7 +145,7 @@ sys.path.insert(0, codegen_dir) try: m = __import__(module_name) - print 'executing', script + log.info('executing %s', script) h_file, c_file, doc_file = m.generate_api(build_dir) finally: del sys.path[0] Modified: trunk/numpy/distutils/command/build_src.py =================================================================== --- trunk/numpy/distutils/command/build_src.py 2007-05-28 17:24:26 UTC (rev 3833) +++ trunk/numpy/distutils/command/build_src.py 2007-05-28 17:47:47 UTC (rev 3834) @@ -224,7 +224,8 @@ sources, h_files = self.filter_h_files(sources) if h_files: - print self.package,'- nothing done with h_files=',h_files + log.info('%s - nothing done with h_files = %s', + self.package, h_files) #for f in h_files: # self.distribution.headers.append((lib_name,f)) @@ -269,7 +270,8 @@ sources, h_files = self.filter_h_files(sources) if h_files: - print package,'- nothing done with h_files=',h_files + log.info('%s - nothing done with h_files = %s', + package, h_files) #for f in h_files: # self.distribution.headers.append((package,f)) From numpy-svn at scipy.org Mon May 28 13:48:17 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 28 May 2007 12:48:17 -0500 (CDT) Subject: [Numpy-svn] r3835 - trunk/numpy Message-ID: <20070528174817.E4E6A39C060@new.scipy.org> Author: cookedm Date: 2007-05-28 12:48:14 -0500 (Mon, 28 May 2007) New Revision: 3835 Modified: trunk/numpy/setup.py Log: numpy/setup.py shouldn't be run as a script Modified: trunk/numpy/setup.py =================================================================== --- trunk/numpy/setup.py 2007-05-28 17:47:47 UTC (rev 3834) +++ trunk/numpy/setup.py 2007-05-28 17:48:14 UTC (rev 3835) @@ -19,11 +19,4 @@ return config if __name__ == '__main__': - # Remove current working directory from sys.path - # to avoid importing numpy.distutils as Python std. 
distutils: - import os, sys - for cwd in ['','.',os.getcwd()]: - while cwd in sys.path: sys.path.remove(cwd) - - from numpy.distutils.core import setup - setup(configuration=configuration) + print 'This is the wrong setup.py file to run' From numpy-svn at scipy.org Mon May 28 13:48:53 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 28 May 2007 12:48:53 -0500 (CDT) Subject: [Numpy-svn] r3836 - trunk/numpy/f2py Message-ID: <20070528174853.3358D39C065@new.scipy.org> Author: cookedm Date: 2007-05-28 12:48:49 -0500 (Mon, 28 May 2007) New Revision: 3836 Modified: trunk/numpy/f2py/setup.py Log: Use log.debug instead of print in setup.py's Modified: trunk/numpy/f2py/setup.py =================================================================== --- trunk/numpy/f2py/setup.py 2007-05-28 17:48:14 UTC (rev 3835) +++ trunk/numpy/f2py/setup.py 2007-05-28 17:48:49 UTC (rev 3836) @@ -21,6 +21,7 @@ import os import sys from distutils.dep_util import newer +from numpy.distutils import log from numpy.distutils.core import setup from numpy.distutils.misc_util import Configuration @@ -48,7 +49,7 @@ f2py_exe = f2py_exe + '.py' target = os.path.join(build_dir,f2py_exe) if newer(__file__,target): - print 'Creating',target + log.info('Creating %s', target) f = open(target,'w') f.write('''\ #!/usr/bin/env %s @@ -83,7 +84,7 @@ config.add_scripts(generate_f2py_py) - print 'F2PY Version',config.get_version() + log.info('F2PY Version %s', config.get_version()) return config From numpy-svn at scipy.org Mon May 28 13:49:59 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 28 May 2007 12:49:59 -0500 (CDT) Subject: [Numpy-svn] r3837 - in trunk/numpy/distutils: . fcompiler Message-ID: <20070528174959.E554639C065@new.scipy.org> Author: cookedm Date: 2007-05-28 12:49:55 -0500 (Mon, 28 May 2007) New Revision: 3837 Modified: trunk/numpy/distutils/exec_command.py trunk/numpy/distutils/fcompiler/__init__.py trunk/numpy/distutils/fcompiler/absoft.py trunk/numpy/distutils/fcompiler/ibm.py trunk/numpy/distutils/fcompiler/intel.py trunk/numpy/distutils/misc_util.py Log: Better temporary file handling by using one temporary directory for numpy.distutils, and removing that at exit. Replaces using tempfile.mktemp. Modified: trunk/numpy/distutils/exec_command.py =================================================================== --- trunk/numpy/distutils/exec_command.py 2007-05-28 17:48:49 UTC (rev 3836) +++ trunk/numpy/distutils/exec_command.py 2007-05-28 17:49:55 UTC (rev 3837) @@ -50,14 +50,15 @@ import os import sys -import tempfile -from numpy.distutils.misc_util import is_sequence +from numpy.distutils.misc_util import is_sequence, make_temp_file +from numpy.distutils import log -############################################################ +def temp_file_name(): + fo, name = make_temp_file() + fo.close() + return name -from log import _global_log as log - ############################################################ def get_pythonexe(): @@ -263,17 +264,17 @@ else: command_str = command - tmpfile = tempfile.mktemp() + tmpfile = temp_file_name() stsfile = None if use_tee: - stsfile = tempfile.mktemp() + stsfile = temp_file_name() filter = '' if use_tee == 2: filter = r'| tr -cd "\n" | tr "\n" "."; echo' command_posix = '( %s ; echo $? > %s ) 2>&1 | tee %s %s'\ % (command_str,stsfile,tmpfile,filter) else: - stsfile = tempfile.mktemp() + stsfile = temp_file_name() command_posix = '( %s ; echo $? 
> %s ) > %s 2>&1'\ % (command_str,stsfile,tmpfile) #command_posix = '( %s ) > %s 2>&1' % (command_str,tmpfile) @@ -310,9 +311,9 @@ log.debug('_exec_command_python(...)') python_exe = get_pythonexe() - cmdfile = tempfile.mktemp() - stsfile = tempfile.mktemp() - outfile = tempfile.mktemp() + cmdfile = temp_file_name() + stsfile = temp_file_name() + outfile = temp_file_name() f = open(cmdfile,'w') f.write('import os\n') @@ -396,10 +397,10 @@ so_dup = os.dup(so_fileno) se_dup = os.dup(se_fileno) - outfile = tempfile.mktemp() + outfile = temp_file_name() fout = open(outfile,'w') if using_command: - errfile = tempfile.mktemp() + errfile = temp_file_name() ferr = open(errfile,'w') log.debug('Running %s(%s,%r,%r,os.environ)' \ @@ -588,7 +589,7 @@ def test_execute_in(**kws): pythonexe = get_pythonexe() - tmpfile = tempfile.mktemp() + tmpfile = temp_file_name() fn = os.path.basename(tmpfile) tmpdir = os.path.dirname(tmpfile) f = open(tmpfile,'w') Modified: trunk/numpy/distutils/fcompiler/__init__.py =================================================================== --- trunk/numpy/distutils/fcompiler/__init__.py 2007-05-28 17:48:49 UTC (rev 3836) +++ trunk/numpy/distutils/fcompiler/__init__.py 2007-05-28 17:49:55 UTC (rev 3837) @@ -24,7 +24,7 @@ from numpy.distutils.ccompiler import CCompiler, gen_lib_options from numpy.distutils import log -from numpy.distutils.misc_util import is_string +from numpy.distutils.misc_util import is_string, make_temp_file from numpy.distutils.environment import EnvironmentConfig from numpy.distutils.exec_command import find_executable from distutils.spawn import _nt_quote_args @@ -267,7 +267,7 @@ cmd = self.executables.get('version_cmd') if cmd is not None: cmd = cmd[0] - if cmd==f77: + if cmd == f77: cmd = self.compiler_f77[0] else: f90 = self.executables.get('compiler_f90') @@ -836,24 +836,14 @@ pretty_printer.print_help("Compilers not available on this platform:") print "For compiler details, run 'config_fc --verbose' setup command." + def dummy_fortran_file(): - import atexit - import tempfile - dummy_name = tempfile.mktemp()+'__dummy' - dummy = open(dummy_name+'.f','w') - dummy.write(" subroutine dummy()\n end\n") - dummy.close() - def rm_file(name=dummy_name,log_threshold=log._global_log.threshold): - save_th = log._global_log.threshold - log.set_threshold(log_threshold) - try: os.remove(name+'.f'); log.debug('removed '+name+'.f') - except OSError: pass - try: os.remove(name+'.o'); log.debug('removed '+name+'.o') - except OSError: pass - log.set_threshold(save_th) - atexit.register(rm_file) - return dummy_name + fo, name = make_temp_file(suffix='.f') + fo.write(" subroutine dummy()\n end\n") + fo.close() + return name[:-2] + is_f_file = re.compile(r'.*[.](for|ftn|f77|f)\Z',re.I).match _has_f_header = re.compile(r'-[*]-\s*fortran\s*-[*]-',re.I).search _has_f90_header = re.compile(r'-[*]-\s*f90\s*-[*]-',re.I).search Modified: trunk/numpy/distutils/fcompiler/absoft.py =================================================================== --- trunk/numpy/distutils/fcompiler/absoft.py 2007-05-28 17:48:49 UTC (rev 3836) +++ trunk/numpy/distutils/fcompiler/absoft.py 2007-05-28 17:49:55 UTC (rev 3837) @@ -30,8 +30,7 @@ # Note that fink installs g77 as f77, so need to use f90 for detection. 
executables = { - 'version_cmd' : ["", "-V -c %(fname)s.f -o %(fname)s.o" \ - % {'fname':cyg2win32(dummy_fortran_file())}], + 'version_cmd' : None, 'compiler_f77' : ["f77"], 'compiler_fix' : ["f90"], 'compiler_f90' : ["f90"], @@ -46,6 +45,10 @@ module_dir_switch = None module_include_switch = '-p' + def get_version_cmd(self): + f = cyg2win32(dummy_fortran_file()) + return ['', '-V', '-c', f+'.f', '-o', f+'.o'] + def get_flags_linker_so(self): if os.name=='nt': opt = ['/dll'] Modified: trunk/numpy/distutils/fcompiler/ibm.py =================================================================== --- trunk/numpy/distutils/fcompiler/ibm.py 2007-05-28 17:48:49 UTC (rev 3836) +++ trunk/numpy/distutils/fcompiler/ibm.py 2007-05-28 17:49:55 UTC (rev 3837) @@ -4,6 +4,7 @@ from numpy.distutils.fcompiler import FCompiler from numpy.distutils.exec_command import exec_command, find_executable +from numpy.distutils.misc_util import make_temp_file from distutils import log compilers = ['IBMFCompiler'] @@ -65,15 +66,13 @@ opt.append('-bshared') version = self.get_version(ok_status=[0,40]) if version is not None: - import tempfile if sys.platform.startswith('aix'): xlf_cfg = '/etc/xlf.cfg' else: xlf_cfg = '/etc/opt/ibmcmp/xlf/%s/xlf.cfg' % version - new_cfg = tempfile.mktemp()+'_xlf.cfg' + fo, new_cfg = make_temp_file(suffix='_xlf.cfg') log.info('Creating '+new_cfg) fi = open(xlf_cfg,'r') - fo = open(new_cfg,'w') crt1_match = re.compile(r'\s*crt\s*[=]\s*(?P.*)/crt1.o').match for line in fi.readlines(): m = crt1_match(line) Modified: trunk/numpy/distutils/fcompiler/intel.py =================================================================== --- trunk/numpy/distutils/fcompiler/intel.py 2007-05-28 17:48:49 UTC (rev 3836) +++ trunk/numpy/distutils/fcompiler/intel.py 2007-05-28 17:49:55 UTC (rev 3837) @@ -24,8 +24,7 @@ possible_executables = ['ifort', 'ifc'] executables = { - 'version_cmd' : ["", "-FI -V -c %(fname)s.f -o %(fname)s.o" \ - % {'fname':dummy_fortran_file()}], + 'version_cmd' : None, 'compiler_f77' : [None,"-72","-w90","-w95"], 'compiler_f90' : [None], 'compiler_fix' : [None,"-FI"], @@ -38,6 +37,10 @@ module_dir_switch = '-module ' # Don't remove ending space! 
module_include_switch = '-I' + def get_version_cmd(self): + f = dummy_fortran_file() + return ['', '-FI', '-V', '-c', f + '.f', '-o', f + '.o'] + def get_flags(self): opt = self.pic_flags + ["-cm"] return opt @@ -91,8 +94,7 @@ possible_executables = ['ifort', 'efort', 'efc'] executables = { - 'version_cmd' : ['', "-FI -V -c %(fname)s.f -o %(fname)s.o" \ - % {'fname':dummy_fortran_file()}], + 'version_cmd' : None, 'compiler_f77' : [None,"-FI","-w90","-w95"], 'compiler_fix' : [None,"-FI"], 'compiler_f90' : [None], @@ -110,8 +112,7 @@ possible_executables = ['ifort', 'efort', 'efc'] executables = { - 'version_cmd' : ['', "-FI -V -c %(fname)s.f -o %(fname)s.o" \ - % {'fname':dummy_fortran_file()}], + 'version_cmd' : None, 'compiler_f77' : [None, "-FI", "-w90", "-w95"], 'compiler_fix' : [None, "-FI"], 'compiler_f90' : [None], @@ -138,8 +139,7 @@ possible_executables = ['ifl'] executables = { - 'version_cmd' : ['', "-FI -V -c %(fname)s.f -o %(fname)s.o" \ - % {'fname':dummy_fortran_file()}], + 'version_cmd' : None, 'compiler_f77' : [None,"-FI","-w90","-w95"], 'compiler_fix' : [None,"-FI","-4L72","-w"], 'compiler_f90' : [None], @@ -154,6 +154,10 @@ module_dir_switch = '/module:' #No space after /module: module_include_switch = '/I' + def get_version_cmd(self): + f = dummy_fortran_file() + return ['', '-FI', '-V', '-c', f + '.f', '-o', f + '.o'] + def get_flags(self): opt = ['/nologo','/MD','/nbs','/Qlowercase','/us'] return opt @@ -191,8 +195,7 @@ ar_exe = IntelVisualFCompiler.ar_exe executables = { - 'version_cmd' : ['', "-FI -V -c %(fname)s.f -o %(fname)s.o" \ - % {'fname':dummy_fortran_file()}], + 'version_cmd' : None, 'compiler_f77' : [None,"-FI","-w90","-w95"], 'compiler_fix' : [None,"-FI","-4L72","-w"], 'compiler_f90' : [None], Modified: trunk/numpy/distutils/misc_util.py =================================================================== --- trunk/numpy/distutils/misc_util.py 2007-05-28 17:48:49 UTC (rev 3836) +++ trunk/numpy/distutils/misc_util.py 2007-05-28 17:49:55 UTC (rev 3837) @@ -4,6 +4,8 @@ import imp import copy import glob +import atexit +import tempfile try: set @@ -27,7 +29,7 @@ return os.path.join(*splitted) def rel_path(path, parent_path): - """ Return path relative to parent_path. + """Return path relative to parent_path. """ pd = os.path.abspath(parent_path) apath = os.path.abspath(path) @@ -41,7 +43,7 @@ return path def get_path_from_frame(frame, parent_path=None): - """ Return path of the module given a frame object from the call stack. + """Return path of the module given a frame object from the call stack. Returned path is relative to parent_path when given, otherwise it is absolute path. @@ -72,7 +74,7 @@ return d or '.' def njoin(*path): - """ Join two or more pathname components + + """Join two or more pathname components + - convert a /-separated pathname to one using the OS's path separator. - resolve `..` and `.` from path. @@ -99,7 +101,7 @@ return minrelpath(joined) def get_mathlibs(path=None): - """ Return the MATHLIB line from config.h + """Return the MATHLIB line from config.h """ if path is None: path = get_numpy_include_dirs()[0] @@ -116,7 +118,7 @@ return mathlibs def minrelpath(path): - """ Resolve `..` and '.' from path. + """Resolve `..` and '.' from path. """ if not is_string(path): return path @@ -182,13 +184,38 @@ return map(minrelpath,new_paths) def gpaths(paths, local_path='', include_non_existing=True): - """ Apply glob to paths and prepend local_path if needed. + """Apply glob to paths and prepend local_path if needed. 
""" if is_string(paths): paths = (paths,) return _fix_paths(paths,local_path, include_non_existing) +_temporary_directory = None +def clean_up_temporary_directory(): + from numpy.distutils import log + global _temporary_directory + if not _temporary_directory: + return + log.debug('removing %s', _temporary_directory) + try: + os.rmdir(_temporary_directory) + except OSError: + pass + _temporary_directory = None + +def make_temp_file(suffix='', prefix='', text=True): + global _temporary_directory + if not _temporary_directory: + _temporary_directory = tempfile.mkdtemp() + atexit.register(clean_up_temporary_directory) + fid, name = tempfile.mkstemp(suffix=suffix, + prefix=prefix, + dir=_temporary_directory, + text=text) + fo = os.fdopen(fid, 'w') + return fo, name + # Hooks for colored terminal output. # See also http://www.livinglogic.de/Python/ansistyle def terminal_has_colors(): @@ -258,7 +285,7 @@ return path def mingw32(): - """ Return true when using mingw32 environment. + """Return true when using mingw32 environment. """ if sys.platform=='win32': if os.environ.get('OSTYPE','')=='msys': @@ -268,7 +295,7 @@ return False def msvc_runtime_library(): - "return name of MSVC runtime library if Python was built with MSVC >= 7" + "Return name of MSVC runtime library if Python was built with MSVC >= 7" msc_pos = sys.version.find('MSC v.') if msc_pos != -1: msc_ver = sys.version[msc_pos+6:msc_pos+10] @@ -288,7 +315,7 @@ f90_ext_match = re.compile(r'.*[.](f90|f95)\Z',re.I).match f90_module_name_match = re.compile(r'\s*module\s*(?P[\w_]+)',re.I).match def _get_f90_modules(source): - """ Return a list of Fortran f90 module names that + """Return a list of Fortran f90 module names that given source file defines. """ if not f90_ext_match(source): @@ -309,7 +336,7 @@ return isinstance(s, str) def all_strings(lst): - """ Return True if all items in lst are string objects. """ + """Return True if all items in lst are string objects. """ for item in lst: if not is_string(item): return False @@ -335,7 +362,7 @@ def get_language(sources): # not used in numpy/scipy packages, use build_ext.detect_language instead - """ Determine language value (c,f77,f90) from sources """ + """Determine language value (c,f77,f90) from sources """ language = None for source in sources: if isinstance(source, str): @@ -347,21 +374,21 @@ return language def has_f_sources(sources): - """ Return True if sources contains Fortran files """ + """Return True if sources contains Fortran files """ for source in sources: if fortran_ext_match(source): return True return False def has_cxx_sources(sources): - """ Return True if sources contains C++ files """ + """Return True if sources contains C++ files """ for source in sources: if cxx_ext_match(source): return True return False def filter_sources(sources): - """ Return four lists of filenames containing + """Return four lists of filenames containing C, C++, Fortran, and Fortran 90 module sources, respectively. """ @@ -405,7 +432,7 @@ return _get_headers(_get_directories(sources)) def is_local_src_dir(directory): - """ Return true if directory is local directory. + """Return true if directory is local directory. """ if not is_string(directory): return False @@ -430,7 +457,7 @@ yield os.path.join(dirpath, f) def general_source_directories_files(top_path): - """ Return a directory name relative to top_path and + """Return a directory name relative to top_path and files contained. 
""" pruned_directories = ['CVS','.svn','build'] @@ -509,7 +536,7 @@ return '.'.join([a for a in args if a]) def get_frame(level=0): - """ Return frame object from call stack with given level. + """Return frame object from call stack with given level. """ try: return sys._getframe(level+1) @@ -537,7 +564,7 @@ package_path=None, caller_level=1, **attrs): - """ Construct configuration instance of a package. + """Construct configuration instance of a package. package_name -- name of the package Ex.: 'distutils' @@ -624,7 +651,7 @@ self.set_options(**caller_instance.options) def todict(self): - """ Return configuration distionary suitable for passing + """Return configuration distionary suitable for passing to distutils.core.setup() function. """ self._optimize_data_files() @@ -644,7 +671,7 @@ print>>sys.stderr, blue_text('Warning: %s' % (message,)) def set_options(self, **options): - """ Configure Configuration instance. + """Configure Configuration instance. The following options are available: - ignore_setup_xxx_py @@ -722,7 +749,7 @@ subpackage_path=None, parent_name=None, caller_level = 1): - """ Return list of subpackage configurations. + """Return list of subpackage configurations. '*' in subpackage_name is handled as a wildcard. """ @@ -772,7 +799,7 @@ def add_subpackage(self,subpackage_name, subpackage_path=None, standalone = False): - """ Add subpackage to configuration. + """Add subpackage to configuration. """ if standalone: parent_name = None @@ -797,10 +824,9 @@ if dist is not None: self.warn('distutils distribution has been initialized,'\ ' it may be too late to add a subpackage '+ subpackage_name) - return def add_data_dir(self,data_path): - """ Recursively add files under data_path to data_files list. + """Recursively add files under data_path to data_files list. Argument can be either - 2-sequence (,) - path to data directory where python datadir suffix defaults @@ -880,7 +906,6 @@ for d1,f in list(general_source_directories_files(path)): target_path = os.path.join(self.path_in_package,d,d1) data_files.append((target_path, f)) - return def _optimize_data_files(self): data_dict = {} @@ -889,10 +914,9 @@ data_dict[p] = set() map(data_dict[p].add,files) self.data_files[:] = [(p,list(files)) for p,files in data_dict.items()] - return def add_data_files(self,*files): - """ Add data files to configuration data_files. + """Add data files to configuration data_files. Argument(s) can be either - 2-sequence (,) - paths to data files where python datadir prefix defaults @@ -974,12 +998,11 @@ data_files = self.data_files data_files.append((os.path.join(self.path_in_package,d),paths)) - return ### XXX Implement add_py_modules def add_include_dirs(self,*paths): - """ Add paths to configuration include directories. + """Add paths to configuration include directories. """ include_dirs = self.paths(paths) dist = self.get_distribution() @@ -987,14 +1010,13 @@ dist.include_dirs.extend(include_dirs) else: self.include_dirs.extend(include_dirs) - return def add_numarray_include_dirs(self): import numpy.numarray.util as nnu self.add_include_dirs(*nnu.get_numarray_include_dirs()) def add_headers(self,*files): - """ Add installable headers to configuration. + """Add installable headers to configuration. Argument(s) can be either - 2-sequence (,) - path(s) to header file(s) where python includedir suffix will default @@ -1013,10 +1035,9 @@ dist.headers.extend(headers) else: self.headers.extend(headers) - return def paths(self,*paths,**kws): - """ Apply glob to paths and prepend local_path if needed. 
+ """Apply glob to paths and prepend local_path if needed. """ include_non_existing = kws.get('include_non_existing',True) return gpaths(paths, @@ -1030,10 +1051,9 @@ 'module_dirs','extra_objects']: new_v = self.paths(v) kw[k] = new_v - return def add_extension(self,name,sources,**kw): - """ Add extension to configuration. + """Add extension to configuration. Keywords: include_dirs, define_macros, undef_macros, @@ -1047,7 +1067,7 @@ ext_args = copy.copy(kw) ext_args['name'] = dot_join(self.name,name) ext_args['sources'] = sources - + if ext_args.has_key('extra_info'): extra_info = ext_args['extra_info'] del ext_args['extra_info'] @@ -1098,7 +1118,7 @@ return ext def add_library(self,name,sources,**build_info): - """ Add library to configuration. + """Add library to configuration. Valid keywords for build_info: depends @@ -1120,10 +1140,9 @@ if dist is not None: self.warn('distutils distribution has been initialized,'\ ' it may be too late to add a library '+ name) - return def add_scripts(self,*files): - """ Add scripts to configuration. + """Add scripts to configuration. """ scripts = self.paths(files) dist = self.get_distribution() @@ -1131,7 +1150,6 @@ dist.scripts.extend(scripts) else: self.scripts.extend(scripts) - return def dict_append(self,**dict): for key in self.list_keys: @@ -1157,7 +1175,6 @@ pass else: raise ValueError, "Don't know about key=%r" % (key) - return def __str__(self): from pprint import pformat @@ -1189,7 +1206,7 @@ return cmd.build_temp def have_f77c(self): - """ Check for availability of Fortran 77 compiler. + """Check for availability of Fortran 77 compiler. Use it inside source generating function to ensure that setup distribution instance has been initialized. """ @@ -1202,7 +1219,7 @@ return flag def have_f90c(self): - """ Check for availability of Fortran 90 compiler. + """Check for availability of Fortran 90 compiler. Use it inside source generating function to ensure that setup distribution instance has been initialized. """ @@ -1215,7 +1232,7 @@ return flag def append_to(self, extlib): - """ Append libraries, include_dirs to extension or library item. + """Append libraries, include_dirs to extension or library item. """ if is_sequence(extlib): lib_name, build_info = extlib @@ -1227,10 +1244,9 @@ assert isinstance(extlib,Extension), repr(extlib) extlib.libraries.extend(self.libraries) extlib.include_dirs.extend(self.include_dirs) - return def _get_svn_revision(self,path): - """ Return path's SVN revision number. + """Return path's SVN revision number. """ revision = None m = None @@ -1261,7 +1277,7 @@ return revision def get_version(self, version_file=None, version_variable=None): - """ Try to get version string of a package. + """Try to get version string of a package. """ version = getattr(self,'version',None) if version is not None: @@ -1315,7 +1331,7 @@ return version def make_svn_version_py(self, delete=True): - """ Generate package __svn_version__.py file from SVN revision number, + """Generate package __svn_version__.py file from SVN revision number, it will be removed after python exits but will be available when sdist, etc commands are executed. @@ -1349,14 +1365,13 @@ self.add_data_files(('', generate_svn_version_py())) def make_config_py(self,name='__config__'): - """ Generate package __config__.py file containing system_info + """Generate package __config__.py file containing system_info information used during building the package. 
""" self.py_modules.append((self.name,name,generate_config_py)) - return def get_info(self,*names): - """ Get resources information. + """Get resources information. """ from system_info import get_info, dict_append info_dict = {} @@ -1389,7 +1404,7 @@ ######################### def default_config_dict(name = None, parent_name = None, local_path=None): - """ Return a configuration dictionary for usage in + """Return a configuration dictionary for usage in configuration() function defined in file setup_.py. """ import warnings @@ -1435,10 +1450,10 @@ return os.path.normpath(njoin(drive + prefix, subpath)) def generate_config_py(target): - """ Generate config.py file containing system_info information + """Generate config.py file containing system_info information used during building the package. - Usage:\ + Usage: config['py_modules'].append((packagename, '__config__',generate_config_py)) """ from numpy.distutils.system_info import system_info @@ -1450,20 +1465,23 @@ f.write('__all__ = ["get_info","show"]\n\n') for k, i in system_info.saved_results.items(): f.write('%s=%r\n' % (k, i)) - f.write('\ndef get_info(name):\n g=globals()\n return g.get(name,g.get(name+"_info",{}))\n') - f.write(''' + f.write(r''' +def get_info(name): + g = globals() + return g.get(name, g.get(name + "_info", {})) + def show(): for name,info_dict in globals().items(): - if name[0]=="_" or type(info_dict) is not type({}): continue - print name+":" + if name[0] == "_" or type(info_dict) is not type({}): continue + print name + ":" if not info_dict: print " NOT AVAILABLE" for k,v in info_dict.items(): v = str(v) - if k==\'sources\' and len(v)>200: v = v[:60]+\' ...\\n... \'+v[-60:] - print \' %s = %s\'%(k,v) + if k == "sources" and len(v) > 200: + v = v[:60] + " ...\n... " + v[-60:] + print " %s = %s" % (k,v) print - return ''') f.close() From numpy-svn at scipy.org Mon May 28 13:59:52 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 28 May 2007 12:59:52 -0500 (CDT) Subject: [Numpy-svn] r3838 - trunk/numpy/distutils/fcompiler Message-ID: <20070528175952.215B439C0A2@new.scipy.org> Author: cookedm Date: 2007-05-28 12:59:49 -0500 (Mon, 28 May 2007) New Revision: 3838 Modified: trunk/numpy/distutils/fcompiler/absoft.py trunk/numpy/distutils/fcompiler/intel.py Log: fix my mistake in fcompiler/absoft.py and fcompiler/intel.py Modified: trunk/numpy/distutils/fcompiler/absoft.py =================================================================== --- trunk/numpy/distutils/fcompiler/absoft.py 2007-05-28 17:49:55 UTC (rev 3837) +++ trunk/numpy/distutils/fcompiler/absoft.py 2007-05-28 17:59:49 UTC (rev 3838) @@ -30,7 +30,7 @@ # Note that fink installs g77 as f77, so need to use f90 for detection. 
executables = { - 'version_cmd' : None, + 'version_cmd' : ['', None], 'compiler_f77' : ["f77"], 'compiler_fix' : ["f90"], 'compiler_f90' : ["f90"], @@ -45,9 +45,9 @@ module_dir_switch = None module_include_switch = '-p' - def get_version_cmd(self): + def get_flags_version(self): f = cyg2win32(dummy_fortran_file()) - return ['', '-V', '-c', f+'.f', '-o', f+'.o'] + return ['-V', '-c', f+'.f', '-o', f+'.o'] def get_flags_linker_so(self): if os.name=='nt': Modified: trunk/numpy/distutils/fcompiler/intel.py =================================================================== --- trunk/numpy/distutils/fcompiler/intel.py 2007-05-28 17:49:55 UTC (rev 3837) +++ trunk/numpy/distutils/fcompiler/intel.py 2007-05-28 17:59:49 UTC (rev 3838) @@ -24,7 +24,7 @@ possible_executables = ['ifort', 'ifc'] executables = { - 'version_cmd' : None, + 'version_cmd' : ['', None], 'compiler_f77' : [None,"-72","-w90","-w95"], 'compiler_f90' : [None], 'compiler_fix' : [None,"-FI"], @@ -37,9 +37,9 @@ module_dir_switch = '-module ' # Don't remove ending space! module_include_switch = '-I' - def get_version_cmd(self): + def get_flags_version(self): f = dummy_fortran_file() - return ['', '-FI', '-V', '-c', f + '.f', '-o', f + '.o'] + return ['-FI', '-V', '-c', f + '.f', '-o', f + '.o'] def get_flags(self): opt = self.pic_flags + ["-cm"] @@ -94,7 +94,7 @@ possible_executables = ['ifort', 'efort', 'efc'] executables = { - 'version_cmd' : None, + 'version_cmd' : ['', None], 'compiler_f77' : [None,"-FI","-w90","-w95"], 'compiler_fix' : [None,"-FI"], 'compiler_f90' : [None], @@ -112,7 +112,7 @@ possible_executables = ['ifort', 'efort', 'efc'] executables = { - 'version_cmd' : None, + 'version_cmd' : ['', None], 'compiler_f77' : [None, "-FI", "-w90", "-w95"], 'compiler_fix' : [None, "-FI"], 'compiler_f90' : [None], @@ -139,7 +139,7 @@ possible_executables = ['ifl'] executables = { - 'version_cmd' : None, + 'version_cmd' : ['', None], 'compiler_f77' : [None,"-FI","-w90","-w95"], 'compiler_fix' : [None,"-FI","-4L72","-w"], 'compiler_f90' : [None], @@ -154,9 +154,9 @@ module_dir_switch = '/module:' #No space after /module: module_include_switch = '/I' - def get_version_cmd(self): + def get_flags_version(self): f = dummy_fortran_file() - return ['', '-FI', '-V', '-c', f + '.f', '-o', f + '.o'] + return ['-FI', '-V', '-c', f + '.f', '-o', f + '.o'] def get_flags(self): opt = ['/nologo','/MD','/nbs','/Qlowercase','/us'] @@ -195,7 +195,7 @@ ar_exe = IntelVisualFCompiler.ar_exe executables = { - 'version_cmd' : None, + 'version_cmd' : ['', None], 'compiler_f77' : [None,"-FI","-w90","-w95"], 'compiler_fix' : [None,"-FI","-4L72","-w"], 'compiler_f90' : [None], From numpy-svn at scipy.org Mon May 28 14:00:25 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 28 May 2007 13:00:25 -0500 (CDT) Subject: [Numpy-svn] r3839 - trunk/numpy/distutils Message-ID: <20070528180025.93BB739C0A2@new.scipy.org> Author: cookedm Date: 2007-05-28 13:00:22 -0500 (Mon, 28 May 2007) New Revision: 3839 Removed: trunk/numpy/distutils/interactive.py Modified: trunk/numpy/distutils/core.py Log: Remove interactive support. No one uses it. 
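[Editorial note] The r3837/r3838 hunks above replace ad-hoc tempfile.mktemp() calls with a single shared temporary directory that is removed at interpreter exit, and dummy_fortran_file() now simply writes into it. A self-contained sketch of that pattern follows; it mirrors the names used in the patch but is an illustration, not the misc_util code itself (in particular it uses shutil.rmtree where the patch removes the then-empty directory with os.rmdir).

    # Sketch: one lazily created temp directory per process, cleaned at exit.
    import atexit
    import os
    import shutil
    import tempfile

    _tmpdir = None

    def _cleanup_temp_dir():
        global _tmpdir
        if _tmpdir is not None:
            shutil.rmtree(_tmpdir, ignore_errors=True)
            _tmpdir = None

    def make_temp_file(suffix='', prefix='', text=True):
        """Return (open file object, path) inside the shared directory.

        mkstemp() creates the file atomically, avoiding the race that
        made tempfile.mktemp() unsafe."""
        global _tmpdir
        if _tmpdir is None:
            _tmpdir = tempfile.mkdtemp()
            atexit.register(_cleanup_temp_dir)
        fd, name = tempfile.mkstemp(suffix=suffix, prefix=prefix,
                                    dir=_tmpdir, text=text)
        return os.fdopen(fd, 'w'), name

    def dummy_fortran_file():
        # As in the r3837 hunk: write a trivial subroutine, return the stem.
        fo, name = make_temp_file(suffix='.f')
        fo.write("      subroutine dummy()\n      end\n")
        fo.close()
        return name[:-2]
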
Modified: trunk/numpy/distutils/core.py =================================================================== --- trunk/numpy/distutils/core.py 2007-05-28 17:59:49 UTC (rev 3838) +++ trunk/numpy/distutils/core.py 2007-05-28 18:00:22 UTC (rev 3839) @@ -83,14 +83,6 @@ _cache.append(ok) return ok -def _exit_interactive_session(_cache=[]): - if _cache: - return # been here - _cache.append(1) - print '-'*72 - raw_input('Press ENTER to close the interactive session..') - print '='*72 - def get_distribution(always=False): dist = distutils.core._setup_distribution # XXX Hack to get numpy installable with easy_install. @@ -107,15 +99,6 @@ return dist def setup(**attr): - - if len(sys.argv)<=1 and not attr.get('script_args',[]): - from interactive import interactive_sys_argv - import atexit - atexit.register(_exit_interactive_session) - sys.argv[:] = interactive_sys_argv(sys.argv) - if len(sys.argv)>1: - return setup(**attr) - cmdclass = numpy_cmdclass.copy() new_attr = attr.copy() Deleted: trunk/numpy/distutils/interactive.py =================================================================== --- trunk/numpy/distutils/interactive.py 2007-05-28 17:59:49 UTC (rev 3838) +++ trunk/numpy/distutils/interactive.py 2007-05-28 18:00:22 UTC (rev 3839) @@ -1,187 +0,0 @@ -import os -import sys -from pprint import pformat - -__all__ = ['interactive_sys_argv'] - -def show_information(*args): - print 'Python',sys.version - for a in ['platform','prefix','byteorder','path']: - print 'sys.%s = %s' % (a,pformat(getattr(sys,a))) - for a in ['name']: - print 'os.%s = %s' % (a,pformat(getattr(os,a))) - if hasattr(os,'uname'): - print 'system,node,release,version,machine = ',os.uname() - -def show_environ(*args): - for k,i in os.environ.items(): - print ' %s = %s' % (k, i) - -def show_fortran_compilers(*args): - from fcompiler import show_fcompilers - show_fcompilers() - -def show_compilers(*args): - from distutils.ccompiler import show_compilers - show_compilers() - -def show_tasks(argv,ccompiler,fcompiler): - print """\ - -Tasks: - i - Show python/platform/machine information - ie - Show environment information - c - Show C compilers information - c - Set C compiler (current:%s) - f - Show Fortran compilers information - f - Set Fortran compiler (current:%s) - e - Edit proposed sys.argv[1:]. - -Task aliases: - 0 - Configure - 1 - Build - 2 - Install - 2 - Install with prefix. 
- 3 - Inplace build - 4 - Source distribution - 5 - Binary distribution - -Proposed sys.argv = %s - """ % (ccompiler, fcompiler, argv) - - -from exec_command import splitcmdline - -def edit_argv(*args): - argv = args[0] - readline = args[1] - if readline is not None: - readline.add_history(' '.join(argv[1:])) - try: - s = raw_input('Edit argv [UpArrow to retrive %r]: ' % (' '.join(argv[1:]))) - except EOFError: - return - if s: - argv[1:] = splitcmdline(s) - return - -def interactive_sys_argv(argv): - print '='*72 - print 'Starting interactive session' - print '-'*72 - - readline = None - try: - try: - import readline - except ImportError: - pass - else: - import tempfile - tdir = tempfile.gettempdir() - username = os.environ.get('USER',os.environ.get('USERNAME','UNKNOWN')) - histfile = os.path.join(tdir,".pyhist_interactive_setup-" + username) - try: - try: readline.read_history_file(histfile) - except IOError: pass - import atexit - atexit.register(readline.write_history_file, histfile) - except AttributeError: pass - except Exception, msg: - print msg - - task_dict = {'i':show_information, - 'ie':show_environ, - 'f':show_fortran_compilers, - 'c':show_compilers, - 'e':edit_argv, - } - c_compiler_name = None - f_compiler_name = None - - while 1: - show_tasks(argv,c_compiler_name, f_compiler_name) - try: - task = raw_input('Choose a task (^D to quit, Enter to continue with setup): ').lower() - except EOFError: - print - task = 'quit' - if task=='': break - if task=='quit': sys.exit() - task_func = task_dict.get(task,None) - if task_func is None: - if task[0]=='c': - c_compiler_name = task[1:] - if c_compiler_name=='none': - c_compiler_name = None - continue - if task[0]=='f': - f_compiler_name = task[1:] - if f_compiler_name=='none': - f_compiler_name = None - continue - if task[0]=='2' and len(task)>1: - prefix = task[1:] - task = task[0] - else: - prefix = None - if task == '4': - argv[1:] = ['sdist','-f'] - continue - elif task in '01235': - cmd_opts = {'config':[],'config_fc':[], - 'build_ext':[],'build_src':[], - 'build_clib':[]} - if c_compiler_name is not None: - c = '--compiler=%s' % (c_compiler_name) - cmd_opts['config'].append(c) - if task != '0': - cmd_opts['build_ext'].append(c) - cmd_opts['build_clib'].append(c) - if f_compiler_name is not None: - c = '--fcompiler=%s' % (f_compiler_name) - cmd_opts['config_fc'].append(c) - if task != '0': - cmd_opts['build_ext'].append(c) - cmd_opts['build_clib'].append(c) - if task=='3': - cmd_opts['build_ext'].append('--inplace') - cmd_opts['build_src'].append('--inplace') - conf = [] - sorted_keys = ['config','config_fc','build_src', - 'build_clib','build_ext'] - for k in sorted_keys: - opts = cmd_opts[k] - if opts: conf.extend([k]+opts) - if task=='0': - if 'config' not in conf: - conf.append('config') - argv[1:] = conf - elif task=='1': - argv[1:] = conf+['build'] - elif task=='2': - if prefix is not None: - argv[1:] = conf+['install','--prefix=%s' % (prefix)] - else: - argv[1:] = conf+['install'] - elif task=='3': - argv[1:] = conf+['build'] - elif task=='5': - if sys.platform=='win32': - argv[1:] = conf+['bdist_wininst'] - else: - argv[1:] = conf+['bdist'] - else: - print 'Skipping unknown task:',`task` - else: - print '-'*68 - try: - task_func(argv,readline) - except Exception,msg: - print 'Failed running task %s: %s' % (task,msg) - break - print '-'*68 - print - - print '-'*72 - return argv From numpy-svn at scipy.org Mon May 28 14:35:10 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 28 May 2007 13:35:10 -0500 
(CDT) Subject: [Numpy-svn] r3840 - in branches/multicore: . numpy numpy/core numpy/core/include/numpy numpy/core/include/numpy/fenv numpy/core/src numpy/core/tests numpy/distutils numpy/distutils/command numpy/distutils/fcompiler numpy/f2py numpy/f2py/lib numpy/f2py/lib/parser numpy/f2py/lib/tests numpy/fft numpy/lib numpy/lib/tests numpy/linalg numpy/numarray numpy/random numpy/testing Message-ID: <20070528183510.528FE39C0DB@new.scipy.org> Author: eric Date: 2007-05-28 13:35:05 -0500 (Mon, 28 May 2007) New Revision: 3840 Added: branches/multicore/numpy/distutils/environment.py branches/multicore/numpy/f2py/lib/nary.py branches/multicore/numpy/f2py/lib/tests/ branches/multicore/numpy/f2py/lib/tests/test_derived_scalar.py branches/multicore/numpy/f2py/lib/tests/test_module_module.py branches/multicore/numpy/f2py/lib/tests/test_module_scalar.py branches/multicore/numpy/f2py/lib/tests/test_scalar_function_in.py branches/multicore/numpy/f2py/lib/tests/test_scalar_in_out.py Removed: branches/multicore/numpy/distutils/interactive.py branches/multicore/numpy/f2py/lib/test_derived_scalar.py branches/multicore/numpy/f2py/lib/test_module_module.py branches/multicore/numpy/f2py/lib/test_module_scalar.py branches/multicore/numpy/f2py/lib/test_scalar_function_in.py branches/multicore/numpy/f2py/lib/test_scalar_in_out.py branches/multicore/numpy/f2py/lib/tests/test_derived_scalar.py branches/multicore/numpy/f2py/lib/tests/test_module_module.py branches/multicore/numpy/f2py/lib/tests/test_module_scalar.py branches/multicore/numpy/f2py/lib/tests/test_scalar_function_in.py branches/multicore/numpy/f2py/lib/tests/test_scalar_in_out.py Modified: branches/multicore/ branches/multicore/numpy/_import_tools.py branches/multicore/numpy/add_newdocs.py branches/multicore/numpy/core/__init__.py branches/multicore/numpy/core/_internal.py branches/multicore/numpy/core/arrayprint.py branches/multicore/numpy/core/defmatrix.py branches/multicore/numpy/core/fromnumeric.py branches/multicore/numpy/core/include/numpy/fenv/fenv.c branches/multicore/numpy/core/include/numpy/fenv/fenv.h branches/multicore/numpy/core/include/numpy/ndarrayobject.h branches/multicore/numpy/core/ma.py branches/multicore/numpy/core/numeric.py branches/multicore/numpy/core/records.py branches/multicore/numpy/core/setup.py branches/multicore/numpy/core/src/arraymethods.c branches/multicore/numpy/core/src/arrayobject.c branches/multicore/numpy/core/src/multiarraymodule.c branches/multicore/numpy/core/src/scalarmathmodule.c.src branches/multicore/numpy/core/src/scalartypes.inc.src branches/multicore/numpy/core/tests/test_regression.py branches/multicore/numpy/core/tests/test_unicode.py branches/multicore/numpy/distutils/ccompiler.py branches/multicore/numpy/distutils/command/build.py branches/multicore/numpy/distutils/command/build_clib.py branches/multicore/numpy/distutils/command/build_ext.py branches/multicore/numpy/distutils/command/build_src.py branches/multicore/numpy/distutils/command/config.py branches/multicore/numpy/distutils/command/config_compiler.py branches/multicore/numpy/distutils/conv_template.py branches/multicore/numpy/distutils/core.py branches/multicore/numpy/distutils/exec_command.py branches/multicore/numpy/distutils/fcompiler/__init__.py branches/multicore/numpy/distutils/fcompiler/absoft.py branches/multicore/numpy/distutils/fcompiler/compaq.py branches/multicore/numpy/distutils/fcompiler/g95.py branches/multicore/numpy/distutils/fcompiler/gnu.py branches/multicore/numpy/distutils/fcompiler/hpux.py 
branches/multicore/numpy/distutils/fcompiler/ibm.py branches/multicore/numpy/distutils/fcompiler/intel.py branches/multicore/numpy/distutils/fcompiler/lahey.py branches/multicore/numpy/distutils/fcompiler/mips.py branches/multicore/numpy/distutils/fcompiler/nag.py branches/multicore/numpy/distutils/fcompiler/none.py branches/multicore/numpy/distutils/fcompiler/pg.py branches/multicore/numpy/distutils/fcompiler/sun.py branches/multicore/numpy/distutils/fcompiler/vast.py branches/multicore/numpy/distutils/from_template.py branches/multicore/numpy/distutils/intelccompiler.py branches/multicore/numpy/distutils/lib2def.py branches/multicore/numpy/distutils/line_endings.py branches/multicore/numpy/distutils/log.py branches/multicore/numpy/distutils/misc_util.py branches/multicore/numpy/distutils/system_info.py branches/multicore/numpy/distutils/unixccompiler.py branches/multicore/numpy/f2py/f2py2e.py branches/multicore/numpy/f2py/lib/main.py branches/multicore/numpy/f2py/lib/parser/doc.txt branches/multicore/numpy/f2py/setup.py branches/multicore/numpy/fft/fftpack_litemodule.c branches/multicore/numpy/lib/function_base.py branches/multicore/numpy/lib/getlimits.py branches/multicore/numpy/lib/shape_base.py branches/multicore/numpy/lib/tests/test_function_base.py branches/multicore/numpy/lib/tests/test_getlimits.py branches/multicore/numpy/lib/ufunclike.py branches/multicore/numpy/lib/utils.py branches/multicore/numpy/linalg/linalg.py branches/multicore/numpy/numarray/_capi.c branches/multicore/numpy/numarray/setup.py branches/multicore/numpy/random/setup.py branches/multicore/numpy/setup.py branches/multicore/numpy/testing/numpytest.py branches/multicore/numpy/version.py Log: Merged revisions 3748-3839 via svnmerge from http://svn.scipy.org/svn/numpy/trunk ........ r3749 | oliphant | 2007-05-11 17:35:26 -0500 (Fri, 11 May 2007) | 1 line Fix nan functions to allow sub-class. ........ r3750 | oliphant | 2007-05-11 18:13:27 -0500 (Fri, 11 May 2007) | 1 line Special check for common error in arange. ........ r3751 | charris | 2007-05-12 14:58:43 -0500 (Sat, 12 May 2007) | 2 lines Add/edit documentation for mean, std, var. ........ r3752 | charris | 2007-05-12 19:35:09 -0500 (Sat, 12 May 2007) | 3 lines Add documentation for diagonal. Reformat documentation of sort, argsort, lexsort, and searchsorted. ........ r3753 | charris | 2007-05-13 01:23:08 -0500 (Sun, 13 May 2007) | 2 lines Add documentation for eigvals, eigvalsh, eig, and eigh. ........ r3754 | stefan | 2007-05-13 03:19:11 -0500 (Sun, 13 May 2007) | 2 lines Add regression tests for tickets 469, 503, 514 and 516. ........ r3755 | stefan | 2007-05-13 12:36:19 -0500 (Sun, 13 May 2007) | 2 lines Add iinfo based on a patch by Albert Strasheim (ticket #250). ........ r3756 | charris | 2007-05-13 15:15:09 -0500 (Sun, 13 May 2007) | 2 lines Fix ticket #506 by applying the patch from cdavid. ........ r3757 | charris | 2007-05-13 18:22:17 -0500 (Sun, 13 May 2007) | 6 lines Add patch from dhuard to histogramdd. Fixes ticket #509. Restructure restructured comments; avoid consolidated lists, they are too ugly to contemplate and move around where they aren't wanted. They can be fixed later if epydoc fixes things up. ........ r3758 | charris | 2007-05-14 00:55:19 -0500 (Mon, 14 May 2007) | 3 lines Restructure documentation of sort, argsort, searchsorted, var, mean, std, and diagonal. ........ r3759 | cookedm | 2007-05-14 04:25:11 -0500 (Mon, 14 May 2007) | 2 lines With gfortran, compile modern Xeon's with EM64T with -march=nocona (#515) ........ 
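[Editorial note] As a usage note on r3755 above (iinfo, from Albert Strasheim's patch for ticket #250, re-exposed as numpy.iinfo in r3809 later in this merge log): the class reports integer-type limits the way finfo does for floats. A minimal example, assuming a NumPy build that includes the patch:

    import numpy as np

    # Report the representable range of a few integer dtypes.
    for t in (np.int8, np.int32, np.int64):
        info = np.iinfo(t)
        print('%s: min=%d max=%d' % (info.dtype, info.min, info.max))
    # e.g. int8: min=-128 max=127
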
r3760 | pearu | 2007-05-14 05:20:41 -0500 (Mon, 14 May 2007) | 1 line Fix doc rest formatting. ........ r3761 | pearu | 2007-05-14 05:24:35 -0500 (Mon, 14 May 2007) | 1 line Fix doc rest formatting - 2. ........ r3762 | pearu | 2007-05-14 05:35:03 -0500 (Mon, 14 May 2007) | 1 line Fix f2py command line doc. ........ r3763 | pearu | 2007-05-14 07:17:49 -0500 (Mon, 14 May 2007) | 1 line Workaround Python distutils bug sf 1718574. ........ r3764 | cookedm | 2007-05-14 19:35:49 -0500 (Mon, 14 May 2007) | 2 lines #520: don't add arch-specific flags when linking with Intel Fortran ........ r3765 | cookedm | 2007-05-15 06:19:28 -0500 (Tue, 15 May 2007) | 2 lines #513: fix up include of fenv.c in numarray for cygwin ........ r3766 | cookedm | 2007-05-15 07:33:27 -0500 (Tue, 15 May 2007) | 2 lines Add stacklevel=2 to DeprecationWarning for ScipyTestCase ........ r3767 | oliphant | 2007-05-15 18:23:57 -0500 (Tue, 15 May 2007) | 1 line Fix problem with records with object elements and add pretty-printing to record objects. Remove the global _multiarray_module_loaded. ........ r3768 | oliphant | 2007-05-16 02:10:37 -0500 (Wed, 16 May 2007) | 1 line Fixed a place where unicode itemsize was being counted twice. This led to array([u'abc'],'U') returning the wrong itemsize. ........ r3770 | oliphant | 2007-05-17 02:23:17 -0500 (Thu, 17 May 2007) | 1 line Perhaps fix the problem with multiarray_module_loaded. ........ r3771 | oliphant | 2007-05-17 05:13:39 -0500 (Thu, 17 May 2007) | 1 line Propagate changes made to umathmodule.c to fix the problem with division and remainder not being consistent for negative numbers. ........ r3772 | oliphant | 2007-05-17 05:37:08 -0500 (Thu, 17 May 2007) | 1 line Fix some bugs with isposinf and isneginf as well as with how allclose dealt with infinities. See ticket #519 ........ r3773 | oliphant | 2007-05-17 06:55:11 -0500 (Thu, 17 May 2007) | 1 line Fix ticekt #511 and start to handle allclose problems. ........ r3774 | cookedm | 2007-05-18 07:56:45 -0500 (Fri, 18 May 2007) | 2 lines fix typo: iteratable -> iterator ........ r3775 | pearu | 2007-05-18 09:32:33 -0500 (Fri, 18 May 2007) | 1 line build_src: introduced --swig and other related options (as in std distutils build_ext command), use --f2py-opts instead of --f2pyflags, improved error messages. ........ r3776 | pearu | 2007-05-18 11:41:44 -0500 (Fri, 18 May 2007) | 1 line Extension modules and libraries are built with suitable compilers/linkers. Improved failure handling. ........ r3777 | pearu | 2007-05-18 11:44:43 -0500 (Fri, 18 May 2007) | 1 line g3 f2py: impl. compiling Fortran codes online (function numpy.f2py.lib.compile), clean up testing. ........ r3778 | pearu | 2007-05-18 11:58:30 -0500 (Fri, 18 May 2007) | 1 line Minor for Python 2.3 support. ........ r3779 | pearu | 2007-05-18 12:33:15 -0500 (Fri, 18 May 2007) | 1 line Fixed warnings on language changes. ........ r3780 | pearu | 2007-05-18 15:17:48 -0500 (Fri, 18 May 2007) | 1 line unify config_fc, build_clib, build_ext commands --fcompiler options so that --fcompiler can be specified only once in a command line ........ r3781 | pearu | 2007-05-18 15:41:10 -0500 (Fri, 18 May 2007) | 1 line added config to --fcompiler option unification method. introduced config_cc for unifying --compiler options. ........ r3782 | pearu | 2007-05-18 15:49:09 -0500 (Fri, 18 May 2007) | 1 line Added --help-fcompiler option to build_ext command. ........ r3783 | pearu | 2007-05-18 16:00:17 -0500 (Fri, 18 May 2007) | 1 line show less messages in --help-fcompiler ........ 
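[Editorial note] On r3771 above: the division/remainder fix makes the integer ufuncs follow Python's floor-division convention, under which the remainder takes the sign of the divisor and x == (x//y)*y + x%y holds for every sign combination. The plain-Python check below states the invariant the fix is meant to restore (that post-fix NumPy arrays match it is the commit's claim, not verified here):

    # Floor division pairs with a remainder of the divisor's sign,
    # so the decomposition x == q*y + r always holds.
    for x, y in [(7, 2), (-7, 2), (7, -2), (-7, -2)]:
        q, r = x // y, x % y
        assert x == q * y + r
        print('%3d = (%3d)*(%3d) + (%3d)' % (x, q, y, r))
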
r3784 | pearu | 2007-05-18 16:25:23 -0500 (Fri, 18 May 2007) | 1 line Added --fcompiler,--help-fcompiler options to build command parallel to --compiler,--help-compiler options. ........ r3785 | pearu | 2007-05-18 16:33:07 -0500 (Fri, 18 May 2007) | 1 line Add descriptions to config_fc and config_cc commands. ........ r3786 | pearu | 2007-05-19 04:54:00 -0500 (Sat, 19 May 2007) | 1 line Fix for win32 platform. ........ r3787 | pearu | 2007-05-19 05:23:16 -0500 (Sat, 19 May 2007) | 1 line Fix fcompiler/compiler unification warning. ........ r3788 | pearu | 2007-05-19 10:20:48 -0500 (Sat, 19 May 2007) | 1 line Fix atlas version detection when using MSVC compiler ........ r3789 | pearu | 2007-05-19 10:21:41 -0500 (Sat, 19 May 2007) | 1 line Fix typo. ........ r3790 | pearu | 2007-05-19 10:24:20 -0500 (Sat, 19 May 2007) | 1 line More typo fixes. ........ r3791 | pearu | 2007-05-19 12:01:39 -0500 (Sat, 19 May 2007) | 1 line win32: fix install when build has been carried out earlier. ........ r3792 | pearu | 2007-05-19 14:44:42 -0500 (Sat, 19 May 2007) | 1 line Clean up and completed (hopefully) MSVC support. ........ r3794 | cookedm | 2007-05-21 08:01:20 -0500 (Mon, 21 May 2007) | 1 line minor cleanups in numpy.distutils (style mostly) ........ r3796 | rkern | 2007-05-21 18:34:44 -0500 (Mon, 21 May 2007) | 1 line Use a more robust method for finding the directory of the setup.py file calling Configuration(). easy_install spoofs __name__, thus confusing the old method. ........ r3797 | rkern | 2007-05-21 19:54:42 -0500 (Mon, 21 May 2007) | 1 line Be robust when the shared distribution object exists does not have data_files set. This can happen when easy_install automatically builds dependencies. ........ r3798 | stefan | 2007-05-22 01:25:44 -0500 (Tue, 22 May 2007) | 2 lines Fix array interface url. ........ r3799 | oliphant | 2007-05-22 04:18:38 -0500 (Tue, 22 May 2007) | 1 line Fix scalar inf comparison in allclose. ........ r3800 | oliphant | 2007-05-22 17:36:10 -0500 (Tue, 22 May 2007) | 1 line Add a few more checks to make sure that numpy unicode scalars report correctly on narrow builds. Fix a long-standing seg-fault that arose when calling u.imag on an object with numpy.unicode_ type. ........ r3801 | oliphant | 2007-05-22 17:55:13 -0500 (Tue, 22 May 2007) | 1 line Added most of patch from #422. ........ r3802 | oliphant | 2007-05-22 18:12:03 -0500 (Tue, 22 May 2007) | 1 line Fix ticket #501 which caused some array printing problems ........ r3803 | oliphant | 2007-05-22 18:33:01 -0500 (Tue, 22 May 2007) | 1 line Remove tests for inequality on unicode scalars --- not sure why they were there in the first place. Fix bug in masked_array. ........ r3804 | oliphant | 2007-05-22 21:49:54 -0500 (Tue, 22 May 2007) | 1 line Re-think the byte-swapping unicode tests. They were correct to begin with. Try to fix the new bug on narrow builds. ........ r3805 | stefan | 2007-05-23 07:43:09 -0500 (Wed, 23 May 2007) | 2 lines Add regression test for ticket #501 [patch by Andrew Straw]. ........ r3806 | oliphant | 2007-05-23 13:07:27 -0500 (Wed, 23 May 2007) | 1 line Remove import multiarray from top of _internal.py ........ r3807 | oliphant | 2007-05-23 13:12:52 -0500 (Wed, 23 May 2007) | 1 line Add an dummy import statement so that freeze programs pick up _internal.p ........ r3808 | oliphant | 2007-05-23 13:47:08 -0500 (Wed, 23 May 2007) | 1 line Fix so that _internal.py gets imported when it is needed. Perhaps this will fix the problem with multiple-interpreters not working correctly. ........ 
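[Editorial note] Several entries above (r3772, r3773, r3799) touch how allclose treats infinities: the usual test |a - b| <= atol + rtol*|b| yields nan when both entries are the same infinity, so infinite entries have to be compared by equality instead. The sketch below illustrates that idea only; it is not the numpy.allclose implementation.

    import numpy as np

    def allclose_sketch(a, b, rtol=1.e-5, atol=1.e-8):
        """Illustration: tolerance test for finite entries, exact
        equality (same-signed infinity) for the rest."""
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        finite = np.isfinite(a) & np.isfinite(b)
        with np.errstate(invalid='ignore'):      # inf - inf would warn
            close = np.abs(a - b) <= atol + rtol * np.abs(b)
        return bool(np.where(finite, close, a == b).all())

    print(allclose_sketch([1.0, np.inf], [1.0 + 1e-9, np.inf]))  # True
    print(allclose_sketch([np.inf], [-np.inf]))                  # False
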
r3809 | oliphant | 2007-05-23 14:30:18 -0500 (Wed, 23 May 2007) | 1 line Expose numpy.iinfo and re-implement so it supports big-endian as well. ........ r3810 | oliphant | 2007-05-23 15:25:31 -0500 (Wed, 23 May 2007) | 1 line Properly decrement references for _internal.py imports ........ r3811 | oliphant | 2007-05-23 15:32:17 -0500 (Wed, 23 May 2007) | 1 line Fix some compiler warnings. ........ r3812 | oliphant | 2007-05-23 17:03:42 -0500 (Wed, 23 May 2007) | 1 line Fix up getlimits to work with Python2.3 ........ r3813 | oliphant | 2007-05-23 17:04:51 -0500 (Wed, 23 May 2007) | 1 line Fix tab/space. ........ r3815 | oliphant | 2007-05-23 17:18:54 -0500 (Wed, 23 May 2007) | 1 line Update version number on trunk. ........ r3817 | edschofield | 2007-05-24 13:31:28 -0500 (Thu, 24 May 2007) | 2 lines Fix docstring typo for vstack() ........ r3818 | edschofield | 2007-05-24 13:40:58 -0500 (Thu, 24 May 2007) | 2 lines Fix docstring formatting for PackageLoader class ........ r3819 | edschofield | 2007-05-24 13:48:47 -0500 (Thu, 24 May 2007) | 2 lines Improve docstring formatting for NumpyTest ........ r3820 | edschofield | 2007-05-24 13:52:25 -0500 (Thu, 24 May 2007) | 2 lines Change scipy -> numpy in who() docstring ........ r3821 | edschofield | 2007-05-24 14:17:17 -0500 (Thu, 24 May 2007) | 3 lines Fix the formatting of docstrings for all functions in fromnumeric.py so they don't wrap when using help() from an 80-character terminal. ........ r3822 | pearu | 2007-05-25 03:16:26 -0500 (Fri, 25 May 2007) | 1 line fix ticket 526 ........ r3823 | cookedm | 2007-05-25 06:20:02 -0500 (Fri, 25 May 2007) | 5 lines merge from distutils-revamp branch (step 1) - minor cleanups - find_executable returns None when no file found (instead of having to check with os.path.isfile) ........ r3824 | cookedm | 2007-05-25 06:41:16 -0500 (Fri, 25 May 2007) | 5 lines merge from distutils-revamp branch (step 2) - fcompiler changes. All flags, executables, etc., should be overridable by the user with config_fc (either command line or setup.cfg) or by environment variables ........ r3825 | cookedm | 2007-05-25 06:41:55 -0500 (Fri, 25 May 2007) | 3 lines merge from distutils-revamp branch (step 3) - minor command/build_src cleanup ........ r3826 | cookedm | 2007-05-25 09:47:42 -0500 (Fri, 25 May 2007) | 4 lines Add a numpy.distutils.log.good function, which when WARN messages would be logged, logs a "nice" anti-warn version. Use this for finding executables to report when we do actually find one. ........ r3827 | cookedm | 2007-05-25 10:05:55 -0500 (Fri, 25 May 2007) | 2 lines Add a few more log.debug's in fcompiler ........ r3828 | cookedm | 2007-05-25 10:17:20 -0500 (Fri, 25 May 2007) | 1 line Fix at least one bug in fcompiler introduced by me ........ r3829 | cookedm | 2007-05-25 14:38:39 -0500 (Fri, 25 May 2007) | 1 line distutils: clean up imports (found by running pyflakes) ........ r3831 | stefan | 2007-05-28 06:51:55 -0500 (Mon, 28 May 2007) | 2 lines Select should not modify output arguments. Add test for basic select functionality. ........ r3832 | stefan | 2007-05-28 07:55:25 -0500 (Mon, 28 May 2007) | 2 lines Clean up select docstring. ........ r3833 | cookedm | 2007-05-28 12:24:26 -0500 (Mon, 28 May 2007) | 2 lines When checking for the _WIN32 preprocessor symbol, don't #error on failure ........ r3834 | cookedm | 2007-05-28 12:47:47 -0500 (Mon, 28 May 2007) | 2 lines Use log.info instead of print in setup.py's ........ 
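[Editorial note] r3834/r3836 above (and the build_src and f2py/setup.py hunks earlier in this digest) swap bare print statements for the numpy.distutils logger, so messages honour the verbosity selected on the command line. A minimal configuration() fragment in that style; the package name 'mypkg' is a placeholder:

    # Hypothetical setup.py fragment using the logging style these commits adopt.
    from numpy.distutils import log
    from numpy.distutils.misc_util import Configuration

    def configuration(parent_package='', top_path=None):
        config = Configuration('mypkg', parent_package, top_path)
        # log.info/log.debug respect --verbose/--quiet, unlike print.
        log.info('configuring %s', config.name)
        return config

    if __name__ == '__main__':
        from numpy.distutils.core import setup
        setup(configuration=configuration)
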
r3835 | cookedm | 2007-05-28 12:48:14 -0500 (Mon, 28 May 2007) | 2 lines numpy/setup.py shouldn't be run as a script ........ r3836 | cookedm | 2007-05-28 12:48:49 -0500 (Mon, 28 May 2007) | 1 line Use log.debug instead of print in setup.py's ........ r3837 | cookedm | 2007-05-28 12:49:55 -0500 (Mon, 28 May 2007) | 3 lines Better temporary file handling by using one temporary directory for numpy.distutils, and removing that at exit. Replaces using tempfile.mktemp. ........ r3838 | cookedm | 2007-05-28 12:59:49 -0500 (Mon, 28 May 2007) | 1 line fix my mistake in fcompiler/absoft.py and fcompiler/intel.py ........ r3839 | cookedm | 2007-05-28 13:00:22 -0500 (Mon, 28 May 2007) | 2 lines Remove interactive support. No one uses it. ........ Property changes on: branches/multicore ___________________________________________________________________ Name: svnmerge-integrated - /branches/distutils-revamp:1-2752 /trunk:1-3747 + /branches/distutils-revamp:1-2752 /trunk:1-3839 Modified: branches/multicore/numpy/_import_tools.py =================================================================== --- branches/multicore/numpy/_import_tools.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/_import_tools.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -133,10 +133,10 @@ Usage: - This function is intended to shorten the need to import many of + This function is intended to shorten the need to import many subpackages, say of scipy, constantly with statements such as - import scipy.linalg, scipy.fftpack, scipy.etc... + import scipy.linalg, scipy.fftpack, scipy.etc... Instead, you can say: @@ -154,18 +154,19 @@ Inputs: - - the names (one or more strings) of all the numpy modules one wishes to - load into the top-level namespace. + - the names (one or more strings) of all the numpy modules one + wishes to load into the top-level namespace. Optional keyword inputs: - verbose - integer specifying verbosity level [default: -1]. verbose=-1 will suspend also warnings. - - force - when True, force reloading loaded packages [default: False]. + - force - when True, force reloading loaded packages + [default: False]. - postpone - when True, don't load packages [default: False] - If no input arguments are given, then all of scipy's subpackages are - imported. + If no input arguments are given, then all of scipy's subpackages + are imported. """ frame = self.parent_frame Modified: branches/multicore/numpy/add_newdocs.py =================================================================== --- branches/multicore/numpy/add_newdocs.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/add_newdocs.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -332,7 +332,7 @@ add_newdoc('numpy.core.multiarray','set_numeric_ops', """set_numeric_ops(op=func, ...) - Set some or all of the number methods for all array objects. Don't + Set some or all of the number methods for all array objects. Do not forget **dict can be used as the argument list. Return the functions that were replaced, which can be stored and set later. @@ -356,18 +356,37 @@ add_newdoc('numpy.core.multiarray','lexsort', - """lexsort(keys=, axis=-1) -> array of indices. argsort with list of keys. + """lexsort(keys=, axis=-1) -> array of indices. Argsort with list of keys. - Return an array of indices similar to argsort, except the sorting is - done using the provided sorting keys. First the sort is done using - key[0], then the resulting list of indices is further manipulated by - sorting on key[1], and so forth. The result is a sort on multiple - keys. 
If the keys represented columns of a spreadsheet, for example, - this would sort using multiple columns (the last key being used for the - primary sort order, the second-to-last key for the secondary sort order, - and so on). The keys argument must be a sequence of things that can be - converted to arrays of the same shape. + Perform an indirect sort using a list of keys. The first key is sorted, + then the second, and so on through the list of keys. At each step the + previous order is preserved when equal keys are encountered. The result is + a sort on multiple keys. If the keys represented columns of a spreadsheet, + for example, this would sort using multiple columns (the last key being + used for the primary sort order, the second-to-last key for the secondary + sort order, and so on). The keys argument must be a sequence of things + that can be converted to arrays of the same shape. + :Parameters: + + a : array type + Array containing values that the returned indices should sort. + + axis : integer + Axis to be indirectly sorted. None indicates that the flattened + array should be used. Default is -1. + + :Returns: + + indices : integer array + Array of indices that sort the keys along the specified axis. The + array has the same shape as the keys. + + :SeeAlso: + + - argsort : indirect sort + - sort : inplace sort + """) add_newdoc('numpy.core.multiarray','can_cast', @@ -650,24 +669,38 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('argsort', """a.argsort(axis=-1, kind='quicksort', order=None) -> indices - Return array of indices that sort a along the given axis. + Perform an indirect sort along the given axis using the algorithm specified + by the kind keyword. It returns an array of indices of the same shape as + 'a' that index data along the given axis in sorted order. - Keyword arguments: + :Parameters: - axis -- axis to be indirectly sorted (default -1) - kind -- sorting algorithm (default 'quicksort') - Possible values: 'quicksort', 'mergesort', or 'heapsort' - order -- If a has fields defined, then the order keyword can be the - field name to sort on or a list (or tuple) of field names - to indicate the order that fields should be used to define - the sort. + axis : integer + Axis to be indirectly sorted. None indicates that the flattened + array should be used. Default is -1. - Returns: array of indices that sort a along the specified axis. + kind : string + Sorting algorithm to use. Possible values are 'quicksort', + 'mergesort', or 'heapsort'. Default is 'quicksort'. - This method executes an indirect sort along the given axis using the - algorithm specified by the kind keyword. It returns an array of indices of - the same shape as 'a' that index data along the given axis in sorted order. + order : list type or None + When a is an array with fields defined, this argument specifies + which fields to compare first, second, etc. Not all fields need be + specified. + :Returns: + + indices : integer array + Array of indices that sort 'a' along the specified axis. + + :SeeAlso: + + - lexsort : indirect stable sort with multiple keys + - sort : inplace sort + + :Notes: + ------ + The various sorts are characterized by average speed, worst case performance, need for work space, and whether they are stable. A stable sort keeps items with the same key in the same relative order. 
The three @@ -681,9 +714,9 @@ |'heapsort' | 3 | O(n*log(n)) | 0 | no | |------------------------------------------------------| - All the sort algorithms make temporary copies of the data when the sort is - not along the last axis. Consequently, sorts along the last axis are faster - and use less space than sorts along other axis. + All the sort algorithms make temporary copies of the data when the sort is not + along the last axis. Consequently, sorts along the last axis are faster and use + less space than sorts along other axis. """)) @@ -771,8 +804,59 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('diagonal', - """a.diagonal(offset=0, axis1=0, axis2=1) + """a.diagonal(offset=0, axis1=0, axis2=1) -> diagonals + If a is 2-d, return the diagonal of self with the given offset, i.e., the + collection of elements of the form a[i,i+offset]. If a is n-d with n > 2, + then the axes specified by axis1 and axis2 are used to determine the 2-d + subarray whose diagonal is returned. The shape of the resulting array can + be determined by removing axis1 and axis2 and appending an index to the + right equal to the size of the resulting diagonals. + + :Parameters: + offset : integer + Offset of the diagonal from the main diagonal. Can be both positive + and negative. Defaults to main diagonal. + axis1 : integer + Axis to be used as the first axis of the 2-d subarrays from which + the diagonals should be taken. Defaults to first index. + axis2 : integer + Axis to be used as the second axis of the 2-d subarrays from which + the diagonals should be taken. Defaults to second index. + + :Returns: + array_of_diagonals : same type as original array + If a is 2-d, then a 1-d array containing the diagonal is returned. + If a is n-d, n > 2, then an array of diagonals is returned. + + :SeeAlso: + - diag : matlab workalike for 1-d and 2-d arrays. + - diagflat : creates diagonal arrays + - trace : sum along diagonals + + Examples + -------- + + >>> a = arange(4).reshape(2,2) + >>> a + array([[0, 1], + [2, 3]]) + >>> a.diagonal() + array([0, 3]) + >>> a.diagonal(1) + array([1]) + + >>> a = arange(8).reshape(2,2,2) + >>> a + array([[[0, 1], + [2, 3]], + + [[4, 5], + [6, 7]]]) + >>> a.diagonal(0,-2,-1) + array([[0, 3], + [4, 7]]) + """)) @@ -810,7 +894,7 @@ """a.getfield(dtype, offset) -> field of array as given type. Returns a field of the given array as a certain type. A field is a view of - the array's data with each itemsize determined by the given type and the + the array data with each itemsize determined by the given type and the offset into the current array. """)) @@ -832,16 +916,44 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('mean', - """a.mean(axis=None, dtype=None) + """a.mean(axis=None, dtype=None, out=None) -> mean - Average the array over the given axis. If the axis is None, - average over all dimensions of the array. Equivalent to + Returns the average of the array elements. The average is taken over the + flattened array by default, otherwise over the specified axis. - a.sum(axis, dtype) / size(a, axis). + :Parameters: - The optional dtype argument is the data type for intermediate - calculations in the sum. + axis : integer + Axis along which the means are computed. The default is + to compute the standard deviation of the flattened array. + dtype : type + Type to use in computing the means. For arrays of + integer type the default is float32, for arrays of float types it + is the same as the array type. + + out : ndarray + Alternative output array in which to place the result. 
It must have + the same shape as the expected output but the type will be cast if + necessary. + + :Returns: + + mean : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. + + :SeeAlso: + + - var : variance + - std : standard deviation + + Notes + ----- + + The mean is the sum of the elements along the axis divided by the + number of elements. + """)) @@ -921,7 +1033,7 @@ Return a new array from this one. The new array must have the same number of elements as self. Also always returns a view or raises a ValueError if - that is impossible.; + that is impossible. """)) @@ -966,46 +1078,38 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('searchsorted', """a.searchsorted(v, side='left') -> index array. - Required arguments: - v -- array of keys to be searched for in a. + Find the indices into a sorted array such that if the corresponding keys in + v were inserted before the indices the order of a would be preserved. If + side='left', then the first such index is returned. If side='right', then + the last such index is returned. If there is no such index because the key + is out of bounds, then the length of a is returned, i.e., the key would + need to be appended. The returned index array has the same shape as v. - Keyword arguments: - side -- {'left', 'right'}, (default 'left'). + :Parameters: - Returns: - index array with the same shape as keys. + v : array or list type + Array of keys to be searched for in a. - The array to be searched must be 1-D and is assumed to be sorted in - ascending order. + side : string + Possible values are : 'left', 'right'. Default is 'left'. Return + the first or last index where the key could be inserted. - The method call + :Returns: - a.searchsorted(v, side='left') + indices : integer array + The returned array has the same shape as v. - returns an index array with the same shape as v such that for each value i - in the index and the corresponding key in v the following holds: + :SeeAlso: - a[j] < key <= a[i] for all j < i, + - sort + - histogram - If such an index does not exist, a.size() is used. Consequently, i is the - index of the first item in 'a' that is >= key. If the key were to be - inserted into a in the slot before the index i, then the order of a would - be preserved and i would be the smallest index with that property. + :Notes: + ------- - The method call + The array a must be 1-d and is assumed to be sorted in ascending order. + Searchsorted uses binary search to find the required insertion points. - a.searchsorted(v, side='right') - - returns an index array with the same shape as v such that for each value i - in the index and the corresponding key in v the following holds: - - a[j] <= key < a[i] for all j < i, - - If such an index does not exist, a.size() is used. Consequently, i is the - index of the first item in 'a' that is > key. If the key were to be - inserted into a in the slot before the index i, then the order of a would - be preserved and i would be the largest index with that property. - """)) @@ -1025,28 +1129,41 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('sort', """a.sort(axis=-1, kind='quicksort', order=None) -> None. - Sort a along the given axis. + Perform an inplace sort along the given axis using the algorithm specified + by the kind keyword. 
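As a quick illustration of the searchsorted semantics spelled out above (the sorted sample array is arbitrary):

    >>> import numpy as np
    >>> a = np.array([1, 2, 3, 4, 5])         # must be 1-d and sorted ascending
    >>> a.searchsorted(3)                     # side='left': first valid insertion point
    2
    >>> a.searchsorted(3, side='right')       # last valid insertion point
    3
    >>> a.searchsorted(9)                     # key out of bounds: len(a) is returned
    5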
- Keyword arguments: + :Parameters: - axis -- axis to be sorted (default -1) - kind -- sorting algorithm (default 'quicksort') - Possible values: 'quicksort', 'mergesort', or 'heapsort'. - order -- If a has fields defined, then the order keyword can be the - field name to sort on or a list (or tuple) of field names - to indicate the order that fields should be used to define - the sort. + axis : integer + Axis to be sorted along. None indicates that the flattened array + should be used. Default is -1. - Returns: None. + kind : string + Sorting algorithm to use. Possible values are 'quicksort', + 'mergesort', or 'heapsort'. Default is 'quicksort'. - This method sorts 'a' in place along the given axis using the algorithm - specified by the kind keyword. + order : list type or None + When a is an array with fields defined, this argument specifies + which fields to compare first, second, etc. Not all fields need be + specified. - The various sorts may characterized by average speed, worst case + :Returns: + + None + + :SeeAlso: + + - argsort : indirect sort + - lexsort : indirect stable sort on multiple keys + - searchsorted : find keys in sorted array + + :Notes: + ------ + + The various sorts are characterized by average speed, worst case performance, need for work space, and whether they are stable. A stable - sort keeps items with the same key in the same relative order and is most - useful when used with argsort where the key might differ from the items - being sorted. The three available algorithms have the following properties: + sort keeps items with the same key in the same relative order. The three + available algorithms have the following properties: |------------------------------------------------------| | kind | speed | worst case | work space | stable| @@ -1056,9 +1173,9 @@ |'heapsort' | 3 | O(n*log(n)) | 0 | no | |------------------------------------------------------| - All the sort algorithms make temporary copies of the data when the sort is - not along the last axis. Consequently, sorts along the last axis are faster - and use less space than sorts along other axis. + All the sort algorithms make temporary copies of the data when the sort is not + along the last axis. Consequently, sorts along the last axis are faster and use + less space than sorts along other axis. """)) @@ -1072,16 +1189,45 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('std', """a.std(axis=None, dtype=None, out=None) -> standard deviation. - The standard deviation isa measure of the spread of a - distribution. + Returns the standard deviation of the array elements, a measure of the + spread of a distribution. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. - The standard deviation is the square root of the average of the - squared deviations from the mean, i.e. - std = sqrt(mean((x - x.mean())**2,axis=0)). + :Parameters: - For multidimensional arrays, std is computed by default along the - first axis. + axis : integer + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : type + Type to use in computing the standard deviation. For arrays of + integer type the default is float32, for arrays of float types it + is the same as the array type. + + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. 
+ + :Returns: + + standard deviation : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. + + :SeeAlso: + + - var : variance + - mean : average + + Notes + ----- + + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e. var = sqrt(mean((x - x.mean())**2)). The + computed standard deviation is biased, i.e., the mean is computed by + dividing by the number of elements, N, rather than by N-1. + """)) @@ -1224,8 +1370,47 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('var', - """a.var(axis=None, dtype=None) + """a.var(axis=None, dtype=None, out=None) -> variance + Returns the variance of the array elements, a measure of the spread of a + distribution. The variance is computed for the flattened array by default, + otherwise over the specified axis. + + :Parameters: + + axis : integer + Axis along which the variance is computed. The default is to + compute the variance of the flattened array. + + dtype : type + Type to use in computing the variance. For arrays of integer type + the default is float32, for arrays of float types it is the same as + the array type. + + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + + :Returns: + + variance : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. + + :SeeAlso: + + - std : standard deviation + - mean: average + + Notes + ----- + + The variance is the average of the squared deviations from the mean, i.e. + var = mean((x - x.mean())**2). The computed variance is biased, i.e., + the mean is computed by dividing by the number of elements, N, rather + than by N-1. + """)) @@ -1235,4 +1420,3 @@ Type can be either a new sub-type object or a data-descriptor object """)) - Modified: branches/multicore/numpy/core/__init__.py =================================================================== --- branches/multicore/numpy/core/__init__.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/__init__.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -4,6 +4,7 @@ import multiarray import umath +import _internal # for freeze programs import numerictypes as nt multiarray.set_typeDict(nt.sctypeDict) import _sort Modified: branches/multicore/numpy/core/_internal.py =================================================================== --- branches/multicore/numpy/core/_internal.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/_internal.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -2,7 +2,6 @@ # that implements more complicated stuff. import re -from multiarray import dtype, ndarray import sys if (sys.byteorder == 'little'): @@ -11,6 +10,7 @@ _nbo = '>' def _makenames_list(adict): + from multiarray import dtype allfields = [] fnames = adict.keys() for fname in fnames: @@ -44,6 +44,7 @@ # a dictionary without "names" and "formats" # fields is used as a data-type descriptor. def _usefields(adict, align): + from multiarray import dtype try: names = adict[-1] except KeyError: @@ -109,6 +110,7 @@ # so don't remove the name here, or you'll # break backward compatibilty. 
def _reconstruct(subtype, shape, dtype): + from multiarray import ndarray return ndarray.__new__(subtype, shape, dtype) @@ -193,6 +195,7 @@ return result def _getintp_ctype(): + from multiarray import dtype val = _getintp_ctype.cache if val is not None: return val Modified: branches/multicore/numpy/core/arrayprint.py =================================================================== --- branches/multicore/numpy/core/arrayprint.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/arrayprint.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -344,6 +344,7 @@ def _floatFormat(data, precision, suppress_small, sign = 0): exp_format = 0 + errstate = _gen.seterr(all='ignore') non_zero = _uf.absolute(data.compress(_uf.not_equal(data, 0))) ##non_zero = _numeric_compress(data) ## if len(non_zero) == 0: @@ -357,6 +358,7 @@ if not suppress_small and (min_val < 0.0001 or max_val/min_val > 1000.): exp_format = 1 + _gen.seterr(**errstate) if exp_format: large_exponent = 0 < min_val < 1e-99 or max_val >= 1e100 max_str_len = 8 + precision + large_exponent Modified: branches/multicore/numpy/core/defmatrix.py =================================================================== --- branches/multicore/numpy/core/defmatrix.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/defmatrix.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -241,12 +241,137 @@ return N.ndarray.sum(self, axis, dtype, out)._align(axis) def mean(self, axis=None, out=None): + """Compute the mean along the specified axis. + + Returns the average of the array elements. The average is taken over + the flattened array by default, otherwise over the specified axis. + + :Parameters: + + axis : integer + Axis along which the means are computed. The default is + to compute the standard deviation of the flattened array. + + dtype : type + Type to use in computing the means. For arrays of integer type + the default is float32, for arrays of float types it is the + same as the array type. + + out : ndarray + Alternative output array in which to place the result. It must + have the same shape as the expected output but the type will be + cast if necessary. + + :Returns: + + mean : The return type varies, see above. + A new array holding the result is returned unless out is + specified, in which case a reference to out is returned. + + :SeeAlso: + + - var : variance + - std : standard deviation + + Notes + ----- + + The mean is the sum of the elements along the axis divided by the + number of elements. + + """ return N.ndarray.mean(self, axis, out)._align(axis) def std(self, axis=None, dtype=None, out=None): + """Compute the standard deviation along the specified axis. + + Returns the standard deviation of the array elements, a measure of the + spread of a distribution. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. + + :Parameters: + + axis : integer + Axis along which the standard deviation is computed. The + default is to compute the standard deviation of the flattened + array. + + dtype : type + Type to use in computing the standard deviation. For arrays of + integer type the default is float32, for arrays of float types + it is the same as the array type. + + out : ndarray + Alternative output array in which to place the result. It must + have the same shape as the expected output but the type will be + cast if necessary. + + :Returns: + + standard deviation : The return type varies, see above. 
+ A new array holding the result is returned unless out is + specified, in which case a reference to out is returned. + + :SeeAlso: + + - var : variance + - mean : average + + Notes + ----- + + The standard deviation is the square root of the average of the + squared deviations from the mean, i.e. var = sqrt(mean((x - + x.mean())**2)). The computed standard deviation is biased, i.e., the + mean is computed by dividing by the number of elements, N, rather + than by N-1. + + """ return N.ndarray.std(self, axis, dtype, out)._align(axis) def var(self, axis=None, dtype=None, out=None): + """Compute the variance along the specified axis. + + Returns the variance of the array elements, a measure of the spread of + a distribution. The variance is computed for the flattened array by + default, otherwise over the specified axis. + + :Parameters: + + axis : integer + Axis along which the variance is computed. The default is to + compute the variance of the flattened array. + + dtype : type + Type to use in computing the variance. For arrays of integer + type the default is float32, for arrays of float types it is + the same as the array type. + + out : ndarray + Alternative output array in which to place the result. It must + have the same shape as the expected output but the type will be + cast if necessary. + + :Returns: + + variance : depends, see above + A new array holding the result is returned unless out is + specified, in which case a reference to out is returned. + + :SeeAlso: + + - std : standard deviation + - mean : average + + Notes + ----- + + The variance is the average of the squared deviations from the mean, + i.e. var = mean((x - x.mean())**2). The computed variance is + biased, i.e., the mean is computed by dividing by the number of + elements, N, rather than by N-1. + + """ return N.ndarray.var(self, axis, dtype, out)._align(axis) def prod(self, axis=None, dtype=None, out=None): Modified: branches/multicore/numpy/core/fromnumeric.py =================================================================== --- branches/multicore/numpy/core/fromnumeric.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/fromnumeric.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,4 +1,5 @@ # Module containing non-deprecated functions borrowed from Numeric. +__docformat__ = "restructuredtext en" # functions that are now methods __all__ = ['take', 'reshape', 'choose', 'repeat', 'put', @@ -40,12 +41,13 @@ result = wrap(result) return result + def take(a, indices, axis=None, out=None, mode='raise'): """Return an array with values pulled from the given array at the given indices. - This function does the same thing as "fancy" indexing; however, it can be - easier to use if you need to specify a given axis. + This function does the same thing as "fancy" indexing; however, it can + be easier to use if you need to specify a given axis. :Parameters: - `a` : array @@ -53,12 +55,13 @@ - `indices` : int array The indices of the values to extract. - `axis` : None or int, optional (default=None) - The axis over which to select values. None signifies that the operation - should be performed over the flattened array. + The axis over which to select values. None signifies that the + operation should be performed over the flattened array. - `out` : array, optional - If provided, the result will be inserted into this array. It should be - of the appropriate shape and dtype. - - `mode` : one of 'raise', 'wrap', or 'clip', optional (default='raise') + If provided, the result will be inserted into this array. 
It should + be of the appropriate shape and dtype. + - `mode` : one of 'raise', 'wrap', or 'clip', optional + (default='raise') Specifies how out-of-bounds indices will behave. - 'raise' : raise an error - 'wrap' : wrap around @@ -76,6 +79,7 @@ return _wrapit(a, 'take', indices, axis, out, mode) return take(indices, axis, out, mode) + # not deprecated --- copy if necessary, view otherwise def reshape(a, newshape, order='C'): """Return an array that uses the data of the given array, but with a new @@ -92,8 +96,8 @@ :Returns: - `reshaped_array` : array - This will be a new view object if possible; otherwise, it will return - a copy. + This will be a new view object if possible; otherwise, it will + return a copy. :See also: numpy.ndarray.reshape() is the equivalent method. @@ -104,23 +108,25 @@ return _wrapit(a, 'reshape', newshape, order=order) return reshape(newshape, order=order) + def choose(a, choices, out=None, mode='raise'): """Use an index array to construct a new array from a set of choices. - Given an array of integers in {0, 1, ..., n-1} and a set of n choice arrays, - this function will create a new array that merges each of the choice arrays. - Where a value in `a` is i, then the new array will have the value that - choices[i] contains in the same place. + Given an array of integers in {0, 1, ..., n-1} and a set of n choice + arrays, this function will create a new array that merges each of the + choice arrays. Where a value in `a` is i, then the new array will have + the value that choices[i] contains in the same place. :Parameters: - `a` : int array - This array must contain integers in [0, n-1], where n is the number of - choices. + This array must contain integers in [0, n-1], where n is the number + of choices. - `choices` : sequence of arrays - Each of the choice arrays should have the same shape as the index array. + Each of the choice arrays should have the same shape as the index + array. - `out` : array, optional - If provided, the result will be inserted into this array. It should be - of the appropriate shape and dtype + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype - `mode` : one of 'raise', 'wrap', or 'clip', optional (default='raise') Specifies how out-of-bounds indices will behave. - 'raise' : raise an error @@ -134,7 +140,7 @@ numpy.ndarray.choose() is the equivalent method. :Example: - >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], + >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], ... [20, 21, 22, 23], [30, 31, 32, 33]] >>> choose([2, 3, 1, 0], choices) array([20, 31, 12, 3]) @@ -142,7 +148,7 @@ array([20, 31, 12, 3]) >>> choose([2, 4, 1, 0], choices, mode='wrap') array([20, 1, 12, 3]) - + """ try: choose = a.choose @@ -150,18 +156,20 @@ return _wrapit(a, 'choose', choices, out=out, mode=mode) return choose(choices, out=out, mode=mode) + def repeat(a, repeats, axis=None): """Repeat elements of an array. :Parameters: - `a` : array - `repeats` : int or int array - The number of repetitions for each element. If a plain integer, then it - is applied to all elements. If an array, it needs to be of the same - length as the chosen axis. + The number of repetitions for each element. If a plain integer, then + it is applied to all elements. If an array, it needs to be of the + same length as the chosen axis. - `axis` : None or int, optional (default=None) - The axis along which to repeat values. If None, then this function will - operated on the flattened array `a` and return a similarly flat result. 
+ The axis along which to repeat values. If None, then this function + will operated on the flattened array `a` and return a similarly flat + result. :Returns: - `repeated_array` : array @@ -174,7 +182,7 @@ array([0, 0, 1, 1, 2, 2]) >>> repeat([0, 1, 2], [2, 3, 4]) array([0, 0, 1, 1, 1, 2, 2, 2, 2]) - + """ try: repeat = a.repeat @@ -182,20 +190,21 @@ return _wrapit(a, 'repeat', repeats, axis) return repeat(repeats, axis) + def put (a, ind, v, mode='raise'): - """put(a, ind, v) results in a[n] = v[n] for all n in ind - If v is shorter than mask it will be repeated as necessary. - In particular v can be a scalar or length 1 array. - The routine put is the equivalent of the following (although the loop - is in C for speed): + """put(a, ind, v) results in a[n] = v[n] for all n in ind. If v is + shorter than mask it will be repeated as necessary. In particular v can + be a scalar or length 1 array. The routine put is the equivalent of the + following (although the loop is in C for speed): - ind = array(indices, copy=False) - v = array(values, copy=False).astype(a.dtype) - for i in ind: a.flat[i] = v[i] - a must be a contiguous numpy array. + ind = array(indices, copy=False) + v = array(values, copy=False).astype(a.dtype) + for i in ind: a.flat[i] = v[i] + a must be a contiguous numpy array. """ return a.put(ind, v, mode) + def swapaxes(a, axis1, axis2): """swapaxes(a, axis1, axis2) returns array a with axis1 and axis2 interchanged. @@ -206,10 +215,11 @@ return _wrapit(a, 'swapaxes', axis1, axis2) return swapaxes(axis1, axis2) + def transpose(a, axes=None): - """transpose(a, axes=None) returns a view of the array with - dimensions permuted according to axes. If axes is None - (default) returns array with dimensions reversed. + """transpose(a, axes=None) returns a view of the array with dimensions + permuted according to axes. If axes is None (default) returns array + with dimensions reversed. """ try: transpose = a.transpose @@ -217,44 +227,69 @@ return _wrapit(a, 'transpose', axes) return transpose(axes) + def sort(a, axis=-1, kind='quicksort', order=None): - """Returns copy of 'a' sorted along the given axis. + """Return copy of 'a' sorted along the given axis. - Keyword arguments: + *Description* - axis -- axis to be sorted (default -1). Can be None - to indicate that a flattened and sorted array should - be returned (the array method does not support this). - kind -- sorting algorithm (default 'quicksort') - Possible values: 'quicksort', 'mergesort', or 'heapsort'. - order -- For an array with fields defined, this argument allows - specification of which fields to compare first, second, - etc. Not all fields need be specified. + Perform an inplace sort along the given axis using the algorithm + specified by the kind keyword. + *Parameters*: - Returns: None. + a : array type + Array to be sorted. - This method sorts 'a' in place along the given axis using the algorithm - specified by the kind keyword. + axis : integer + Axis to be sorted along. None indicates that the flattened + array should be used. Default is -1. - The various sorts may characterized by average speed, worst case - performance, need for work space, and whether they are stable. A stable - sort keeps items with the same key in the same relative order and is most - useful when used with argsort where the key might differ from the items - being sorted. The three available algorithms have the following properties: + kind : string + Sorting algorithm to use. Possible values are 'quicksort', + 'mergesort', or 'heapsort'. 
Default is 'quicksort'. - |------------------------------------------------------| - | kind | speed | worst case | work space | stable| - |------------------------------------------------------| - |'quicksort'| 1 | O(n^2) | 0 | no | - |'mergesort'| 2 | O(n*log(n)) | ~n/2 | yes | - |'heapsort' | 3 | O(n*log(n)) | 0 | no | - |------------------------------------------------------| + order : list type or None + When a is an array with fields defined, this argument + specifies which fields to compare first, second, etc. Not + all fields need be specified. - All the sort algorithms make temporary copies of the data when the sort is - not along the last axis. Consequently, sorts along the last axis are faster - and use less space than sorts along other axis. + *Returns*: + sorted_array : type is unchanged. + + *SeeAlso*: + + argsort + Indirect sort + lexsort + Indirect stable sort on multiple keys + searchsorted + Find keys in sorted array + + *Notes* + + The various sorts are characterized by average speed, worst case + performance, need for work space, and whether they are stable. A + stable sort keeps items with the same key in the same relative + order. The three available algorithms have the following + properties: + + +-----------+-------+-------------+------------+-------+ + | kind | speed | worst case | work space | stable| + +===========+=======+=============+============+=======+ + | quicksort | 1 | O(n^2) | 0 | no | + +-----------+-------+-------------+------------+-------+ + | mergesort | 2 | O(n*log(n)) | ~n/2 | yes | + +-----------+-------+-------------+------------+-------+ + | heapsort | 3 | O(n*log(n)) | 0 | no | + +-----------+-------+-------------+------------+-------+ + + All the sort algorithms make temporary copies of the data when + the sort is not along the last axis. Consequently, sorts along + the last axis are faster and use less space than sorts along + other axis. + """ if axis is None: a = asanyarray(a).flatten() @@ -264,43 +299,70 @@ a.sort(axis, kind, order) return a + def argsort(a, axis=-1, kind='quicksort', order=None): """Returns array of indices that index 'a' in sorted order. - Keyword arguments: + *Description* - axis -- axis to be indirectly sorted (default -1) - Can be None to indicate return indices into the - flattened array. - kind -- sorting algorithm (default 'quicksort') - Possible values: 'quicksort', 'mergesort', or 'heapsort' - order -- For an array with fields defined, this argument allows - specification of which fields to compare first, second, - etc. Not all fields need be specified. + Perform an indirect sort along the given axis using the algorithm + specified by the kind keyword. It returns an array of indices of the + same shape as a that index data along the given axis in sorted order. - Returns: array of indices that sort 'a' along the specified axis. + *Parameters*: - This method executes an indirect sort along the given axis using the - algorithm specified by the kind keyword. It returns an array of indices of - the same shape as 'a' that index data along the given axis in sorted order. + a : array type + Array containing values that the returned indices should + sort. - The various sorts are characterized by average speed, worst case - performance, need for work space, and whether they are stable. A stable - sort keeps items with the same key in the same relative order. The three - available algorithms have the following properties: + axis : integer + Axis to be indirectly sorted. 
None indicates that the + flattened array should be used. Default is -1. - |------------------------------------------------------| - | kind | speed | worst case | work space | stable| - |------------------------------------------------------| - |'quicksort'| 1 | O(n^2) | 0 | no | - |'mergesort'| 2 | O(n*log(n)) | ~n/2 | yes | - |'heapsort' | 3 | O(n*log(n)) | 0 | no | - |------------------------------------------------------| + kind : string + Sorting algorithm to use. Possible values are 'quicksort', + 'mergesort', or 'heapsort'. Default is 'quicksort'. - All the sort algorithms make temporary copies of the data when the sort is not - along the last axis. Consequently, sorts along the last axis are faster and use - less space than sorts along other axis. + order : list type or None + When a is an array with fields defined, this argument + specifies which fields to compare first, second, etc. Not + all fields need be specified. + *Returns*: + + indices : integer array + Array of indices that sort 'a' along the specified axis. + + *SeeAlso*: + + lexsort + Indirect stable sort with multiple keys + sort + Inplace sort + + *Notes* + + The various sorts are characterized by average speed, worst case + performance, need for work space, and whether they are stable. A + stable sort keeps items with the same key in the same relative + order. The three available algorithms have the following + properties: + + +-----------+-------+-------------+------------+-------+ + | kind | speed | worst case | work space | stable| + +===========+=======+=============+============+=======+ + | quicksort | 1 | O(n^2) | 0 | no | + +-----------+-------+-------------+------------+-------+ + | mergesort | 2 | O(n*log(n)) | ~n/2 | yes | + +-----------+-------+-------------+------------+-------+ + | heapsort | 3 | O(n*log(n)) | 0 | no | + +-----------+-------+-------------+------------+-------+ + + All the sort algorithms make temporary copies of the data when + the sort is not along the last axis. Consequently, sorts along + the last axis are faster and use less space than sorts along + other axis. + """ try: argsort = a.argsort @@ -308,6 +370,7 @@ return _wrapit(a, 'argsort', axis, kind, order) return argsort(axis, kind, order) + def argmax(a, axis=None): """argmax(a,axis=None) returns the indices to the maximum value of the 1-D arrays along the given axis. @@ -318,6 +381,7 @@ return _wrapit(a, 'argmax', axis) return argmax(axis) + def argmin(a, axis=None): """argmin(a,axis=None) returns the indices to the minimum value of the 1-D arrays along the given axis. @@ -328,50 +392,53 @@ return _wrapit(a, 'argmin', axis) return argmin(axis) + def searchsorted(a, v, side='left'): - """-> index array. Inserting v[i] before a[index[i]] maintains a in order. + """Returns indices where keys in v should be inserted to maintain order. - Required arguments: - a -- sorted 1-D array to be searched. - v -- array of keys to be searched for in a. + *Description* - Keyword arguments: - side -- {'left', 'right'}, default('left'). + Find the indices into a sorted array such that if the + corresponding keys in v were inserted before the indices the + order of a would be preserved. If side='left', then the first + such index is returned. If side='right', then the last such index + is returned. If there is no such index because the key is out of + bounds, then the length of a is returned, i.e., the key would + need to be appended. The returned index array has the same shape + as v. - Returns: - array of indices with the same shape as v. 
+ *Parameters*: - The array to be searched must be 1-D and is assumed to be sorted in - ascending order. + a : array + 1-d array sorted in ascending order. - The function call + v : array or list type + Array of keys to be searched for in a. - searchsorted(a, v, side='left') + side : string + Possible values are : 'left', 'right'. Default is 'left'. + Return the first or last index where the key could be + inserted. - returns an index array with the same shape as v such that for each value i - in the index and the corresponding key in v the following holds: + *Returns*: - a[j] < key <= a[i] for all j < i, + indices : integer array + Array of insertion points with the same shape as v. - If such an index does not exist, a.size() is used. Consequently, i is the - index of the first item in 'a' that is >= key. If the key were to be - inserted into a in the slot before the index i, then the order of a would - be preserved and i would be the smallest index with that property. + *SeeAlso*: - The function call + sort + Inplace sort + histogram + Produce histogram from 1-d data - searchsorted(a, v, side='right') - returns an index array with the same shape as v such that for each value i - in the index and the corresponding key in v the following holds: + *Notes* - a[j] <= key < a[i] for all j < i, + The array a must be 1-d and is assumed to be sorted in ascending + order. Searchsorted uses binary search to find the required + insertion points. - If such an index does not exist, a.size() is used. Consequently, i is the - index of the first item in 'a' that is > key. If the key were to be - inserted into a in the slot before the index i, then the order of a would - be preserved and i would be the largest index with that property. - """ try: searchsorted = a.searchsorted @@ -379,13 +446,14 @@ return _wrapit(a, 'searchsorted', v, side) return searchsorted(v, side) + def resize(a, new_shape): """resize(a,new_shape) returns a new array with the specified shape. - The original array's total size can be any size. It - fills the new array with repeated copies of a. + The original array's total size can be any size. It fills the new + array with repeated copies of a. - Note that a.resize(new_shape) will fill array with 0's - beyond current definition of a. + Note that a.resize(new_shape) will fill array with 0's beyond current + definition of a. """ if isinstance(new_shape, (int, nt.integer)): @@ -410,6 +478,7 @@ return reshape(a, new_shape) + def squeeze(a): "Returns a with any ones from the shape of a removed" try: @@ -418,12 +487,75 @@ return _wrapit(a, 'squeeze') return squeeze() + def diagonal(a, offset=0, axis1=0, axis2=1): - """diagonal(a, offset=0, axis1=0, axis2=1) returns the given diagonals - defined by the last two dimensions of the array. + """Return specified diagonals. Uses first two indices by default. + + *Description* + + If a is 2-d, returns the diagonal of self with the given offset, + i.e., the collection of elements of the form a[i,i+offset]. If a is + n-d with n > 2, then the axes specified by axis1 and axis2 are used + to determine the 2-d subarray whose diagonal is returned. The shape + of the resulting array can be determined by removing axis1 and axis2 + and appending an index to the right equal to the size of the + resulting diagonals. + + *Parameters*: + + offset : integer + Offset of the diagonal from the main diagonal. Can be both + positive and negative. Defaults to main diagonal. 
+ + axis1 : integer + Axis to be used as the first axis of the 2-d subarrays from + which the diagonals should be taken. Defaults to first axis. + + axis2 : integer + Axis to be used as the second axis of the 2-d subarrays from + which the diagonals should be taken. Defaults to second axis. + + *Returns*: + + array_of_diagonals : type of original array + If a is 2-d, then a 1-d array containing the diagonal is + returned. + If a is n-d, n > 2, then an array of diagonals is returned. + + *SeeAlso*: + + diag : + Matlab workalike for 1-d and 2-d arrays + diagflat : + creates diagonal arrays + trace : + sum along diagonals + + *Examples*: + + >>> a = arange(4).reshape(2,2) + >>> a + array([[0, 1], + [2, 3]]) + >>> a.diagonal() + array([0, 3]) + >>> a.diagonal(1) + array([1]) + + >>> a = arange(8).reshape(2,2,2) + >>> a + array([[[0, 1], + [2, 3]], + [[4, 5], + [6, 7]]]) + >>> a.diagonal(0,-2,-1) + array([[0, 3], + [4, 7]]) + """ return asarray(a).diagonal(offset, axis1, axis2) + def trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None): """trace(a,offset=0, axis1=0, axis2=1) returns the sum along diagonals (defined by the last two dimenions) of the array. @@ -431,9 +563,9 @@ return asarray(a).trace(offset, axis1, axis2, dtype, out) def ravel(m,order='C'): - """ravel(m) returns a 1d array corresponding to all the elements of it's - argument. The new array is a view of m if possible, otherwise it is - a copy. + """ravel(m) returns a 1d array corresponding to all the elements of + its argument. The new array is a view of m if possible, otherwise it + is a copy. """ a = asarray(m) return a.ravel(order) @@ -450,8 +582,8 @@ return res def shape(a): - """shape(a) returns the shape of a (as a function call which - also works on nested sequences). + """shape(a) returns the shape of a (as a function call which also + works on nested sequences). """ try: result = a.shape @@ -661,23 +793,26 @@ out -- existing array to use for output (default copy of a). Returns: - Reference to out, where None specifies a copy of the original array a. + Reference to out, where None specifies a copy of the original + array a. - Round to the specified number of decimals. When 'decimals' is negative it - specifies the number of positions to the left of the decimal point. The - real and imaginary parts of complex numbers are rounded separately. - Nothing is done if the array is not of float type and 'decimals' is greater - than or equal to 0. + Round to the specified number of decimals. When 'decimals' is + negative it specifies the number of positions to the left of the + decimal point. The real and imaginary parts of complex numbers are + rounded separately. Nothing is done if the array is not of float + type and 'decimals' is greater than or equal to 0. - The keyword 'out' may be used to specify a different array to hold the - result rather than the default 'a'. If the type of the array specified by - 'out' differs from that of 'a', the result is cast to the new type, - otherwise the original type is kept. Floats round to floats by default. + The keyword 'out' may be used to specify a different array to hold + the result rather than the default 'a'. If the type of the array + specified by 'out' differs from that of 'a', the result is cast to + the new type, otherwise the original type is kept. Floats round to + floats by default. - Numpy rounds to even. Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 round to - 0.0, etc. 
Results may also be surprising due to the inexact representation - of decimal fractions in IEEE floating point and the errors introduced in - scaling the numbers when 'decimals' is something other than 0. + Numpy rounds to even. Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 + round to 0.0, etc. Results may also be surprising due to the inexact + representation of decimal fractions in IEEE floating point and the + errors introduced in scaling the numbers when 'decimals' is something + other than 0. The function around is an alias for round_. @@ -691,12 +826,48 @@ around = round_ def mean(a, axis=None, dtype=None, out=None): - """mean(a, axis=None, dtype=None) - Return the arithmetic mean. + """Compute the mean along the specified axis. - The mean is the sum of the elements divided by the number of elements. + *Description* - See also: average + Returns the average of the array elements. The average is taken + over the flattened array by default, otherwise over the specified + axis. + + *Parameters*: + + axis : integer + Axis along which the means are computed. The default is + to compute the standard deviation of the flattened array. + + dtype : type + Type to use in computing the means. For arrays of integer + type the default is float32, for arrays of float types it is + the same as the array type. + + out : ndarray + Alternative output array in which to place the result. It + must have the same shape as the expected output but the type + will be cast if necessary. + + *Returns*: + + mean : The return type varies, see above. + A new array holding the result is returned unless out is + specified, in which case a reference to out is returned. + + *SeeAlso*: + + var + Variance + std + Standard deviation + + *Notes* + + The mean is the sum of the elements along the axis divided by the + number of elements. + """ try: mean = a.mean @@ -704,14 +875,55 @@ return _wrapit(a, 'mean', axis, dtype, out) return mean(axis, dtype, out) + def std(a, axis=None, dtype=None, out=None): - """std(sample, axis=None, dtype=None) - Return the standard deviation, a measure of the spread of a distribution. + """Compute the standard deviation along the specified axis. - The standard deviation is the square root of the average of the squared - deviations from the mean, i.e. std = sqrt(mean((x - x.mean())**2)). + *Description* - See also: var + Returns the standard deviation of the array elements, a measure + of the spread of a distribution. The standard deviation is + computed for the flattened array by default, otherwise over the + specified axis. + + *Parameters*: + + axis : integer + Axis along which the standard deviation is computed. The + default is to compute the standard deviation of the flattened + array. + + dtype : type + Type to use in computing the standard deviation. For arrays + of integer type the default is float32, for arrays of float + types it is the same as the array type. + + out : ndarray + Alternative output array in which to place the result. It + must have the same shape as the expected output but the type + will be cast if necessary. + + *Returns*: + + standard_deviation : The return type varies, see above. + A new array holding the result is returned unless out is + specified, in which case a reference to out is returned. + + *SeeAlso*: + + var + Variance + mean + Average + + *Notes* + + The standard deviation is the square root of the average of the + squared deviations from the mean, i.e. var = sqrt(mean((x - + x.mean())**2)). 
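The biased (divide-by-N) definition given in the std and var docstrings above can be checked numerically; a small sketch with arbitrary sample data:

    >>> import numpy as np
    >>> x = np.array([1., 2., 4., 5.])
    >>> np.allclose(x.var(), ((x - x.mean())**2).mean())   # biased: divides by N
    True
    >>> np.allclose(x.std(), np.sqrt(x.var()))             # std is the square root of var
    True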
The computed standard deviation is biased, i.e., + the mean is computed by dividing by the number of elements, N, + rather than by N-1. + """ try: std = a.std @@ -719,14 +931,52 @@ return _wrapit(a, 'std', axis, dtype, out) return std(axis, dtype, out) + def var(a, axis=None, dtype=None, out=None): - """var(sample, axis=None, dtype=None) - Return the variance, a measure of the spread of a distribution. + """Compute the variance along the specified axis. - The variance is the average of the squared deviations from the mean, - i.e. var = mean((x - x.mean())**2). + *Description* - See also: std + Returns the variance of the array elements, a measure of the + spread of a distribution. The variance is computed for the + flattened array by default, otherwise over the specified axis. + + *Parameters*: + + axis : integer + Axis along which the variance is computed. The default is to + compute the variance of the flattened array. + + dtype : type + Type to use in computing the variance. For arrays of integer + type the default is float32, for arrays of float types it is + the same as the array type. + + out : ndarray + Alternative output array in which to place the result. It + must have the same shape as the expected output but the type + will be cast if necessary. + + *Returns*: + + variance : depends, see above + A new array holding the result is returned unless out is + specified, in which case a reference to out is returned. + + *SeeAlso*: + + std + Standard deviation + mean + Average + + *Notes* + + The variance is the average of the squared deviations from the + mean, i.e. var = mean((x - x.mean())**2). The computed variance + is biased, i.e., the mean is computed by dividing by the number + of elements, N, rather than by N-1. + """ try: var = a.var Modified: branches/multicore/numpy/core/include/numpy/fenv/fenv.c =================================================================== --- branches/multicore/numpy/core/include/numpy/fenv/fenv.c 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/include/numpy/fenv/fenv.c 2007-05-28 18:35:05 UTC (rev 3840) @@ -29,7 +29,7 @@ #include #include "fenv.h" -const fenv_t __fe_dfl_env = { +const fenv_t npy__fe_dfl_env = { 0xffff0000, 0xffff0000, 0xffffffff, Modified: branches/multicore/numpy/core/include/numpy/fenv/fenv.h =================================================================== --- branches/multicore/numpy/core/include/numpy/fenv/fenv.h 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/include/numpy/fenv/fenv.h 2007-05-28 18:35:05 UTC (rev 3840) @@ -62,8 +62,8 @@ __BEGIN_DECLS /* Default floating-point environment */ -extern const fenv_t __fe_dfl_env; -#define FE_DFL_ENV (&__fe_dfl_env) +extern const fenv_t npy__fe_dfl_env; +#define FE_DFL_ENV (&npy__fe_dfl_env) #define __fldcw(__cw) __asm __volatile("fldcw %0" : : "m" (__cw)) #define __fldenv(__env) __asm __volatile("fldenv %0" : : "m" (__env)) Modified: branches/multicore/numpy/core/include/numpy/ndarrayobject.h =================================================================== --- branches/multicore/numpy/core/include/numpy/ndarrayobject.h 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/include/numpy/ndarrayobject.h 2007-05-28 18:35:05 UTC (rev 3840) @@ -1234,6 +1234,13 @@ #define fortran fortran_ /* For some compilers */ +/* Array Flags Object */ +typedef struct PyArrayFlagsObject { + PyObject_HEAD + PyObject *arr; + int flags; +} PyArrayFlagsObject; + /* Mirrors buffer object to ptr */ typedef struct { @@ -1777,7 +1784,7 @@ /* This 
is the form of the struct that's returned pointed by the PyCObject attribute of an array __array_struct__. See - http://numeric.scipy.org/array_interface.html for the full + http://numpy.scipy.org/array_interface.shtml for the full documentation. */ typedef struct { int two; /* contains the integer 2 as a sanity check */ Modified: branches/multicore/numpy/core/ma.py =================================================================== --- branches/multicore/numpy/core/ma.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/ma.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -797,7 +797,13 @@ m = self._mask dout = self._data[i] if m is nomask: - return dout + try: + if dout.size == 1: + return dout + else: + return masked_array(dout, fill_value=self._fill_value) + except AttributeError: + return dout mi = m[i] if mi.size == 1: if mi: @@ -807,16 +813,6 @@ else: return masked_array(dout, mi, fill_value=self._fill_value) - def __getslice__(self, i, j): - "Get slice described by i, j" - self.unshare_mask() - m = self._mask - dout = self._data[i:j] - if m is nomask: - return masked_array(dout, fill_value=self._fill_value) - else: - return masked_array(dout, mask = m[i:j], fill_value=self._fill_value) - # -------- # setitem and setslice notes # note that if value is masked, it means to mask those locations. @@ -826,7 +822,7 @@ "Set item described by index. If value is masked, mask those locations." d = self._data if self is masked: - raise MAError, 'Cannot alter the masked element.' + raise MAError, 'Cannot alter masked elements.' if value is masked: if self._mask is nomask: self._mask = make_mask_none(d.shape) @@ -850,30 +846,6 @@ self.unshare_mask() self._mask[index] = m - def __setslice__(self, i, j, value): - "Set slice i:j; if value is masked, mask those locations." - d = self._data - if self is masked: - raise MAError, "Cannot alter the 'masked' object." - if value is masked: - if self._mask is nomask: - self._mask = make_mask_none(d.shape) - self._shared_mask = False - self._mask[i:j] = True - return - m = getmask(value) - value = filled(value).astype(d.dtype) - d[i:j] = value - if m is nomask: - if self._mask is not nomask: - self.unshare_mask() - self._mask[i:j] = False - else: - if self._mask is nomask: - self._mask = make_mask_none(self._data.shape) - self._shared_mask = False - self._mask[i:j] = m - def __nonzero__(self): """returns true if any element is non-zero or masked Modified: branches/multicore/numpy/core/numeric.py =================================================================== --- branches/multicore/numpy/core/numeric.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/numeric.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -818,10 +818,10 @@ a = array([1]+n*[0],dtype=dtype) b = empty((n,n),dtype=dtype) - # Note that this assignment depends on the convention that since the a array - # is shorter than the flattened b array, then the a array will be repeated - # until it is the appropriate size. Given a's construction, this nicely sets - # the diagonal to all ones. + # Note that this assignment depends on the convention that since the a + # array is shorter than the flattened b array, then the a array will + # be repeated until it is the appropriate size. Given a's construction, + # this nicely sets the diagonal to all ones. 
b.flat = a return b @@ -835,10 +835,22 @@ """ x = array(a, copy=False) y = array(b, copy=False) - d = less_equal(absolute(x-y), atol + rtol * absolute(y)) - return d.ravel().all() + d1 = less_equal(absolute(x-y), atol + rtol * absolute(y)) + xinf = isinf(x) + yinf = isinf(y) + if (not xinf.any() and not yinf.any()): + return d1.all() + d3 = (x[xinf] == y[yinf]) + d4 = (~xinf & ~yinf) + if d3.size < 2: + if d3.size==0: + return False + return d3 + if d3.all(): + return d1[d4].all() + else: + return False - def array_equal(a1, a2): try: a1, a2 = asarray(a1), asarray(a2) Modified: branches/multicore/numpy/core/records.py =================================================================== --- branches/multicore/numpy/core/records.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/records.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -133,11 +133,15 @@ if res: obj = self.getfield(*res[:2]) # if it has fields return a recarray, - # if it's a string return 'SU' return a chararray - # otherwise return a normal array - if obj.dtype.fields: + # if it's a string ('SU') return a chararray + # otherwise return the object + try: + dt = obj.dtype + except AttributeError: + return obj + if dt.fields: return obj.view(obj.__class__) - if obj.dtype.char in 'SU': + if dt.char in 'SU': return obj.view(chararray) return obj else: @@ -160,6 +164,16 @@ raise AttributeError, "'record' object has no "\ "attribute '%s'" % attr + def pprint(self): + # pretty-print all fields + names = self.dtype.names + maxlen = max([len(name) for name in names]) + rows = [] + fmt = '%% %ds: %%s' %maxlen + for name in names: + rows.append(fmt%(name, getattr(self, name))) + return "\n".join(rows) + # The recarray is almost identical to a standard array (which supports # named fields already) The biggest difference is that it can use # attribute-lookup to find the fields and it is constructed using Modified: branches/multicore/numpy/core/setup.py =================================================================== --- branches/multicore/numpy/core/setup.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/setup.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -2,6 +2,7 @@ import os import sys from os.path import join +from numpy.distutils import log from distutils.dep_util import newer FUNCTIONS_TO_CHECK = [ @@ -37,8 +38,7 @@ target = join(build_dir,'config.h') if newer(__file__,target): config_cmd = config.get_config_cmd() - print 'Generating',target - # + log.info('Generating %s',target) tc = generate_testcode(target) from distutils import sysconfig python_include = sysconfig.get_python_inc() @@ -145,7 +145,7 @@ sys.path.insert(0, codegen_dir) try: m = __import__(module_name) - print 'executing', script + log.info('executing %s', script) h_file, c_file, doc_file = m.generate_api(build_dir) finally: del sys.path[0] Modified: branches/multicore/numpy/core/src/arraymethods.c =================================================================== --- branches/multicore/numpy/core/src/arraymethods.c 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/src/arraymethods.c 2007-05-28 18:35:05 UTC (rev 3840) @@ -653,14 +653,19 @@ &descr)) return NULL; if (descr == self->descr) { - obj = _ARET(PyArray_NewCopy(self,0)); + obj = _ARET(PyArray_NewCopy(self,NPY_ANYORDER)); Py_XDECREF(descr); return obj; } if (descr->names != NULL) { - return PyArray_FromArray(self, descr, NPY_FORCECAST); + int flags; + flags = NPY_FORCECAST; + if (PyArray_ISFORTRAN(self)) { + flags |= NPY_FORTRAN; + } + return 
PyArray_FromArray(self, descr, flags); } - return PyArray_CastToType(self, descr, 0); + return PyArray_CastToType(self, descr, PyArray_ISFORTRAN(self)); } /* default sub-type implementation */ @@ -867,14 +872,18 @@ if (order == Py_None) order = NULL; if (order != NULL) { PyObject *new_name; + PyObject *_numpy_internal; saved = self->descr; if (saved->names == NULL) { PyErr_SetString(PyExc_ValueError, "Cannot specify " \ "order when the array has no fields."); return NULL; } + _numpy_internal = PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal == NULL) return NULL; new_name = PyObject_CallMethod(_numpy_internal, "_newnames", "OO", saved, order); + Py_DECREF(_numpy_internal); if (new_name == NULL) return NULL; newd = PyArray_DescrNew(saved); newd->names = new_name; @@ -909,14 +918,18 @@ if (order == Py_None) order = NULL; if (order != NULL) { PyObject *new_name; + PyObject *_numpy_internal; saved = self->descr; if (saved->names == NULL) { PyErr_SetString(PyExc_ValueError, "Cannot specify " \ "order when the array has no fields."); return NULL; } + _numpy_internal = PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal == NULL) return NULL; new_name = PyObject_CallMethod(_numpy_internal, "_newnames", "OO", saved, order); + Py_DECREF(_numpy_internal); if (new_name == NULL) return NULL; newd = PyArray_DescrNew(saved); newd->names = new_name; Modified: branches/multicore/numpy/core/src/arrayobject.c =================================================================== --- branches/multicore/numpy/core/src/arrayobject.c 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/src/arrayobject.c 2007-05-28 18:35:05 UTC (rev 3840) @@ -1378,13 +1378,13 @@ byte_swap_vector(destptr, length, 4); #else /* need aligned data buffer */ - if (!PyArray_ISBEHAVED(base)) { + if ((swap) || ((((intp)data) % descr->alignment) != 0)) { buffer = _pya_malloc(itemsize); if (buffer == NULL) return PyErr_NoMemory(); alloc = 1; memcpy(buffer, data, itemsize); - if (!PyArray_ISNOTSWAPPED(base)) { + if (swap) { byte_swap_vector(buffer, itemsize >> 2, 4); } @@ -6131,9 +6131,15 @@ static PyObject * array_ctypes_get(PyArrayObject *self) { - return PyObject_CallMethod(_numpy_internal, "_ctypes", - "ON", self, - PyLong_FromVoidPtr(self->data)); + PyObject *_numpy_internal; + PyObject *ret; + _numpy_internal = PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal == NULL) return NULL; + ret = PyObject_CallMethod(_numpy_internal, "_ctypes", + "ON", self, + PyLong_FromVoidPtr(self->data)); + Py_DECREF(_numpy_internal); + return ret; } static PyObject * @@ -6859,10 +6865,7 @@ if ((nd == 0) || PyString_Check(s) || \ PyUnicode_Check(s) || PyBuffer_Check(s)) { - if PyUnicode_Check(s) - *itemsize = MAX(*itemsize, 4*n); - else - *itemsize = MAX(*itemsize, n); + *itemsize = MAX(*itemsize, n); return 0; } for (i=0; itype_num > mintype->type_num) outtype_num = chktype->type_num; - else - outtype_num = mintype->type_num; + else { + if (PyDataType_ISOBJECT(chktype) && \ + PyDataType_ISSTRING(mintype)) { + return PyArray_DescrFromType(NPY_OBJECT); + } + else { + outtype_num = mintype->type_num; + } + } save_num = outtype_num; while(outtype_num < PyArray_NTYPES && @@ -10839,6 +10850,7 @@ arraydescr_protocol_descr_get(PyArray_Descr *self) { PyObject *dobj, *res; + PyObject *_numpy_internal; if (self->names == NULL) { /* get default */ @@ -10853,8 +10865,12 @@ return res; } - return PyObject_CallMethod(_numpy_internal, "_array_descr", - "O", self); + _numpy_internal = 
PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal == NULL) return NULL; + res = PyObject_CallMethod(_numpy_internal, "_array_descr", + "O", self); + Py_DECREF(_numpy_internal); + return res; } /* returns 1 for a builtin type @@ -11632,12 +11648,6 @@ /** Array Flags Object **/ -typedef struct PyArrayFlagsObject { - PyObject_HEAD - PyObject *arr; - int flags; -} PyArrayFlagsObject; - /*OBJECT_API Get New ArrayFlagsObject */ Modified: branches/multicore/numpy/core/src/multiarraymodule.c =================================================================== --- branches/multicore/numpy/core/src/multiarraymodule.c 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/src/multiarraymodule.c 2007-05-28 18:35:05 UTC (rev 3840) @@ -23,12 +23,10 @@ #include "numpy/arrayobject.h" #define PyAO PyArrayObject + static PyObject *typeDict=NULL; /* Must be explicitly loaded */ -static PyObject *_numpy_internal=NULL; /* A Python module for callbacks */ -static int _multiarray_module_loaded=0; - static PyArray_Descr * _arraydescr_fromobj(PyObject *obj) { @@ -4861,10 +4859,14 @@ { PyObject *listobj; PyArray_Descr *res; + PyObject *_numpy_internal; if (!PyString_Check(obj)) return NULL; + _numpy_internal = PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal == NULL) return NULL; listobj = PyObject_CallMethod(_numpy_internal, "_commastring", "O", obj); + Py_DECREF(_numpy_internal); if (!listobj) return NULL; if (!PyList_Check(listobj) || PyList_GET_SIZE(listobj)<1) { PyErr_SetString(PyExc_RuntimeError, "_commastring is " \ @@ -4928,9 +4930,15 @@ static PyArray_Descr * _use_fields_dict(PyObject *obj, int align) { - return (PyArray_Descr *)PyObject_CallMethod(_numpy_internal, - "_usefields", - "Oi", obj, align); + PyObject *_numpy_internal; + PyArray_Descr *res; + _numpy_internal = PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal == NULL) return NULL; + res = (PyArray_Descr *)PyObject_CallMethod(_numpy_internal, + "_usefields", + "Oi", obj, align); + Py_DECREF(_numpy_internal); + return res; } static PyArray_Descr * @@ -6358,7 +6366,7 @@ } if (i < count) { - PyErr_SetString(PyExc_ValueError, "iteratable too short"); + PyErr_SetString(PyExc_ValueError, "iterator too short"); goto done; } @@ -6641,7 +6649,15 @@ double value; *next = PyNumber_Subtract(stop, start); - if (!(*next)) return -1; + if (!(*next)) { + if (PyTuple_Check(stop)) { + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, + "arange: scalar arguments expected "\ + "instead of a tuple."); + } + return -1; + } val = PyNumber_TrueDivide(*next, step); Py_DECREF(*next); *next=NULL; if (!val) return -1; @@ -7493,8 +7509,6 @@ PyObject *m, *d, *s; PyObject *c_api; - if (_multiarray_module_loaded) return; - _multiarray_module_loaded = 1; /* Create the module and add the functions */ m = Py_InitModule("multiarray", array_module_methods); if (!m) goto err; @@ -7579,11 +7593,8 @@ set_flaginfo(d); if (set_typeinfo(d) != 0) goto err; + return; - _numpy_internal = \ - PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal != NULL) return; - err: if (!PyErr_Occurred()) { PyErr_SetString(PyExc_RuntimeError, Modified: branches/multicore/numpy/core/src/scalarmathmodule.c.src =================================================================== --- branches/multicore/numpy/core/src/scalarmathmodule.c.src 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/src/scalarmathmodule.c.src 2007-05-28 18:35:05 UTC (rev 3840) @@ -218,7 +218,14 @@ } #endif else { +#if @neg@ + @name@ 
tmp; + tmp = a / b; + if (((a > 0) != (b > 0)) && (a % b != 0)) tmp--; + *out = tmp; +#else *out = a / b; +#endif } } #define @name at _ctype_floor_divide @name at _ctype_divide Modified: branches/multicore/numpy/core/src/scalartypes.inc.src =================================================================== --- branches/multicore/numpy/core/src/scalartypes.inc.src 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/src/scalartypes.inc.src 2007-05-28 18:35:05 UTC (rev 3840) @@ -158,6 +158,9 @@ where only a reference for flexible types is returned */ +/* This may not work right on narrow builds for NumPy unicode scalars. + */ + /*OBJECT_API Cast Scalar to c-type */ @@ -737,8 +740,12 @@ static PyObject * voidtype_flags_get(PyVoidScalarObject *self) { - return PyObject_CallMethod(_numpy_internal, "flagsobj", "Oii", - self, self->flags, 1); + PyObject *flagobj; + flagobj = PyArrayFlags_Type.tp_alloc(&PyArrayFlags_Type, 0); + if (flagobj == NULL) return NULL; + ((PyArrayFlagsObject *)flagobj)->arr = NULL; + ((PyArrayFlagsObject *)flagobj)->flags = self->flags; + return flagobj; } static PyObject * @@ -752,13 +759,7 @@ static PyObject * gentype_data_get(PyObject *self) { - PyArray_Descr *typecode; - PyObject *ret; - - typecode = PyArray_DescrFromScalar(self); - ret = PyBuffer_FromObject(self, 0, typecode->elsize); - Py_DECREF(typecode); - return ret; + return PyBuffer_FromObject(self, 0, Py_END_OF_BUFFER); } @@ -767,9 +768,16 @@ { PyArray_Descr *typecode; PyObject *ret; + int elsize; typecode = PyArray_DescrFromScalar(self); - ret = PyInt_FromLong((long) typecode->elsize); + elsize = typecode->elsize; +#ifndef Py_UNICODE_WIDE + if (typecode->type_num == NPY_UNICODE) { + elsize >>= 1; + } +#endif + ret = PyInt_FromLong((long) elsize); Py_DECREF(typecode); return ret; } @@ -928,9 +936,11 @@ } else { char *temp; + int elsize; typecode = PyArray_DescrFromScalar(self); - temp = PyDataMem_NEW(typecode->elsize); - memset(temp, '\0', typecode->elsize); + elsize = typecode->elsize; + temp = PyDataMem_NEW(elsize); + memset(temp, '\0', elsize); ret = PyArray_Scalar(temp, typecode, NULL); PyDataMem_FREE(temp); } @@ -1633,6 +1643,11 @@ numbytes = outcode->elsize; *ptrptr = (void *)scalar_value(self, outcode); +#ifndef Py_UNICODE_WIDE + if (outcode->type_num == NPY_UNICODE) { + numbytes >>= 1; + } +#endif Py_DECREF(outcode); return numbytes; } @@ -1643,8 +1658,14 @@ PyArray_Descr *outcode; outcode = PyArray_DescrFromScalar(self); - if (lenp) + if (lenp) { *lenp = outcode->elsize; +#ifndef Py_UNICODE_WIDE + if (outcode->type_num == NPY_UNICODE) { + *lenp >>= 1; + } +#endif + } Py_DECREF(outcode); return 1; } @@ -2640,12 +2661,17 @@ { PyObject *tup; PyObject *ret; + PyObject *_numpy_internal; + if (!PyDict_Check(fields)) { PyErr_SetString(PyExc_TypeError, "Fields must be a dictionary"); return NULL; } + _numpy_internal = PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal == NULL) return NULL; tup = PyObject_CallMethod(_numpy_internal, "_makenames_list", "O", fields); + Py_DECREF(_numpy_internal); if (tup == NULL) return NULL; ret = PyTuple_GET_ITEM(tup, 0); ret = PySequence_Tuple(ret); Modified: branches/multicore/numpy/core/tests/test_regression.py =================================================================== --- branches/multicore/numpy/core/tests/test_regression.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/core/tests/test_regression.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -633,6 +633,11 @@ assert_equal(N.array("a\x00\x0b\x0c\x00").item(), 
'a\x00\x0b\x0c') + def check_mem_string_concat(self, level=rlevel): + """Ticket #469""" + x = N.array([]) + N.append(x,'asdasd\tasdasd') + def check_matrix_multiply_by_1d_vector(self, level=rlevel) : """Ticket #473""" def mul() : @@ -653,5 +658,34 @@ N.take(x,[0,2],axis=1,out=b) assert_array_equal(a,b) + def check_array_str_64bit(self, level=rlevel): + """Ticket #501""" + s = N.array([1, N.nan],dtype=N.float64) + errstate = N.seterr(all='raise') + try: + sstr = N.array_str(s) + finally: + N.seterr(**errstate) + + def check_frompyfunc_endian(self, level=rlevel): + """Ticket #503""" + from math import radians + uradians = N.frompyfunc(radians, 1, 1) + big_endian = N.array([83.4, 83.5], dtype='>f8') + little_endian = N.array([83.4, 83.5], dtype='='2.3': - language = ext.language or self.compiler.detect_language(sources) - else: - language = ext.language - if cxx_sources: - linker = self.compiler.cxx_compiler().link_shared_object - if sys.version[:3]>='2.3': - kws = {'target_lang':language} + kws = {'target_lang':ext.language} else: kws = {} linker(objects, ext_filename, - libraries=self.get_libraries(ext) + c_libraries + clib_libraries, - library_dirs=ext.library_dirs+c_library_dirs+clib_library_dirs, + libraries=libraries, + library_dirs=library_dirs, runtime_library_dirs=ext.runtime_library_dirs, extra_postargs=extra_args, export_symbols=self.get_export_symbols(ext), debug=self.debug, build_temp=self.build_temp,**kws) - def _libs_with_msvc_and_fortran(self, c_libraries, c_library_dirs): + def _libs_with_msvc_and_fortran(self, fcompiler, c_libraries, + c_library_dirs): + if fcompiler is None: return + + for libname in c_libraries: + if libname.startswith('msvc'): continue + fileexists = False + for libdir in c_library_dirs or []: + libfile = os.path.join(libdir,'%s.lib' % (libname)) + if os.path.isfile(libfile): + fileexists = True + break + if fileexists: continue + # make g77-compiled static libs available to MSVC + fileexists = False + for libdir in c_library_dirs: + libfile = os.path.join(libdir,'lib%s.a' % (libname)) + if os.path.isfile(libfile): + # copy libname.a file to name.lib so that MSVC linker + # can find it + libfile2 = os.path.join(self.build_temp, libname + '.lib') + copy_file(libfile, libfile2) + if self.build_temp not in c_library_dirs: + c_library_dirs.append(self.build_temp) + fileexists = True + break + if fileexists: continue + log.warn('could not find library %r in directories %s' + % (libname, c_library_dirs)) + # Always use system linker when using MSVC compiler. 
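
For reference, the integer floor-division fix in the scalarmathmodule.c.src hunk above, which makes mixed-sign division match Python and restores the invariant x == (x/y)*y + (x%y), boils down to the following, sketched here in Python (helper names are invented for illustration; the real code is the templated C shown in the diff):

    def c_style_divmod(a, b):
        # C integer division truncates toward zero; emulate that here.
        q = abs(a) // abs(b)
        if (a < 0) != (b < 0):
            q = -q
        return q, a - q * b

    def floor_divide(a, b):
        # Start from the truncating quotient and step down by one when the
        # operands have mixed signs and the division is inexact, which is
        # the same adjustment the patch applies before storing *out.
        q, r = c_style_divmod(a, b)
        if (a > 0) != (b > 0) and r != 0:
            q -= 1
        return q

    # The corrected quotient agrees with Python's // for small operands.
    assert all(floor_divide(a, b) == a // b
               for a in range(-7, 8) for b in range(-7, 8) if b != 0)
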
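
The _libs_with_msvc_and_fortran change just above looks for an MSVC-style <name>.lib in the library directories and, failing that, copies a g77-built lib<name>.a into the build directory under a name the MSVC linker accepts, only warning when neither form exists. A simplified Python sketch of that lookup (the function name and signature are invented for illustration):

    import os
    import shutil

    def ensure_msvc_importable(libname, library_dirs, build_temp):
        # MSVC links against <name>.lib, so check the search path first.
        for libdir in library_dirs:
            if os.path.isfile(os.path.join(libdir, '%s.lib' % libname)):
                return True
        # Otherwise fall back to a GNU-style static archive and copy it to
        # a file name MSVC can find: lib<name>.a -> build_temp/<name>.lib
        for libdir in library_dirs:
            src = os.path.join(libdir, 'lib%s.a' % libname)
            if os.path.isfile(src):
                dst = os.path.join(build_temp, '%s.lib' % libname)
                if not os.path.isfile(dst):
                    shutil.copy(src, dst)
                return True
        return False
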
f_lib_dirs = [] - for dir in self.fcompiler.library_dirs: + for dir in fcompiler.library_dirs: # correct path when compiling in Cygwin but with normal Win # Python if dir.startswith('/usr/lib'): @@ -342,17 +434,16 @@ c_library_dirs.extend(f_lib_dirs) # make g77-compiled static libs available to MSVC - lib_added = False - for lib in self.fcompiler.libraries: - if not lib.startswith('msvcr'): + for lib in fcompiler.libraries: + if not lib.startswith('msvc'): c_libraries.append(lib) p = combine_paths(f_lib_dirs, 'lib' + lib + '.a') if p: dst_name = os.path.join(self.build_temp, lib + '.lib') - copy_file(p[0], dst_name) - if not lib_added: + if not os.path.isfile(dst_name): + copy_file(p[0], dst_name) + if self.build_temp not in c_library_dirs: c_library_dirs.append(self.build_temp) - lib_added = True def get_source_files (self): self.check_extensions_list(self.extensions) Modified: branches/multicore/numpy/distutils/command/build_src.py =================================================================== --- branches/multicore/numpy/distutils/command/build_src.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/command/build_src.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -8,12 +8,23 @@ from distutils.command import build_ext from distutils.dep_util import newer_group, newer from distutils.util import get_platform +from distutils.errors import DistutilsError, DistutilsSetupError +try: + import Pyrex.Compiler + have_pyrex = True +except ImportError: + have_pyrex = False + +# this import can't be done here, as it uses numpy stuff only available +# after it's installed +#import numpy.f2py from numpy.distutils import log from numpy.distutils.misc_util import fortran_ext_match, \ appendpath, is_string, is_sequence from numpy.distutils.from_template import process_file as process_f_file from numpy.distutils.conv_template import process_file as process_c_file +from numpy.distutils.exec_command import splitcmdline class build_src(build_ext.build_ext): @@ -21,8 +32,12 @@ user_options = [ ('build-src=', 'd', "directory to \"build\" sources to"), - ('f2pyflags=', None, "additonal flags to f2py"), - ('swigflags=', None, "additional flags to swig"), + ('f2py-opts=', None, "list of f2py command line options"), + ('swig=', None, "path to the SWIG executable"), + ('swig-opts=', None, "list of SWIG command line options"), + ('swig-cpp', None, "make SWIG create C++ files (default is autodetected from sources)"), + ('f2pyflags=', None, "additional flags to f2py (use --f2py-opts= instead)"), # obsolete + ('swigflags=', None, "additional flags to swig (use --swig-opts= instead)"), # obsolete ('force', 'f', "forcibly build everything (ignore file timestamps)"), ('inplace', 'i', "ignore build-lib and put compiled extensions into the source " + @@ -44,9 +59,12 @@ self.force = None self.inplace = None self.package_dir = None - self.f2pyflags = None - self.swigflags = None - return + self.f2pyflags = None # obsolete + self.f2py_opts = None + self.swigflags = None # obsolete + self.swig_opts = None + self.swig_cpp = None + self.swig = None def finalize_options(self): self.set_undefined_options('build', @@ -63,36 +81,59 @@ if self.build_src is None: plat_specifier = ".%s-%s" % (get_platform(), sys.version[0:3]) self.build_src = os.path.join(self.build_base, 'src'+plat_specifier) - if self.inplace is None: - build_ext = self.get_finalized_command('build_ext') - self.inplace = build_ext.inplace # py_modules_dict is used in build_py.find_package_modules self.py_modules_dict = {} - if self.f2pyflags is 
None: - self.f2pyflags = [] + if self.f2pyflags: + if self.f2py_opts: + log.warn('ignoring --f2pyflags as --f2py-opts already used') + else: + self.f2py_opts = self.f2pyflags + self.f2pyflags = None + if self.f2py_opts is None: + self.f2py_opts = [] else: - self.f2pyflags = self.f2pyflags.split() # XXX spaces?? + self.f2py_opts = splitcmdline(self.f2py_opts) - if self.swigflags is None: - self.swigflags = [] + if self.swigflags: + if self.swig_opts: + log.warn('ignoring --swigflags as --swig-opts already used') + else: + self.swig_opts = self.swigflags + self.swigflags = None + + if self.swig_opts is None: + self.swig_opts = [] else: - self.swigflags = self.swigflags.split() # XXX spaces?? - return + self.swig_opts = splitcmdline(self.swig_opts) + # use options from build_ext command + build_ext = self.get_finalized_command('build_ext') + if self.inplace is None: + self.inplace = build_ext.inplace + if self.swig_cpp is None: + self.swig_cpp = build_ext.swig_cpp + for c in ['swig','swig_opt']: + o = '--'+c.replace('_','-') + v = getattr(build_ext,c,None) + if v: + if getattr(self,c): + log.warn('both build_src and build_ext define %s option' % (o)) + else: + log.info('using "%s=%s" option from build_ext command' % (o,v)) + setattr(self, c, v) + def run(self): if not (self.extensions or self.libraries): return self.build_sources() - return - def build_sources(self): if self.inplace: - self.get_package_dir = self.get_finalized_command('build_py')\ - .get_package_dir + self.get_package_dir = \ + self.get_finalized_command('build_py').get_package_dir self.build_py_modules_sources() @@ -107,8 +148,6 @@ self.build_data_files_sources() - return - def build_data_files_sources(self): if not self.data_files: return @@ -141,9 +180,8 @@ filenames = get_data_files((d,files)) new_data_files.append((d, filenames)) else: - raise + raise TypeError(repr(data)) self.data_files[:] = new_data_files - return def build_py_modules_sources(self): if not self.py_modules: @@ -170,7 +208,6 @@ else: new_py_modules.append(source) self.py_modules[:] = new_py_modules - return def build_library_sources(self, lib_name, build_info): sources = list(build_info.get('sources',[])) @@ -187,7 +224,8 @@ sources, h_files = self.filter_h_files(sources) if h_files: - print self.package,'- nothing done with h_files=',h_files + log.info('%s - nothing done with h_files = %s', + self.package, h_files) #for f in h_files: # self.distribution.headers.append((lib_name,f)) @@ -232,14 +270,13 @@ sources, h_files = self.filter_h_files(sources) if h_files: - print package,'- nothing done with h_files=',h_files + log.info('%s - nothing done with h_files = %s', + package, h_files) #for f in h_files: # self.distribution.headers.append((package,f)) ext.sources = sources - return - def generate_sources(self, sources, extension): new_sources = [] func_sources = [] @@ -334,12 +371,6 @@ return new_sources def pyrex_sources(self, sources, extension): - have_pyrex = False - try: - import Pyrex - have_pyrex = True - except ImportError: - pass new_sources = [] ext_name = extension.name.split('.')[-1] for source in sources: @@ -355,22 +386,20 @@ if have_pyrex: log.info("pyrexc:> %s" % (target_file)) self.mkpath(target_dir) - from Pyrex.Compiler import Main - options = Main.CompilationOptions( - defaults=Main.default_options, + options = Pyrex.Compiler.Main.CompilationOptions( + defaults=Pyrex.Compiler.Main.default_options, output_file=target_file) - pyrex_result = Main.compile(source, options=options) + pyrex_result = Pyrex.Compiler.Main.compile(source, + 
options=options) if pyrex_result.num_errors != 0: - raise RuntimeError("%d errors in Pyrex compile" % - pyrex_result.num_errors) + raise DistutilsError,"%d errors while compiling %r with Pyrex" \ + % (pyrex_result.num_errors, source) elif os.path.isfile(target_file): - log.warn("Pyrex needed to compile %s but not available."\ - " Using old target %s"\ + log.warn("Pyrex required for compiling %r but not available,"\ + " using old target %r"\ % (source, target_file)) else: - raise SystemError,"Non-existing target %r. "\ - "Perhaps you need to install Pyrex."\ - % (target_file) + raise DistutilsError,"Pyrex required for compiling %r but not available" % (source) new_sources.append(target_file) else: new_sources.append(source) @@ -395,9 +424,9 @@ if os.path.isfile(source): name = get_f2py_modulename(source) if name != ext_name: - raise ValueError('mismatch of extension names: %s ' - 'provides %r but expected %r' % ( - source, name, ext_name)) + raise DistutilsSetupError('mismatch of extension names: %s ' + 'provides %r but expected %r' % ( + source, name, ext_name)) target_file = os.path.join(target_dir,name+'module.c') else: log.debug(' source %s does not exist: skipping f2py\'ing.' \ @@ -406,16 +435,16 @@ skip_f2py = 1 target_file = os.path.join(target_dir,name+'module.c') if not os.path.isfile(target_file): - log.debug(' target %s does not exist:\n '\ - 'Assuming %smodule.c was generated with '\ - '"build_src --inplace" command.' \ - % (target_file, name)) + log.warn(' target %s does not exist:\n '\ + 'Assuming %smodule.c was generated with '\ + '"build_src --inplace" command.' \ + % (target_file, name)) target_dir = os.path.dirname(base) target_file = os.path.join(target_dir,name+'module.c') if not os.path.isfile(target_file): - raise ValueError("%r missing" % (target_file,)) - log.debug(' Yes! Using %s as up-to-date target.' \ - % (target_file)) + raise DistutilsSetupError("%r missing" % (target_file,)) + log.info(' Yes! Using %r as up-to-date target.' 
\ + % (target_file)) target_dirs.append(target_dir) f2py_sources.append(source) f2py_targets[source] = target_file @@ -430,7 +459,7 @@ map(self.mkpath, target_dirs) - f2py_options = extension.f2py_options + self.f2pyflags + f2py_options = extension.f2py_options + self.f2py_opts if self.distribution.libraries: for name,build_info in self.distribution.libraries: @@ -441,7 +470,7 @@ if f2py_sources: if len(f2py_sources) != 1: - raise ValueError( + raise DistutilsSetupError( 'only one .pyf file is allowed per extension module but got'\ ' more: %r' % (f2py_sources,)) source = f2py_sources[0] @@ -451,8 +480,9 @@ if (self.force or newer_group(depends, target_file,'newer')) \ and not skip_f2py: log.info("f2py: %s" % (source)) - import numpy.f2py as f2py2e - f2py2e.run_main(f2py_options + ['--build-dir',target_dir,source]) + import numpy.f2py + numpy.f2py.run_main(f2py_options + + ['--build-dir',target_dir,source]) else: log.debug(" skipping '%s' f2py interface (up-to-date)" % (source)) else: @@ -467,10 +497,10 @@ depends = f_sources + extension.depends if (self.force or newer_group(depends, target_file, 'newer')) \ and not skip_f2py: - import numpy.f2py as f2py2e log.info("f2py:> %s" % (target_file)) self.mkpath(target_dir) - f2py2e.run_main(f2py_options + ['--lower', + import numpy.f2py + numpy.f2py.run_main(f2py_options + ['--lower', '--build-dir',target_dir]+\ ['-m',ext_name]+f_sources) else: @@ -478,7 +508,7 @@ % (target_file)) if not os.path.isfile(target_file): - raise ValueError("%r missing" % (target_file,)) + raise DistutilsError("f2py target file %r not generated" % (target_file,)) target_c = os.path.join(self.build_src,'fortranobject.c') target_h = os.path.join(self.build_src,'fortranobject.h') @@ -490,8 +520,8 @@ extension.include_dirs.append(self.build_src) if not skip_f2py: - import numpy.f2py as f2py2e - d = os.path.dirname(f2py2e.__file__) + import numpy.f2py + d = os.path.dirname(numpy.f2py.__file__) source_c = os.path.join(d,'src','fortranobject.c') source_h = os.path.join(d,'src','fortranobject.h') if newer(source_c,target_c) or newer(source_h,target_h): @@ -500,9 +530,9 @@ self.copy_file(source_h,target_h) else: if not os.path.isfile(target_c): - raise ValueError("%r missing" % (target_c,)) + raise DistutilsSetupError("f2py target_c file %r not found" % (target_c,)) if not os.path.isfile(target_h): - raise ValueError("%r missing" % (target_h,)) + raise DistutilsSetupError("f2py target_h file %r not found" % (target_h,)) for name_ext in ['-f2pywrappers.f','-f2pywrappers2.f90']: filename = os.path.join(target_dir,ext_name + name_ext) @@ -522,8 +552,12 @@ target_dirs = [] py_files = [] # swig generated .py files target_ext = '.c' - typ = None - is_cpp = 0 + if self.swig_cpp: + typ = 'c++' + is_cpp = True + else: + typ = None + is_cpp = False skip_swig = 0 ext_name = extension.name.split('.')[-1] @@ -539,35 +573,43 @@ if os.path.isfile(source): name = get_swig_modulename(source) if name != ext_name[1:]: - raise ValueError( + raise DistutilsSetupError( 'mismatch of extension names: %s provides %r' ' but expected %r' % (source, name, ext_name[1:])) if typ is None: typ = get_swig_target(source) is_cpp = typ=='c++' - if is_cpp: - target_ext = '.cpp' + if is_cpp: target_ext = '.cpp' else: - assert typ == get_swig_target(source), repr(typ) + typ2 = get_swig_target(source) + if typ!=typ2: + log.warn('expected %r but source %r defines %r swig target' \ + % (typ, source, typ2)) + if typ2=='c++': + log.warn('resetting swig target to c++ (some targets may have .c extension)') + is_cpp = True 
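
The swig_sources changes here let an explicit --swig-cpp option, or a per-source get_swig_target() detection, decide whether a wrapper gets a .c or .cpp extension, and they now warn instead of asserting when sources disagree. A toy sketch of that decision, simplified from the hunk above (get_swig_target is assumed to return 'c' or 'c++'):

    def resolve_wrapper_ext(swig_cpp, detected_targets, warn):
        # detected_targets: the get_swig_target() result for each .i source.
        is_cpp = bool(swig_cpp)
        for typ in detected_targets:
            if typ == 'c++' and not is_cpp:
                warn('resetting swig target to c++ '
                     '(some targets may have .c extension)')
                is_cpp = True
        if is_cpp:
            return '.cpp'
        return '.c'
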
+ target_ext = '.cpp' + else: + log.warn('assuming that %r has c++ swig target' % (source)) target_file = os.path.join(target_dir,'%s_wrap%s' \ % (name, target_ext)) else: - log.debug(' source %s does not exist: skipping swig\'ing.' \ + log.warn(' source %s does not exist: skipping swig\'ing.' \ % (source)) name = ext_name[1:] skip_swig = 1 target_file = _find_swig_target(target_dir, name) if not os.path.isfile(target_file): - log.debug(' target %s does not exist:\n '\ - 'Assuming %s_wrap.{c,cpp} was generated with '\ - '"build_src --inplace" command.' \ + log.warn(' target %s does not exist:\n '\ + 'Assuming %s_wrap.{c,cpp} was generated with '\ + '"build_src --inplace" command.' \ % (target_file, name)) target_dir = os.path.dirname(base) target_file = _find_swig_target(target_dir, name) if not os.path.isfile(target_file): - raise ValueError("%r missing" % (target_file,)) - log.debug(' Yes! Using %s as up-to-date target.' \ - % (target_file)) + raise DistutilsSetupError("%r missing" % (target_file,)) + log.warn(' Yes! Using %r as up-to-date target.' \ + % (target_file)) target_dirs.append(target_dir) new_sources.append(target_file) py_files.append(os.path.join(py_target_dir, name+'.py')) @@ -583,7 +625,7 @@ return new_sources + py_files map(self.mkpath, target_dirs) - swig = self.find_swig() + swig = self.swig or self.find_swig() swig_cmd = [swig, "-python"] if is_cpp: swig_cmd.append('-c++') @@ -595,7 +637,7 @@ if self.force or newer_group(depends, target, 'newer'): log.info("%s: %s" % (os.path.basename(swig) \ + (is_cpp and '++' or ''), source)) - self.spawn(swig_cmd + self.swigflags \ + self.spawn(swig_cmd + self.swig_opts \ + ["-o", target, '-outdir', py_target_dir, source]) else: log.debug(" skipping '%s' swig interface (up-to-date)" \ Modified: branches/multicore/numpy/distutils/command/config.py =================================================================== --- branches/multicore/numpy/distutils/command/config.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/command/config.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -7,6 +7,7 @@ from distutils.command.config import config as old_config from distutils.command.config import LANG_EXT from distutils import log +from distutils.file_util import copy_file from numpy.distutils.exec_command import exec_command LANG_EXT['f77'] = '.f' @@ -14,22 +15,13 @@ class config(old_config): old_config.user_options += [ - ('fcompiler=', None, - "specify the Fortran compiler type"), + ('fcompiler=', None, "specify the Fortran compiler type"), ] def initialize_options(self): self.fcompiler = None old_config.initialize_options(self) - return - def finalize_options(self): - old_config.finalize_options(self) - f = self.distribution.get_command_obj('config_fc') - self.set_undefined_options('config_fc', - ('fcompiler', 'fcompiler')) - return - def _check_compiler (self): old_config._check_compiler(self) from numpy.distutils.fcompiler import FCompiler, new_fcompiler @@ -39,7 +31,6 @@ self.fcompiler.customize(self.distribution) self.fcompiler.customize_cmd(self) self.fcompiler.show_customization() - return def _wrap_method(self,mth,lang,args): from distutils.ccompiler import CompileError @@ -62,6 +53,47 @@ def _link (self, body, headers, include_dirs, libraries, library_dirs, lang): + if self.compiler.compiler_type=='msvc': + libraries = (libraries or [])[:] + library_dirs = (library_dirs or [])[:] + if lang in ['f77','f90']: + lang = 'c' # always use system linker when using MSVC compiler + if self.fcompiler: + for d in 
self.fcompiler.library_dirs or []: + # correct path when compiling in Cygwin but with + # normal Win Python + if d.startswith('/usr/lib'): + s,o = exec_command(['cygpath', '-w', d], + use_tee=False) + if not s: d = o + library_dirs.append(d) + for libname in self.fcompiler.libraries or []: + if libname not in libraries: + libraries.append(libname) + for libname in libraries: + if libname.startswith('msvc'): continue + fileexists = False + for libdir in library_dirs or []: + libfile = os.path.join(libdir,'%s.lib' % (libname)) + if os.path.isfile(libfile): + fileexists = True + break + if fileexists: continue + # make g77-compiled static libs available to MSVC + fileexists = False + for libdir in library_dirs: + libfile = os.path.join(libdir,'lib%s.a' % (libname)) + if os.path.isfile(libfile): + # copy libname.a file to name.lib so that MSVC linker + # can find it + libfile2 = os.path.join(libdir,'%s.lib' % (libname)) + copy_file(libfile, libfile2) + self.temp_files.append(libfile2) + fileexists = True + break + if fileexists: continue + log.warn('could not find library %r in directories %s' \ + % (libname, library_dirs)) return self._wrap_method(old_config._link,lang, (body, headers, include_dirs, libraries, library_dirs, lang)) Modified: branches/multicore/numpy/distutils/command/config_compiler.py =================================================================== --- branches/multicore/numpy/distutils/command/config_compiler.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/command/config_compiler.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,19 +1,16 @@ -import sys from distutils.core import Command +from numpy.distutils import log -#XXX: Implement confic_cc for enhancing C/C++ compiler options. #XXX: Linker flags def show_fortran_compilers(_cache=[]): # Using cache to prevent infinite recursion if _cache: return _cache.append(1) - from numpy.distutils.fcompiler import show_fcompilers import distutils.core dist = distutils.core._setup_distribution show_fcompilers(dist) - return class config_fc(Command): """ Distutils command to hold user specified options @@ -22,6 +19,8 @@ config_fc command is used by the FCompiler.customize() method. """ + description = "specify Fortran 77/Fortran 90 compiler information" + user_options = [ ('fcompiler=',None,"specify Fortran compiler type"), ('f77exec=', None, "specify F77 compiler command"), @@ -53,12 +52,72 @@ self.debug = None self.noopt = None self.noarch = None - return def finalize_options(self): + log.info('unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options') + build_clib = self.get_finalized_command('build_clib') + build_ext = self.get_finalized_command('build_ext') + config = self.get_finalized_command('config') + build = self.get_finalized_command('build') + cmd_list = [self, config, build_clib, build_ext, build] + for a in ['fcompiler']: + l = [] + for c in cmd_list: + v = getattr(c,a) + if v is not None: + if not isinstance(v, str): v = v.compiler_type + if v not in l: l.append(v) + if not l: v1 = None + else: v1 = l[0] + if len(l)>1: + log.warn(' commands have different --%s options: %s'\ + ', using first in list as default' % (a, l)) + if v1: + for c in cmd_list: + if getattr(c,a) is None: setattr(c, a, v1) + + def run(self): # Do nothing. return +class config_cc(Command): + """ Distutils command to hold user specified options + to C/C++ compilers. 
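
config_fc.finalize_options() above, and the config_cc command that continues below, both unify a compiler option that may have been passed to any of several distutils commands: collect the distinct values, warn when they disagree, and push the first value into every command that left it unset. The same pattern as a standalone helper (names invented; the real code also maps compiler objects to their compiler_type string):

    import warnings

    def unify_option(commands, attr):
        # Collect the distinct, non-None settings across the commands.
        values = []
        for cmd in commands:
            v = getattr(cmd, attr)
            if v is not None and v not in values:
                values.append(v)
        if len(values) > 1:
            warnings.warn('commands have different --%s options: %s, '
                          'using first in list as default' % (attr, values))
        chosen = None
        if values:
            chosen = values[0]
        # Propagate the chosen value to commands that did not set it.
        if chosen is not None:
            for cmd in commands:
                if getattr(cmd, attr) is None:
                    setattr(cmd, attr, chosen)
        return chosen
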
+ """ + + description = "specify C/C++ compiler information" + + user_options = [ + ('compiler=',None,"specify C/C++ compiler type"), + ] + + def initialize_options(self): + self.compiler = None + + def finalize_options(self): + log.info('unifing config_cc, config, build_clib, build_ext, build commands --compiler options') + build_clib = self.get_finalized_command('build_clib') + build_ext = self.get_finalized_command('build_ext') + config = self.get_finalized_command('config') + build = self.get_finalized_command('build') + cmd_list = [self, config, build_clib, build_ext, build] + for a in ['compiler']: + l = [] + for c in cmd_list: + v = getattr(c,a) + if v is not None: + if not isinstance(v, str): v = v.compiler_type + if v not in l: l.append(v) + if not l: v1 = None + else: v1 = l[0] + if len(l)>1: + log.warn(' commands have different --%s options: %s'\ + ', using first in list as default' % (a, l)) + if v1: + for c in cmd_list: + if getattr(c,a) is None: setattr(c, a, v1) + return + def run(self): # Do nothing. return Modified: branches/multicore/numpy/distutils/conv_template.py =================================================================== --- branches/multicore/numpy/distutils/conv_template.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/conv_template.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -18,15 +18,10 @@ __all__ = ['process_str', 'process_file'] -import string,os,sys -if sys.version[:3]>='2.3': - import re -else: - import pre as re - False = 0 - True = 1 +import os +import sys +import re - def parse_structure(astr): spanlist = [] # subroutines @@ -66,7 +61,8 @@ # with 'a,b,c,a,b,c,a,b,c,a,b,c' astr = parenrep.sub(paren_repl,astr) # replaces occurences of xxx*3 with xxx, xxx, xxx - astr = ','.join([plainrep.sub(paren_repl,x.strip()) for x in astr.split(',')]) + astr = ','.join([plainrep.sub(paren_repl,x.strip()) + for x in astr.split(',')]) return astr def unique_key(adict): @@ -85,40 +81,38 @@ done = True return newkey -def namerepl(match): - global _names, _thissub - name = match.group(1) - return _names[name][_thissub] - def expand_sub(substr, namestr, line): - global _names, _thissub # find all named replacements reps = named_re.findall(namestr) - _names = {} - _names.update(_special_names) + names = {} + names.update(_special_names) numsubs = None for rep in reps: name = rep[0].strip() thelist = conv(rep[1]) - _names[name] = thelist + names[name] = thelist # make lists out of string entries in name dictionary - for name in _names.keys(): - entry = _names[name] + for name in names.keys(): + entry = names[name] entrylist = entry.split(',') - _names[name] = entrylist + names[name] = entrylist num = len(entrylist) if numsubs is None: numsubs = num - elif (numsubs != num): + elif numsubs != num: print namestr print substr raise ValueError, "Mismatch in number to replace" # now replace all keys for each of the lists mystr = '' + thissub = [None] + def namerepl(match): + name = match.group(1) + return names[name][thissub[0]] for k in range(numsubs): - _thissub = k + thissub[0] = k mystr += ("#line %d\n%s\n\n" % (line, template_re.sub(namerepl, substr))) return mystr Modified: branches/multicore/numpy/distutils/core.py =================================================================== --- branches/multicore/numpy/distutils/core.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/core.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -16,25 +16,20 @@ from distutils.core import setup as old_setup have_setuptools = False +import 
warnings +import distutils.core +import distutils.dist + from numpy.distutils.extension import Extension -from numpy.distutils.command import config -from numpy.distutils.command import build -from numpy.distutils.command import build_py -from numpy.distutils.command import config_compiler -from numpy.distutils.command import build_ext -from numpy.distutils.command import build_clib -from numpy.distutils.command import build_src -from numpy.distutils.command import build_scripts -from numpy.distutils.command import sdist -from numpy.distutils.command import install_data -from numpy.distutils.command import install_headers -from numpy.distutils.command import install -from numpy.distutils.command import bdist_rpm +from numpy.distutils.command import config, config_compiler, \ + build, build_py, build_ext, build_clib, build_src, build_scripts, \ + sdist, install_data, install_headers, install, bdist_rpm from numpy.distutils.misc_util import get_data_files, is_sequence, is_string numpy_cmdclass = {'build': build.build, 'build_src': build_src.build_src, 'build_scripts': build_scripts.build_scripts, + 'config_cc': config_compiler.config_cc, 'config_fc': config_compiler.config_fc, 'config': config.config, 'build_ext': build_ext.build_ext, @@ -60,20 +55,15 @@ continue dv = d[k] if isinstance(dv, tuple): - dv += tuple(v) - continue - if isinstance(dv, list): - dv += list(v) - continue - if isinstance(dv, dict): + d[k] = dv + tuple(v) + elif isinstance(dv, list): + d[k] = dv + list(v) + elif isinstance(dv, dict): _dict_append(dv, **v) - continue - if isinstance(dv, str): - assert isinstance(v,str),`type(v)` - d[k] = v - continue - raise TypeError,`type(dv)` - return + elif is_string(dv): + d[k] = dv + v + else: + raise TypeError, repr(type(dv)) def _command_line_ok(_cache=[]): """ Return True if command line does not contain any @@ -93,24 +83,22 @@ _cache.append(ok) return ok -def _exit_interactive_session(_cache=[]): - if _cache: - return # been here - _cache.append(1) - print '-'*72 - raw_input('Press ENTER to close the interactive session..') - print '='*72 +def get_distribution(always=False): + dist = distutils.core._setup_distribution + # XXX Hack to get numpy installable with easy_install. + # The problem is easy_install runs it's own setup(), which + # sets up distutils.core._setup_distribution. However, + # when our setup() runs, that gets overwritten and lost. + # We can't use isinstance, as the DistributionWithoutHelpCommands + # class is local to a function in setuptools.command.easy_install + if dist is not None and \ + repr(dist).find('DistributionWithoutHelpCommands') != -1: + dist = None + if always and dist is None: + dist = distutils.dist.Distribution() + return dist def setup(**attr): - - if len(sys.argv)<=1 and not attr.get('script_args',[]): - from interactive import interactive_sys_argv - import atexit - atexit.register(_exit_interactive_session) - sys.argv[:] = interactive_sys_argv(sys.argv) - if len(sys.argv)>1: - return setup(**attr) - cmdclass = numpy_cmdclass.copy() new_attr = attr.copy() @@ -123,7 +111,6 @@ # or help request in command in the line. 
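
The _dict_append hunk above makes every branch write the merged value back through d[k] and concatenates string values instead of replacing them. A self-contained sketch of the resulting merge behaviour (plain str only; the original also accepts numpy.distutils' is_string types):

    def dict_append(d, **kws):
        for k, v in kws.items():
            if k not in d:
                d[k] = v
                continue
            dv = d[k]
            if isinstance(dv, tuple):
                d[k] = dv + tuple(v)
            elif isinstance(dv, list):
                d[k] = dv + list(v)
            elif isinstance(dv, dict):
                dict_append(dv, **v)
            elif isinstance(dv, str):
                d[k] = dv + v
            else:
                raise TypeError(repr(type(dv)))

    d = {'libraries': ['a'], 'define_macros': [('X', 1)]}
    dict_append(d, libraries=['b'], define_macros=[('Y', None)])
    assert d['libraries'] == ['a', 'b']
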
configuration = new_attr.pop('configuration') - import distutils.core old_dist = distutils.core._setup_distribution old_stop = distutils.core._setup_stop_after distutils.core._setup_distribution = None @@ -172,7 +159,6 @@ return old_setup(**new_attr) def _check_append_library(libraries, item): - import warnings for libitem in libraries: if is_sequence(libitem): if is_sequence(item): @@ -197,10 +183,8 @@ if item==libitem: return libraries.append(item) - return def _check_append_ext_library(libraries, (lib_name,build_info)): - import warnings for item in libraries: if is_sequence(item): if item[0]==lib_name: @@ -214,4 +198,3 @@ " no build_info" % (lib_name,)) break libraries.append((lib_name,build_info)) - return Copied: branches/multicore/numpy/distutils/environment.py (from rev 3839, trunk/numpy/distutils/environment.py) Modified: branches/multicore/numpy/distutils/exec_command.py =================================================================== --- branches/multicore/numpy/distutils/exec_command.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/exec_command.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -49,16 +49,16 @@ __all__ = ['exec_command','find_executable'] import os -import re import sys -import tempfile -from numpy.distutils.misc_util import is_sequence +from numpy.distutils.misc_util import is_sequence, make_temp_file +from numpy.distutils import log -############################################################ +def temp_file_name(): + fo, name = make_temp_file() + fo.close() + return name -from log import _global_log as log - ############################################################ def get_pythonexe(): @@ -123,60 +123,48 @@ ############################################################ def find_executable(exe, path=None): - """ Return full path of a executable. + """Return full path of a executable. + + Symbolic links are not followed. """ log.debug('find_executable(%r)' % exe) orig_exe = exe + if path is None: path = os.environ.get('PATH',os.defpath) if os.name=='posix' and sys.version[:3]>'2.1': realpath = os.path.realpath else: realpath = lambda a:a - if exe[0]=='"': + + if exe.startswith('"'): exe = exe[1:-1] - suffices = [''] + + suffixes = [''] if os.name in ['nt','dos','os2']: fn,ext = os.path.splitext(exe) - extra_suffices = ['.exe','.com','.bat'] - if ext.lower() not in extra_suffices: - suffices = extra_suffices + extra_suffixes = ['.exe','.com','.bat'] + if ext.lower() not in extra_suffixes: + suffixes = extra_suffixes + if os.path.isabs(exe): paths = [''] else: - paths = map(os.path.abspath, path.split(os.pathsep)) - if 0 and os.name == 'nt': - new_paths = [] - cygwin_paths = [] - for path in paths: - d,p = os.path.splitdrive(path) - if p.lower().find('cygwin') >= 0: - cygwin_paths.append(path) - else: - new_paths.append(path) - paths = new_paths + cygwin_paths + paths = [ os.path.abspath(p) for p in path.split(os.pathsep) ] + for path in paths: - fn = os.path.join(path,exe) - for s in suffices: + fn = os.path.join(path, exe) + for s in suffixes: f_ext = fn+s if not os.path.islink(f_ext): - # see comment below. f_ext = realpath(f_ext) - if os.path.isfile(f_ext) and os.access(f_ext,os.X_OK): - log.debug('Found executable %s' % f_ext) + if os.path.isfile(f_ext) and os.access(f_ext, os.X_OK): + log.good('Found executable %s' % f_ext) return f_ext - if os.path.islink(exe): - # Don't follow symbolic links. E.g. 
when using colorgcc then - # gcc -> /usr/bin/colorgcc - # g77 -> /usr/bin/colorgcc - pass - else: - exe = realpath(exe) - if not os.path.isfile(exe) or os.access(exe,os.X_OK): - log.warn('Could not locate executable %s' % orig_exe) - return orig_exe - return exe + log.warn('Could not locate executable %s' % orig_exe) + return None + ############################################################ def _preserve_environment( names ): @@ -276,17 +264,17 @@ else: command_str = command - tmpfile = tempfile.mktemp() + tmpfile = temp_file_name() stsfile = None if use_tee: - stsfile = tempfile.mktemp() + stsfile = temp_file_name() filter = '' if use_tee == 2: filter = r'| tr -cd "\n" | tr "\n" "."; echo' command_posix = '( %s ; echo $? > %s ) 2>&1 | tee %s %s'\ % (command_str,stsfile,tmpfile,filter) else: - stsfile = tempfile.mktemp() + stsfile = temp_file_name() command_posix = '( %s ; echo $? > %s ) > %s 2>&1'\ % (command_str,stsfile,tmpfile) #command_posix = '( %s ) > %s 2>&1' % (command_str,tmpfile) @@ -323,9 +311,9 @@ log.debug('_exec_command_python(...)') python_exe = get_pythonexe() - cmdfile = tempfile.mktemp() - stsfile = tempfile.mktemp() - outfile = tempfile.mktemp() + cmdfile = temp_file_name() + stsfile = temp_file_name() + outfile = temp_file_name() f = open(cmdfile,'w') f.write('import os\n') @@ -409,10 +397,10 @@ so_dup = os.dup(so_fileno) se_dup = os.dup(se_fileno) - outfile = tempfile.mktemp() + outfile = temp_file_name() fout = open(outfile,'w') if using_command: - errfile = tempfile.mktemp() + errfile = temp_file_name() ferr = open(errfile,'w') log.debug('Running %s(%s,%r,%r,os.environ)' \ @@ -601,7 +589,7 @@ def test_execute_in(**kws): pythonexe = get_pythonexe() - tmpfile = tempfile.mktemp() + tmpfile = temp_file_name() fn = os.path.basename(tmpfile) tmpdir = os.path.dirname(tmpfile) f = open(tmpfile,'w') Modified: branches/multicore/numpy/distutils/fcompiler/__init__.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/__init__.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/fcompiler/__init__.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -10,20 +10,32 @@ import os import sys import re +import new +try: + set +except NameError: + from sets import Set as set + from distutils.sysconfig import get_config_var, get_python_lib from distutils.fancy_getopt import FancyGetopt -from distutils.errors import DistutilsModuleError,DistutilsArgError,\ - DistutilsExecError,CompileError,LinkError,DistutilsPlatformError +from distutils.errors import DistutilsModuleError, \ + DistutilsExecError, CompileError, LinkError, DistutilsPlatformError from distutils.util import split_quoted from numpy.distutils.ccompiler import CCompiler, gen_lib_options from numpy.distutils import log -from numpy.distutils.command.config_compiler import config_fc -from numpy.distutils.misc_util import is_string, is_sequence +from numpy.distutils.misc_util import is_string, make_temp_file +from numpy.distutils.environment import EnvironmentConfig +from numpy.distutils.exec_command import find_executable from distutils.spawn import _nt_quote_args +__metaclass__ = type + +class CompilerNotFound(Exception): + pass + class FCompiler(CCompiler): - """ Abstract base class to define the interface that must be implemented + """Abstract base class to define the interface that must be implemented by real Fortran compiler classes. 
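
The rewritten find_executable above returns None when nothing usable is found instead of handing back its argument; it searches each PATH entry with the platform's executable suffixes and deliberately leaves symbolic links (e.g. colorgcc wrappers) unresolved. A condensed sketch of the search loop (the symlink/realpath handling is omitted here):

    import os

    def find_executable(exe, path=None):
        if path is None:
            path = os.environ.get('PATH', os.defpath)
        suffixes = ['']
        if os.name in ('nt', 'dos', 'os2'):
            base, ext = os.path.splitext(exe)
            if ext.lower() not in ('.exe', '.com', '.bat'):
                suffixes = ['.exe', '.com', '.bat']
        if os.path.isabs(exe):
            dirs = ['']
        else:
            dirs = [os.path.abspath(p) for p in path.split(os.pathsep)]
        for d in dirs:
            for s in suffixes:
                candidate = os.path.join(d, exe) + s
                # Return the first regular, executable file that matches.
                if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
                    return candidate
        return None
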
Methods that subclasses may redefine: @@ -38,7 +50,7 @@ DON'T call these methods (except get_version) after constructing a compiler instance or inside any other method. All methods, except get_version_cmd() and get_flags_version(), may - call get_version() method. + call the get_version() method. After constructing a compiler instance, always call customize(dist=None) method that finalizes compiler construction and makes the following @@ -53,7 +65,55 @@ library_dirs """ + # These are the environment variables and distutils keys used. + # Each configuration descripition is + # (, , ) + # The hook names are handled by the self._environment_hook method. + # - names starting with 'self.' call methods in this class + # - names starting with 'exe.' return the key in the executables dict + # - names like'flags.YYY' return self.get_flag_YYY() + distutils_vars = EnvironmentConfig( + noopt = (None, None, 'noopt'), + noarch = (None, None, 'noarch'), + debug = (None, None, 'debug'), + verbose = (None, None, 'verbose'), + ) + + command_vars = EnvironmentConfig( + distutils_section='config_fc', + compiler_f77 = ('exe.compiler_f77', 'F77', 'f77exec'), + compiler_f90 = ('exe.compiler_f90', 'F90', 'f90exec'), + compiler_fix = ('exe.compiler_fix', 'F90', 'f90exec'), + version_cmd = ('self.get_version_cmd', None, None), + linker_so = ('self.get_linker_so', 'LDSHARED', 'ldshared'), + linker_exe = ('self.get_linker_exe', 'LD', 'ld'), + archiver = (None, 'AR', 'ar'), + ranlib = (None, 'RANLIB', 'ranlib'), + ) + + flag_vars = EnvironmentConfig( + distutils_section='config_fc', + version = ('flags.version', None, None), + f77 = ('flags.f77', 'F77FLAGS', 'f77flags'), + f90 = ('flags.f90', 'F90FLAGS', 'f90flags'), + free = ('flags.free', 'FREEFLAGS', 'freeflags'), + fix = ('flags.fix', None, None), + opt = ('flags.opt', 'FOPT', 'opt'), + opt_f77 = ('flags.opt_f77', None, None), + opt_f90 = ('flags.opt_f90', None, None), + arch = ('flags.arch', 'FARCH', 'arch'), + arch_f77 = ('flags.arch_f77', None, None), + arch_f90 = ('flags.arch_f90', None, None), + debug = ('flags.debug', 'FDEBUG', None, None), + debug_f77 = ('flags.debug_f77', None, None), + debug_f90 = ('flags.debug_f90', None, None), + flags = ('self.get_flags', 'FFLAGS', 'fflags'), + linker_so = ('flags.linker_so', 'LDFLAGS', 'ldflags'), + linker_exe = ('flags.linker_exe', 'LDFLAGS', 'ldflags'), + ar = ('flags.ar', 'ARFLAGS', 'arflags'), + ) + language_map = {'.f':'f77', '.for':'f77', '.F':'f77', # XXX: needs preprocessor @@ -67,14 +127,15 @@ version_pattern = None + possible_executables = [] executables = { - 'version_cmd' : ["f77","-v"], + 'version_cmd' : ["f77", "-v"], 'compiler_f77' : ["f77"], 'compiler_f90' : ["f90"], - 'compiler_fix' : ["f90","-fixed"], - 'linker_so' : ["f90","-shared"], + 'compiler_fix' : ["f90", "-fixed"], + 'linker_so' : ["f90", "-shared"], 'linker_exe' : ["f90"], - 'archiver' : ["ar","-cr"], + 'archiver' : ["ar", "-cr"], 'ranlib' : None, } @@ -103,6 +164,26 @@ shared_lib_format = "%s%s" exe_extension = "" + def __init__(self, *args, **kw): + CCompiler.__init__(self, *args, **kw) + self.distutils_vars = self.distutils_vars.clone(self._environment_hook) + self.command_vars = self.command_vars.clone(self._environment_hook) + self.flag_vars = self.flag_vars.clone(self._environment_hook) + self.executables = self.executables.copy() + for e in ['version_cmd', 'compiler_f77', 'compiler_f90', + 'compiler_fix', 'linker_so', 'linker_exe', 'archiver', + 'ranlib']: + if e not in self.executables: + self.executables[e] = None + + def 
__copy__(self): + obj = new.instance(self.__class__, self.__dict__) + obj.distutils_vars = obj.distutils_vars.clone(obj._environment_hook) + obj.command_vars = obj.command_vars.clone(obj._environment_hook) + obj.flag_vars = obj.flag_vars.clone(obj._environment_hook) + obj.executables = obj.executables.copy() + return obj + # If compiler does not support compiling Fortran 90 then it can # suggest using another compiler. For example, gnu would suggest # gnu95 compiler type when there are F90 sources. @@ -114,113 +195,166 @@ ## results if used elsewhere. So, you have been warned.. def find_executables(self): - """Modify self.executables to hold found executables, instead of - searching for them at class creation time.""" - pass + """Go through the self.executables dictionary, and attempt to + find and assign appropiate executables. + Executable names are looked for in the environment (environment + variables, the distutils.cfg, and command line), the 0th-element of + the command list, and the self.possible_executables list. + + Also, if the 0th element is "" or "", the Fortran 77 + or the Fortran 90 compiler executable is used, unless overridden + by an environment setting. + """ + exe_cache = {} + def cached_find_executable(exe): + if exe in exe_cache: + return exe_cache[exe] + fc_exe = find_executable(exe) + exe_cache[exe] = fc_exe + return fc_exe + def set_exe(exe_key, f77=None, f90=None): + cmd = self.executables.get(exe_key, None) + if not cmd: + return None + # Note that we get cmd[0] here if the environment doesn't + # have anything set + exe_from_environ = getattr(self.command_vars, exe_key) + if not exe_from_environ: + possibles = [f90, f77] + self.possible_executables + else: + possibles = [exe_from_environ] + self.possible_executables + + seen = set() + unique_possibles = [] + for e in possibles: + if e == '': + e = f77 + elif e == '': + e = f90 + if not e or e in seen: + continue + seen.add(e) + unique_possibles.append(e) + + for exe in unique_possibles: + fc_exe = cached_find_executable(exe) + if fc_exe: + cmd[0] = fc_exe + return fc_exe + return None + + f90 = set_exe('compiler_f90') +# if not f90: +# raise CompilerNotFound('f90') + f77 = set_exe('compiler_f77', f90=f90) + if not f77: + raise CompilerNotFound('f77') + set_exe('compiler_fix', f90=f90) + + set_exe('linker_so', f77=f77, f90=f90) + set_exe('linker_exe', f77=f77, f90=f90) + set_exe('version_cmd', f77=f77, f90=f90) + + set_exe('archiver') + set_exe('ranlib') + def get_version_cmd(self): - """ Compiler command to print out version information. """ - f77 = self.executables['compiler_f77'] + """Compiler command to print out version information.""" + f77 = self.executables.get('compiler_f77') if f77 is not None: f77 = f77[0] - cmd = self.executables['version_cmd'] + cmd = self.executables.get('version_cmd') if cmd is not None: cmd = cmd[0] - if cmd==f77: + if cmd == f77: cmd = self.compiler_f77[0] else: - f90 = self.executables['compiler_f90'] + f90 = self.executables.get('compiler_f90') if f90 is not None: f90 = f90[0] - if cmd==f90: - cmd = self.compiler_f90[0] + if cmd == f90: + cmd = self.compiler_f90[0] return cmd def get_linker_so(self): - """ Linker command to build shared libraries. 
""" - f77 = self.executables['compiler_f77'] + """Linker command to build shared libraries.""" + f77 = self.executables.get('compiler_f77') if f77 is not None: f77 = f77[0] - ln = self.executables['linker_so'] + ln = self.executables.get('linker_so') if ln is not None: ln = ln[0] - if ln==f77: + if ln == f77: ln = self.compiler_f77[0] else: - f90 = self.executables['compiler_f90'] + f90 = self.executables.get('compiler_f90') if f90 is not None: f90 = f90[0] - if ln==f90: - ln = self.compiler_f90[0] + if ln == f90: + ln = self.compiler_f90[0] return ln def get_linker_exe(self): - """ Linker command to build shared libraries. """ - f77 = self.executables['compiler_f77'] + """Linker command to build shared libraries.""" + f77 = self.executables.get('compiler_f77') if f77 is not None: f77 = f77[0] ln = self.executables.get('linker_exe') if ln is not None: ln = ln[0] - if ln==f77: + if ln == f77: ln = self.compiler_f77[0] else: - f90 = self.executables['compiler_f90'] + f90 = self.executables.get('compiler_f90') if f90 is not None: f90 = f90[0] - if ln==f90: - ln = self.compiler_f90[0] + if ln == f90: + ln = self.compiler_f90[0] return ln def get_flags(self): - """ List of flags common to all compiler types. """ + """List of flags common to all compiler types.""" return [] + self.pic_flags + + def _get_executable_flags(self, key): + cmd = self.executables.get(key, None) + if cmd is None: + return [] + return cmd[1:] + def get_flags_version(self): - """ List of compiler flags to print out version information. """ - if self.executables['version_cmd']: - return self.executables['version_cmd'][1:] - return [] + """List of compiler flags to print out version information.""" + return self._get_executable_flags('version_cmd') def get_flags_f77(self): - """ List of Fortran 77 specific flags. """ - if self.executables['compiler_f77']: - return self.executables['compiler_f77'][1:] - return [] + """List of Fortran 77 specific flags.""" + return self._get_executable_flags('compiler_f77') def get_flags_f90(self): - """ List of Fortran 90 specific flags. """ - if self.executables['compiler_f90']: - return self.executables['compiler_f90'][1:] - return [] + """List of Fortran 90 specific flags.""" + return self._get_executable_flags('compiler_f90') def get_flags_free(self): - """ List of Fortran 90 free format specific flags. """ + """List of Fortran 90 free format specific flags.""" return [] def get_flags_fix(self): - """ List of Fortran 90 fixed format specific flags. """ - if self.executables['compiler_fix']: - return self.executables['compiler_fix'][1:] - return [] + """List of Fortran 90 fixed format specific flags.""" + return self._get_executable_flags('compiler_fix') def get_flags_linker_so(self): - """ List of linker flags to build a shared library. """ - if self.executables['linker_so']: - return self.executables['linker_so'][1:] - return [] + """List of linker flags to build a shared library.""" + return self._get_executable_flags('linker_so') def get_flags_linker_exe(self): - """ List of linker flags to build an executable. """ - if self.executables['linker_exe']: - return self.executables['linker_exe'][1:] - return [] + """List of linker flags to build an executable.""" + return self._get_executable_flags('linker_exe') def get_flags_ar(self): - """ List of archiver flags. """ - if self.executables['archiver']: - return self.executables['archiver'][1:] - return [] + """List of archiver flags. 
""" + return self._get_executable_flags('archiver') def get_flags_opt(self): - """ List of architecture independent compiler flags. """ + """List of architecture independent compiler flags.""" return [] def get_flags_arch(self): - """ List of architecture dependent compiler flags. """ + """List of architecture dependent compiler flags.""" return [] def get_flags_debug(self): - """ List of compiler flags to compile with debugging information. """ + """List of compiler flags to compile with debugging information.""" return [] get_flags_opt_f77 = get_flags_opt_f90 = get_flags_opt @@ -228,18 +362,18 @@ get_flags_debug_f77 = get_flags_debug_f90 = get_flags_debug def get_libraries(self): - """ List of compiler libraries. """ + """List of compiler libraries.""" return self.libraries[:] def get_library_dirs(self): - """ List of compiler library directories. """ + """List of compiler library directories.""" return self.library_dirs[:] ############################################################ ## Public methods: - def customize(self, dist=None): - """ Customize Fortran compiler. + def customize(self, dist): + """Customize Fortran compiler. This method gets Fortran compiler specific information from (i) class definition, (ii) environment, (iii) distutils config @@ -250,88 +384,71 @@ instance is needed for (iii) and (iv). """ log.info('customize %s' % (self.__class__.__name__)) - from distutils.dist import Distribution - if dist is None: - # These hooks are for testing only! - dist = Distribution() - dist.script_name = os.path.basename(sys.argv[0]) - dist.script_args = ['config_fc'] + sys.argv[1:] - dist.cmdclass['config_fc'] = config_fc - dist.parse_config_files() - dist.parse_command_line() - if isinstance(dist,Distribution): - conf = dist.get_option_dict('config_fc') - else: - assert isinstance(dist,dict) - conf = dist - noopt = conf.get('noopt',[None,0])[1] + self.distutils_vars.use_distribution(dist) + self.command_vars.use_distribution(dist) + self.flag_vars.use_distribution(dist) + + self.find_executables() + + noopt = self.distutils_vars.get('noopt', False) if 0: # change to `if 1:` when making release. # Don't use architecture dependent compiler flags: - noarch = 1 + noarch = True else: - noarch = conf.get('noarch',[None,noopt])[1] - debug = conf.get('debug',[None,0])[1] + noarch = self.distutils_vars.get('noarch', noopt) + debug = self.distutils_vars.get('debug', False) - self.find_executables() + f77 = self.command_vars.compiler_f77 + f90 = self.command_vars.compiler_f90 - f77 = self.__get_cmd('compiler_f77','F77',(conf,'f77exec')) - f90 = self.__get_cmd('compiler_f90','F90',(conf,'f90exec')) - # Temporarily setting f77,f90 compilers so that - # version_cmd can use their executables. - if f77: - self.set_executables(compiler_f77=[f77]) - if f90: - self.set_executables(compiler_f90=[f90]) - # Must set version_cmd before others as self.get_flags* # methods may call self.get_version. 
- vers_cmd = self.__get_cmd(self.get_version_cmd) + vers_cmd = self.command_vars.version_cmd if vers_cmd: - vflags = self.__get_flags(self.get_flags_version) + vflags = self.flag_vars.version self.set_executables(version_cmd=[vers_cmd]+vflags) + f77flags = [] + f90flags = [] + freeflags = [] + fixflags = [] + if f77: - f77flags = self.__get_flags(self.get_flags_f77,'F77FLAGS', - (conf,'f77flags')) + f77flags = self.flag_vars.f77 if f90: - f90flags = self.__get_flags(self.get_flags_f90,'F90FLAGS', - (conf,'f90flags')) - freeflags = self.__get_flags(self.get_flags_free,'FREEFLAGS', - (conf,'freeflags')) + f90flags = self.flag_vars.f90 + freeflags = self.flag_vars.free # XXX Assuming that free format is default for f90 compiler. - fix = self.__get_cmd('compiler_fix','F90',(conf,'f90exec')) + fix = self.command_vars.compiler_fix if fix: - fixflags = self.__get_flags(self.get_flags_fix) + f90flags + fixflags = self.flag_vars.fix + f90flags - oflags,aflags,dflags = [],[],[] + oflags, aflags, dflags = [], [], [] + def to_list(flags): + if is_string(flags): + return [flags] + return flags + # examine get_flags__ for extra flags + # only add them if the method is different from get_flags_ + def get_flags(tag, flags): + # note that self.flag_vars. calls self.get_flags_() + flags.extend(to_list(getattr(self.flag_vars, tag))) + this_get = getattr(self, 'get_flags_' + tag) + for name, c, flagvar in [('f77', f77, f77flags), + ('f90', f90, f90flags), + ('f90', fix, fixflags)]: + t = '%s_%s' % (tag, name) + if c and this_get is not getattr(self, 'get_flags_' + t): + flagvar.extend(to_list(getattr(self.flag_vars, t))) + return oflags if not noopt: - oflags = self.__get_flags(self.get_flags_opt,'FOPT',(conf,'opt')) - if f77 and self.get_flags_opt is not self.get_flags_opt_f77: - f77flags += self.__get_flags(self.get_flags_opt_f77) - if f90 and self.get_flags_opt is not self.get_flags_opt_f90: - f90flags += self.__get_flags(self.get_flags_opt_f90) - if fix and self.get_flags_opt is not self.get_flags_opt_f90: - fixflags += self.__get_flags(self.get_flags_opt_f90) + get_flags('opt', oflags) if not noarch: - aflags = self.__get_flags(self.get_flags_arch,'FARCH', - (conf,'arch')) - if f77 and self.get_flags_arch is not self.get_flags_arch_f77: - f77flags += self.__get_flags(self.get_flags_arch_f77) - if f90 and self.get_flags_arch is not self.get_flags_arch_f90: - f90flags += self.__get_flags(self.get_flags_arch_f90) - if fix and self.get_flags_arch is not self.get_flags_arch_f90: - fixflags += self.__get_flags(self.get_flags_arch_f90) + get_flags('arch', aflags) if debug: - dflags = self.__get_flags(self.get_flags_debug,'FDEBUG') - if f77 and self.get_flags_debug is not self.get_flags_debug_f77: - f77flags += self.__get_flags(self.get_flags_debug_f77) - if f90 and self.get_flags_debug is not self.get_flags_debug_f90: - f90flags += self.__get_flags(self.get_flags_debug_f90) - if fix and self.get_flags_debug is not self.get_flags_debug_f90: - fixflags += self.__get_flags(self.get_flags_debug_f90) + get_flags('debug', dflags) - fflags = self.__get_flags(self.get_flags,'FFLAGS') \ - + dflags + oflags + aflags + fflags = to_list(self.flag_vars.flags) + dflags + oflags + aflags if f77: self.set_executables(compiler_f77=[f77]+f77flags+fflags) @@ -339,10 +456,11 @@ self.set_executables(compiler_f90=[f90]+freeflags+f90flags+fflags) if fix: self.set_executables(compiler_fix=[fix]+fixflags+fflags) + #XXX: Do we need LDSHARED->SOSHARED, LDFLAGS->SOFLAGS - linker_so = self.__get_cmd(self.get_linker_so,'LDSHARED') + 
linker_so = self.command_vars.linker_so if linker_so: - linker_so_flags = self.__get_flags(self.get_flags_linker_so,'LDFLAGS') + linker_so_flags = to_list(self.flag_vars.linker_so) if sys.platform.startswith('aix'): python_lib = get_python_lib(standard_lib=1) ld_so_aix = os.path.join(python_lib, 'config', 'ld_so_aix') @@ -352,37 +470,32 @@ linker_so = [linker_so] self.set_executables(linker_so=linker_so+linker_so_flags) - linker_exe = self.__get_cmd(self.get_linker_exe,'LD') + linker_exe = self.command_vars.linker_exe if linker_exe: - linker_exe_flags = self.__get_flags(self.get_flags_linker_exe,'LDFLAGS') + linker_exe_flags = to_list(self.flag_vars.linker_exe) self.set_executables(linker_exe=[linker_exe]+linker_exe_flags) - ar = self.__get_cmd('archiver','AR') + + ar = self.command_vars.archiver if ar: - arflags = self.__get_flags(self.get_flags_ar,'ARFLAGS') + arflags = to_list(self.flag_vars.ar) self.set_executables(archiver=[ar]+arflags) - ranlib = self.__get_cmd('ranlib','RANLIB') + ranlib = self.command_vars.ranlib if ranlib: self.set_executables(ranlib=[ranlib]) self.set_library_dirs(self.get_library_dirs()) self.set_libraries(self.get_libraries()) - - verbose = conf.get('verbose',[None,0])[1] - if verbose: - self.dump_properties() - return - def dump_properties(self): - """ Print out the attributes of a compiler instance. """ + """Print out the attributes of a compiler instance.""" props = [] for key in self.executables.keys() + \ ['version','libraries','library_dirs', 'object_switch','compile_switch']: if hasattr(self,key): v = getattr(self,key) - props.append((key, None, '= '+`v`)) + props.append((key, None, '= '+repr(v))) props.sort() pretty_printer = FancyGetopt(props) @@ -429,7 +542,8 @@ if os.name == 'nt': compiler = _nt_quote_args(compiler) - command = compiler + cc_args + extra_flags + s_args + o_args + extra_postargs + command = compiler + cc_args + extra_flags + s_args + o_args \ + + extra_postargs display = '%s: %s' % (os.path.basename(compiler[0]) + flavor, src) @@ -512,167 +626,146 @@ log.debug("skipping %s (up-to-date)", output_filename) return - - ## Private methods: - - def __get_cmd(self, command, envvar=None, confvar=None): - if command is None: - var = None - elif is_string(command): - var = self.executables[command] - if var is not None: - var = var[0] + def _environment_hook(self, name, hook_name): + if hook_name is None: + return None + if is_string(hook_name): + if hook_name.startswith('self.'): + hook_name = hook_name[5:] + hook = getattr(self, hook_name) + return hook() + elif hook_name.startswith('exe.'): + hook_name = hook_name[4:] + var = self.executables[hook_name] + if var: + return var[0] + else: + return None + elif hook_name.startswith('flags.'): + hook_name = hook_name[6:] + hook = getattr(self, 'get_flags_' + hook_name) + return hook() else: - var = command() - if envvar is not None: - var = os.environ.get(envvar, var) - if confvar is not None: - var = confvar[0].get(confvar[1], [None,var])[1] - return var + return hook_name() - def __get_flags(self, command, envvar=None, confvar=None): - if command is None: - var = [] - elif is_string(command): - var = self.executables[command][1:] - else: - var = command() - if envvar is not None: - var = os.environ.get(envvar, var) - if confvar is not None: - var = confvar[0].get(confvar[1], [None,var])[1] - if is_string(var): - var = split_quoted(var) - return var - ## class FCompiler -fcompiler_class = {'gnu':('gnu','GnuFCompiler', - "GNU Fortran Compiler"), - 'gnu95':('gnu','Gnu95FCompiler', - "GNU 95 
Fortran Compiler"), - 'g95':('g95','G95FCompiler', - "G95 Fortran Compiler"), - 'pg':('pg','PGroupFCompiler', - "Portland Group Fortran Compiler"), - 'absoft':('absoft','AbsoftFCompiler', - "Absoft Corp Fortran Compiler"), - 'mips':('mips','MipsFCompiler', - "MIPSpro Fortran Compiler"), - 'sun':('sun','SunFCompiler', - "Sun|Forte Fortran 95 Compiler"), - 'intel':('intel','IntelFCompiler', - "Intel Fortran Compiler for 32-bit apps"), - 'intelv':('intel','IntelVisualFCompiler', - "Intel Visual Fortran Compiler for 32-bit apps"), - 'intele':('intel','IntelItaniumFCompiler', - "Intel Fortran Compiler for Itanium apps"), - 'intelev':('intel','IntelItaniumVisualFCompiler', - "Intel Visual Fortran Compiler for Itanium apps"), - 'intelem':('intel','IntelEM64TFCompiler', - "Intel Fortran Compiler for EM64T-based apps"), - 'nag':('nag','NAGFCompiler', - "NAGWare Fortran 95 Compiler"), - 'compaq':('compaq','CompaqFCompiler', - "Compaq Fortran Compiler"), - 'compaqv':('compaq','CompaqVisualFCompiler', - "DIGITAL|Compaq Visual Fortran Compiler"), - 'vast':('vast','VastFCompiler', - "Pacific-Sierra Research Fortran 90 Compiler"), - 'hpux':('hpux','HPUXFCompiler', - "HP Fortran 90 Compiler"), - 'lahey':('lahey','LaheyFCompiler', - "Lahey/Fujitsu Fortran 95 Compiler"), - 'ibm':('ibm','IbmFCompiler', - "IBM XL Fortran Compiler"), - 'f':('f','FFCompiler', - "Fortran Company/NAG F Compiler"), - 'none':('none','NoneFCompiler',"Fake Fortran compiler") - } - _default_compilers = ( - # Platform mappings - ('win32',('gnu','intelv','absoft','compaqv','intelev','gnu95','g95')), - ('cygwin.*',('gnu','intelv','absoft','compaqv','intelev','gnu95','g95')), - ('linux.*',('gnu','intel','lahey','pg','absoft','nag','vast','compaq', + # sys.platform mappings + ('win32', ('gnu','intelv','absoft','compaqv','intelev','gnu95','g95')), + ('cygwin.*', ('gnu','intelv','absoft','compaqv','intelev','gnu95','g95')), + ('linux.*', ('gnu','intel','lahey','pg','absoft','nag','vast','compaq', 'intele','intelem','gnu95','g95')), - ('darwin.*',('nag','absoft','ibm','gnu','gnu95','g95')), - ('sunos.*',('sun','gnu','gnu95','g95')), - ('irix.*',('mips','gnu','gnu95',)), - ('aix.*',('ibm','gnu','gnu95',)), - # OS mappings - ('posix',('gnu','gnu95',)), - ('nt',('gnu','gnu95',)), - ('mac',('gnu','gnu95',)), + ('darwin.*', ('nag', 'absoft', 'ibm', 'intel', 'gnu', 'gnu95', 'g95')), + ('sunos.*', ('sun','gnu','gnu95','g95')), + ('irix.*', ('mips','gnu','gnu95',)), + ('aix.*', ('ibm','gnu','gnu95',)), + # os.name mappings + ('posix', ('gnu','gnu95',)), + ('nt', ('gnu','gnu95',)), + ('mac', ('gnu','gnu95',)), ) -def _find_existing_fcompiler(compilers, osname=None, platform=None, requiref90=None): - for compiler in compilers: +fcompiler_class = None + +def load_all_fcompiler_classes(): + """Cache all the FCompiler classes found in modules in the + numpy.distutils.fcompiler package. + """ + from glob import glob + global fcompiler_class + if fcompiler_class is not None: + return + pys = os.path.join(os.path.dirname(__file__), '*.py') + fcompiler_class = {} + for fname in glob(pys): + module_name, ext = os.path.splitext(os.path.basename(fname)) + module_name = 'numpy.distutils.fcompiler.' 
+ module_name + __import__ (module_name) + module = sys.modules[module_name] + if hasattr(module, 'compilers'): + for cname in module.compilers: + klass = getattr(module, cname) + fcompiler_class[klass.compiler_type] = (klass.compiler_type, + klass, + klass.description) + +def _find_existing_fcompiler(compiler_types, + osname=None, platform=None, + requiref90=False): + from numpy.distutils.core import get_distribution + dist = get_distribution(always=True) + for compiler_type in compiler_types: v = None try: - c = new_fcompiler(plat=platform, compiler=compiler) - c.customize() + c = new_fcompiler(plat=platform, compiler=compiler_type) + c.customize(dist) v = c.get_version() if requiref90 and c.compiler_f90 is None: v = None new_compiler = c.suggested_f90_compiler if new_compiler: - log.warn('Trying %r compiler as suggested by %r compiler for f90 support.' % (compiler, new_compiler)) + log.warn('Trying %r compiler as suggested by %r ' + 'compiler for f90 support.' % (compiler_type, + new_compiler)) c = new_fcompiler(plat=platform, compiler=new_compiler) - c.customize() + c.customize(dist) v = c.get_version() if v is not None: - compiler = new_compiler + compiler_type = new_compiler if requiref90 and c.compiler_f90 is None: - raise ValueError,'%s does not support compiling f90 codes, skipping.' \ - % (c.__class__.__name__) + raise ValueError('%s does not support compiling f90 codes, ' + 'skipping.' % (c.__class__.__name__)) except DistutilsModuleError: - pass - except Exception, msg: - log.warn(msg) + log.debug("_find_existing_fcompiler: compiler_type='%s' raised DistutilsModuleError", compiler_type) + except CompilerNotFound: + log.debug("_find_existing_fcompiler: compiler_type='%s' not found", compiler_type) if v is not None: - return compiler - return + return compiler_type + return None -def get_default_fcompiler(osname=None, platform=None, requiref90=None): - """ Determine the default Fortran compiler to use for the given platform. 
""" +def available_fcompilers_for_platform(osname=None, platform=None): if osname is None: osname = os.name if platform is None: platform = sys.platform - matching_compilers = [] - for pattern, compiler in _default_compilers: - if re.match(pattern, platform) is not None or \ - re.match(pattern, osname) is not None: - if is_sequence(compiler): - matching_compilers.extend(list(compiler)) - else: - matching_compilers.append(compiler) - if not matching_compilers: - matching_compilers.append('gnu') - compiler = _find_existing_fcompiler(matching_compilers, - osname=osname, - platform=platform, - requiref90=requiref90) - if compiler is not None: - return compiler - return matching_compilers[0] + matching_compiler_types = [] + for pattern, compiler_type in _default_compilers: + if re.match(pattern, platform) or re.match(pattern, osname): + for ct in compiler_type: + if ct not in matching_compiler_types: + matching_compiler_types.append(ct) + if not matching_compiler_types: + matching_compiler_types.append('gnu') + return matching_compiler_types +def get_default_fcompiler(osname=None, platform=None, requiref90=False): + """Determine the default Fortran compiler to use for the given + platform.""" + matching_compiler_types = available_fcompilers_for_platform(osname, + platform) + compiler_type = _find_existing_fcompiler(matching_compiler_types, + osname=osname, + platform=platform, + requiref90=requiref90) + return compiler_type + def new_fcompiler(plat=None, compiler=None, verbose=0, dry_run=0, force=0, - requiref90=0): - """ Generate an instance of some FCompiler subclass for the supplied + requiref90=False): + """Generate an instance of some FCompiler subclass for the supplied platform/compiler combination. """ + load_all_fcompiler_classes() if plat is None: plat = os.name + if compiler is None: + compiler = get_default_fcompiler(plat, requiref90=requiref90) try: - if compiler is None: - compiler = get_default_fcompiler(plat,requiref90=requiref90) - (module_name, class_name, long_description) = fcompiler_class[compiler] + module_name, klass, long_description = fcompiler_class[compiler] except KeyError: msg = "don't know how to compile Fortran code on platform '%s'" % plat if compiler is not None: @@ -681,88 +774,76 @@ % (','.join(fcompiler_class.keys())) raise DistutilsPlatformError, msg - try: - module_name = 'numpy.distutils.fcompiler.'+module_name - __import__ (module_name) - module = sys.modules[module_name] - klass = vars(module)[class_name] - except ImportError: - raise DistutilsModuleError, \ - "can't compile Fortran code: unable to load module '%s'" % \ - module_name - except KeyError: - raise DistutilsModuleError, \ - ("can't compile Fortran code: unable to find class '%s' " + - "in module '%s'") % (class_name, module_name) - compiler = klass(None, dry_run, force) - log.debug('new_fcompiler returns %s' % (klass)) + compiler = klass(verbose=verbose, dry_run=dry_run, force=force) return compiler -def show_fcompilers(dist = None): - """ Print list of available compilers (used by the "--help-fcompiler" +def show_fcompilers(dist=None): + """Print list of available compilers (used by the "--help-fcompiler" option to "config_fc"). 
""" if dist is None: from distutils.dist import Distribution + from numpy.distutils.command.config_compiler import config_fc dist = Distribution() dist.script_name = os.path.basename(sys.argv[0]) dist.script_args = ['config_fc'] + sys.argv[1:] + try: + dist.script_args.remove('--help-fcompiler') + except ValueError: + pass dist.cmdclass['config_fc'] = config_fc dist.parse_config_files() dist.parse_command_line() - compilers = [] compilers_na = [] compilers_ni = [] - for compiler in fcompiler_class.keys(): - v = 'N/A' + if not fcompiler_class: + load_all_fcompiler_classes() + platform_compilers = available_fcompilers_for_platform() + for compiler in platform_compilers: + v = None + log.set_verbosity(-2) try: - c = new_fcompiler(compiler=compiler) + c = new_fcompiler(compiler=compiler, verbose=dist.verbose) c.customize(dist) v = c.get_version() - except DistutilsModuleError: - pass - except Exception, msg: - log.warn(msg) + except (DistutilsModuleError, CompilerNotFound), e: + log.debug("show_fcompilers: %s not found" % (compiler,)) + log.debug(repr(e)) + if v is None: compilers_na.append(("fcompiler="+compiler, None, fcompiler_class[compiler][2])) - elif v=='N/A': - compilers_ni.append(("fcompiler="+compiler, None, - fcompiler_class[compiler][2])) else: + c.dump_properties() compilers.append(("fcompiler="+compiler, None, fcompiler_class[compiler][2] + ' (%s)' % v)) + compilers_ni = list(set(fcompiler_class.keys()) - set(platform_compilers)) + compilers_ni = [("fcompiler="+fc, None, fcompiler_class[fc][2]) + for fc in compilers_ni] + compilers.sort() compilers_na.sort() + compilers_ni.sort() pretty_printer = FancyGetopt(compilers) - pretty_printer.print_help("List of available Fortran compilers:") + pretty_printer.print_help("Fortran compilers found:") pretty_printer = FancyGetopt(compilers_na) - pretty_printer.print_help("List of unavailable Fortran compilers:") + pretty_printer.print_help("Compilers available for this " + "platform, but not found:") if compilers_ni: pretty_printer = FancyGetopt(compilers_ni) - pretty_printer.print_help("List of unimplemented Fortran compilers:") + pretty_printer.print_help("Compilers not available on this platform:") print "For compiler details, run 'config_fc --verbose' setup command." 
+ def dummy_fortran_file(): - import atexit - import tempfile - dummy_name = tempfile.mktemp()+'__dummy' - dummy = open(dummy_name+'.f','w') - dummy.write(" subroutine dummy()\n end\n") - dummy.close() - def rm_file(name=dummy_name,log_threshold=log._global_log.threshold): - save_th = log._global_log.threshold - log.set_threshold(log_threshold) - try: os.remove(name+'.f'); log.debug('removed '+name+'.f') - except OSError: pass - try: os.remove(name+'.o'); log.debug('removed '+name+'.o') - except OSError: pass - log.set_threshold(save_th) - atexit.register(rm_file) - return dummy_name + fo, name = make_temp_file(suffix='.f') + fo.write(" subroutine dummy()\n end\n") + fo.close() + return name[:-2] + is_f_file = re.compile(r'.*[.](for|ftn|f77|f)\Z',re.I).match _has_f_header = re.compile(r'-[*]-\s*fortran\s*-[*]-',re.I).search _has_f90_header = re.compile(r'-[*]-\s*f90\s*-[*]-',re.I).search Modified: branches/multicore/numpy/distutils/fcompiler/absoft.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/absoft.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/fcompiler/absoft.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -7,15 +7,17 @@ # generated extension modules (works for f2py v2.45.241_1936 and up) import os -import sys from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler, dummy_fortran_file from numpy.distutils.misc_util import cyg2win32 +compilers = ['AbsoftFCompiler'] + class AbsoftFCompiler(FCompiler): compiler_type = 'absoft' + description = 'Absoft Corp Fortran Compiler' #version_pattern = r'FORTRAN 77 Compiler (?P[^\s*,]*).*?Absoft Corp' version_pattern = r'(f90:.*?(Absoft Pro FORTRAN Version|FORTRAN 77 Compiler|Absoft Fortran Compiler Version|Copyright Absoft Corporation.*?Version))'+\ r' (?P[^\s*,]*)(.*?Absoft Corp|)' @@ -28,12 +30,11 @@ # Note that fink installs g77 as f77, so need to use f90 for detection. 
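
Note how 'version_cmd' below loses its baked-in dummy-file flags: building them inside the class-level `executables` dict meant a scratch Fortran file was written merely by importing the module. The new pattern computes those flags lazily, in a get_flags_version() hook that runs only when the compiler is actually customized. A rough, self-contained sketch of the idea follows; the class and helper names are illustrative, not the real FCompiler machinery.

    import atexit
    import os
    import shutil
    import tempfile

    _scratch_dir = None

    def scratch_fortran_file():
        # Create ``dummy.f`` in a shared scratch directory that is removed
        # when the interpreter exits, and return its path without ".f".
        global _scratch_dir
        if _scratch_dir is None:
            _scratch_dir = tempfile.mkdtemp()
            atexit.register(shutil.rmtree, _scratch_dir, True)
        path = os.path.join(_scratch_dir, 'dummy.f')
        f = open(path, 'w')
        f.write("      subroutine dummy()\n      end\n")
        f.close()
        return path[:-2]

    class LazyVersionFlags:
        # Flags are built only when asked for, not at import time.
        def get_flags_version(self):
            base = scratch_fortran_file()
            return ['-V', '-c', base + '.f', '-o', base + '.o']
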
executables = { - 'version_cmd' : ["f90", "-V -c %(fname)s.f -o %(fname)s.o" \ - % {'fname':cyg2win32(dummy_fortran_file())}], + 'version_cmd' : ['', None], 'compiler_f77' : ["f77"], 'compiler_fix' : ["f90"], 'compiler_f90' : ["f90"], - 'linker_so' : ["f90"], + 'linker_so' : [""], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } @@ -44,6 +45,10 @@ module_dir_switch = None module_include_switch = '-p' + def get_flags_version(self): + f = cyg2win32(dummy_fortran_file()) + return ['-V', '-c', f+'.f', '-o', f+'.o'] + def get_flags_linker_so(self): if os.name=='nt': opt = ['/dll'] Modified: branches/multicore/numpy/distutils/fcompiler/compaq.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/compaq.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/fcompiler/compaq.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -4,12 +4,19 @@ import os import sys -from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler +compilers = ['CompaqFCompiler'] +if os.name != 'posix': + # Otherwise we'd get a false positive on posix systems with + # case-insensitive filesystems (like darwin), because we'll pick + # up /bin/df + compilers.append('CompaqVisualFCompiler') + class CompaqFCompiler(FCompiler): compiler_type = 'compaq' + description = 'Compaq Fortran Compiler' version_pattern = r'Compaq Fortran (?P[^\s]*).*' if sys.platform[:5]=='linux': @@ -18,11 +25,11 @@ fc_exe = 'f90' executables = { - 'version_cmd' : [fc_exe, "-version"], + 'version_cmd' : ['', "-version"], 'compiler_f77' : [fc_exe, "-f77rtl","-fixed"], 'compiler_fix' : [fc_exe, "-fixed"], 'compiler_f90' : [fc_exe], - 'linker_so' : [fc_exe], + 'linker_so' : [''], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } @@ -47,6 +54,7 @@ class CompaqVisualFCompiler(FCompiler): compiler_type = 'compaqv' + description = 'DIGITAL or Compaq Visual Fortran Compiler' version_pattern = r'(DIGITAL|Compaq) Visual Fortran Optimizing Compiler'\ ' Version (?P[^\s]*).*' @@ -68,11 +76,11 @@ ar_exe = m.lib executables = { - 'version_cmd' : ['DF', "/what"], - 'compiler_f77' : ['DF', "/f77rtl","/fixed"], - 'compiler_fix' : ['DF', "/fixed"], - 'compiler_f90' : ['DF'], - 'linker_so' : ['DF'], + 'version_cmd' : ['', "/what"], + 'compiler_f77' : [fc_exe, "/f77rtl","/fixed"], + 'compiler_fix' : [fc_exe, "/fixed"], + 'compiler_f90' : [fc_exe], + 'linker_so' : [''], 'archiver' : [ar_exe, "/OUT:"], 'ranlib' : None } Modified: branches/multicore/numpy/distutils/fcompiler/g95.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/g95.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/fcompiler/g95.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,14 +1,13 @@ # http://g95.sourceforge.net/ -import os -import sys - -from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler -class G95FCompiler(FCompiler): +compilers = ['G95FCompiler'] +class G95FCompiler(FCompiler): compiler_type = 'g95' + description = 'G95 Fortran Compiler' + # version_pattern = r'G95 \((GCC (?P[\d.]+)|.*?) \(g95!\) (?P.*)\).*' # $ g95 --version # G95 (GCC 4.0.3 (g95!) May 22 2006) @@ -17,13 +16,12 @@ # $ g95 --version # G95 (GCC 4.0.3 (g95 0.90!) 
Aug 22 2006) - executables = { - 'version_cmd' : ["g95", "--version"], + 'version_cmd' : ["", "--version"], 'compiler_f77' : ["g95", "-ffixed-form"], 'compiler_fix' : ["g95", "-ffixed-form"], 'compiler_f90' : ["g95"], - 'linker_so' : ["g95","-shared"], + 'linker_so' : ["","-shared"], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } @@ -41,8 +39,6 @@ if __name__ == '__main__': from distutils import log log.set_verbosity(2) - from numpy.distutils.fcompiler import new_fcompiler - #compiler = new_fcompiler(compiler='g95') compiler = G95FCompiler() compiler.customize() print compiler.get_version() Modified: branches/multicore/numpy/distutils/fcompiler/gnu.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/gnu.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/fcompiler/gnu.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -5,12 +5,14 @@ from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler -from numpy.distutils.exec_command import exec_command, find_executable -from numpy.distutils.misc_util import mingw32, msvc_runtime_library +from numpy.distutils.exec_command import exec_command +from numpy.distutils.misc_util import msvc_runtime_library -class GnuFCompiler(FCompiler): +compilers = ['GnuFCompiler', 'Gnu95FCompiler'] +class GnuFCompiler(FCompiler): compiler_type = 'gnu' + description = 'GNU Fortran 77 compiler' def gnu_version_match(self, version_string): """Handle the different versions of GNU fortran compilers""" @@ -44,22 +46,23 @@ # GNU Fortran 0.5.25 20010319 (prerelease) # Redhat: GNU Fortran (GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)) 3.2.2 20030222 (Red Hat Linux 3.2.2-5) + possible_executables = ['g77', 'f77'] executables = { - 'version_cmd' : ["g77", "--version"], - 'compiler_f77' : ["g77", "-g", "-Wall","-fno-second-underscore"], + 'version_cmd' : [None, "--version"], + 'compiler_f77' : [None, "-g", "-Wall", "-fno-second-underscore"], 'compiler_f90' : None, # Use --fcompiler=gnu95 for f90 codes 'compiler_fix' : None, - 'linker_so' : ["g77", "-g", "-Wall"], + 'linker_so' : [None, "-g", "-Wall"], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"], - 'linker_exe' : ["g77", "-g", "-Wall"] + 'linker_exe' : [None, "-g", "-Wall"] } module_dir_switch = None module_include_switch = None # Cygwin: f771: warning: -fPIC ignored for target (all code is # position independent) - if os.name != 'nt' and sys.platform!='cygwin': + if os.name != 'nt' and sys.platform != 'cygwin': pic_flags = ['-fPIC'] # use -mno-cygwin for g77 when Python is not Cygwin-Python @@ -71,13 +74,6 @@ suggested_f90_compiler = 'gnu95' - def find_executables(self): - for fc_exe in [find_executable(c) for c in ['g77','f77']]: - if os.path.isfile(fc_exe): - break - for key in ['version_cmd', 'compiler_f77', 'linker_so', 'linker_exe']: - self.executables[key][0] = fc_exe - #def get_linker_so(self): # # win32 linking should be handled by standard linker # # Darwin g77 cannot be used as a linker. @@ -106,7 +102,7 @@ opt.extend(['-undefined', 'dynamic_lookup', '-bundle']) else: opt.append("-shared") - if sys.platform[:5]=='sunos': + if sys.platform.startswith('sunos'): # SunOS often has dynamically loaded symbols defined in the # static library libg2c.a The linker doesn't like this. To # ignore the problem, use the -mimpure-text flag. 
It isn't @@ -178,13 +174,10 @@ def get_flags_arch(self): opt = [] - if sys.platform=='darwin': - if os.name != 'posix': - # this should presumably correspond to Apple - if cpu.is_ppc(): - opt.append('-arch ppc') - elif cpu.is_i386(): - opt.append('-arch i386') + if sys.platform == 'darwin': + # Since Apple doesn't distribute a GNU Fortran compiler, we + # can't add -arch ppc or -arch i386, as only their version + # of the GNU compilers accepts those. for a in '601 602 603 603e 604 604e 620 630 740 7400 7450 750'\ '403 505 801 821 823 860'.split(): if getattr(cpu,'is_ppc%s'%a)(): @@ -225,6 +218,8 @@ march_opt = '-march=nocona' elif cpu.is_Core2(): march_opt = '-march=nocona' + elif cpu.is_Xeon() and cpu.is_64bit(): + march_opt = '-march=nocona' elif cpu.is_Prescott(): march_opt = '-march=prescott' elif cpu.is_PentiumIV(): @@ -274,8 +269,8 @@ return opt class Gnu95FCompiler(GnuFCompiler): - compiler_type = 'gnu95' + description = 'GNU Fortran 95 compiler' def version_match(self, version_string): v = self.gnu_version_match(version_string) @@ -291,17 +286,18 @@ # GNU Fortran 95 (GCC) 4.2.0 20060218 (experimental) # GNU Fortran (GCC) 4.3.0 20070316 (experimental) + possible_executables = ['gfortran', 'f95'] executables = { - 'version_cmd' : ["gfortran", "--version"], - 'compiler_f77' : ["gfortran", "-Wall", "-ffixed-form", + 'version_cmd' : ["", "--version"], + 'compiler_f77' : [None, "-Wall", "-ffixed-form", "-fno-second-underscore"], - 'compiler_f90' : ["gfortran", "-Wall", "-fno-second-underscore"], - 'compiler_fix' : ["gfortran", "-Wall", "-ffixed-form", + 'compiler_f90' : [None, "-Wall", "-fno-second-underscore"], + 'compiler_fix' : [None, "-Wall", "-ffixed-form", "-fno-second-underscore"], - 'linker_so' : ["gfortran", "-Wall"], + 'linker_so' : ["", "-Wall"], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"], - 'linker_exe' : ["gfortran", "-Wall"] + 'linker_exe' : [None,"-Wall"] } # use -mno-cygwin flag for g77 when Python is not Cygwin-Python @@ -315,14 +311,6 @@ g2c = 'gfortran' - def find_executables(self): - for fc_exe in [find_executable(c) for c in ['gfortran','f95']]: - if os.path.isfile(fc_exe): - break - for key in ['version_cmd', 'compiler_f77', 'compiler_f90', - 'compiler_fix', 'linker_so', 'linker_exe']: - self.executables[key][0] = fc_exe - def get_libraries(self): opt = GnuFCompiler.get_libraries(self) if sys.platform == 'darwin': @@ -332,8 +320,9 @@ if __name__ == '__main__': from distutils import log log.set_verbosity(2) - from numpy.distutils.fcompiler import new_fcompiler - #compiler = new_fcompiler(compiler='gnu') compiler = GnuFCompiler() compiler.customize() print compiler.get_version() + compiler = Gnu95FCompiler() + compiler.customize() + print compiler.get_version() Modified: branches/multicore/numpy/distutils/fcompiler/hpux.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/hpux.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/fcompiler/hpux.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,16 +1,15 @@ -import os -import sys - -from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler +compilers = ['HPUXFCompiler'] + class HPUXFCompiler(FCompiler): compiler_type = 'hpux' + description = 'HP Fortran 90 Compiler' version_pattern = r'HP F90 (?P[^\s*,]*)' executables = { - 'version_cmd' : ["f90", "+version"], + 'version_cmd' : ["", "+version"], 'compiler_f77' : ["f90"], 'compiler_fix' : ["f90"], 'compiler_f90' : ["f90"], Modified: 
branches/multicore/numpy/distutils/fcompiler/ibm.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/ibm.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/fcompiler/ibm.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -4,16 +4,19 @@ from numpy.distutils.fcompiler import FCompiler from numpy.distutils.exec_command import exec_command, find_executable +from numpy.distutils.misc_util import make_temp_file from distutils import log -from distutils.sysconfig import get_python_lib -class IbmFCompiler(FCompiler): +compilers = ['IBMFCompiler'] +class IBMFCompiler(FCompiler): compiler_type = 'ibm' + description = 'IBM XL Fortran Compiler' version_pattern = r'(xlf\(1\)\s*|)IBM XL Fortran ((Advanced Edition |)Version |Enterprise Edition V)(?P[^\s*]*)' #IBM XL Fortran Enterprise Edition V10.1 for AIX \nVersion: 10.01.0000.0004 + executables = { - 'version_cmd' : ["xlf","-qversion"], + 'version_cmd' : ["", "-qversion"], 'compiler_f77' : ["xlf"], 'compiler_fix' : ["xlf90", "-qfixed"], 'compiler_f90' : ["xlf90"], @@ -63,15 +66,13 @@ opt.append('-bshared') version = self.get_version(ok_status=[0,40]) if version is not None: - import tempfile if sys.platform.startswith('aix'): xlf_cfg = '/etc/xlf.cfg' else: xlf_cfg = '/etc/opt/ibmcmp/xlf/%s/xlf.cfg' % version - new_cfg = tempfile.mktemp()+'_xlf.cfg' + fo, new_cfg = make_temp_file(suffix='_xlf.cfg') log.info('Creating '+new_cfg) fi = open(xlf_cfg,'r') - fo = open(new_cfg,'w') crt1_match = re.compile(r'\s*crt\s*[=]\s*(?P.*)/crt1.o').match for line in fi.readlines(): m = crt1_match(line) @@ -88,10 +89,7 @@ return ['-O5'] if __name__ == '__main__': - from distutils import log log.set_verbosity(2) - from numpy.distutils.fcompiler import new_fcompiler - #compiler = new_fcompiler(compiler='ibm') - compiler = IbmFCompiler() + compiler = IBMFCompiler() compiler.customize() print compiler.get_version() Modified: branches/multicore/numpy/distutils/fcompiler/intel.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/intel.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/fcompiler/intel.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -3,14 +3,14 @@ # of intele # http://developer.intel.com/software/products/compilers/flin/ -import os -import sys - from numpy.distutils.cpuinfo import cpu from numpy.distutils.ccompiler import simple_version_match from numpy.distutils.fcompiler import FCompiler, dummy_fortran_file -from numpy.distutils.exec_command import find_executable +compilers = ['IntelFCompiler', 'IntelVisualFCompiler', + 'IntelItaniumFCompiler', 'IntelItaniumVisualFCompiler', + 'IntelEM64TFCompiler'] + def intel_version_match(type): # Match against the important stuff in the version string return simple_version_match(start=r'Intel.*?Fortran.*?%s.*?Version' % (type,)) @@ -18,19 +18,17 @@ class IntelFCompiler(FCompiler): compiler_type = 'intel' + description = 'Intel Fortran Compiler for 32-bit apps' version_match = intel_version_match('32-bit') - for fc_exe in map(find_executable,['ifort','ifc']): - if os.path.isfile(fc_exe): - break + possible_executables = ['ifort', 'ifc'] executables = { - 'version_cmd' : [fc_exe, "-FI -V -c %(fname)s.f -o %(fname)s.o" \ - % {'fname':dummy_fortran_file()}], - 'compiler_f77' : [fc_exe,"-72","-w90","-w95"], - 'compiler_fix' : [fc_exe,"-FI"], - 'compiler_f90' : [fc_exe], - 'linker_so' : [fc_exe,"-shared"], + 'version_cmd' : ['', None], + 'compiler_f77' : 
[None,"-72","-w90","-w95"], + 'compiler_f90' : [None], + 'compiler_fix' : [None,"-FI"], + 'linker_so' : ["","-shared"], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } @@ -39,6 +37,10 @@ module_dir_switch = '-module ' # Don't remove ending space! module_include_switch = '-I' + def get_flags_version(self): + f = dummy_fortran_file() + return ['-FI', '-V', '-c', f + '.f', '-o', f + '.o'] + def get_flags(self): opt = self.pic_flags + ["-cm"] return opt @@ -76,11 +78,11 @@ v = self.get_version() if v and v >= '8.0': opt.append('-nofor_main') - opt.extend(self.get_flags_arch()) return opt class IntelItaniumFCompiler(IntelFCompiler): compiler_type = 'intele' + description = 'Intel Fortran Compiler for Itanium apps' version_match = intel_version_match('Itanium') @@ -89,37 +91,32 @@ #Copyright (C) 1985-2006 Intel Corporation.? All rights reserved. #30 DAY EVALUATION LICENSE - for fc_exe in map(find_executable,['ifort','efort','efc']): - if os.path.isfile(fc_exe): - break + possible_executables = ['ifort', 'efort', 'efc'] executables = { - 'version_cmd' : [fc_exe, "-FI -V -c %(fname)s.f -o %(fname)s.o" \ - % {'fname':dummy_fortran_file()}], - 'compiler_f77' : [fc_exe,"-FI","-w90","-w95"], - 'compiler_fix' : [fc_exe,"-FI"], - 'compiler_f90' : [fc_exe], - 'linker_so' : [fc_exe,"-shared"], + 'version_cmd' : ['', None], + 'compiler_f77' : [None,"-FI","-w90","-w95"], + 'compiler_fix' : [None,"-FI"], + 'compiler_f90' : [None], + 'linker_so' : ['', "-shared"], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } class IntelEM64TFCompiler(IntelFCompiler): compiler_type = 'intelem' + description = 'Intel Fortran Compiler for EM64T-based apps' version_match = intel_version_match('EM64T-based') - for fc_exe in map(find_executable,['ifort','efort','efc']): - if os.path.isfile(fc_exe): - break + possible_executables = ['ifort', 'efort', 'efc'] executables = { - 'version_cmd' : [fc_exe, "-FI -V -c %(fname)s.f -o %(fname)s.o" \ - % {'fname':dummy_fortran_file()}], - 'compiler_f77' : [fc_exe,"-FI","-w90","-w95"], - 'compiler_fix' : [fc_exe,"-FI"], - 'compiler_f90' : [fc_exe], - 'linker_so' : [fc_exe,"-shared"], + 'version_cmd' : ['', None], + 'compiler_f77' : [None, "-FI", "-w90", "-w95"], + 'compiler_fix' : [None, "-FI"], + 'compiler_f90' : [None], + 'linker_so' : ['', "-shared"], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } @@ -134,20 +131,19 @@ # and the Visual compilers? 
class IntelVisualFCompiler(FCompiler): - compiler_type = 'intelv' + description = 'Intel Visual Fortran Compiler for 32-bit apps' version_match = intel_version_match('32-bit') ar_exe = 'lib.exe' - fc_exe = 'ifl' + possible_executables = ['ifl'] executables = { - 'version_cmd' : [fc_exe, "-FI -V -c %(fname)s.f -o %(fname)s.o" \ - % {'fname':dummy_fortran_file()}], - 'compiler_f77' : [fc_exe,"-FI","-w90","-w95"], - 'compiler_fix' : [fc_exe,"-FI","-4L72","-w"], - 'compiler_f90' : [fc_exe], - 'linker_so' : [fc_exe,"-shared"], + 'version_cmd' : ['', None], + 'compiler_f77' : [None,"-FI","-w90","-w95"], + 'compiler_fix' : [None,"-FI","-4L72","-w"], + 'compiler_f90' : [None], + 'linker_so' : ['', "-shared"], 'archiver' : [ar_exe, "/verbose", "/OUT:"], 'ranlib' : None } @@ -158,6 +154,10 @@ module_dir_switch = '/module:' #No space after /module: module_include_switch = '/I' + def get_flags_version(self): + f = dummy_fortran_file() + return ['-FI', '-V', '-c', f + '.f', '-o', f + '.o'] + def get_flags(self): opt = ['/nologo','/MD','/nbs','/Qlowercase','/us'] return opt @@ -186,20 +186,20 @@ return opt class IntelItaniumVisualFCompiler(IntelVisualFCompiler): + compiler_type = 'intelev' + description = 'Intel Visual Fortran Compiler for Itanium apps' - compiler_type = 'intelev' version_match = intel_version_match('Itanium') - fc_exe = 'efl' # XXX this is a wild guess + possible_executables = ['efl'] # XXX this is a wild guess ar_exe = IntelVisualFCompiler.ar_exe executables = { - 'version_cmd' : [fc_exe, "-FI -V -c %(fname)s.f -o %(fname)s.o" \ - % {'fname':dummy_fortran_file()}], - 'compiler_f77' : [fc_exe,"-FI","-w90","-w95"], - 'compiler_fix' : [fc_exe,"-FI","-4L72","-w"], - 'compiler_f90' : [fc_exe], - 'linker_so' : [fc_exe,"-shared"], + 'version_cmd' : ['', None], + 'compiler_f77' : [None,"-FI","-w90","-w95"], + 'compiler_fix' : [None,"-FI","-4L72","-w"], + 'compiler_f90' : [None], + 'linker_so' : ['',"-shared"], 'archiver' : [ar_exe, "/verbose", "/OUT:"], 'ranlib' : None } Modified: branches/multicore/numpy/distutils/fcompiler/lahey.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/lahey.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/fcompiler/lahey.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,16 +1,17 @@ import os -import sys -from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler +compilers = ['LaheyFCompiler'] + class LaheyFCompiler(FCompiler): compiler_type = 'lahey' + description = 'Lahey/Fujitsu Fortran 95 Compiler' version_pattern = r'Lahey/Fujitsu Fortran 95 Compiler Release (?P[^\s*]*)' executables = { - 'version_cmd' : ["lf95", "--version"], + 'version_cmd' : ["", "--version"], 'compiler_f77' : ["lf95", "--fix"], 'compiler_fix' : ["lf95", "--fix"], 'compiler_f90' : ["lf95"], Modified: branches/multicore/numpy/distutils/fcompiler/mips.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/mips.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/fcompiler/mips.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,16 +1,16 @@ -import os -import sys - from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler -class MipsFCompiler(FCompiler): +compilers = ['MIPSFCompiler'] +class MIPSFCompiler(FCompiler): + compiler_type = 'mips' + description = 'MIPSpro Fortran Compiler' version_pattern = r'MIPSpro Compilers: Version (?P[^\s*,]*)' executables = { - 
'version_cmd' : ["f90", "-version"], + 'version_cmd' : ["", "-version"], 'compiler_f77' : ["f77", "-f77"], 'compiler_fix' : ["f90", "-fixedform"], 'compiler_f90' : ["f90"], Modified: branches/multicore/numpy/distutils/fcompiler/nag.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/nag.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/fcompiler/nag.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,20 +1,20 @@ -import os import sys - -from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler +compilers = ['NAGFCompiler'] + class NAGFCompiler(FCompiler): compiler_type = 'nag' + description = 'NAGWare Fortran 95 Compiler' version_pattern = r'NAGWare Fortran 95 compiler Release (?P[^\s]*)' executables = { - 'version_cmd' : ["f95", "-V"], + 'version_cmd' : ["", "-V"], 'compiler_f77' : ["f95", "-fixed"], 'compiler_fix' : ["f95", "-fixed"], 'compiler_f90' : ["f95"], - 'linker_so' : ["f95"], + 'linker_so' : [""], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } Modified: branches/multicore/numpy/distutils/fcompiler/none.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/none.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/fcompiler/none.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,24 +1,30 @@ from numpy.distutils.fcompiler import FCompiler +compilers = ['NoneFCompiler'] + class NoneFCompiler(FCompiler): compiler_type = 'none' + description = 'Fake Fortran compiler' - executables = {'compiler_f77':['/path/to/nowhere/none'], - 'compiler_f90':['/path/to/nowhere/none'], - 'compiler_fix':['/path/to/nowhere/none'], - 'linker_so':['/path/to/nowhere/none'], - 'archiver':['/path/to/nowhere/none'], - 'ranlib':['/path/to/nowhere/none'], - 'version_cmd':['/path/to/nowhere/none'], + executables = {'compiler_f77' : None, + 'compiler_f90' : None, + 'compiler_fix' : None, + 'linker_so' : None, + 'linker_exe' : None, + 'archiver' : None, + 'ranlib' : None, + 'version_cmd' : None, } + def find_executables(self): + pass + if __name__ == '__main__': from distutils import log log.set_verbosity(2) - from numpy.distutils.fcompiler import new_fcompiler compiler = NoneFCompiler() compiler.customize() print compiler.get_version() Modified: branches/multicore/numpy/distutils/fcompiler/pg.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/pg.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/fcompiler/pg.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,19 +1,18 @@ # http://www.pgroup.com -import os -import sys - -from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler import FCompiler +compilers = ['PGroupFCompiler'] + class PGroupFCompiler(FCompiler): compiler_type = 'pg' + description = 'Portland Group Fortran Compiler' version_pattern = r'\s*pg(f77|f90|hpf) (?P[\d.-]+).*' executables = { - 'version_cmd' : ["pgf77", "-V 2>/dev/null"], + 'version_cmd' : ["", "-V 2>/dev/null"], 'compiler_f77' : ["pgf77"], 'compiler_fix' : ["pgf90", "-Mfixed"], 'compiler_f90' : ["pgf90"], Modified: branches/multicore/numpy/distutils/fcompiler/sun.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/sun.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/fcompiler/sun.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,24 +1,23 @@ -import os 
-import sys - -from numpy.distutils.cpuinfo import cpu from numpy.distutils.ccompiler import simple_version_match from numpy.distutils.fcompiler import FCompiler +compilers = ['SunFCompiler'] + class SunFCompiler(FCompiler): compiler_type = 'sun' + description = 'Sun or Forte Fortran 95 Compiler' # ex: # f90: Sun WorkShop 6 update 2 Fortran 95 6.2 Patch 111690-10 2003/08/28 version_match = simple_version_match( start=r'f9[05]: (Sun|Forte|WorkShop).*Fortran 95') executables = { - 'version_cmd' : ["f90", "-V"], + 'version_cmd' : ["", "-V"], 'compiler_f77' : ["f90"], 'compiler_fix' : ["f90", "-fixed"], 'compiler_f90' : ["f90"], - 'linker_so' : ["f90","-Bdynamic","-G"], + 'linker_so' : ["","-Bdynamic","-G"], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } Modified: branches/multicore/numpy/distutils/fcompiler/vast.py =================================================================== --- branches/multicore/numpy/distutils/fcompiler/vast.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/fcompiler/vast.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,12 +1,13 @@ import os -import sys -from numpy.distutils.cpuinfo import cpu from numpy.distutils.fcompiler.gnu import GnuFCompiler +compilers = ['VastFCompiler'] + class VastFCompiler(GnuFCompiler): compiler_type = 'vast' + description = 'Pacific-Sierra Research Fortran 90 Compiler' version_pattern = r'\s*Pacific-Sierra Research vf90 '\ '(Personal|Professional)\s+(?P[^\s]*)' @@ -19,7 +20,7 @@ 'compiler_f77' : ["g77"], 'compiler_fix' : ["f90", "-Wv,-ya"], 'compiler_f90' : ["f90"], - 'linker_so' : ["f90"], + 'linker_so' : [""], 'archiver' : ["ar", "-cr"], 'ranlib' : ["ranlib"] } @@ -31,8 +32,8 @@ def get_version_cmd(self): f90 = self.compiler_f90[0] - d,b = os.path.split(f90) - vf90 = os.path.join(d,'v'+b) + d, b = os.path.split(f90) + vf90 = os.path.join(d, 'v'+b) return vf90 def get_flags_arch(self): Modified: branches/multicore/numpy/distutils/from_template.py =================================================================== --- branches/multicore/numpy/distutils/from_template.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/from_template.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -48,15 +48,9 @@ __all__ = ['process_str','process_file'] -import string,os,sys -if sys.version[:3]>='2.3': - import re -else: - import pre as re - False = 0 - True = 1 -if sys.version[:5]=='2.2.1': - import re +import os +import sys +import re routine_start_re = re.compile(r'(\n|\A)(( (\$|\*))|)\s*(subroutine|function)\b',re.I) routine_end_re = re.compile(r'\n\s*end\s*(subroutine|function)\b.*(\n|\Z)',re.I) Modified: branches/multicore/numpy/distutils/intelccompiler.py =================================================================== --- branches/multicore/numpy/distutils/intelccompiler.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/intelccompiler.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,5 +1,4 @@ -import os from distutils.unixccompiler import UnixCCompiler from numpy.distutils.exec_command import find_executable @@ -26,5 +25,5 @@ # On Itanium, the Intel Compiler used to be called ecc, let's search for # it (now it's also icc, so ecc is last in the search). 
for cc_exe in map(find_executable,['icc','ecc']): - if os.path.isfile(cc_exe): + if cc_exe: break Deleted: branches/multicore/numpy/distutils/interactive.py =================================================================== --- branches/multicore/numpy/distutils/interactive.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/interactive.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,187 +0,0 @@ -import os -import sys -from pprint import pformat - -__all__ = ['interactive_sys_argv'] - -def show_information(*args): - print 'Python',sys.version - for a in ['platform','prefix','byteorder','path']: - print 'sys.%s = %s' % (a,pformat(getattr(sys,a))) - for a in ['name']: - print 'os.%s = %s' % (a,pformat(getattr(os,a))) - if hasattr(os,'uname'): - print 'system,node,release,version,machine = ',os.uname() - -def show_environ(*args): - for k,i in os.environ.items(): - print ' %s = %s' % (k, i) - -def show_fortran_compilers(*args): - from fcompiler import show_fcompilers - show_fcompilers({}) - -def show_compilers(*args): - from distutils.ccompiler import show_compilers - show_compilers() - -def show_tasks(argv,ccompiler,fcompiler): - print """\ - -Tasks: - i - Show python/platform/machine information - ie - Show environment information - c - Show C compilers information - c - Set C compiler (current:%s) - f - Show Fortran compilers information - f - Set Fortran compiler (current:%s) - e - Edit proposed sys.argv[1:]. - -Task aliases: - 0 - Configure - 1 - Build - 2 - Install - 2 - Install with prefix. - 3 - Inplace build - 4 - Source distribution - 5 - Binary distribution - -Proposed sys.argv = %s - """ % (ccompiler, fcompiler, argv) - - -from exec_command import splitcmdline - -def edit_argv(*args): - argv = args[0] - readline = args[1] - if readline is not None: - readline.add_history(' '.join(argv[1:])) - try: - s = raw_input('Edit argv [UpArrow to retrive %r]: ' % (' '.join(argv[1:]))) - except EOFError: - return - if s: - argv[1:] = splitcmdline(s) - return - -def interactive_sys_argv(argv): - print '='*72 - print 'Starting interactive session' - print '-'*72 - - readline = None - try: - try: - import readline - except ImportError: - pass - else: - import tempfile - tdir = tempfile.gettempdir() - username = os.environ.get('USER',os.environ.get('USERNAME','UNKNOWN')) - histfile = os.path.join(tdir,".pyhist_interactive_setup-" + username) - try: - try: readline.read_history_file(histfile) - except IOError: pass - import atexit - atexit.register(readline.write_history_file, histfile) - except AttributeError: pass - except Exception, msg: - print msg - - task_dict = {'i':show_information, - 'ie':show_environ, - 'f':show_fortran_compilers, - 'c':show_compilers, - 'e':edit_argv, - } - c_compiler_name = None - f_compiler_name = None - - while 1: - show_tasks(argv,c_compiler_name, f_compiler_name) - try: - task = raw_input('Choose a task (^D to quit, Enter to continue with setup): ').lower() - except EOFError: - print - task = 'quit' - if task=='': break - if task=='quit': sys.exit() - task_func = task_dict.get(task,None) - if task_func is None: - if task[0]=='c': - c_compiler_name = task[1:] - if c_compiler_name=='none': - c_compiler_name = None - continue - if task[0]=='f': - f_compiler_name = task[1:] - if f_compiler_name=='none': - f_compiler_name = None - continue - if task[0]=='2' and len(task)>1: - prefix = task[1:] - task = task[0] - else: - prefix = None - if task == '4': - argv[1:] = ['sdist','-f'] - continue - elif task in '01235': - cmd_opts = 
{'config':[],'config_fc':[], - 'build_ext':[],'build_src':[], - 'build_clib':[]} - if c_compiler_name is not None: - c = '--compiler=%s' % (c_compiler_name) - cmd_opts['config'].append(c) - if task != '0': - cmd_opts['build_ext'].append(c) - cmd_opts['build_clib'].append(c) - if f_compiler_name is not None: - c = '--fcompiler=%s' % (f_compiler_name) - cmd_opts['config_fc'].append(c) - if task != '0': - cmd_opts['build_ext'].append(c) - cmd_opts['build_clib'].append(c) - if task=='3': - cmd_opts['build_ext'].append('--inplace') - cmd_opts['build_src'].append('--inplace') - conf = [] - sorted_keys = ['config','config_fc','build_src', - 'build_clib','build_ext'] - for k in sorted_keys: - opts = cmd_opts[k] - if opts: conf.extend([k]+opts) - if task=='0': - if 'config' not in conf: - conf.append('config') - argv[1:] = conf - elif task=='1': - argv[1:] = conf+['build'] - elif task=='2': - if prefix is not None: - argv[1:] = conf+['install','--prefix=%s' % (prefix)] - else: - argv[1:] = conf+['install'] - elif task=='3': - argv[1:] = conf+['build'] - elif task=='5': - if sys.platform=='win32': - argv[1:] = conf+['bdist_wininst'] - else: - argv[1:] = conf+['bdist'] - else: - print 'Skipping unknown task:',`task` - else: - print '-'*68 - try: - task_func(argv,readline) - except Exception,msg: - print 'Failed running task %s: %s' % (task,msg) - break - print '-'*68 - print - - print '-'*72 - return argv Modified: branches/multicore/numpy/distutils/lib2def.py =================================================================== --- branches/multicore/numpy/distutils/lib2def.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/lib2def.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,7 +1,6 @@ import re import sys import os -import string __doc__ = """This module generates a DEF file from the symbols in an MSVC-compiled DLL import library. It correctly discriminates between @@ -21,8 +20,6 @@ __version__ = '0.1a' -import sys - py_ver = "%d%d" % tuple(sys.version_info[:2]) DEFAULT_NM = 'nm -Cs' Modified: branches/multicore/numpy/distutils/line_endings.py =================================================================== --- branches/multicore/numpy/distutils/line_endings.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/line_endings.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -71,5 +71,4 @@ return modified_files if __name__ == "__main__": - import sys dos2unix_dir(sys.argv[1]) Modified: branches/multicore/numpy/distutils/log.py =================================================================== --- branches/multicore/numpy/distutils/log.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/log.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -4,7 +4,7 @@ from distutils.log import * from distutils.log import Log as old_Log from distutils.log import _global_log -from misc_util import red_text, yellow_text, cyan_text, is_sequence, is_string +from misc_util import red_text, yellow_text, cyan_text, green_text, is_sequence, is_string def _fix_args(args,flag=1): @@ -22,8 +22,29 @@ else: print _global_color_map[level](msg) sys.stdout.flush() + + def good(self, msg, *args): + """If we'd log WARN messages, log this message as a 'nice' anti-warn + message. 
+ """ + if WARN >= self.threshold: + if args: + print green_text(msg % _fix_args(args)) + else: + print green_text(msg) + sys.stdout.flush() _global_log.__class__ = Log +good = _global_log.good + +def set_threshold(level): + prev_level = _global_log.threshold + if prev_level > DEBUG: + # If we're running at DEBUG, don't change the threshold, as there's + # likely a good reason why we're running at this level. + _global_log.threshold = level + return prev_level + def set_verbosity(v): prev_level = _global_log.threshold if v < 0: @@ -44,4 +65,4 @@ FATAL:red_text } -set_verbosity(1) +set_verbosity(INFO) Modified: branches/multicore/numpy/distutils/misc_util.py =================================================================== --- branches/multicore/numpy/distutils/misc_util.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/misc_util.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -4,6 +4,8 @@ import imp import copy import glob +import atexit +import tempfile try: set @@ -27,7 +29,7 @@ return os.path.join(*splitted) def rel_path(path, parent_path): - """ Return path relative to parent_path. + """Return path relative to parent_path. """ pd = os.path.abspath(parent_path) apath = os.path.abspath(path) @@ -40,20 +42,25 @@ path = apath[len(pd)+1:] return path -def get_path(mod_name, parent_path=None): - """ Return path of the module. +def get_path_from_frame(frame, parent_path=None): + """Return path of the module given a frame object from the call stack. Returned path is relative to parent_path when given, otherwise it is absolute path. """ - if mod_name == '__builtin__': - #builtin if/then added by Pearu for use in core.run_setup. - d = os.path.dirname(os.path.abspath(sys.argv[0])) - else: - __import__(mod_name) - mod = sys.modules[mod_name] - if hasattr(mod,'__file__'): - filename = mod.__file__ + + # First, try to find if the file name is in the frame. + try: + caller_file = eval('__file__', frame.f_globals, frame.f_locals) + d = os.path.dirname(os.path.abspath(caller_file)) + except NameError: + # __file__ is not defined, so let's try __name__. We try this second + # because setuptools spoofs __name__ to be '__main__' even though + # sys.modules['__main__'] might be something else, like easy_install(1). + caller_name = eval('__name__', frame.f_globals, frame.f_locals) + __import__(caller_name) + mod = sys.modules[caller_name] + if hasattr(mod, '__file__'): d = os.path.dirname(os.path.abspath(mod.__file__)) else: # we're probably running setup.py as execfile("setup.py") @@ -63,10 +70,11 @@ if parent_path is not None: d = rel_path(d, parent_path) + return d or '.' def njoin(*path): - """ Join two or more pathname components + + """Join two or more pathname components + - convert a /-separated pathname to one using the OS's path separator. - resolve `..` and `.` from path. @@ -93,7 +101,7 @@ return minrelpath(joined) def get_mathlibs(path=None): - """ Return the MATHLIB line from config.h + """Return the MATHLIB line from config.h """ if path is None: path = get_numpy_include_dirs()[0] @@ -110,7 +118,7 @@ return mathlibs def minrelpath(path): - """ Resolve `..` and '.' from path. + """Resolve `..` and '.' from path. """ if not is_string(path): return path @@ -176,13 +184,38 @@ return map(minrelpath,new_paths) def gpaths(paths, local_path='', include_non_existing=True): - """ Apply glob to paths and prepend local_path if needed. + """Apply glob to paths and prepend local_path if needed. 
""" if is_string(paths): paths = (paths,) return _fix_paths(paths,local_path, include_non_existing) +_temporary_directory = None +def clean_up_temporary_directory(): + from numpy.distutils import log + global _temporary_directory + if not _temporary_directory: + return + log.debug('removing %s', _temporary_directory) + try: + os.rmdir(_temporary_directory) + except OSError: + pass + _temporary_directory = None + +def make_temp_file(suffix='', prefix='', text=True): + global _temporary_directory + if not _temporary_directory: + _temporary_directory = tempfile.mkdtemp() + atexit.register(clean_up_temporary_directory) + fid, name = tempfile.mkstemp(suffix=suffix, + prefix=prefix, + dir=_temporary_directory, + text=text) + fo = os.fdopen(fid, 'w') + return fo, name + # Hooks for colored terminal output. # See also http://www.livinglogic.de/Python/ansistyle def terminal_has_colors(): @@ -213,18 +246,37 @@ return 0 if terminal_has_colors(): - def red_text(s): return '\x1b[31m%s\x1b[0m'%s - def green_text(s): return '\x1b[32m%s\x1b[0m'%s - def yellow_text(s): return '\x1b[33m%s\x1b[0m'%s - def blue_text(s): return '\x1b[34m%s\x1b[0m'%s - def cyan_text(s): return '\x1b[35m%s\x1b[0m'%s + _colour_codes = dict(black=0, red=1, green=2, yellow=3, + blue=4, magenta=5, cyan=6, white=7) + def colour_text(s, fg=None, bg=None, bold=False): + seq = [] + if bold: + seq.append('1') + if fg: + fgcode = 30 + _colour_codes.get(fg.lower(), 0) + seq.append(str(fgcode)) + if bg: + bgcode = 40 + _colour_codes.get(fg.lower(), 7) + seq.append(str(bgcode)) + if seq: + return '\x1b[%sm%s\x1b[0m' % (';'.join(seq), s) + else: + return s else: - def red_text(s): return s - def green_text(s): return s - def yellow_text(s): return s - def cyan_text(s): return s - def blue_text(s): return s + def colour_text(s, fg=None, bg=None): + return s +def red_text(s): + return colour_text(s, 'red') +def green_text(s): + return colour_text(s, 'green') +def yellow_text(s): + return colour_text(s, 'yellow') +def cyan_text(s): + return colour_text(s, 'cyan') +def blue_text(s): + return colour_text(s, 'blue') + ######################### def cyg2win32(path): @@ -233,7 +285,7 @@ return path def mingw32(): - """ Return true when using mingw32 environment. + """Return true when using mingw32 environment. """ if sys.platform=='win32': if os.environ.get('OSTYPE','')=='msys': @@ -243,7 +295,7 @@ return False def msvc_runtime_library(): - "return name of MSVC runtime library if Python was built with MSVC >= 7" + "Return name of MSVC runtime library if Python was built with MSVC >= 7" msc_pos = sys.version.find('MSC v.') if msc_pos != -1: msc_ver = sys.version[msc_pos+6:msc_pos+10] @@ -263,7 +315,7 @@ f90_ext_match = re.compile(r'.*[.](f90|f95)\Z',re.I).match f90_module_name_match = re.compile(r'\s*module\s*(?P[\w_]+)',re.I).match def _get_f90_modules(source): - """ Return a list of Fortran f90 module names that + """Return a list of Fortran f90 module names that given source file defines. """ if not f90_ext_match(source): @@ -284,7 +336,7 @@ return isinstance(s, str) def all_strings(lst): - """ Return True if all items in lst are string objects. """ + """Return True if all items in lst are string objects. 
""" for item in lst: if not is_string(item): return False @@ -309,8 +361,9 @@ return [seq] def get_language(sources): - """ Determine language value (c,f77,f90) from sources """ - language = 'c' + # not used in numpy/scipy packages, use build_ext.detect_language instead + """Determine language value (c,f77,f90) from sources """ + language = None for source in sources: if isinstance(source, str): if f90_ext_match(source): @@ -321,21 +374,21 @@ return language def has_f_sources(sources): - """ Return True if sources contains Fortran files """ + """Return True if sources contains Fortran files """ for source in sources: if fortran_ext_match(source): return True return False def has_cxx_sources(sources): - """ Return True if sources contains C++ files """ + """Return True if sources contains C++ files """ for source in sources: if cxx_ext_match(source): return True return False def filter_sources(sources): - """ Return four lists of filenames containing + """Return four lists of filenames containing C, C++, Fortran, and Fortran 90 module sources, respectively. """ @@ -379,7 +432,7 @@ return _get_headers(_get_directories(sources)) def is_local_src_dir(directory): - """ Return true if directory is local directory. + """Return true if directory is local directory. """ if not is_string(directory): return False @@ -404,7 +457,7 @@ yield os.path.join(dirpath, f) def general_source_directories_files(top_path): - """ Return a directory name relative to top_path and + """Return a directory name relative to top_path and files contained. """ pruned_directories = ['CVS','.svn','build'] @@ -483,7 +536,7 @@ return '.'.join([a for a in args if a]) def get_frame(level=0): - """ Return frame object from call stack with given level. + """Return frame object from call stack with given level. """ try: return sys._getframe(level+1) @@ -511,7 +564,7 @@ package_path=None, caller_level=1, **attrs): - """ Construct configuration instance of a package. + """Construct configuration instance of a package. package_name -- name of the package Ex.: 'distutils' @@ -528,10 +581,11 @@ self.version = None caller_frame = get_frame(caller_level) - caller_name = eval('__name__',caller_frame.f_globals,caller_frame.f_locals) - self.local_path = get_path(caller_name, top_path) + self.local_path = get_path_from_frame(caller_frame, top_path) # local_path -- directory of a file (usually setup.py) that # defines a configuration() function. + # local_path -- directory of a file (usually setup.py) that + # defines a configuration() function. if top_path is None: top_path = self.local_path if package_path is None: @@ -597,7 +651,7 @@ self.set_options(**caller_instance.options) def todict(self): - """ Return configuration distionary suitable for passing + """Return configuration distionary suitable for passing to distutils.core.setup() function. """ self._optimize_data_files() @@ -617,7 +671,7 @@ print>>sys.stderr, blue_text('Warning: %s' % (message,)) def set_options(self, **options): - """ Configure Configuration instance. + """Configure Configuration instance. The following options are available: - ignore_setup_xxx_py @@ -632,18 +686,8 @@ raise ValueError,'Unknown option: '+key def get_distribution(self): - import distutils.core - dist = distutils.core._setup_distribution - # XXX Hack to get numpy installable with easy_install. - # The problem is easy_install runs it's own setup(), which - # sets up distutils.core._setup_distribution. However, - # when our setup() runs, that gets overwritten and lost. 
- # We can't use isinstance, as the DistributionWithoutHelpCommands - # class is local to a function in setuptools.command.easy_install - if dist is not None and \ - repr(dist).find('DistributionWithoutHelpCommands') != -1: - return None - return dist + from numpy.distutils.core import get_distribution + return get_distribution() def _wildcard_get_subpackage(self, subpackage_name, parent_name, @@ -705,7 +749,7 @@ subpackage_path=None, parent_name=None, caller_level = 1): - """ Return list of subpackage configurations. + """Return list of subpackage configurations. '*' in subpackage_name is handled as a wildcard. """ @@ -755,7 +799,7 @@ def add_subpackage(self,subpackage_name, subpackage_path=None, standalone = False): - """ Add subpackage to configuration. + """Add subpackage to configuration. """ if standalone: parent_name = None @@ -780,10 +824,9 @@ if dist is not None: self.warn('distutils distribution has been initialized,'\ ' it may be too late to add a subpackage '+ subpackage_name) - return def add_data_dir(self,data_path): - """ Recursively add files under data_path to data_files list. + """Recursively add files under data_path to data_files list. Argument can be either - 2-sequence (,) - path to data directory where python datadir suffix defaults @@ -854,7 +897,7 @@ assert not is_glob_pattern(d),`d` dist = self.get_distribution() - if dist is not None: + if dist is not None and dist.data_files is not None: data_files = dist.data_files else: data_files = self.data_files @@ -863,7 +906,6 @@ for d1,f in list(general_source_directories_files(path)): target_path = os.path.join(self.path_in_package,d,d1) data_files.append((target_path, f)) - return def _optimize_data_files(self): data_dict = {} @@ -872,10 +914,9 @@ data_dict[p] = set() map(data_dict[p].add,files) self.data_files[:] = [(p,list(files)) for p,files in data_dict.items()] - return def add_data_files(self,*files): - """ Add data files to configuration data_files. + """Add data files to configuration data_files. Argument(s) can be either - 2-sequence (,) - paths to data files where python datadir prefix defaults @@ -951,18 +992,17 @@ assert not is_glob_pattern(d),`d,filepat` dist = self.get_distribution() - if dist is not None: + if dist is not None and dist.data_files is not None: data_files = dist.data_files else: data_files = self.data_files data_files.append((os.path.join(self.path_in_package,d),paths)) - return ### XXX Implement add_py_modules def add_include_dirs(self,*paths): - """ Add paths to configuration include directories. + """Add paths to configuration include directories. """ include_dirs = self.paths(paths) dist = self.get_distribution() @@ -970,14 +1010,13 @@ dist.include_dirs.extend(include_dirs) else: self.include_dirs.extend(include_dirs) - return def add_numarray_include_dirs(self): import numpy.numarray.util as nnu self.add_include_dirs(*nnu.get_numarray_include_dirs()) def add_headers(self,*files): - """ Add installable headers to configuration. + """Add installable headers to configuration. Argument(s) can be either - 2-sequence (,) - path(s) to header file(s) where python includedir suffix will default @@ -996,10 +1035,9 @@ dist.headers.extend(headers) else: self.headers.extend(headers) - return def paths(self,*paths,**kws): - """ Apply glob to paths and prepend local_path if needed. + """Apply glob to paths and prepend local_path if needed. 
""" include_non_existing = kws.get('include_non_existing',True) return gpaths(paths, @@ -1013,10 +1051,9 @@ 'module_dirs','extra_objects']: new_v = self.paths(v) kw[k] = new_v - return def add_extension(self,name,sources,**kw): - """ Add extension to configuration. + """Add extension to configuration. Keywords: include_dirs, define_macros, undef_macros, @@ -1031,10 +1068,6 @@ ext_args['name'] = dot_join(self.name,name) ext_args['sources'] = sources - language = ext_args.get('language',None) - if language is None: - ext_args['language'] = get_language(sources) - if ext_args.has_key('extra_info'): extra_info = ext_args['extra_info'] del ext_args['extra_info'] @@ -1085,7 +1118,7 @@ return ext def add_library(self,name,sources,**build_info): - """ Add library to configuration. + """Add library to configuration. Valid keywords for build_info: depends @@ -1099,10 +1132,6 @@ name = name #+ '__OF__' + self.name build_info['sources'] = sources - language = build_info.get('language',None) - if language is None: - build_info['language'] = get_language(sources) - self._fix_paths_dict(build_info) self.libraries.append((name,build_info)) @@ -1111,10 +1140,9 @@ if dist is not None: self.warn('distutils distribution has been initialized,'\ ' it may be too late to add a library '+ name) - return def add_scripts(self,*files): - """ Add scripts to configuration. + """Add scripts to configuration. """ scripts = self.paths(files) dist = self.get_distribution() @@ -1122,7 +1150,6 @@ dist.scripts.extend(scripts) else: self.scripts.extend(scripts) - return def dict_append(self,**dict): for key in self.list_keys: @@ -1148,7 +1175,6 @@ pass else: raise ValueError, "Don't know about key=%r" % (key) - return def __str__(self): from pprint import pformat @@ -1180,7 +1206,7 @@ return cmd.build_temp def have_f77c(self): - """ Check for availability of Fortran 77 compiler. + """Check for availability of Fortran 77 compiler. Use it inside source generating function to ensure that setup distribution instance has been initialized. """ @@ -1193,7 +1219,7 @@ return flag def have_f90c(self): - """ Check for availability of Fortran 90 compiler. + """Check for availability of Fortran 90 compiler. Use it inside source generating function to ensure that setup distribution instance has been initialized. """ @@ -1206,7 +1232,7 @@ return flag def append_to(self, extlib): - """ Append libraries, include_dirs to extension or library item. + """Append libraries, include_dirs to extension or library item. """ if is_sequence(extlib): lib_name, build_info = extlib @@ -1218,10 +1244,9 @@ assert isinstance(extlib,Extension), repr(extlib) extlib.libraries.extend(self.libraries) extlib.include_dirs.extend(self.include_dirs) - return def _get_svn_revision(self,path): - """ Return path's SVN revision number. + """Return path's SVN revision number. """ revision = None m = None @@ -1252,7 +1277,7 @@ return revision def get_version(self, version_file=None, version_variable=None): - """ Try to get version string of a package. + """Try to get version string of a package. """ version = getattr(self,'version',None) if version is not None: @@ -1306,7 +1331,7 @@ return version def make_svn_version_py(self, delete=True): - """ Generate package __svn_version__.py file from SVN revision number, + """Generate package __svn_version__.py file from SVN revision number, it will be removed after python exits but will be available when sdist, etc commands are executed. 
@@ -1340,14 +1365,13 @@ self.add_data_files(('', generate_svn_version_py())) def make_config_py(self,name='__config__'): - """ Generate package __config__.py file containing system_info + """Generate package __config__.py file containing system_info information used during building the package. """ self.py_modules.append((self.name,name,generate_config_py)) - return def get_info(self,*names): - """ Get resources information. + """Get resources information. """ from system_info import get_info, dict_append info_dict = {} @@ -1380,7 +1404,7 @@ ######################### def default_config_dict(name = None, parent_name = None, local_path=None): - """ Return a configuration dictionary for usage in + """Return a configuration dictionary for usage in configuration() function defined in file setup_.py. """ import warnings @@ -1426,10 +1450,10 @@ return os.path.normpath(njoin(drive + prefix, subpath)) def generate_config_py(target): - """ Generate config.py file containing system_info information + """Generate config.py file containing system_info information used during building the package. - Usage:\ + Usage: config['py_modules'].append((packagename, '__config__',generate_config_py)) """ from numpy.distutils.system_info import system_info @@ -1441,20 +1465,23 @@ f.write('__all__ = ["get_info","show"]\n\n') for k, i in system_info.saved_results.items(): f.write('%s=%r\n' % (k, i)) - f.write('\ndef get_info(name):\n g=globals()\n return g.get(name,g.get(name+"_info",{}))\n') - f.write(''' + f.write(r''' +def get_info(name): + g = globals() + return g.get(name, g.get(name + "_info", {})) + def show(): for name,info_dict in globals().items(): - if name[0]=="_" or type(info_dict) is not type({}): continue - print name+":" + if name[0] == "_" or type(info_dict) is not type({}): continue + print name + ":" if not info_dict: print " NOT AVAILABLE" for k,v in info_dict.items(): v = str(v) - if k==\'sources\' and len(v)>200: v = v[:60]+\' ...\\n... \'+v[-60:] - print \' %s = %s\'%(k,v) + if k == "sources" and len(v) > 200: + v = v[:60] + " ...\n... " + v[-60:] + print " %s = %s" % (k,v) print - return ''') f.close() Modified: branches/multicore/numpy/distutils/system_info.py =================================================================== --- branches/multicore/numpy/distutils/system_info.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/system_info.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1107,7 +1107,7 @@ self.set_info(**info) atlas_version_c_text = r''' -/* This file is generated from numpy_distutils/system_info.py */ +/* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); @@ -1639,7 +1639,7 @@ def calc_info(self): config_exe = find_executable(self.get_config_exe()) - if not os.path.isfile(config_exe): + if not config_exe: log.warn('File not found: %s. Cannot determine %s info.' 
\ % (config_exe, self.section)) return Modified: branches/multicore/numpy/distutils/unixccompiler.py =================================================================== --- branches/multicore/numpy/distutils/unixccompiler.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/distutils/unixccompiler.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -3,13 +3,11 @@ """ import os -import sys import new -from distutils.errors import DistutilsExecError, LinkError, CompileError +from distutils.errors import DistutilsExecError, CompileError from distutils.unixccompiler import * - import log # Note that UnixCCompiler._compile appeared in Python 2.3 Modified: branches/multicore/numpy/f2py/f2py2e.py =================================================================== --- branches/multicore/numpy/f2py/f2py2e.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/f2py/f2py2e.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -64,7 +64,7 @@ Options: - --3g-numpy Use numpy.f2py.lib tool, the 3rd generation of F2PY, + --g3-numpy Use numpy.f2py.lib tool, the 3rd generation of F2PY, with NumPy support. --2d-numpy Use numpy.f2py tool with NumPy support. [DEFAULT] --2d-numeric Use f2py2e tool with Numeric support. Modified: branches/multicore/numpy/f2py/lib/main.py =================================================================== --- branches/multicore/numpy/f2py/lib/main.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/f2py/lib/main.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -21,7 +21,7 @@ except ImportError: numpy_version = 'N/A' -__all__ = ['main'] +__all__ = ['main', 'compile'] __usage__ = """ F2PY G3 --- The third generation of Fortran to Python Interface Generator @@ -100,7 +100,8 @@ import re import shutil import parser.api -from parser.api import parse, PythonModule, EndStatement, Module, Subroutine, Function +from parser.api import parse, PythonModule, EndStatement, Module, Subroutine, Function,\ + get_reader def get_values(sys_argv, prefix='', suffix='', strip_prefix=False, strip_suffix=False): """ @@ -200,9 +201,11 @@ if os.path.isfile(signature_output): overwrite = get_option(sys_argv, '--overwrite-signature', False) if not overwrite: - print >> sys.stderr, 'Signature file %r exists. Use --overwrite-signature to overwrite.' % (signature_output) + print >> sys.stderr, 'Signature file %r exists. '\ + 'Use --overwrite-signature to overwrite.' 
% (signature_output) sys.exit() - modulename = get_option_value(sys_argv,'-m',os.path.basename(name),os.path.basename(name)) + modulename = get_option_value(sys_argv,'-m',os.path.basename(name), + os.path.basename(name)) output_stream = open(signature_output,'w') flag = 'file' @@ -217,7 +220,8 @@ elif word==':': flag = 'file' elif word.startswith('--'): options.append(word) else: - {'file': file_names,'only': only_names, 'skip': skip_names}[flag].append(word) + {'file': file_names,'only': only_names, + 'skip': skip_names}[flag].append(word) if options: sys.stderr.write('Unused options: %s\n' % (', '.join(options))) @@ -286,7 +290,7 @@ f = open(f_fn,'w') f.write(f_code) f.close() - f_lib = '%s_f_wrappers_f2py' % (block.name) + #f_lib = '%s_f_wrappers_f2py' % (block.name) module_info = {'name':block.name, 'c_sources':[c_fn], 'f_sources':[f_fn], 'language':'f90'} module_infos.append(module_info) @@ -371,7 +375,7 @@ if sources_only: return - def configuration(parent_package='', top_path=None): + def configuration(parent_package='', top_path=None or ''): from numpy.distutils.misc_util import Configuration config = Configuration('',parent_package,top_path) flibname = modulename + '_fortran_f2py' @@ -403,10 +407,18 @@ return config old_sys_argv = sys.argv[:] - new_sys_argv = [sys.argv[0]] + ['build', - '--build-temp',build_dir, - '--build-base',build_dir, - '--build-platlib','.'] + build_dir_ext_temp = os.path.join(build_dir,'ext_temp') + build_dir_clib_temp = os.path.join(build_dir,'clib_temp') + build_dir_clib_clib = os.path.join(build_dir,'clib_clib') + new_sys_argv = [sys.argv[0]] + ['build_ext', + '--build-temp',build_dir_ext_temp, + '--build-lib',build_dir, + 'build_clib', + '--build-temp',build_dir_clib_temp, + '--build-clib',build_dir_clib_clib, + ] + temp_dirs = [build_dir_ext_temp, build_dir_clib_temp, build_dir_clib_clib] + if fc_flags: new_sys_argv += ['config_fc'] + fc_flags sys.argv[:] = new_sys_argv @@ -418,9 +430,11 @@ sys.argv[:] = old_sys_argv - if clean_build_dir and os.path.exists(build_dir): - sys.stderr.write('Removing build directory %s\n'%(build_dir)) - shutil.rmtree(build_dir) + if 1 or clean_build_dir: + for d in temp_dirs: + if os.path.exists(d): + sys.stderr.write('Removing build directory %s\n'%(d)) + shutil.rmtree(d) return def main(sys_argv = None): @@ -449,3 +463,72 @@ build_extension(sys_argv, sources_only = True) return + +def compile(source, + jobname = 'untitled', + extra_args = [], + source_ext = None, + modulenames = None + ): + """ + Build extension module from processing source with f2py. + + jobname - the name of compile job. For non-module source + this will be also the name of extension module. + modulenames - the list of extension module names that + the given compilation job should create. + extra_args - a list of extra arguments for numpy style + setup.py command line. + source_ext - extension of the Fortran source file: .f90 or .f + + Extension modules are saved to current working directory. + Returns a list of module objects according to modulenames + input. 
+ """ + from nary import encode + tempdir = tempfile.gettempdir() + s = 'f2pyjob_%s_%s' % (jobname, encode(source)) + tmpdir = os.path.join(tempdir, s) + if source_ext is None: + reader = get_reader(source) + source_ext = {'free90':'.f90','fix90':'.f90','fix77':'.f','pyf':'.pyf'}[reader.mode] + + if modulenames is None: + modulenames = jobname, + if os.path.isdir(tmpdir): + sys.path.insert(0, tmpdir) + try: + modules = [] + for modulename in modulenames: + exec('import %s as m' % (modulename)) + modules.append(m) + sys.path.pop(0) + return modules + except ImportError: + pass + sys.path.pop(0) + else: + os.mkdir(tmpdir) + + fname = os.path.join(tmpdir,'%s_src%s' % (jobname, source_ext)) + + f = open(fname,'w') + f.write(source) + f.close() + + sys_argv = [] + sys_argv.extend(['--build-dir',tmpdir]) + #sys_argv.extend(['-DF2PY_DEBUG_PYOBJ_TOFROM']) + sys_argv.extend(['-m',jobname, fname]) + + build_extension(sys_argv + extra_args) + + sys.path.insert(0, tmpdir) + modules = [] + for modulename in modulenames: + exec('import %s as m' % (modulename)) + modules.append(m) + sys.path.pop(0) + return modules + +#EOF Copied: branches/multicore/numpy/f2py/lib/nary.py (from rev 3839, trunk/numpy/f2py/lib/nary.py) Modified: branches/multicore/numpy/f2py/lib/parser/doc.txt =================================================================== --- branches/multicore/numpy/f2py/lib/parser/doc.txt 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/f2py/lib/parser/doc.txt 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,3 +1,4 @@ +.. -*- rest -*- Created: September 2006 Author: Pearu Peterson @@ -18,6 +19,7 @@ tree of Fortran input. For example, :: + >>> from api import parse >>> code = """ ... c comment @@ -73,6 +75,7 @@ For example, :: + >>> from readfortran import * >>> import os >>> reader = FortranFileReader(os.path.expanduser('~/src/blas/daxpy.f')) @@ -92,17 +95,17 @@ * .line - contains Fortran code line * .span - a 2-tuple containing the span of line numbers containing - Fortran code in the original Fortran file + Fortran code in the original Fortran file * .label - the label of Fortran code line * .reader - the FortranReaderBase class instance * .strline - if not None then contains Fortran code line with parenthesis - content and string literal constants saved in .strlinemap dictionary. + content and string literal constants saved in .strlinemap dictionary. * .is_f2py_directive - True if line started with f2py directive comment. and the following methods: * .get_line() - returns .strline (also evalutes it if None). Also - handles Hollerith contstants in fixed F77 mode. + handles Hollerith contstants in fixed F77 mode. * .isempty() - returns True if Fortran line contains no code. * .copy(line=None, apply_map=False) - returns a Line instance with given .span, .label, .reader information but line content @@ -137,7 +140,7 @@ * .comment - comment string * .span - a 2-tuple containing the span of line numbers containing - Fortran comment in the original Fortran file + Fortran comment in the original Fortran file * .reader - the FortranReaderBase class instance and .isempty() method. @@ -152,7 +155,7 @@ * .block - a list of lines * .suffix - the content of * .span - a 2-tuple containing the span of line numbers containing - multiline syntax in the original Fortran file + multiline syntax in the original Fortran file * .reader - the FortranReaderBase class instance and .isempty() method. 
@@ -191,12 +194,12 @@ FortranReaderBase has the following attributes: * .source - a file-like object with .next() method to retrive - a source code line + a source code line * .source_lines - a list of read source lines * .reader - a FortranReaderBase instance for reading files - from INCLUDE statements. + from INCLUDE statements. * .include_dirs - a list of directories where INCLUDE files - are searched. Default is ['.']. + are searched. Default is ['.']. and the following methods: @@ -240,20 +243,20 @@ * .parent - it is either parent block-type statement or FortranParser instance. * .item - Line instance containing Fortran statement line information, see above. * .isvalid - when False then processing this Statement instance will be skipped, - for example, when the content of .item does not match with - the Statement class. + for example, when the content of .item does not match with + the Statement class. * .ignore - when True then the Statement instance will be ignored. * .modes - a list of Fortran format modes where the Statement instance is valid. and the following methods: * .info(message), .warning(message), .error(message) - to spit messages to - sys.stderr stream. + sys.stderr stream. * .get_variable(name) - get Variable instance by name that is defined in - current namespace. If name is not defined, then the corresponding - Variable instance is created. + current namespace. If name is not defined, then the corresponding + Variable instance is created. * .analyze() - calculate various information about the Statement, this information - is saved in .a attribute that is AttributeHolder instance. + is saved in .a attribute that is AttributeHolder instance. All statement classes are derived from Statement class. Block statements are derived from BeginStatement class and is assumed to end with EndStatement @@ -261,7 +264,7 @@ have the following attributes: * .name - name of the block, blocks without names use line label - as the name. + as the name. * .blocktype - type of the block (derived from class name) * .content - a list of Statement (or Line) instances. @@ -311,9 +314,9 @@ * .tostr() - return string representation of Fortran type declaration * .astypedecl() - pure type declaration instance, it has no .entity_decls - and .attrspec. + and .attrspec. * .analyze() - processes .entity_decls and .attsspec attributes and adds - Variable instance to .parent.a.variables dictionary. + Variable instance to .parent.a.variables dictionary. 
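
[Editor's note] To make the parser documentation above a bit more concrete, a rough sketch of driving it from Python. The import path and the bare parse(code) call are assumptions inferred from the doc excerpts (doc.txt's own examples use "from api import parse"), so treat this as illustrative rather than exact.

    # Assumed import path for the G3 f2py parser package on this branch.
    from numpy.f2py.lib.parser.api import parse

    code = '''
    module demo_mod
    contains
      subroutine foo(a)
        integer a
        a = a + 1
      end subroutine foo
    end module demo_mod
    '''

    tree = parse(code)   # returns the BeginSource block; source form detection is assumed
    # Block statements expose the .name, .blocktype and .content attributes listed above.
    kinds = [(stmt.blocktype, getattr(stmt, 'name', None)) for stmt in tree.content]
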
The following block statements are defined in block_statements.py: @@ -329,21 +332,21 @@ In summary, .a attribute may hold different information sets as follows: - BeginSource - .module, .external_subprogram, .blockdata - Module - .attributes, .implicit_rules, .use, .use_provides, .variables, - .type_decls, .module_subprogram, .module_data - PythonModule - .implicit_rules, .use, .use_provides - Program - .attributes, .implicit_rules, .use, .use_provides - BlockData - .implicit_rules, .use, .use_provides, .variables - Interface - .implicit_rules, .use, .use_provides, .module_procedures - Function, Subroutine - .implicit_rules, .attributes, .use, .use_statements, - .variables, .type_decls, .internal_subprogram - TypeDecl - .variables, .attributes + * BeginSource - .module, .external_subprogram, .blockdata + * Module - .attributes, .implicit_rules, .use, .use_provides, .variables, + .type_decls, .module_subprogram, .module_data + * PythonModule - .implicit_rules, .use, .use_provides + * Program - .attributes, .implicit_rules, .use, .use_provides + * BlockData - .implicit_rules, .use, .use_provides, .variables + * Interface - .implicit_rules, .use, .use_provides, .module_procedures + * Function, Subroutine - .implicit_rules, .attributes, .use, .use_statements, + .variables, .type_decls, .internal_subprogram + * TypeDecl - .variables, .attributes Block statements have the following methods: * .get_classes() - returns a list of Statement classes that are valid - as a content of given block statement. + as a content of given block statement. The following one line statements are defined: Deleted: branches/multicore/numpy/f2py/lib/test_derived_scalar.py =================================================================== --- branches/multicore/numpy/f2py/lib/test_derived_scalar.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/f2py/lib/test_derived_scalar.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,99 +0,0 @@ -#!/usr/bin/env python -""" -Tests for intent(in,out) derived type arguments in Fortran subroutine's. - ------ -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. See http://scipy.org. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -Author: Pearu Peterson -Created: Oct 2006 ------ -""" - -import os -import sys -from numpy.testing import * - -def build(fortran_code, rebuild=True): - modulename = os.path.splitext(os.path.basename(__file__))[0]+'_ext' - try: - exec ('import %s as m' % (modulename)) - if rebuild and os.stat(m.__file__)[8] < os.stat(__file__)[8]: - del sys.modules[m.__name__] # soft unload extension module - os.remove(m.__file__) - raise ImportError,'%s is newer than %s' % (__file__, m.__file__) - except ImportError,msg: - assert str(msg).startswith('No module named'),str(msg) - print msg, ', recompiling %s.' 
% (modulename) - import tempfile - fname = tempfile.mktemp() + '.f90' - f = open(fname,'w') - f.write(fortran_code) - f.close() - sys_argv = [] - sys_argv.extend(['--build-dir','tmp']) - #sys_argv.extend(['-DF2PY_DEBUG_PYOBJ_TOFROM']) - from main import build_extension - sys_argv.extend(['-m',modulename, fname]) - build_extension(sys_argv) - os.remove(fname) - status = os.system(' '.join([sys.executable] + sys.argv)) - sys.exit(status) - return m - -fortran_code = ''' -subroutine foo(a) - type myt - integer flag - end type myt - type(myt) a -!f2py intent(in,out) a - a % flag = a % flag + 1 -end -function foo2(a) - type myt - integer flag - end type myt - type(myt) a - type(myt) foo2 - foo2 % flag = a % flag + 2 -end -''' - -# tester note: set rebuild=True when changing fortan_code and for SVN -m = build(fortran_code, rebuild=True) - -from numpy import * - -class test_m(NumpyTestCase): - - def check_foo_simple(self, level=1): - a = m.myt(2) - assert_equal(a.flag,2) - assert isinstance(a,m.myt),`a` - r = m.foo(a) - assert isinstance(r,m.myt),`r` - assert r is a - assert_equal(r.flag,3) - assert_equal(a.flag,3) - - a.flag = 5 - assert_equal(r.flag,5) - - #s = m.foo((5,)) - - def check_foo2_simple(self, level=1): - a = m.myt(2) - assert_equal(a.flag,2) - assert isinstance(a,m.myt),`a` - r = m.foo2(a) - assert isinstance(r,m.myt),`r` - assert r is not a - assert_equal(a.flag,2) - assert_equal(r.flag,4) - - -if __name__ == "__main__": - NumpyTest().run() Deleted: branches/multicore/numpy/f2py/lib/test_module_module.py =================================================================== --- branches/multicore/numpy/f2py/lib/test_module_module.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/f2py/lib/test_module_module.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,83 +0,0 @@ -#!/usr/bin/env python -""" -Tests for module with scalar derived types and subprograms. - ------ -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. See http://scipy.org. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -Author: Pearu Peterson -Created: Oct 2006 ------ -""" - -import os -import sys -from numpy.testing import * - -def build(fortran_code, rebuild=True, build_dir = 'tmp'): - modulename = os.path.splitext(os.path.basename(__file__))[0] + '_ext' - try: - exec ('import %s as m' % (modulename)) - if rebuild and os.stat(m.__file__)[8] < os.stat(__file__)[8]: - del sys.modules[m.__name__] # soft unload extension module - os.remove(m.__file__) - raise ImportError,'%s is newer than %s' % (__file__, m.__file__) - except ImportError,msg: - assert str(msg)==('No module named %s' % (modulename)) \ - or str(msg).startswith('%s is newer than' % (__file__)),str(msg) - print msg, ', recompiling %s.' 
% (modulename) - if not os.path.isdir(build_dir): os.makedirs(build_dir) - fname = os.path.join(build_dir, modulename + '_source.f90') - f = open(fname,'w') - f.write(fortran_code) - f.close() - sys_argv = [] - sys_argv.extend(['--build-dir',build_dir]) - #sys_argv.extend(['-DF2PY_DEBUG_PYOBJ_TOFROM']) - from main import build_extension - sys_argv.extend(['-m',modulename, fname]) - build_extension(sys_argv) - status = os.system(' '.join([sys.executable] + sys.argv)) - sys.exit(status) - return m - -fortran_code = ''' -module test_module_module_ext2 - type rat - integer n,d - end type rat - contains - subroutine foo2() - print*,"In foo2" - end subroutine foo2 -end module -module test_module_module_ext - contains - subroutine foo - use test_module_module_ext2 - print*,"In foo" - call foo2 - end subroutine foo - subroutine bar(a) - use test_module_module_ext2 - type(rat) a - print*,"In bar,a=",a - end subroutine bar -end module test_module_module_ext -''' - -# tester note: set rebuild=True when changing fortan_code and for SVN -m = build(fortran_code, rebuild=True) - -from numpy import * - -class test_m(NumpyTestCase): - - def check_foo_simple(self, level=1): - foo = m.foo - foo() - -if __name__ == "__main__": - NumpyTest().run() Deleted: branches/multicore/numpy/f2py/lib/test_module_scalar.py =================================================================== --- branches/multicore/numpy/f2py/lib/test_module_scalar.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/f2py/lib/test_module_scalar.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,83 +0,0 @@ -#!/usr/bin/env python -""" -Tests for module with scalar derived types and subprograms. - ------ -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. See http://scipy.org. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -Author: Pearu Peterson -Created: Oct 2006 ------ -""" - -import os -import sys -from numpy.testing import * - -def build(fortran_code, rebuild=True): - modulename = os.path.splitext(os.path.basename(__file__))[0] + '_ext' - try: - exec ('import %s as m' % (modulename)) - if rebuild and os.stat(m.__file__)[8] < os.stat(__file__)[8]: - del sys.modules[m.__name__] # soft unload extension module - os.remove(m.__file__) - raise ImportError,'%s is newer than %s' % (__file__, m.__file__) - except ImportError,msg: - assert str(msg)==('No module named %s' % (modulename)),str(msg) - print msg, ', recompiling %s.' 
% (modulename) - import tempfile - fname = tempfile.mktemp() + '.f90' - f = open(fname,'w') - f.write(fortran_code) - f.close() - sys_argv = [] - sys_argv.extend(['--build-dir','tmp']) - #sys_argv.extend(['-DF2PY_DEBUG_PYOBJ_TOFROM']) - from main import build_extension - sys_argv.extend(['-m',modulename, fname]) - build_extension(sys_argv) - os.remove(fname) - status = os.system(' '.join([sys.executable] + sys.argv)) - sys.exit(status) - return m - -fortran_code = ''' -module test_module_scalar_ext - - contains - subroutine foo(a) - integer a -!f2py intent(in,out) a - a = a + 1 - end subroutine foo - function foo2(a) - integer a - integer foo2 - foo2 = a + 2 - end function foo2 -end module test_module_scalar_ext -''' - -# tester note: set rebuild=True when changing fortan_code and for SVN -m = build(fortran_code, rebuild=True) - -from numpy import * - -class test_m(NumpyTestCase): - - def check_foo_simple(self, level=1): - foo = m.foo - r = foo(2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,3) - - def check_foo2_simple(self, level=1): - foo2 = m.foo2 - r = foo2(2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,4) - -if __name__ == "__main__": - NumpyTest().run() Deleted: branches/multicore/numpy/f2py/lib/test_scalar_function_in.py =================================================================== --- branches/multicore/numpy/f2py/lib/test_scalar_function_in.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/f2py/lib/test_scalar_function_in.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,554 +0,0 @@ -#!/usr/bin/env python -""" -Tests for intent(in) arguments in subroutine-wrapped Fortran functions. - ------ -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. See http://scipy.org. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -Author: Pearu Peterson -Created: Oct 2006 ------ -""" - -import os -import sys -from numpy.testing import * - -def build(fortran_code, rebuild=True): - modulename = os.path.splitext(os.path.basename(__file__))[0]+'_ext' - try: - exec ('import %s as m' % (modulename)) - if rebuild and os.stat(m.__file__)[8] < os.stat(__file__)[8]: - del sys.modules[m.__name__] # soft unload extension module - os.remove(m.__file__) - raise ImportError,'%s is newer than %s' % (__file__, m.__file__) - except ImportError,msg: - assert str(msg).startswith('No module named'),str(msg) - print msg, ', recompiling %s.' 
% (modulename) - import tempfile - fname = tempfile.mktemp() + '.f' - f = open(fname,'w') - f.write(fortran_code) - f.close() - sys_argv = ['--build-dir','tmp'] - #sys_argv.extend(['-DF2PY_DEBUG_PYOBJ_TOFROM']) - from main import build_extension - sys_argv.extend(['-m',modulename, fname]) - build_extension(sys_argv) - os.remove(fname) - os.system(' '.join([sys.executable] + sys.argv)) - sys.exit(0) - return m - -fortran_code = ''' - function fooint1(a) - integer*1 a - integer*1 fooint1 - fooint1 = a + 1 - end - function fooint2(a) - integer*2 a - integer*2 fooint2 - fooint2 = a + 1 - end - function fooint4(a) - integer*4 a - integer*4 fooint4 - fooint4 = a + 1 - end - function fooint8(a) - integer*8 a - integer*8 fooint8 - fooint8 = a + 1 - end - function foofloat4(a) - real*4 a - real*4 foofloat4 - foofloat4 = a + 1.0e0 - end - function foofloat8(a) - real*8 a - real*8 foofloat8 - foofloat8 = a + 1.0d0 - end - function foocomplex8(a) - complex*8 a - complex*8 foocomplex8 - foocomplex8 = a + 1.0e0 - end - function foocomplex16(a) - complex*16 a - complex*16 foocomplex16 - foocomplex16 = a + 1.0d0 - end - function foobool1(a) - logical*1 a - logical*1 foobool1 - foobool1 = .not. a - end - function foobool2(a) - logical*2 a - logical*2 foobool2 - foobool2 = .not. a - end - function foobool4(a) - logical*4 a - logical*4 foobool4 - foobool4 = .not. a - end - function foobool8(a) - logical*8 a - logical*8 foobool8 - foobool8 = .not. a - end - function foostring1(a) - character*1 a - character*1 foostring1 - foostring1 = "1" - end - function foostring5(a) - character*5 a - character*5 foostring5 - foostring5 = a - foostring5(1:2) = "12" - end -! function foostringstar(a) -! character*(*) a -! character*(*) foostringstar -! if (len(a).gt.0) then -! foostringstar = a -! foostringstar(1:1) = "1" -! endif -! 
end -''' - -# tester note: set rebuild=True when changing fortan_code and for SVN -m = build(fortran_code, rebuild=True) - -from numpy import * - -class test_m(NumpyTestCase): - - def check_foo_integer1(self, level=1): - i = int8(2) - e = int8(3) - func = m.fooint1 - assert isinstance(i,int8),`type(i)` - r = func(i) - assert isinstance(r,int8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - for intx in [int64,int16,int32]: - r = func(intx(2)) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer2(self, level=1): - i = int16(2) - e = int16(3) - func = m.fooint2 - assert isinstance(i,int16),`type(i)` - r = func(i) - assert isinstance(r,int16),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - for intx in [int8,int64,int32]: - r = func(intx(2)) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer4(self, level=1): - i = int32(2) - e = int32(3) - func = m.fooint4 - assert isinstance(i,int32),`type(i)` - r = func(i) - assert isinstance(r,int32),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - for intx in [int8,int16,int64]: - r = func(intx(2)) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer8(self, level=1): - i = int64(2) - e = int64(3) - func = m.fooint8 - assert isinstance(i,int64),`type(i)` - r = func(i) - assert isinstance(r,int64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - for intx in [int8,int16,int32]: - r = func(intx(2)) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_real4(self, level=1): - i = float32(2) - e = float32(3) - func = m.foofloat4 - assert isinstance(i,float32),`type(i)` - r = 
func(i) - assert isinstance(r,float32),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e+float32(0.2)) - - r = func(float64(2.0)) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_real8(self, level=1): - i = float64(2) - e = float64(3) - func = m.foofloat8 - assert isinstance(i,float64),`type(i)` - r = func(i) - assert isinstance(r,float64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e+float64(0.2)) - - r = func(float32(2.0)) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_complex8(self, level=1): - i = complex64(2) - e = complex64(3) - func = m.foocomplex8 - assert isinstance(i,complex64),`type(i)` - r = func(i) - assert isinstance(r,complex64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(0.2)) - - r = func(2+1j) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(1j)) - - r = func(complex128(2.0)) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func([2,3]) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(3j)) - - self.assertRaises(TypeError,lambda :func([2,1,3])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_complex16(self, level=1): - i = complex128(2) - e = complex128(3) - func = m.foocomplex16 - assert isinstance(i,complex128),`type(i)` - r = func(i) - assert isinstance(r,complex128),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e+complex128(0.2)) - - r = func(2+1j) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e+complex128(1j)) - - r = func([2]) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func([2,3]) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e+complex128(3j)) - - r = func(complex64(2.0)) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func([2,1,3])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_bool1(self, level=1): - i = bool8(True) - e = bool8(False) - func = 
m.foobool1 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool2(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool2 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool4(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool4 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool8(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool8 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_string1(self, level=1): - i = string0('a') - e = string0('1') - func = m.foostring1 - assert isinstance(i,string0),`type(i)` - r = func(i) - assert isinstance(r,string0),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func('ab') - assert isinstance(r,string0),`type(r)` - assert_equal(r,e) - - r = func('') - assert isinstance(r,string0),`type(r)` - assert_equal(r,e) - - def check_foo_string5(self, level=1): - i = string0('abcde') - e = string0('12cde') - func = m.foostring5 - assert isinstance(i,string0),`type(i)` - r = func(i) - assert isinstance(r,string0),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func('abc') - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12c ') - - r = func('abcdefghi') - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12cde') - - r = func([1]) - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12] ') - - def _check_foo_string0(self, level=1): - i = string0('abcde') - e = string0('12cde') - func = m.foostringstar - r = func('abcde') - assert_equal(r,'1bcde') - r = func('') - assert_equal(r,'') - -if __name__ == "__main__": - NumpyTest().run() Deleted: branches/multicore/numpy/f2py/lib/test_scalar_in_out.py =================================================================== --- branches/multicore/numpy/f2py/lib/test_scalar_in_out.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/f2py/lib/test_scalar_in_out.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,552 +0,0 @@ -#!/usr/bin/env python -""" -Tests for intent(in,out) arguments in Fortran subroutine's. - ------ -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. See http://scipy.org. 
- -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -Author: Pearu Peterson -Created: Oct 2006 ------ -""" - -import os -import sys -from numpy.testing import * - -def build(fortran_code, rebuild=True, build_dir='tmp'): - modulename = os.path.splitext(os.path.basename(__file__))[0]+'_ext' - try: - exec ('import %s as m' % (modulename)) - if rebuild and os.stat(m.__file__)[8] < os.stat(__file__)[8]: - del sys.modules[m.__name__] # soft unload extension module - os.remove(m.__file__) - raise ImportError,'%s is newer than %s' % (__file__, m.__file__) - except ImportError,msg: - assert str(msg)==('No module named %s' % (modulename)) \ - or str(msg).startswith('%s is newer than' % (__file__)),str(msg) - print msg, ', recompiling %s.' % (modulename) - if not os.path.isdir(build_dir): os.makedirs(build_dir) - fname = os.path.join(build_dir,'%s_source.f' % (modulename)) - f = open(fname,'w') - f.write(fortran_code) - f.close() - sys_argv = ['--build-dir',build_dir] - #sys_argv.extend(['-DF2PY_DEBUG_PYOBJ_TOFROM']) - from main import build_extension - sys_argv.extend(['-m',modulename, fname]) - build_extension(sys_argv) - status = os.system(' '.join([sys.executable] + sys.argv)) - sys.exit(status) - return m - -fortran_code = ''' - subroutine fooint1(a) - integer*1 a -!f2py intent(in,out) a - a = a + 1 - end - subroutine fooint2(a) - integer*2 a -!f2py intent(in,out) a - a = a + 1 - end - subroutine fooint4(a) - integer*4 a -!f2py intent(in,out) a - a = a + 1 - end - subroutine fooint8(a) - integer*8 a -!f2py intent(in,out) a - a = a + 1 - end - subroutine foofloat4(a) - real*4 a -!f2py intent(in,out) a - a = a + 1.0e0 - end - subroutine foofloat8(a) - real*8 a -!f2py intent(in,out) a - a = a + 1.0d0 - end - subroutine foocomplex8(a) - complex*8 a -!f2py intent(in,out) a - a = a + 1.0e0 - end - subroutine foocomplex16(a) - complex*16 a -!f2py intent(in,out) a - a = a + 1.0d0 - end - subroutine foobool1(a) - logical*1 a -!f2py intent(in,out) a - a = .not. a - end - subroutine foobool2(a) - logical*2 a -!f2py intent(in,out) a - a = .not. a - end - subroutine foobool4(a) - logical*4 a -!f2py intent(in,out) a - a = .not. a - end - subroutine foobool8(a) - logical*8 a -!f2py intent(in,out) a - a = .not. 
a - end - subroutine foostring1(a) - character*1 a -!f2py intent(in,out) a - a = "1" - end - subroutine foostring5(a) - character*5 a -!f2py intent(in,out) a - a(1:2) = "12" - end - subroutine foostringstar(a) - character*(*) a -!f2py intent(in,out) a - if (len(a).gt.0) then - a(1:1) = "1" - endif - end -''' - -# tester note: set rebuild=True when changing fortan_code and for SVN -m = build(fortran_code, rebuild=True) - -from numpy import * - -class test_m(NumpyTestCase): - - def check_foo_integer1(self, level=1): - i = int8(2) - e = int8(3) - func = m.fooint1 - assert isinstance(i,int8),`type(i)` - r = func(i) - assert isinstance(r,int8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - for intx in [int64,int16,int32]: - r = func(intx(2)) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer2(self, level=1): - i = int16(2) - e = int16(3) - func = m.fooint2 - assert isinstance(i,int16),`type(i)` - r = func(i) - assert isinstance(r,int16),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - for intx in [int8,int64,int32]: - r = func(intx(2)) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer4(self, level=1): - i = int32(2) - e = int32(3) - func = m.fooint4 - assert isinstance(i,int32),`type(i)` - r = func(i) - assert isinstance(r,int32),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - for intx in [int8,int16,int64]: - r = func(intx(2)) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer8(self, level=1): - i = int64(2) - e = int64(3) - func = m.fooint8 - assert isinstance(i,int64),`type(i)` - r = func(i) - assert isinstance(r,int64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - for intx in [int8,int16,int32]: - r = func(intx(2)) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int64),`type(r)` - 
assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_real4(self, level=1): - i = float32(2) - e = float32(3) - func = m.foofloat4 - assert isinstance(i,float32),`type(i)` - r = func(i) - assert isinstance(r,float32),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e+float32(0.2)) - - r = func(float64(2.0)) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_real8(self, level=1): - i = float64(2) - e = float64(3) - func = m.foofloat8 - assert isinstance(i,float64),`type(i)` - r = func(i) - assert isinstance(r,float64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e+float64(0.2)) - - r = func(float32(2.0)) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_complex8(self, level=1): - i = complex64(2) - e = complex64(3) - func = m.foocomplex8 - assert isinstance(i,complex64),`type(i)` - r = func(i) - assert isinstance(r,complex64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(0.2)) - - r = func(2+1j) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(1j)) - - r = func(complex128(2.0)) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func([2,3]) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(3j)) - - self.assertRaises(TypeError,lambda :func([2,1,3])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_complex16(self, level=1): - i = complex128(2) - e = complex128(3) - func = m.foocomplex16 - assert isinstance(i,complex128),`type(i)` - r = func(i) - assert isinstance(r,complex128),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e+complex128(0.2)) - - r = func(2+1j) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e+complex128(1j)) - - r = func([2]) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func([2,3]) - assert isinstance(r,complex128),`type(r)` - 
assert_equal(r,e+complex128(3j)) - - r = func(complex64(2.0)) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func([2,1,3])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_bool1(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool1 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool2(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool2 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool4(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool4 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool8(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool8 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_string1(self, level=1): - i = string0('a') - e = string0('1') - func = m.foostring1 - assert isinstance(i,string0),`type(i)` - r = func(i) - assert isinstance(r,string0),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func('ab') - assert isinstance(r,string0),`type(r)` - assert_equal(r,e) - - r = func('') - assert isinstance(r,string0),`type(r)` - assert_equal(r,e) - - def check_foo_string5(self, level=1): - i = string0('abcde') - e = string0('12cde') - func = m.foostring5 - assert isinstance(i,string0),`type(i)` - r = func(i) - assert isinstance(r,string0),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func('abc') - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12c ') - - r = func('abcdefghi') - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12cde') - - r = func([1]) - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12] ') - - def check_foo_string0(self, level=1): - i = string0('abcde') - e = string0('12cde') - func = m.foostringstar - r = func('abcde') - assert_equal(r,'1bcde') - r = func('') - assert_equal(r,'') - -if __name__ == "__main__": - NumpyTest().run() Copied: branches/multicore/numpy/f2py/lib/tests (from rev 3839, trunk/numpy/f2py/lib/tests) Deleted: branches/multicore/numpy/f2py/lib/tests/test_derived_scalar.py =================================================================== --- 
trunk/numpy/f2py/lib/tests/test_derived_scalar.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/f2py/lib/tests/test_derived_scalar.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,74 +0,0 @@ -#!/usr/bin/env python -""" -Tests for intent(in,out) derived type arguments in Fortran subroutine's. - ------ -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. See http://scipy.org. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -Author: Pearu Peterson -Created: Oct 2006 ------ -""" - -import os -import sys -from numpy.testing import * -set_package_path() -from lib.main import build_extension, compile -restore_path() - -fortran_code = ''' -subroutine foo(a) - type myt - integer flag - end type myt - type(myt) a -!f2py intent(in,out) a - a % flag = a % flag + 1 -end -function foo2(a) - type myt - integer flag - end type myt - type(myt) a - type(myt) foo2 - foo2 % flag = a % flag + 2 -end -''' - -m, = compile(fortran_code, 'test_derived_scalar_ext') - -from numpy import * - -class test_m(NumpyTestCase): - - def check_foo_simple(self, level=1): - a = m.myt(2) - assert_equal(a.flag,2) - assert isinstance(a,m.myt),`a` - r = m.foo(a) - assert isinstance(r,m.myt),`r` - assert r is a - assert_equal(r.flag,3) - assert_equal(a.flag,3) - - a.flag = 5 - assert_equal(r.flag,5) - - #s = m.foo((5,)) - - def check_foo2_simple(self, level=1): - a = m.myt(2) - assert_equal(a.flag,2) - assert isinstance(a,m.myt),`a` - r = m.foo2(a) - assert isinstance(r,m.myt),`r` - assert r is not a - assert_equal(a.flag,2) - assert_equal(r.flag,4) - - -if __name__ == "__main__": - NumpyTest().run() Copied: branches/multicore/numpy/f2py/lib/tests/test_derived_scalar.py (from rev 3839, trunk/numpy/f2py/lib/tests/test_derived_scalar.py) Deleted: branches/multicore/numpy/f2py/lib/tests/test_module_module.py =================================================================== --- trunk/numpy/f2py/lib/tests/test_module_module.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/f2py/lib/tests/test_module_module.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,61 +0,0 @@ -#!/usr/bin/env python -""" -Tests for module with scalar derived types and subprograms. - ------ -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. See http://scipy.org. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. 
-Author: Pearu Peterson -Created: Oct 2006 ------ -""" - -import os -import sys -from numpy.testing import * - -set_package_path() -from lib.main import build_extension, compile -restore_path() - -fortran_code = ''' -module test_module_module_ext2 - type rat - integer n,d - end type rat - contains - subroutine foo2() - print*,"In foo2" - end subroutine foo2 -end module -module test_module_module_ext - contains - subroutine foo - use test_module_module_ext2 - print*,"In foo" - call foo2 - end subroutine foo - subroutine bar(a) - use test_module_module_ext2 - type(rat) a - print*,"In bar,a=",a - end subroutine bar -end module test_module_module_ext -''' - -m,m2 = compile(fortran_code, modulenames=['test_module_module_ext', - 'test_module_module_ext2', - ]) - -from numpy import * - -class test_m(NumpyTestCase): - - def check_foo_simple(self, level=1): - foo = m.foo - foo() - -if __name__ == "__main__": - NumpyTest().run() Copied: branches/multicore/numpy/f2py/lib/tests/test_module_module.py (from rev 3839, trunk/numpy/f2py/lib/tests/test_module_module.py) Deleted: branches/multicore/numpy/f2py/lib/tests/test_module_scalar.py =================================================================== --- trunk/numpy/f2py/lib/tests/test_module_scalar.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/f2py/lib/tests/test_module_scalar.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,58 +0,0 @@ -#!/usr/bin/env python -""" -Tests for module with scalar derived types and subprograms. - ------ -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. See http://scipy.org. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -Author: Pearu Peterson -Created: Oct 2006 ------ -""" - -import os -import sys -from numpy.testing import * -set_package_path() -from lib.main import build_extension, compile -restore_path() - -fortran_code = ''' -module test_module_scalar_ext - - contains - subroutine foo(a) - integer a -!f2py intent(in,out) a - a = a + 1 - end subroutine foo - function foo2(a) - integer a - integer foo2 - foo2 = a + 2 - end function foo2 -end module test_module_scalar_ext -''' - -m, = compile(fortran_code, modulenames = ['test_module_scalar_ext']) - -from numpy import * - -class test_m(NumpyTestCase): - - def check_foo_simple(self, level=1): - foo = m.foo - r = foo(2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,3) - - def check_foo2_simple(self, level=1): - foo2 = m.foo2 - r = foo2(2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,4) - -if __name__ == "__main__": - NumpyTest().run() Copied: branches/multicore/numpy/f2py/lib/tests/test_module_scalar.py (from rev 3839, trunk/numpy/f2py/lib/tests/test_module_scalar.py) Deleted: branches/multicore/numpy/f2py/lib/tests/test_scalar_function_in.py =================================================================== --- trunk/numpy/f2py/lib/tests/test_scalar_function_in.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/f2py/lib/tests/test_scalar_function_in.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,532 +0,0 @@ -#!/usr/bin/env python -""" -Tests for intent(in) arguments in subroutine-wrapped Fortran functions. - ------ -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. See http://scipy.org. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. 
-Author: Pearu Peterson -Created: Oct 2006 ------ -""" - -import os -import sys -from numpy.testing import * - -set_package_path() -from lib.main import build_extension, compile -restore_path() - -fortran_code = '''\ -! -*- f77 -*- - function fooint1(a) - integer*1 a - integer*1 fooint1 - fooint1 = a + 1 - end - function fooint2(a) - integer*2 a - integer*2 fooint2 - fooint2 = a + 1 - end - function fooint4(a) - integer*4 a - integer*4 fooint4 - fooint4 = a + 1 - end - function fooint8(a) - integer*8 a - integer*8 fooint8 - fooint8 = a + 1 - end - function foofloat4(a) - real*4 a - real*4 foofloat4 - foofloat4 = a + 1.0e0 - end - function foofloat8(a) - real*8 a - real*8 foofloat8 - foofloat8 = a + 1.0d0 - end - function foocomplex8(a) - complex*8 a - complex*8 foocomplex8 - foocomplex8 = a + 1.0e0 - end - function foocomplex16(a) - complex*16 a - complex*16 foocomplex16 - foocomplex16 = a + 1.0d0 - end - function foobool1(a) - logical*1 a - logical*1 foobool1 - foobool1 = .not. a - end - function foobool2(a) - logical*2 a - logical*2 foobool2 - foobool2 = .not. a - end - function foobool4(a) - logical*4 a - logical*4 foobool4 - foobool4 = .not. a - end - function foobool8(a) - logical*8 a - logical*8 foobool8 - foobool8 = .not. a - end - function foostring1(a) - character*1 a - character*1 foostring1 - foostring1 = "1" - end - function foostring5(a) - character*5 a - character*5 foostring5 - foostring5 = a - foostring5(1:2) = "12" - end -! function foostringstar(a) -! character*(*) a -! character*(*) foostringstar -! if (len(a).gt.0) then -! foostringstar = a -! foostringstar(1:1) = "1" -! endif -! end -''' - -m, = compile(fortran_code, 'test_scalar_function_in_ext') - -from numpy import * - -class test_m(NumpyTestCase): - - def check_foo_integer1(self, level=1): - i = int8(2) - e = int8(3) - func = m.fooint1 - assert isinstance(i,int8),`type(i)` - r = func(i) - assert isinstance(r,int8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - for intx in [int64,int16,int32]: - r = func(intx(2)) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer2(self, level=1): - i = int16(2) - e = int16(3) - func = m.fooint2 - assert isinstance(i,int16),`type(i)` - r = func(i) - assert isinstance(r,int16),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - for intx in [int8,int64,int32]: - r = func(intx(2)) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer4(self, level=1): - i = int32(2) - e = int32(3) - func = m.fooint4 - assert isinstance(i,int32),`type(i)` - r = func(i) - assert isinstance(r,int32),`type(r)` - 
assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - for intx in [int8,int16,int64]: - r = func(intx(2)) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer8(self, level=1): - i = int64(2) - e = int64(3) - func = m.fooint8 - assert isinstance(i,int64),`type(i)` - r = func(i) - assert isinstance(r,int64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - for intx in [int8,int16,int32]: - r = func(intx(2)) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_real4(self, level=1): - i = float32(2) - e = float32(3) - func = m.foofloat4 - assert isinstance(i,float32),`type(i)` - r = func(i) - assert isinstance(r,float32),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e+float32(0.2)) - - r = func(float64(2.0)) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_real8(self, level=1): - i = float64(2) - e = float64(3) - func = m.foofloat8 - assert isinstance(i,float64),`type(i)` - r = func(i) - assert isinstance(r,float64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e+float64(0.2)) - - r = func(float32(2.0)) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_complex8(self, level=1): - i = complex64(2) - e = complex64(3) - func = m.foocomplex8 - assert isinstance(i,complex64),`type(i)` - r = func(i) - assert isinstance(r,complex64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,complex64),`type(r)` - 
assert_equal(r,e+complex64(0.2)) - - r = func(2+1j) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(1j)) - - r = func(complex128(2.0)) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func([2,3]) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(3j)) - - self.assertRaises(TypeError,lambda :func([2,1,3])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_complex16(self, level=1): - i = complex128(2) - e = complex128(3) - func = m.foocomplex16 - assert isinstance(i,complex128),`type(i)` - r = func(i) - assert isinstance(r,complex128),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e+complex128(0.2)) - - r = func(2+1j) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e+complex128(1j)) - - r = func([2]) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func([2,3]) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e+complex128(3j)) - - r = func(complex64(2.0)) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func([2,1,3])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_bool1(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool1 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool2(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool2 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool4(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool4 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool8(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool8 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_string1(self, level=1): - i = string0('a') - e = string0('1') - func = m.foostring1 - assert isinstance(i,string0),`type(i)` - r = func(i) 
- assert isinstance(r,string0),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func('ab') - assert isinstance(r,string0),`type(r)` - assert_equal(r,e) - - r = func('') - assert isinstance(r,string0),`type(r)` - assert_equal(r,e) - - def check_foo_string5(self, level=1): - i = string0('abcde') - e = string0('12cde') - func = m.foostring5 - assert isinstance(i,string0),`type(i)` - r = func(i) - assert isinstance(r,string0),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func('abc') - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12c ') - - r = func('abcdefghi') - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12cde') - - r = func([1]) - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12] ') - - def _check_foo_string0(self, level=1): - i = string0('abcde') - e = string0('12cde') - func = m.foostringstar - r = func('abcde') - assert_equal(r,'1bcde') - r = func('') - assert_equal(r,'') - -if __name__ == "__main__": - NumpyTest().run() Copied: branches/multicore/numpy/f2py/lib/tests/test_scalar_function_in.py (from rev 3839, trunk/numpy/f2py/lib/tests/test_scalar_function_in.py) Deleted: branches/multicore/numpy/f2py/lib/tests/test_scalar_in_out.py =================================================================== --- trunk/numpy/f2py/lib/tests/test_scalar_in_out.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/f2py/lib/tests/test_scalar_in_out.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,529 +0,0 @@ -#!/usr/bin/env python -""" -Tests for intent(in,out) arguments in Fortran subroutine's. - ------ -Permission to use, modify, and distribute this software is given under the -terms of the NumPy License. See http://scipy.org. - -NO WARRANTY IS EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. -Author: Pearu Peterson -Created: Oct 2006 ------ -""" - -import os -import sys -from numpy.testing import * - -set_package_path() -from lib.main import build_extension, compile -restore_path() - -fortran_code = ''' - subroutine fooint1(a) - integer*1 a -!f2py intent(in,out) a - a = a + 1 - end - subroutine fooint2(a) - integer*2 a -!f2py intent(in,out) a - a = a + 1 - end - subroutine fooint4(a) - integer*4 a -!f2py intent(in,out) a - a = a + 1 - end - subroutine fooint8(a) - integer*8 a -!f2py intent(in,out) a - a = a + 1 - end - subroutine foofloat4(a) - real*4 a -!f2py intent(in,out) a - a = a + 1.0e0 - end - subroutine foofloat8(a) - real*8 a -!f2py intent(in,out) a - a = a + 1.0d0 - end - subroutine foocomplex8(a) - complex*8 a -!f2py intent(in,out) a - a = a + 1.0e0 - end - subroutine foocomplex16(a) - complex*16 a -!f2py intent(in,out) a - a = a + 1.0d0 - end - subroutine foobool1(a) - logical*1 a -!f2py intent(in,out) a - a = .not. a - end - subroutine foobool2(a) - logical*2 a -!f2py intent(in,out) a - a = .not. a - end - subroutine foobool4(a) - logical*4 a -!f2py intent(in,out) a - a = .not. a - end - subroutine foobool8(a) - logical*8 a -!f2py intent(in,out) a - a = .not. 
a - end - subroutine foostring1(a) - character*1 a -!f2py intent(in,out) a - a = "1" - end - subroutine foostring5(a) - character*5 a -!f2py intent(in,out) a - a(1:2) = "12" - end - subroutine foostringstar(a) - character*(*) a -!f2py intent(in,out) a - if (len(a).gt.0) then - a(1:1) = "1" - endif - end -''' - -m, = compile(fortran_code, 'test_scalar_in_out_ext', source_ext = '.f') - -from numpy import * - -class test_m(NumpyTestCase): - - def check_foo_integer1(self, level=1): - i = int8(2) - e = int8(3) - func = m.fooint1 - assert isinstance(i,int8),`type(i)` - r = func(i) - assert isinstance(r,int8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - for intx in [int64,int16,int32]: - r = func(intx(2)) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int8),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer2(self, level=1): - i = int16(2) - e = int16(3) - func = m.fooint2 - assert isinstance(i,int16),`type(i)` - r = func(i) - assert isinstance(r,int16),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - for intx in [int8,int64,int32]: - r = func(intx(2)) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int16),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer4(self, level=1): - i = int32(2) - e = int32(3) - func = m.fooint4 - assert isinstance(i,int32),`type(i)` - r = func(i) - assert isinstance(r,int32),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - for intx in [int8,int16,int64]: - r = func(intx(2)) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int32),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_integer8(self, level=1): - i = int64(2) - e = int64(3) - func = m.fooint8 - assert isinstance(i,int64),`type(i)` - r = func(i) - assert isinstance(r,int64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - for intx in [int8,int16,int32]: - r = func(intx(2)) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,int64),`type(r)` - assert_equal(r,e) - - 
self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_real4(self, level=1): - i = float32(2) - e = float32(3) - func = m.foofloat4 - assert isinstance(i,float32),`type(i)` - r = func(i) - assert isinstance(r,float32),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e+float32(0.2)) - - r = func(float64(2.0)) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,float32),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_real8(self, level=1): - i = float64(2) - e = float64(3) - func = m.foofloat8 - assert isinstance(i,float64),`type(i)` - r = func(i) - assert isinstance(r,float64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e+float64(0.2)) - - r = func(float32(2.0)) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,float64),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func(2.2j)) - self.assertRaises(TypeError,lambda :func([2,1])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_complex8(self, level=1): - i = complex64(2) - e = complex64(3) - func = m.foocomplex8 - assert isinstance(i,complex64),`type(i)` - r = func(i) - assert isinstance(r,complex64),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(0.2)) - - r = func(2+1j) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(1j)) - - r = func(complex128(2.0)) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func([2]) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e) - - r = func([2,3]) - assert isinstance(r,complex64),`type(r)` - assert_equal(r,e+complex64(3j)) - - self.assertRaises(TypeError,lambda :func([2,1,3])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_complex16(self, level=1): - i = complex128(2) - e = complex128(3) - func = m.foocomplex16 - assert isinstance(i,complex128),`type(i)` - r = func(i) - assert isinstance(r,complex128),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func(2) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func(2.0) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func(2.2) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e+complex128(0.2)) - - r = func(2+1j) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e+complex128(1j)) - - r = func([2]) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - r = func([2,3]) - assert isinstance(r,complex128),`type(r)` - 
assert_equal(r,e+complex128(3j)) - - r = func(complex64(2.0)) - assert isinstance(r,complex128),`type(r)` - assert_equal(r,e) - - self.assertRaises(TypeError,lambda :func([2,1,3])) - self.assertRaises(TypeError,lambda :func({})) - - def check_foo_bool1(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool1 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool2(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool2 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool4(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool4 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_bool8(self, level=1): - i = bool8(True) - e = bool8(False) - func = m.foobool8 - assert isinstance(i,bool8),`type(i)` - r = func(i) - assert isinstance(r,bool8),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - for tv in [1,2,2.1,-1j,[0],True]: - r = func(tv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,e) - - for fv in [0,0.0,0j,False,(),{},[]]: - r = func(fv) - assert isinstance(r,bool8),`type(r)` - assert_equal(r,not e) - - def check_foo_string1(self, level=1): - i = string0('a') - e = string0('1') - func = m.foostring1 - assert isinstance(i,string0),`type(i)` - r = func(i) - assert isinstance(r,string0),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func('ab') - assert isinstance(r,string0),`type(r)` - assert_equal(r,e) - - r = func('') - assert isinstance(r,string0),`type(r)` - assert_equal(r,e) - - def check_foo_string5(self, level=1): - i = string0('abcde') - e = string0('12cde') - func = m.foostring5 - assert isinstance(i,string0),`type(i)` - r = func(i) - assert isinstance(r,string0),`type(r)` - assert i is not r,`id(i),id(r)` - assert_equal(r,e) - - r = func('abc') - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12c ') - - r = func('abcdefghi') - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12cde') - - r = func([1]) - assert isinstance(r,string0),`type(r)` - assert_equal(r,'12] ') - - def check_foo_string0(self, level=1): - i = string0('abcde') - e = string0('12cde') - func = m.foostringstar - r = func('abcde') - assert_equal(r,'1bcde') - r = func('') - assert_equal(r,'') - -if __name__ == "__main__": - NumpyTest().run() Copied: branches/multicore/numpy/f2py/lib/tests/test_scalar_in_out.py (from rev 3839, trunk/numpy/f2py/lib/tests/test_scalar_in_out.py) Modified: branches/multicore/numpy/f2py/setup.py 
=================================================================== --- branches/multicore/numpy/f2py/setup.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/f2py/setup.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -21,6 +21,7 @@ import os import sys from distutils.dep_util import newer +from numpy.distutils import log from numpy.distutils.core import setup from numpy.distutils.misc_util import Configuration @@ -48,7 +49,7 @@ f2py_exe = f2py_exe + '.py' target = os.path.join(build_dir,f2py_exe) if newer(__file__,target): - print 'Creating',target + log.info('Creating %s', target) f = open(target,'w') f.write('''\ #!/usr/bin/env %s @@ -83,7 +84,7 @@ config.add_scripts(generate_f2py_py) - print 'F2PY Version',config.get_version() + log.info('F2PY Version %s', config.get_version()) return config Modified: branches/multicore/numpy/fft/fftpack_litemodule.c =================================================================== --- branches/multicore/numpy/fft/fftpack_litemodule.c 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/fft/fftpack_litemodule.c 2007-05-28 18:35:05 UTC (rev 3840) @@ -18,8 +18,8 @@ if(!PyArg_ParseTuple(args, "OO", &op1, &op2)) return NULL; data = (PyArrayObject *)PyArray_CopyFromObject(op1, PyArray_CDOUBLE, 1, 0); - if (data == NULL) return NULL; - if (PyArray_As1D(&op2, (char **)&wsave, &nsave, PyArray_DOUBLE) == -1) + if (data == NULL) return NULL; + if (PyArray_As1D(&op2, (char **)&wsave, &nsave, PyArray_DOUBLE) == -1) goto fail; if (data == NULL) goto fail; @@ -31,10 +31,12 @@ nrepeats = PyArray_SIZE(data)/npts; dptr = (double *)data->data; + NPY_SIGINT_ON for (i=0; idata; + NPY_SIGINT_ON for (i=0; idata); + NPY_SIGINT_OFF return (PyObject *)op; } @@ -115,12 +121,12 @@ if (data == NULL) return NULL; npts = data->dimensions[data->nd-1]; data->dimensions[data->nd-1] = npts/2+1; - ret = (PyArrayObject *)PyArray_Zeros(data->nd, data->dimensions, + ret = (PyArrayObject *)PyArray_Zeros(data->nd, data->dimensions, PyArray_DescrFromType(PyArray_CDOUBLE), 0); data->dimensions[data->nd-1] = npts; rstep = (ret->dimensions[ret->nd-1])*2; - if (PyArray_As1D(&op2, (char **)&wsave, &nsave, PyArray_DOUBLE) == -1) + if (PyArray_As1D(&op2, (char **)&wsave, &nsave, PyArray_DOUBLE) == -1) goto fail; if (data == NULL || ret == NULL) goto fail; @@ -132,7 +138,9 @@ nrepeats = PyArray_SIZE(data)/npts; rptr = (double *)ret->data; dptr = (double *)data->data; - + + + NPY_SIGINT_ON for (i=0; idimensions[data->nd-1]; - ret = (PyArrayObject *)PyArray_Zeros(data->nd, data->dimensions, + ret = (PyArrayObject *)PyArray_Zeros(data->nd, data->dimensions, PyArray_DescrFromType(PyArray_DOUBLE), 0); - if (PyArray_As1D(&op2, (char **)&wsave, &nsave, PyArray_DOUBLE) == -1) + if (PyArray_As1D(&op2, (char **)&wsave, &nsave, PyArray_DOUBLE) == -1) goto fail; if (data == NULL || ret == NULL) goto fail; @@ -181,7 +190,8 @@ nrepeats = PyArray_SIZE(ret)/npts; rptr = (double *)ret->data; dptr = (double *)data->data; - + + NPY_SIGINT_ON for (i=0; idata); + NPY_SIGINT_OFF return (PyObject *)op; } @@ -236,7 +249,7 @@ /* Initialization function for the module (*must* be called initfftpack) */ -static char fftpack_module_documentation[] = +static char fftpack_module_documentation[] = "" ; @@ -258,5 +271,5 @@ PyDict_SetItemString(d, "error", ErrorObject); /* XXXX Add constants here */ - + } Modified: branches/multicore/numpy/lib/function_base.py =================================================================== --- branches/multicore/numpy/lib/function_base.py 2007-05-28 18:00:22 UTC (rev 3839) 
+++ branches/multicore/numpy/lib/function_base.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -71,29 +71,41 @@ def histogram(a, bins=10, range=None, normed=False): """Compute the histogram from a set of data. - :Parameters: - - `a` : array - The data to histogram. n-D arrays will be flattened. - - `bins` : int or sequence of floats, optional - If an int, then the number of equal-width bins in the given range. - Otherwise, a sequence of the lower bound of each bin. - - `range` : (float, float), optional - The lower and upper range of the bins. If not provided, then (a.min(), - a.max()) is used. Values outside of this range are allocated to the - closest bin. - - `normed` : bool, optional - If False, the result array will contain the number of samples in each bin. - If True, the result array is the value of the probability *density* - function at the bin normalized such that the *integral* over the range - is 1. Note that the sum of all of the histogram values will not usually - be 1; it is not a probability *mass* function. + Parameters: - :Returns: - - `hist` : array (n,) - The values of the histogram. See `normed` for a description of the - possible semantics. - - `lower_edges` : float array (n,) - The lower edges of each bin. + a : array + The data to histogram. n-D arrays will be flattened. + + bins : int or sequence of floats + If an int, then the number of equal-width bins in the given range. + Otherwise, a sequence of the lower bound of each bin. + + range : (float, float) + The lower and upper range of the bins. If not provided, then + (a.min(), a.max()) is used. Values outside of this range are + allocated to the closest bin. + + normed : bool + If False, the result array will contain the number of samples in + each bin. If True, the result array is the value of the + probability *density* function at the bin normalized such that the + *integral* over the range is 1. Note that the sum of all of the + histogram values will not usually be 1; it is not a probability + *mass* function. + + Returns: + + hist : array + The values of the histogram. See `normed` for a description of the + possible semantics. + + lower_edges : float array + The lower edges of each bin. + + SeeAlso: + + histogramdd + """ a = asarray(a).ravel() if not iterable(bins): @@ -120,38 +132,54 @@ return n, bins def histogramdd(sample, bins=10, range=None, normed=False, weights=None): - """histogramdd(sample, bins=10, range=None, normed=False, weights=None) + """histogramdd(sample, bins=10, range=None, normed=False, weights=None) - Return the D-dimensional histogram of the sample. + Return the N-dimensional histogram of the sample. - :Parameters: - - `sample` : A sequence of D arrays, or an NxD array. - - `bins` : A sequence of edge arrays, a sequence of bin number, - or a scalar (the number of bins for all dimensions.) - - `range` : A sequence of lower and upper bin edges (default: [min, max]). - - `normed` : Boolean, if False, return the number of samples in each bin, - if True, returns the density. - - `weights` : An array of weights. The weights are normed only if normed is True. - Should weights.sum() not equal N, the total bin count will - not be equal to the number of samples. + Parameters: - :Return: - - `hist` : Histogram array. - - `edges` : List of arrays defining the bin edges. - + sample : sequence or array + A sequence containing N arrays or an NxM array. Input data. 
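As a rough illustration of the histogram() interface documented in the hunk above (the 1.0-era API, where the second return value is the array of lower bin edges), here is a minimal sketch with made-up data:

    >>> from numpy import histogram
    >>> data = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
    >>> counts, lower_edges = histogram(data, bins=3, range=(1.0, 4.0))
    >>> # counts[i] is the number of samples in bin i; with normed=True the
    >>> # values are rescaled so the integral over the range is 1 (a density,
    >>> # not a probability mass function).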
- Example: - >>> x = random.randn(100,3) - >>> hist3d, edges = histogramdd(x, bins = (5, 6, 7)) + bins : sequence or scalar + A sequence of edge arrays, a sequence of bin counts, or a scalar + which is the bin count for all dimensions. Default is 10. - :SeeAlso: histogram + range : sequence + A sequence of lower and upper bin edges. Default is [min, max]. + normed : boolean + If False, return the number of samples in each bin, if True, + returns the density. + + weights : array + Array of weights. The weights are normed only if normed is True. + Should the sum of the weights not equal N, the total bin count will + not be equal to the number of samples. + + Returns: + + hist : array + Histogram array. + + edges : list + List of arrays defining the lower bin edges. + + SeeAlso: + + histogram + + Example + + >>> x = random.randn(100,3) + >>> hist3d, edges = histogramdd(x, bins = (5, 6, 7)) + """ - try: + try: # Sample is an ND-array. N, D = sample.shape - except (AttributeError, ValueError): + except (AttributeError, ValueError): # Sample is a sequence of 1D arrays. sample = atleast_2d(sample).T N, D = sample.shape @@ -161,7 +189,7 @@ dedges = D*[None] if weights is not None: weights = asarray(weights) - + try: M = len(bins) if M != D: @@ -172,14 +200,20 @@ # Select range for each dimension # Used only if number of bins is given. if range is None: - smin = atleast_1d(sample.min(0)) - smax = atleast_1d(sample.max(0)) + smin = atleast_1d(array(sample.min(0), float)) + smax = atleast_1d(array(sample.max(0), float)) else: smin = zeros(D) smax = zeros(D) for i in arange(D): smin[i], smax[i] = range[i] + # Make sure the bins have a finite width. + for i in arange(len(smin)): + if smin[i] == smax[i]: + smin[i] = smin[i] - .5 + smax[i] = smax[i] + .5 + # Create edge arrays for i in arange(D): if isscalar(bins[i]): @@ -189,14 +223,14 @@ edges[i] = asarray(bins[i], float) nbin[i] = len(edges[i])+1 # +1 for outlier bins dedges[i] = diff(edges[i]) - + nbin = asarray(nbin) - - # Compute the bin number each sample falls into. + + # Compute the bin number each sample falls into. Ncount = {} for i in arange(D): Ncount[i] = digitize(sample[:,i], edges[i]) - + # Using digitize, values that fall on an edge are put in the right bin. # For the rightmost bin, we want values equal to the right # edge to be counted in the last bin, and not as an outlier. @@ -206,7 +240,7 @@ decimal = int(-log10(dedges[i].min())) +6 # Find which points are on the rightmost edge. on_edge = where(around(sample[:,i], decimal) == around(edges[i][-1], decimal))[0] - # Shift these points one bin to the left. + # Shift these points one bin to the left. Ncount[i][on_edge] -= 1 # Flattened histogram matrix (1D) @@ -238,7 +272,7 @@ # Remove outliers (indices 0 and -1 for each dimension). core = D*[slice(1,-1)] hist = hist[core] - + # Normalize if normed is True if normed: s = hist.sum() @@ -252,9 +286,7 @@ def average(a, axis=None, weights=None, returned=False): - """average(a, axis=None weights=None, returned=False) - - Average the array over the given axis. If the axis is None, + """Average the array over the given axis. If the axis is None, average over all dimensions of the array. Equivalent to a.mean(axis) and to @@ -386,39 +418,37 @@ return y def select(condlist, choicelist, default=0): - """ Return an array composed of different elements of choicelist + """Return an array composed of different elements in choicelist, depending on the list of conditions. 
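For histogramdd() as documented above, a small sketch along the lines of the docstring's own example (the data is random, so only the shape is checked); the second call exercises the new finite-width handling, which pads smin/smax by 0.5 when all samples along a dimension coincide:

    >>> from numpy import random, zeros, histogramdd
    >>> x = random.randn(100, 3)
    >>> hist3d, edges = histogramdd(x, bins=(5, 6, 7))
    >>> hist3d.shape                      # one axis per input dimension
    (5, 6, 7)
    >>> h, e = histogramdd(zeros((10, 2), int), bins=2)
    >>> # identical samples: e[0] becomes array([-0.5, 0., 0.5]) instead of a
    >>> # zero-width bin, matching the new check_identical_samples test below.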
- condlist is a list of condition arrays containing ones or zeros + :Parameters: + condlist : list of N boolean arrays of length M + The conditions C_0 through C_(N-1) which determine + from which vector the output elements are taken. + choicelist : list of N arrays of length M + Th vectors V_0 through V_(N-1), from which the output + elements are chosen. - choicelist is a list of choice arrays (of the "same" size as the - arrays in condlist). The result array has the "same" size as the - arrays in choicelist. If condlist is [c0, ..., cN-1] then choicelist - must be of length N. The elements of the choicelist can then be - represented as [v0, ..., vN-1]. The default choice if none of the - conditions are met is given as the default argument. + :Returns: + output : 1-dimensional array of length M + The output at position m is the m-th element of the first + vector V_n for which C_n[m] is non-zero. Note that the + output depends on the order of conditions, since the + first satisfied condition is used. - The conditions are tested in order and the first one statisfied is - used to select the choice. In other words, the elements of the - output array are found from the following tree (notice the order of - the conditions matters): + Equivalent to: - if c0: v0 - elif c1: v1 - elif c2: v2 - ... - elif cN-1: vN-1 - else: default + output = [] + for m in range(M): + output += [V[m] for V,C in zip(values,cond) if C[m]] + or [default] - Note that one of the condition arrays must be large enough to handle - the largest array in the choice list. - """ n = len(condlist) n2 = len(choicelist) if n2 != n: raise ValueError, "list of cases must be same length as list of conditions" - choicelist.insert(0, default) + choicelist = [default] + choicelist S = 0 pfac = 1 for k in range(1, n+1): @@ -730,7 +760,7 @@ def nansum(a, axis=None): """Sum the array over the given axis, treating NaNs as 0. """ - y = array(a) + y = array(a,subok=True) if not issubclass(y.dtype.type, _nx.integer): y[isnan(a)] = 0 return y.sum(axis) @@ -738,7 +768,7 @@ def nanmin(a, axis=None): """Find the minimium over the given axis, ignoring NaNs. """ - y = array(a) + y = array(a,subok=True) if not issubclass(y.dtype.type, _nx.integer): y[isnan(a)] = _nx.inf return y.min(axis) @@ -746,7 +776,7 @@ def nanargmin(a, axis=None): """Find the indices of the minimium over the given axis ignoring NaNs. """ - y = array(a) + y = array(a, subok=True) if not issubclass(y.dtype.type, _nx.integer): y[isnan(a)] = _nx.inf return y.argmin(axis) @@ -754,7 +784,7 @@ def nanmax(a, axis=None): """Find the maximum over the given axis ignoring NaNs. """ - y = array(a) + y = array(a, subok=True) if not issubclass(y.dtype.type, _nx.integer): y[isnan(a)] = -_nx.inf return y.max(axis) @@ -762,7 +792,7 @@ def nanargmax(a, axis=None): """Find the maximum over the given axis ignoring NaNs. """ - y = array(a) + y = array(a,subok=True) if not issubclass(y.dtype.type, _nx.integer): y[isnan(a)] = -_nx.inf return y.argmax(axis) Modified: branches/multicore/numpy/lib/getlimits.py =================================================================== --- branches/multicore/numpy/lib/getlimits.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/lib/getlimits.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,14 +1,14 @@ """ Machine limits for Float32 and Float64 and (long double) if available... 
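A hedged example of the select() semantics spelled out above, reusing the data of the new test_select case further down; the expected values are given in a comment because only the first satisfied condition is used:

    >>> from numpy import array, select
    >>> conditions = [array([0, 0, 0]), array([0, 1, 0]), array([0, 0, 1])]
    >>> choices = [array([1, 2, 3]), array([4, 5, 6]), array([7, 8, 9])]
    >>> select(conditions, choices, default=15)
    >>> # -> [15, 5, 9]: no condition holds at index 0 (so the default is used),
    >>> # the second condition selects choices[1][1], the third choices[2][2].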
""" -__all__ = ['finfo'] +__all__ = ['finfo','iinfo'] from machar import MachAr import numpy.core.numeric as numeric import numpy.core.numerictypes as ntypes from numpy.core.numeric import array +import numpy as N - def _frz(a): """fix rank-0 --> rank-1""" if a.ndim == 0: a.shape = (1,) @@ -21,7 +21,16 @@ } class finfo(object): + """Machine limits for floating point types. + :Parameters: + dtype : floating point type or instance + + :SeeAlso: + - numpy.lib.machar.MachAr + + """ + _finfo_cache = {} def __new__(cls, dtype): @@ -106,6 +115,54 @@ --------------------------------------------------------------------- ''' % self.__dict__ + +class iinfo: + """Limits for integer types. + + :Parameters: + type : integer type or instance + + """ + + _min_vals = {} + _max_vals = {} + + def __init__(self, type): + self.dtype = N.dtype(type) + self.kind = self.dtype.kind + self.bits = self.dtype.itemsize * 8 + self.key = "%s%d" % (self.kind, self.bits) + if not self.kind in 'iu': + raise ValueError("Invalid integer data type.") + + def min(self): + """Minimum value of given dtype.""" + if self.kind == 'u': + return 0 + else: + try: + val = iinfo._min_vals[self.key] + except KeyError: + val = int(-(1L << (self.bits-1))) + iinfo._min_vals[self.key] = val + return val + + min = property(min) + + def max(self): + """Maximum value of given dtype.""" + try: + val = iinfo._max_vals[self.key] + except KeyError: + if self.kind == 'u': + val = int((1L << self.bits) - 1) + else: + val = int((1L << (self.bits-1)) - 1) + iinfo._max_vals[self.key] = val + return val + + max = property(max) + if __name__ == '__main__': f = finfo(ntypes.single) print 'single epsilon:',f.eps Modified: branches/multicore/numpy/lib/shape_base.py =================================================================== --- branches/multicore/numpy/lib/shape_base.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/lib/shape_base.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -184,7 +184,7 @@ """ Stack arrays in sequence vertically (row wise) Description: - Take a sequence of arrays and stack them veritcally + Take a sequence of arrays and stack them vertically to make a single array. All arrays in the sequence must have the same shape along all but the first axis. vstack will rebuild arrays divided by vsplit. Modified: branches/multicore/numpy/lib/tests/test_function_base.py =================================================================== --- branches/multicore/numpy/lib/tests/test_function_base.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/lib/tests/test_function_base.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -60,10 +60,30 @@ def check_weighted(self): y1 = array([[1,2,3], [4,5,6]]) - actual = average(y1,weights=[1,2],axis=0) + actual = average(y1,weights=[1,2],axis=0) desired = array([3.,4.,5.]) assert_array_equal(actual, desired) +class test_select(NumpyTestCase): + def _select(self,cond,values,default=0): + output = [] + for m in range(len(cond)): + output += [V[m] for V,C in zip(values,cond) if C[m]] or [default] + return output + + def check_basic(self): + choices = [array([1,2,3]), + array([4,5,6]), + array([7,8,9])] + conditions = [array([0,0,0]), + array([0,1,0]), + array([0,0,1])] + assert_array_equal(select(conditions,choices,default=15), + self._select(conditions,choices,default=15)) + + assert_equal(len(choices),3) + assert_equal(len(conditions),3) + class test_logspace(NumpyTestCase): def check_basic(self): y = logspace(0,6) @@ -394,12 +414,12 @@ Z[range(5), range(5), range(5)] = 1. 
H,edges = histogramdd([arange(5), arange(5), arange(5)], 5) assert_array_equal(H, Z) - + def check_shape(self): x = rand(100,3) hist3d, edges = histogramdd(x, bins = (5, 7, 6)) assert_array_equal(hist3d.shape, (5,7,6)) - + def check_weights(self): v = rand(100,2) hist, edges = histogramdd(v) @@ -410,8 +430,12 @@ assert_array_equal(w_hist, n_hist) w_hist, edges = histogramdd(v, weights=ones(100, int)*2) assert_array_equal(w_hist, 2*hist) - + def check_identical_samples(self): + x = zeros((10,2),int) + hist, edges = histogramdd(x, bins=2) + assert_array_equal(edges[0],array([-0.5, 0. , 0.5])) + class test_unique(NumpyTestCase): def check_simple(self): x = array([4,3,2,1,1,2,3,4, 0]) @@ -427,4 +451,4 @@ assert_array_equal(res[i],desired[i]) if __name__ == "__main__": - NumpyTest('numpy.lib.function_base').run() + NumpyTest().run() Modified: branches/multicore/numpy/lib/tests/test_getlimits.py =================================================================== --- branches/multicore/numpy/lib/tests/test_getlimits.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/lib/tests/test_getlimits.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -4,8 +4,9 @@ from numpy.testing import * set_package_path() import numpy.lib;reload(numpy.lib) -from numpy.lib.getlimits import finfo +from numpy.lib.getlimits import finfo, iinfo from numpy import single,double,longdouble +import numpy as N restore_path() ################################################## @@ -34,5 +35,21 @@ ftype2 = finfo(longdouble) assert_equal(id(ftype),id(ftype2)) +class test_iinfo(NumpyTestCase): + def check_basic(self): + dts = zip(['i1', 'i2', 'i4', 'i8', + 'u1', 'u2', 'u4', 'u8'], + [N.int8, N.int16, N.int32, N.int64, + N.uint8, N.uint16, N.uint32, N.uint64]) + for dt1, dt2 in dts: + assert_equal(iinfo(dt1).min, iinfo(dt2).min) + assert_equal(iinfo(dt1).max, iinfo(dt2).max) + self.assertRaises(ValueError, iinfo, 'f4') + + def check_unsigned_max(self): + types = N.sctypes['uint'] + for T in types: + assert_equal(iinfo(T).max, T(-1)) + if __name__ == "__main__": NumpyTest().run() Modified: branches/multicore/numpy/lib/ufunclike.py =================================================================== --- branches/multicore/numpy/lib/ufunclike.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/lib/ufunclike.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -29,6 +29,7 @@ If y is an array, the result replaces the contents of y. """ if y is None: + x = asarray(x) y = empty(x.shape, dtype=nx.bool_) umath.logical_and(isinf(x), ~signbit(x), y) return y @@ -39,6 +40,7 @@ If y is an array, the result replaces the contents of y. """ if y is None: + x = asarray(x) y = empty(x.shape, dtype=nx.bool_) umath.logical_and(isinf(x), signbit(x), y) return y Modified: branches/multicore/numpy/lib/utils.py =================================================================== --- branches/multicore/numpy/lib/utils.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/lib/utils.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -163,7 +163,7 @@ def who(vardict=None): - """Print the scipy arrays in the given dictionary (or globals() if None). + """Print the Numpy arrays in the given dictionary (or globals() if None). 
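The iinfo class added to getlimits.py above can be exercised as in the accompanying test_getlimits changes; a short sketch (the numbers follow from the bit widths):

    >>> from numpy.lib.getlimits import iinfo
    >>> iinfo('i1').min, iinfo('i1').max     # -> (-128, 127) for 8-bit signed
    >>> iinfo('u2').min, iinfo('u2').max     # -> (0, 65535) for 16-bit unsigned
    >>> iinfo('f4')                          # raises ValueError: only integer kinds 'i'/'u' are accepted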
""" if vardict is None: frame = sys._getframe().f_back Modified: branches/multicore/numpy/linalg/linalg.py =================================================================== --- branches/multicore/numpy/linalg/linalg.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/linalg/linalg.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -344,7 +344,42 @@ # Eigenvalues + + def eigvals(a): + """Compute the eigenvalues of the general 2-d array a. + + A simple interface to the LAPACK routines dgeev and zgeev that sets the + flags to return only the eigenvalues of general real and complex arrays + respectively. + + :Parameters: + + a : 2-d array + A complex or real 2-d array whose eigenvalues and eigenvectors + will be computed. + + :Returns: + + w : 1-d double or complex array + The eigenvalues. The eigenvalues are not necessarily ordered, nor + are they necessarily real for real matrices. + + :SeeAlso: + + - eig : eigenvalues and right eigenvectors of general arrays + - eigvalsh : eigenvalues of symmetric or Hemitiean arrays. + - eigh : eigenvalues and eigenvectors of symmetric/Hermitean arrays. + + :Notes: + ------- + + The number w is an eigenvalue of a if there exists a vector v + satisfying the equation dot(a,v) = w*v. Alternately, if w is a root of + the characteristic equation det(a - w[i]*I) = 0, where det is the + determinant and I is the identity matrix. + + """ _assertRank2(a) _assertSquareness(a) _assertFinite(a) @@ -389,6 +424,44 @@ def eigvalsh(a, UPLO='L'): + """Compute the eigenvalues of the symmetric or Hermitean 2-d array a. + + A simple interface to the LAPACK routines dsyevd and zheevd that sets the + flags to return only the eigenvalues of real symmetric and complex + Hermetian arrays respectively. + + :Parameters: + + a : 2-d array + A complex or real 2-d array whose eigenvalues and eigenvectors + will be computed. + + UPLO : string + Specifies whether the pertinent array date is taken from the upper + or lower triangular part of a. Possible values are 'L', and 'U' for + upper and lower respectively. Default is 'L'. + + :Returns: + + w : 1-d double array + The eigenvalues. The eigenvalues are not necessarily ordered. + + :SeeAlso: + + - eigh : eigenvalues and eigenvectors of symmetric/Hermitean arrays. + - eigvals : eigenvalues of general real or complex arrays. + - eig : eigenvalues and eigenvectors of general real or complex arrays. + + :Notes: + ------- + + The number w is an eigenvalue of a if there exists a vector v + satisfying the equation dot(a,v) = w*v. Alternately, if w is a root of + the characteristic equation det(a - w[i]*I) = 0, where det is the + determinant and I is the identity matrix. The eigenvalues of real + symmetric or complex Hermitean matrices are always real. + + """ _assertRank2(a) _assertSquareness(a) t, result_t = _commonType(a) @@ -432,13 +505,58 @@ a = _fastCT(a.astype(t)) return a, t, result_t + # Eigenvectors + def eig(a): - """eig(a) returns u,v where u is the eigenvalues and -v is a matrix of eigenvectors with vector v[:,i] corresponds to -eigenvalue u[i]. Satisfies the equation dot(a, v[:,i]) = u[i]*v[:,i] -""" + """Eigenvalues and right eigenvectors of a general matrix. + + A simple interface to the LAPACK routines dgeev and zgeev that compute the + eigenvalues and eigenvectors of general real and complex arrays + respectively. + + :Parameters: + + a : 2-d array + A complex or real 2-d array whose eigenvalues and eigenvectors + will be computed. + + :Returns: + + w : 1-d double or complex array + The eigenvalues. 
The eigenvalues are not necessarily ordered, nor + are they necessarily real for real matrices. + + v : 2-d double or complex double array. + The normalized eigenvector corresponding to the eigenvalue w[i] is + the column v[:,i]. + + :SeeAlso: + + - eigvalsh : eigenvalues of symmetric or Hemitiean arrays. + - eig : eigenvalues and right eigenvectors for non-symmetric arrays + - eigvals : eigenvalues of non-symmetric array. + + :Notes: + ------- + + The number w is an eigenvalue of a if there exists a vector v + satisfying the equation dot(a,v) = w*v. Alternately, if w is a root of + the characteristic equation det(a - w[i]*I) = 0, where det is the + determinant and I is the identity matrix. The arrays a, w, and v + satisfy the equation dot(a,v[i]) = w[i]*v[:,i]. + + The array v of eigenvectors may not be of maximum rank, that is, some + of the columns may be dependent, although roundoff error may obscure + that fact. If the eigenvalues are all different, then theoretically the + eigenvectors are independent. Likewise, the matrix of eigenvectors is + unitary if the matrix a is normal, i.e., if dot(a, a.H) = dot(a.H, a). + + The left and right eigenvectors are not necessarily the (Hemitian) + transposes of each other. + + """ a, wrap = _makearray(a) _assertRank2(a) _assertSquareness(a) @@ -492,8 +610,51 @@ vt = v.transpose().astype(result_t) return w.astype(result_t), wrap(vt) + def eigh(a, UPLO='L'): """Compute eigenvalues for a Hermitian-symmetric matrix. + + A simple interface to the LAPACK routines dsyevd and zheevd that compute + the eigenvalues and eigenvectors of real symmetric and complex Hermitian + arrays respectively. + + :Parameters: + + a : 2-d array + A complex Hermitian or symmetric real 2-d array whose eigenvalues + and eigenvectors will be computed. + + UPLO : string + Specifies whether the pertinent array date is taken from the upper + or lower triangular part of a. Possible values are 'L', and 'U'. + Default is 'L'. + + :Returns: + + w : 1-d double array + The eigenvalues. The eigenvalues are not necessarily ordered. + + v : 2-d double or complex double array, depending on input array type + The normalized eigenvector corresponding to the eigenvalue w[i] is + the column v[:,i]. + + :SeeAlso: + + - eigvalsh : eigenvalues of symmetric or Hemitiean arrays. + - eig : eigenvalues and right eigenvectors for non-symmetric arrays + - eigvals : eigenvalues of non-symmetric array. + + :Notes: + ------- + + The number w is an eigenvalue of a if there exists a vector v + satisfying the equation dot(a,v) = w*v. Alternately, if w is a root of + the characteristic equation det(a - w[i]*I) = 0, where det is the + determinant and I is the identity matrix. The eigenvalues of real + symmetric or complex Hermitean matrices are always real. The array v + of eigenvectors is unitary and a, w, and v satisfy the equation + dot(a,v[i]) = w[i]*v[:,i]. 
+ """ a, wrap = _makearray(a) _assertRank2(a) Modified: branches/multicore/numpy/numarray/_capi.c =================================================================== --- branches/multicore/numpy/numarray/_capi.c 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/numarray/_capi.c 2007-05-28 18:35:05 UTC (rev 3840) @@ -4,6 +4,13 @@ #include "numpy/libnumarray.h" #include +#if defined(__GLIBC__) || defined(__APPLE__) || defined(__MINGW32__) +#include +#elif defined(__CYGWIN__) +#include "numpy/fenv/fenv.h" +#include "numpy/fenv/fenv.c" +#endif + static PyObject *pCfuncClass; static PyTypeObject CfuncType; static PyObject *pHandleErrorFunc; @@ -225,11 +232,6 @@ /* Likewise for Integer overflows */ #if defined(__GLIBC__) || defined(__APPLE__) || defined(__CYGWIN__) || defined(__MINGW32__) -#if defined(__GLIBC__) || defined(__APPLE__) || defined(__MINGW32__) -#include -#elif defined(__CYGWIN__) -#include "numpy/fenv/fenv.c" -#endif static int int_overflow_error(Float64 value) { /* For x86_64 */ feraiseexcept(FE_OVERFLOW); return (int) value; @@ -2938,11 +2940,6 @@ } #elif defined(__GLIBC__) || defined(__APPLE__) || defined(__CYGWIN__) || defined(__MINGW32__) -#if defined(__GLIBC__) || defined(darwin) || defined(__MINGW32__) -#include -#elif defined(__CYGWIN__) -#include "numpy/fenv/fenv.h" -#endif static int NA_checkFPErrors(void) Modified: branches/multicore/numpy/numarray/setup.py =================================================================== --- branches/multicore/numpy/numarray/setup.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/numarray/setup.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -6,9 +6,8 @@ config.add_data_files('numpy/') - # Configure fftpack_lite config.add_extension('_capi', - sources=['_capi.c'] + sources=['_capi.c'], ) return config Modified: branches/multicore/numpy/random/setup.py =================================================================== --- branches/multicore/numpy/random/setup.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/random/setup.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -42,9 +42,8 @@ #ifdef _WIN32 return 0; #else -#error No _WIN32 + return 1; #endif - return -1; } """ Modified: branches/multicore/numpy/setup.py =================================================================== --- branches/multicore/numpy/setup.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/setup.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -19,11 +19,4 @@ return config if __name__ == '__main__': - # Remove current working directory from sys.path - # to avoid importing numpy.distutils as Python std. distutils: - import os, sys - for cwd in ['','.',os.getcwd()]: - while cwd in sys.path: sys.path.remove(cwd) - - from numpy.distutils.core import setup - setup(configuration=configuration) + print 'This is the wrong setup.py file to run' Modified: branches/multicore/numpy/testing/numpytest.py =================================================================== --- branches/multicore/numpy/testing/numpytest.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/testing/numpytest.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -195,7 +195,7 @@ class ScipyTestCase(NumpyTestCase): def __init__(self, package=None): warnings.warn("ScipyTestCase is now called NumpyTestCase; please update your code", - DeprecationWarning) + DeprecationWarning, stacklevel=2) NumpyTestCase.__init__(self, package) @@ -239,22 +239,22 @@ is package name or its module object. 
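Following the test-discovery conventions restated in the numpytest.py docstrings here, a typical hedged invocation (the package name is only an example, and .test() is assumed to accept its documented defaults):

    >>> from numpy.testing import NumpyTest
    >>> NumpyTest('numpy.lib').test()     # collects numpy/lib/tests/test_*.py
    >>> NumpyTest().run()                 # the form used at the bottom of the test files above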
- Package is supposed to contain a directory tests/ - with test_*.py files where * refers to the names of submodules. - See .rename() method to redefine name mapping between test_*.py files - and names of submodules. Pattern test_*.py can be overwritten by - redefining .get_testfile() method. + Package is supposed to contain a directory tests/ with test_*.py + files where * refers to the names of submodules. See .rename() + method to redefine name mapping between test_*.py files and names of + submodules. Pattern test_*.py can be overwritten by redefining + .get_testfile() method. - test_*.py files are supposed to define a classes, derived - from NumpyTestCase or unittest.TestCase, with methods having - names starting with test or bench or check. The names of TestCase - classes must have a prefix test. This can be overwritten by - redefining .check_testcase_name() method. + test_*.py files are supposed to define a classes, derived from + NumpyTestCase or unittest.TestCase, with methods having names + starting with test or bench or check. The names of TestCase classes + must have a prefix test. This can be overwritten by redefining + .check_testcase_name() method. And that is it! No need to implement test or test_suite functions in each .py file. - Also old styled test_suite(level=1) hooks are supported. + Old-style test_suite(level=1) hooks are also supported. """ _check_testcase_name = re.compile(r'test.*').match def check_testcase_name(self, name): @@ -293,9 +293,14 @@ self._rename_map = {} def rename(self, **kws): - """ Apply renaming submodule test file test_.py to test_.py. - Usage: self.rename(name='newname') before calling self.test() method. - If 'newname' is None, then no tests will be executed for a given module. + """Apply renaming submodule test file test_.py to + test_.py. + + Usage: self.rename(name='newname') before calling the + self.test() method. + + If 'newname' is None, then no tests will be executed for a given + module. """ for k,v in kws.items(): self._rename_map[k] = v @@ -533,12 +538,12 @@ True --- run all test files (like self.testall()) False (default) --- only run test files associated with a module - It is assumed (when all=False) that package tests suite follows the - following convention: for each package module, there exists file - /tests/test_.py that defines TestCase classes - (with names having prefix 'test_') with methods (with names having - prefixes 'check_' or 'bench_'); each of these methods are called when - running unit tests. + It is assumed (when all=False) that package tests suite follows + the following convention: for each package module, there exists + file /tests/test_.py that defines + TestCase classes (with names having prefix 'test_') with methods + (with names having prefixes 'check_' or 'bench_'); each of these + methods are called when running unit tests. """ if level is None: # Do nothing. return Modified: branches/multicore/numpy/version.py =================================================================== --- branches/multicore/numpy/version.py 2007-05-28 18:00:22 UTC (rev 3839) +++ branches/multicore/numpy/version.py 2007-05-28 18:35:05 UTC (rev 3840) @@ -1,4 +1,4 @@ -version='1.0.3' +version='1.0.4' release=False if not release: From numpy-svn at scipy.org Tue May 29 06:36:36 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 29 May 2007 05:36:36 -0500 (CDT) Subject: [Numpy-svn] r3841 - in trunk/numpy/distutils: . 
fcompiler Message-ID: <20070529103636.EA2AB39C102@new.scipy.org> Author: cookedm Date: 2007-05-29 05:36:27 -0500 (Tue, 29 May 2007) New Revision: 3841 Modified: trunk/numpy/distutils/environment.py trunk/numpy/distutils/fcompiler/__init__.py Log: Add a convert procedure to the flag-getting logic in fcompiler and environment. Otherwise, flags (for instance) from environment variables or setup.cfg are are strings, where lists are expected. Modified: trunk/numpy/distutils/environment.py =================================================================== --- trunk/numpy/distutils/environment.py 2007-05-28 18:35:05 UTC (rev 3840) +++ trunk/numpy/distutils/environment.py 2007-05-29 10:36:27 UTC (rev 3841) @@ -28,12 +28,14 @@ return var def _get_var(self, name, conf_desc): - hook, envvar, confvar = conf_desc + hook, envvar, confvar, convert = conf_desc var = self._hook_handler(name, hook) if envvar is not None: var = os.environ.get(envvar, var) if confvar is not None and self._conf: var = self._conf.get(confvar, (None, var))[1] + if convert is not None: + var = convert(var) return var def clone(self, hook_handler): Modified: trunk/numpy/distutils/fcompiler/__init__.py =================================================================== --- trunk/numpy/distutils/fcompiler/__init__.py 2007-05-28 18:35:05 UTC (rev 3840) +++ trunk/numpy/distutils/fcompiler/__init__.py 2007-05-29 10:36:27 UTC (rev 3841) @@ -24,9 +24,9 @@ from numpy.distutils.ccompiler import CCompiler, gen_lib_options from numpy.distutils import log -from numpy.distutils.misc_util import is_string, make_temp_file +from numpy.distutils.misc_util import is_string, is_sequence, make_temp_file from numpy.distutils.environment import EnvironmentConfig -from numpy.distutils.exec_command import find_executable +from numpy.distutils.exec_command import find_executable, splitcmdline from distutils.spawn import _nt_quote_args __metaclass__ = type @@ -34,6 +34,17 @@ class CompilerNotFound(Exception): pass +def flaglist(s): + if is_string(s): + return splitcmdline(s) + else: + return s + +def str2bool(s): + if is_string(s): + return not (s == '0' or s.lower() == 'False') + return bool(s) + class FCompiler(CCompiler): """Abstract base class to define the interface that must be implemented by real Fortran compiler classes. @@ -67,51 +78,53 @@ # These are the environment variables and distutils keys used. # Each configuration descripition is - # (, , ) + # (, , , ) # The hook names are handled by the self._environment_hook method. # - names starting with 'self.' call methods in this class # - names starting with 'exe.' return the key in the executables dict - # - names like'flags.YYY' return self.get_flag_YYY() + # - names like 'flags.YYY' return self.get_flag_YYY() + # convert is either None or a function to convert a string to the + # appropiate type used. 
distutils_vars = EnvironmentConfig( - noopt = (None, None, 'noopt'), - noarch = (None, None, 'noarch'), - debug = (None, None, 'debug'), - verbose = (None, None, 'verbose'), + noopt = (None, None, 'noopt', str2bool), + noarch = (None, None, 'noarch', str2bool), + debug = (None, None, 'debug', str2bool), + verbose = (None, None, 'verbose', str2bool), ) command_vars = EnvironmentConfig( distutils_section='config_fc', - compiler_f77 = ('exe.compiler_f77', 'F77', 'f77exec'), - compiler_f90 = ('exe.compiler_f90', 'F90', 'f90exec'), - compiler_fix = ('exe.compiler_fix', 'F90', 'f90exec'), - version_cmd = ('self.get_version_cmd', None, None), - linker_so = ('self.get_linker_so', 'LDSHARED', 'ldshared'), - linker_exe = ('self.get_linker_exe', 'LD', 'ld'), - archiver = (None, 'AR', 'ar'), - ranlib = (None, 'RANLIB', 'ranlib'), + compiler_f77 = ('exe.compiler_f77', 'F77', 'f77exec', None), + compiler_f90 = ('exe.compiler_f90', 'F90', 'f90exec', None), + compiler_fix = ('exe.compiler_fix', 'F90', 'f90exec', None), + version_cmd = ('self.get_version_cmd', None, None, None), + linker_so = ('self.get_linker_so', 'LDSHARED', 'ldshared', None), + linker_exe = ('self.get_linker_exe', 'LD', 'ld', None), + archiver = (None, 'AR', 'ar', None), + ranlib = (None, 'RANLIB', 'ranlib', None), ) flag_vars = EnvironmentConfig( distutils_section='config_fc', - version = ('flags.version', None, None), - f77 = ('flags.f77', 'F77FLAGS', 'f77flags'), - f90 = ('flags.f90', 'F90FLAGS', 'f90flags'), - free = ('flags.free', 'FREEFLAGS', 'freeflags'), - fix = ('flags.fix', None, None), - opt = ('flags.opt', 'FOPT', 'opt'), - opt_f77 = ('flags.opt_f77', None, None), - opt_f90 = ('flags.opt_f90', None, None), - arch = ('flags.arch', 'FARCH', 'arch'), - arch_f77 = ('flags.arch_f77', None, None), - arch_f90 = ('flags.arch_f90', None, None), - debug = ('flags.debug', 'FDEBUG', None, None), - debug_f77 = ('flags.debug_f77', None, None), - debug_f90 = ('flags.debug_f90', None, None), - flags = ('self.get_flags', 'FFLAGS', 'fflags'), - linker_so = ('flags.linker_so', 'LDFLAGS', 'ldflags'), - linker_exe = ('flags.linker_exe', 'LDFLAGS', 'ldflags'), - ar = ('flags.ar', 'ARFLAGS', 'arflags'), + version = ('flags.version', None, None, None), + f77 = ('flags.f77', 'F77FLAGS', 'f77flags', flaglist), + f90 = ('flags.f90', 'F90FLAGS', 'f90flags', flaglist), + free = ('flags.free', 'FREEFLAGS', 'freeflags', flaglist), + fix = ('flags.fix', None, None, flaglist), + opt = ('flags.opt', 'FOPT', 'opt', flaglist), + opt_f77 = ('flags.opt_f77', None, None, flaglist), + opt_f90 = ('flags.opt_f90', None, None, flaglist), + arch = ('flags.arch', 'FARCH', 'arch', flaglist), + arch_f77 = ('flags.arch_f77', None, None, flaglist), + arch_f90 = ('flags.arch_f90', None, None, flaglist), + debug = ('flags.debug', 'FDEBUG', None, None, flaglist), + debug_f77 = ('flags.debug_f77', None, None, flaglist), + debug_f90 = ('flags.debug_f90', None, None, flaglist), + flags = ('self.get_flags', 'FFLAGS', 'fflags', flaglist), + linker_so = ('flags.linker_so', 'LDFLAGS', 'ldflags', flaglist), + linker_exe = ('flags.linker_exe', 'LDFLAGS', 'ldflags', flaglist), + ar = ('flags.ar', 'ARFLAGS', 'arflags', flaglist), ) language_map = {'.f':'f77', From numpy-svn at scipy.org Tue May 29 16:43:42 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 29 May 2007 15:43:42 -0500 (CDT) Subject: [Numpy-svn] r3842 - trunk/numpy/core/include/numpy Message-ID: <20070529204342.2258939C013@new.scipy.org> Author: oliphant Date: 2007-05-29 15:43:37 -0500 (Tue, 29 May 2007) 
New Revision: 3842 Modified: trunk/numpy/core/include/numpy/ndarrayobject.h Log: Make changeset 3830 in the right place. Modified: trunk/numpy/core/include/numpy/ndarrayobject.h =================================================================== --- trunk/numpy/core/include/numpy/ndarrayobject.h 2007-05-29 10:36:27 UTC (rev 3841) +++ trunk/numpy/core/include/numpy/ndarrayobject.h 2007-05-29 20:43:37 UTC (rev 3842) @@ -1923,23 +1923,23 @@ inline the constants inside a for loop making it a moot point */ -#define PyArray_GETPTR1(obj, i) (void *)(PyArray_BYTES(obj) + \ - (i)*PyArray_STRIDES(obj)[0]) +#define PyArray_GETPTR1(obj, i) ((void *)(PyArray_BYTES(obj) + \ + (i)*PyArray_STRIDES(obj)[0])) -#define PyArray_GETPTR2(obj, i, j) (void *)(PyArray_BYTES(obj) + \ +#define PyArray_GETPTR2(obj, i, j) ((void *)(PyArray_BYTES(obj) + \ (i)*PyArray_STRIDES(obj)[0] + \ - (j)*PyArray_STRIDES(obj)[1]) + (j)*PyArray_STRIDES(obj)[1])) -#define PyArray_GETPTR3(obj, i, j, k) (void *)(PyArray_BYTES(obj) + \ +#define PyArray_GETPTR3(obj, i, j, k) ((void *)(PyArray_BYTES(obj) + \ (i)*PyArray_STRIDES(obj)[0] + \ (j)*PyArray_STRIDES(obj)[1] + \ - (k)*PyArray_STRIDES(obj)[2]) \ + (k)*PyArray_STRIDES(obj)[2])) -#define PyArray_GETPTR4(obj, i, j, k, l) (void *)(PyArray_BYTES(obj) + \ +#define PyArray_GETPTR4(obj, i, j, k, l) ((void *)(PyArray_BYTES(obj) + \ (i)*PyArray_STRIDES(obj)[0] + \ (j)*PyArray_STRIDES(obj)[1] + \ (k)*PyArray_STRIDES(obj)[2] + \ - (l)*PyArray_STRIDES(obj)[3]) + (l)*PyArray_STRIDES(obj)[3])) #define PyArray_XDECREF_ERR(obj) \ if (obj && (PyArray_FLAGS(obj) & NPY_UPDATEIFCOPY)) { \ From numpy-svn at scipy.org Wed May 30 20:05:17 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 30 May 2007 19:05:17 -0500 (CDT) Subject: [Numpy-svn] r3843 - trunk/numpy/distutils/fcompiler Message-ID: <20070531000517.8EA0939C019@new.scipy.org> Author: cookedm Date: 2007-05-30 19:05:15 -0500 (Wed, 30 May 2007) New Revision: 3843 Modified: trunk/numpy/distutils/fcompiler/__init__.py Log: new_fcompiler returns None when it can't match the platform. Hopefully clears up Windows compiling problem. Modified: trunk/numpy/distutils/fcompiler/__init__.py =================================================================== --- trunk/numpy/distutils/fcompiler/__init__.py 2007-05-29 20:43:37 UTC (rev 3842) +++ trunk/numpy/distutils/fcompiler/__init__.py 2007-05-31 00:05:15 UTC (rev 3843) @@ -785,7 +785,8 @@ msg = msg + " with '%s' compiler." 
% compiler msg = msg + " Supported compilers are: %s)" \ % (','.join(fcompiler_class.keys())) - raise DistutilsPlatformError, msg + log.warn(msg) + return None compiler = klass(verbose=verbose, dry_run=dry_run, force=force) return compiler From numpy-svn at scipy.org Wed May 30 20:51:51 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 30 May 2007 19:51:51 -0500 (CDT) Subject: [Numpy-svn] r3844 - trunk/numpy/distutils/command Message-ID: <20070531005151.398AA39C019@new.scipy.org> Author: cookedm Date: 2007-05-30 19:51:48 -0500 (Wed, 30 May 2007) New Revision: 3844 Modified: trunk/numpy/distutils/command/build_clib.py trunk/numpy/distutils/command/build_ext.py trunk/numpy/distutils/command/config.py Log: do an appropiate behaviour in the distutils commands when new_fcompiler returns None Modified: trunk/numpy/distutils/command/build_clib.py =================================================================== --- trunk/numpy/distutils/command/build_clib.py 2007-05-31 00:05:15 UTC (rev 3843) +++ trunk/numpy/distutils/command/build_clib.py 2007-05-31 00:51:48 UTC (rev 3844) @@ -78,14 +78,15 @@ dry_run=self.dry_run, force=self.force, requiref90='f90' in languages) - self.fcompiler.customize(self.distribution) + if self.compiler is not None: + self.fcompiler.customize(self.distribution) - libraries = self.libraries - self.libraries = None - self.fcompiler.customize_cmd(self) - self.libraries = libraries + libraries = self.libraries + self.libraries = None + self.fcompiler.customize_cmd(self) + self.libraries = libraries - self.fcompiler.show_customization() + self.fcompiler.show_customization() self.build_libraries(self.libraries) @@ -143,10 +144,11 @@ dry_run=self.dry_run, force=self.force, requiref90=requiref90) - dist = self.distribution - base_config_fc = dist.get_option_dict('config_fc').copy() - base_config_fc.update(config_fc) - fcompiler.customize(base_config_fc) + if fcompiler is not None: + dist = self.distribution + base_config_fc = dist.get_option_dict('config_fc').copy() + base_config_fc.update(config_fc) + fcompiler.customize(base_config_fc) # check availability of Fortran compilers if (f_sources or fmodule_sources) and fcompiler is None: Modified: trunk/numpy/distutils/command/build_ext.py =================================================================== --- trunk/numpy/distutils/command/build_ext.py 2007-05-31 00:05:15 UTC (rev 3843) +++ trunk/numpy/distutils/command/build_ext.py 2007-05-31 00:51:48 UTC (rev 3844) @@ -175,7 +175,7 @@ force=self.force, requiref90=False) fcompiler = self._f77_compiler - if fcompiler.get_version(): + if fcompiler and fcompiler.get_version(): fcompiler.customize(self.distribution) fcompiler.customize_cmd(self) fcompiler.show_customization() @@ -194,7 +194,7 @@ force=self.force, requiref90=True) fcompiler = self._f90_compiler - if fcompiler.get_version(): + if fcompiler and fcompiler.get_version(): fcompiler.customize(self.distribution) fcompiler.customize_cmd(self) fcompiler.show_customization() Modified: trunk/numpy/distutils/command/config.py =================================================================== --- trunk/numpy/distutils/command/config.py 2007-05-31 00:05:15 UTC (rev 3843) +++ trunk/numpy/distutils/command/config.py 2007-05-31 00:51:48 UTC (rev 3844) @@ -28,9 +28,10 @@ if not isinstance(self.fcompiler, FCompiler): self.fcompiler = new_fcompiler(compiler=self.fcompiler, dry_run=self.dry_run, force=1) - self.fcompiler.customize(self.distribution) - self.fcompiler.customize_cmd(self) - 
self.fcompiler.show_customization() + if self.fcompiler is not None: + self.fcompiler.customize(self.distribution) + self.fcompiler.customize_cmd(self) + self.fcompiler.show_customization() def _wrap_method(self,mth,lang,args): from distutils.ccompiler import CompileError From numpy-svn at scipy.org Thu May 31 00:02:06 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 30 May 2007 23:02:06 -0500 (CDT) Subject: [Numpy-svn] r3845 - in tags/1.0.3: . numpy/distutils Message-ID: <20070531040206.8822539C030@new.scipy.org> Author: oliphant Date: 2007-05-30 23:01:55 -0500 (Wed, 30 May 2007) New Revision: 3845 Modified: tags/1.0.3/numpy/distutils/misc_util.py tags/1.0.3/setup.py Log: Fix-up problem with data-files in top-level package directory. Modified: tags/1.0.3/numpy/distutils/misc_util.py =================================================================== --- tags/1.0.3/numpy/distutils/misc_util.py 2007-05-31 00:51:48 UTC (rev 3844) +++ tags/1.0.3/numpy/distutils/misc_util.py 2007-05-31 04:01:55 UTC (rev 3845) @@ -1009,7 +1009,7 @@ """ include_non_existing = kws.get('include_non_existing',True) return gpaths(paths, - local_path = self.local_path, + local_path = self.path_in_package, include_non_existing=include_non_existing) def _fix_paths_dict(self,kw): Modified: tags/1.0.3/setup.py =================================================================== --- tags/1.0.3/setup.py 2007-05-31 00:51:48 UTC (rev 3844) +++ tags/1.0.3/setup.py 2007-05-31 04:01:55 UTC (rev 3845) @@ -6,7 +6,8 @@ records without sacrificing too much speed for small multi-dimensional arrays. NumPy is built on the Numeric code base and adds features introduced by numarray as well as an extended C-API and the ability to -create arrays of arbitrary type. +create arrays of arbitrary type which makes NumPy suitable for +interfacing with general purpose data-base applications. There are also basic facilities for discrete fourier transform, basic linear algebra and random number generation. @@ -47,9 +48,10 @@ config.add_subpackage('numpy') - config.add_data_files(('numpy',['*.txt','COMPATIBILITY', - 'scipy_compatibility'])) - + config.add_data_files(('numpy','*.txt')) + config.add_data_files(('.','COMPATIBILITY'),('.','scipy_compatibility'), + ('.','site.cfg.example')) + config.get_version('numpy/version.py') # sets config.version return config From numpy-svn at scipy.org Thu May 31 00:57:06 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 30 May 2007 23:57:06 -0500 (CDT) Subject: [Numpy-svn] r3846 - in trunk: . numpy/distutils Message-ID: <20070531045706.9D5E139C019@new.scipy.org> Author: oliphant Date: 2007-05-30 23:57:01 -0500 (Wed, 30 May 2007) New Revision: 3846 Modified: trunk/numpy/distutils/misc_util.py trunk/setup.py Log: Fix some problems with data-files not being added in top-level and extra version information added to the name of development distributions. 
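For readers unfamiliar with the numpy.distutils API touched below: add_data_files accepts (target-directory, file-pattern) tuples, where the target is resolved relative to the installed distribution. A minimal, self-contained sketch of that tuple form follows; it is illustrative only (a hypothetical setup.py, not part of this changeset), though the calls shown mirror the ones in the diff.

    # sketch of a top-level setup.py using the (target, pattern) tuple form
    from numpy.distutils.misc_util import Configuration

    def configuration(parent_package='', top_path=None):
        config = Configuration(None, parent_package, top_path)
        # ('numpy', '*.txt') installs the matched text files into the
        # numpy/ package directory; ('.', 'COMPATIBILITY') keeps that
        # file at the top level of the distribution.
        config.add_data_files(('numpy', '*.txt'), ('.', 'COMPATIBILITY'))
        config.get_version('numpy/version.py')  # sets config.version
        return config

    if __name__ == '__main__':
        from numpy.distutils.core import setup
        setup(configuration=configuration)
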
Modified: trunk/numpy/distutils/misc_util.py =================================================================== --- trunk/numpy/distutils/misc_util.py 2007-05-31 04:01:55 UTC (rev 3845) +++ trunk/numpy/distutils/misc_util.py 2007-05-31 04:57:01 UTC (rev 3846) @@ -1041,7 +1041,7 @@ """ include_non_existing = kws.get('include_non_existing',True) return gpaths(paths, - local_path = self.local_path, + local_path = self.path_in_package, include_non_existing=include_non_existing) def _fix_paths_dict(self,kw): Modified: trunk/setup.py =================================================================== --- trunk/setup.py 2007-05-31 04:01:55 UTC (rev 3845) +++ trunk/setup.py 2007-05-31 04:57:01 UTC (rev 3846) @@ -6,7 +6,8 @@ records without sacrificing too much speed for small multi-dimensional arrays. NumPy is built on the Numeric code base and adds features introduced by numarray as well as an extended C-API and the ability to -create arrays of arbitrary type. +create arrays of arbitrary type which also makes NumPy suitable for +interfacing with general-purpose data-base applications. There are also basic facilities for discrete fourier transform, basic linear algebra and random number generation. @@ -47,11 +48,12 @@ config.add_subpackage('numpy') - config.add_data_files(('numpy',['*.txt','COMPATIBILITY', - 'scipy_compatibility'])) + config.add_data_files(('numpy','*.txt'), ('.','COMPATIBILITY'), + ('.','scipy_compatibility'), + ('.','site.cfg.example')) config.get_version('numpy/version.py') # sets config.version - + return config def setup_package(): @@ -64,10 +66,8 @@ sys.path.insert(0,local_path) try: - from numpy.version import version setup( name = 'numpy', - version = version, # will be overwritten by configuration version maintainer = "NumPy Developers", maintainer_email = "numpy-discussion at lists.sourceforge.net", description = DOCLINES[0], From numpy-svn at scipy.org Thu May 31 03:37:16 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 31 May 2007 02:37:16 -0500 (CDT) Subject: [Numpy-svn] r3847 - trunk/numpy/distutils Message-ID: <20070531073716.6569639C019@new.scipy.org> Author: pearu Date: 2007-05-31 02:37:04 -0500 (Thu, 31 May 2007) New Revision: 3847 Modified: trunk/numpy/distutils/misc_util.py Log: Undo change in 3845. Modified: trunk/numpy/distutils/misc_util.py =================================================================== --- trunk/numpy/distutils/misc_util.py 2007-05-31 04:57:01 UTC (rev 3846) +++ trunk/numpy/distutils/misc_util.py 2007-05-31 07:37:04 UTC (rev 3847) @@ -1041,7 +1041,7 @@ """ include_non_existing = kws.get('include_non_existing',True) return gpaths(paths, - local_path = self.path_in_package, + local_path = self.local_path, include_non_existing=include_non_existing) def _fix_paths_dict(self,kw): From numpy-svn at scipy.org Thu May 31 04:16:29 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 31 May 2007 03:16:29 -0500 (CDT) Subject: [Numpy-svn] r3848 - trunk/numpy/distutils Message-ID: <20070531081629.C801039C098@new.scipy.org> Author: pearu Date: 2007-05-31 03:16:15 -0500 (Thu, 31 May 2007) New Revision: 3848 Modified: trunk/numpy/distutils/misc_util.py Log: Resolved issues in changeset 3846. Modified: trunk/numpy/distutils/misc_util.py =================================================================== --- trunk/numpy/distutils/misc_util.py 2007-05-31 07:37:04 UTC (rev 3847) +++ trunk/numpy/distutils/misc_util.py 2007-05-31 08:16:15 UTC (rev 3848) @@ -588,11 +588,12 @@ # defines a configuration() function. 
if top_path is None: top_path = self.local_path + self.local_path = '' if package_path is None: package_path = self.local_path elif os.path.isdir(njoin(self.local_path,package_path)): package_path = njoin(self.local_path,package_path) - if not os.path.isdir(package_path): + if not os.path.isdir(package_path or '.'): raise ValueError("%r is not a directory" % (package_path,)) self.top_path = top_path self.package_path = package_path From numpy-svn at scipy.org Thu May 31 04:45:41 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 31 May 2007 03:45:41 -0500 (CDT) Subject: [Numpy-svn] r3849 - in trunk/numpy/distutils: . command Message-ID: <20070531084541.2541239C098@new.scipy.org> Author: pearu Date: 2007-05-31 03:45:30 -0500 (Thu, 31 May 2007) New Revision: 3849 Modified: trunk/numpy/distutils/ccompiler.py trunk/numpy/distutils/command/build_ext.py trunk/numpy/distutils/command/config.py Log: Fix issues with undetected Fortran compilers. Modified: trunk/numpy/distutils/ccompiler.py =================================================================== --- trunk/numpy/distutils/ccompiler.py 2007-05-31 08:16:15 UTC (rev 3848) +++ trunk/numpy/distutils/ccompiler.py 2007-05-31 08:45:30 UTC (rev 3849) @@ -259,7 +259,7 @@ version_cmd = self.version_cmd except AttributeError: return None - if not version_cmd or not version_cmd[0]: + if not version_cmd or not version_cmd[0] or None in version_cmd: return None cmd = ' '.join(version_cmd) try: Modified: trunk/numpy/distutils/command/build_ext.py =================================================================== --- trunk/numpy/distutils/command/build_ext.py 2007-05-31 08:16:15 UTC (rev 3848) +++ trunk/numpy/distutils/command/build_ext.py 2007-05-31 08:45:30 UTC (rev 3849) @@ -169,38 +169,44 @@ # Initialize Fortran 77 compiler: if need_f77_compiler: + ctype = self.fcompiler self._f77_compiler = new_fcompiler(compiler=self.fcompiler, verbose=self.verbose, dry_run=self.dry_run, force=self.force, requiref90=False) fcompiler = self._f77_compiler + if fcompiler: + ctype = fcompiler.compiler_type if fcompiler and fcompiler.get_version(): fcompiler.customize(self.distribution) fcompiler.customize_cmd(self) fcompiler.show_customization() else: self.warn('f77_compiler=%s is not available.' % - (fcompiler.compiler_type)) + (ctype)) self._f77_compiler = None else: self._f77_compiler = None # Initialize Fortran 90 compiler: if need_f90_compiler: + ctype = self.fcompiler self._f90_compiler = new_fcompiler(compiler=self.fcompiler, verbose=self.verbose, dry_run=self.dry_run, force=self.force, requiref90=True) fcompiler = self._f90_compiler + if fcompiler: + ctype = fcompiler.compiler_type if fcompiler and fcompiler.get_version(): fcompiler.customize(self.distribution) fcompiler.customize_cmd(self) fcompiler.show_customization() else: self.warn('f90_compiler=%s is not available.' 
% - (fcompiler.compiler_type)) + (ctype)) self._f90_compiler = None else: self._f90_compiler = None Modified: trunk/numpy/distutils/command/config.py =================================================================== --- trunk/numpy/distutils/command/config.py 2007-05-31 08:16:15 UTC (rev 3848) +++ trunk/numpy/distutils/command/config.py 2007-05-31 08:45:30 UTC (rev 3849) @@ -28,7 +28,7 @@ if not isinstance(self.fcompiler, FCompiler): self.fcompiler = new_fcompiler(compiler=self.fcompiler, dry_run=self.dry_run, force=1) - if self.fcompiler is not None: + if self.fcompiler is not None and self.fcompiler.get_version(): self.fcompiler.customize(self.distribution) self.fcompiler.customize_cmd(self) self.fcompiler.show_customization() From numpy-svn at scipy.org Thu May 31 13:12:18 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 31 May 2007 12:12:18 -0500 (CDT) Subject: [Numpy-svn] r3850 - tags/1.0.3/numpy/linalg Message-ID: <20070531171218.AC0F339C116@new.scipy.org> Author: oliphant Date: 2007-05-31 12:12:15 -0500 (Thu, 31 May 2007) New Revision: 3850 Modified: tags/1.0.3/numpy/linalg/lapack_litemodule.c Log: Fix 64-bit zgeqrf Modified: tags/1.0.3/numpy/linalg/lapack_litemodule.c =================================================================== --- tags/1.0.3/numpy/linalg/lapack_litemodule.c 2007-05-31 08:45:30 UTC (rev 3849) +++ tags/1.0.3/numpy/linalg/lapack_litemodule.c 2007-05-31 17:12:15 UTC (rev 3850) @@ -755,7 +755,7 @@ int lda; int info; - TRY(PyArg_ParseTuple(args,"llOlOOll",&m,&n,&a,&lda,&tau,&work,&lwork,&info)); + TRY(PyArg_ParseTuple(args,"iiOiOOii",&m,&n,&a,&lda,&tau,&work,&lwork,&info)); /* check objects and convert to right storage order */ TRY(check_object(a,PyArray_CDOUBLE,"a","PyArray_CDOUBLE","zgeqrf")); From numpy-svn at scipy.org Thu May 31 13:26:34 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 31 May 2007 12:26:34 -0500 (CDT) Subject: [Numpy-svn] r3851 - trunk/numpy/linalg Message-ID: <20070531172634.B57AA39C116@new.scipy.org> Author: oliphant Date: 2007-05-31 12:26:31 -0500 (Thu, 31 May 2007) New Revision: 3851 Modified: trunk/numpy/linalg/lapack_litemodule.c Log: Fix 64-bit zgeqrf on trunk. 
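Background on the format-code change below: the wrapper's integer arguments are declared as C int (see the lda and info declarations in the hunk), but the old PyArg_ParseTuple format "llOlOOll" told Python to store C long values. On LP64 platforms a long is 8 bytes while an int is 4, so the parse wrote past each variable and zgeqrf could receive corrupted dimensions; "i" matches the declared types. A quick way to exercise the corrected path from Python is sketched here, assuming complex QR routes through this zgeqrf wrapper (an illustrative check, not a test from the changeset):

    import numpy as np

    # Complex input makes numpy.linalg.qr call the LAPACK zgeqrf wrapper.
    a = np.array([[1.0 + 2.0j, 3.0], [4.0, 5.0 - 1.0j]])
    q, r = np.linalg.qr(a)

    # With the corrected "i" format codes on a 64-bit build, Q*R
    # reconstructs the original matrix.
    assert np.allclose(np.dot(q, r), a)
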
Modified: trunk/numpy/linalg/lapack_litemodule.c =================================================================== --- trunk/numpy/linalg/lapack_litemodule.c 2007-05-31 17:12:15 UTC (rev 3850) +++ trunk/numpy/linalg/lapack_litemodule.c 2007-05-31 17:26:31 UTC (rev 3851) @@ -755,7 +755,7 @@ int lda; int info; - TRY(PyArg_ParseTuple(args,"llOlOOll",&m,&n,&a,&lda,&tau,&work,&lwork,&info)); + TRY(PyArg_ParseTuple(args,"iiOiOOii",&m,&n,&a,&lda,&tau,&work,&lwork,&info)); /* check objects and convert to right storage order */ TRY(check_object(a,PyArray_CDOUBLE,"a","PyArray_CDOUBLE","zgeqrf")); From numpy-svn at scipy.org Thu May 31 18:11:26 2007 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 31 May 2007 17:11:26 -0500 (CDT) Subject: [Numpy-svn] r3852 - trunk/numpy/lib Message-ID: <20070531221126.DB6B439C116@new.scipy.org> Author: cookedm Date: 2007-05-31 17:11:24 -0500 (Thu, 31 May 2007) New Revision: 3852 Modified: trunk/numpy/lib/polynomial.py Log: Add __iter__ method to poly1d so that list() on a poly1d doesn't go into an infinite loop Modified: trunk/numpy/lib/polynomial.py =================================================================== --- trunk/numpy/lib/polynomial.py 2007-05-31 17:26:31 UTC (rev 3851) +++ trunk/numpy/lib/polynomial.py 2007-05-31 22:11:24 UTC (rev 3852) @@ -641,6 +641,9 @@ self.__dict__['coeffs'][ind] = val return + def __iter__(self): + return iter(self.coeffs) + def integ(self, m=1, k=0): """Return the mth analytical integral of this polynomial. See the documentation for polyint.
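A note on why the r3852 change matters: when a class defines no __iter__, list() and other consumers fall back to the legacy __getitem__ iteration protocol, which only stops once __getitem__ raises IndexError. poly1d.__getitem__ returns 0 for out-of-range powers instead of raising, so list(p) never terminated. The pattern, reduced to a toy class (not the real poly1d implementation; the class name is made up for illustration):

    class ToyPoly(object):
        """Coefficient container mimicking poly1d's indexing behaviour."""

        def __init__(self, coeffs):
            self.coeffs = list(coeffs)   # highest power first, as in poly1d

        def __getitem__(self, val):
            # poly1d-style: coefficients outside the stored range read as 0,
            # so the legacy iteration protocol never sees an IndexError.
            if 0 <= val < len(self.coeffs):
                return self.coeffs[len(self.coeffs) - 1 - val]
            return 0

        def __iter__(self):
            # The r3852 fix: iterate over the coefficient array directly,
            # giving list(p) a well-defined end.
            return iter(self.coeffs)

    p = ToyPoly([1, 2, 3])
    print list(p)   # [1, 2, 3]; without __iter__ this call would never return

Note that the added __iter__ walks self.coeffs in storage order (highest power first), following the underlying array rather than the power-based indexing used by __getitem__.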