From numpy-svn at scipy.org Tue Jun 3 04:50:32 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 3 Jun 2008 03:50:32 -0500 (CDT) Subject: [Numpy-svn] r5247 - trunk/numpy/doc Message-ID: <20080603085032.31B2739C9E9@scipy.org> Author: stefan Date: 2008-06-03 03:50:08 -0500 (Tue, 03 Jun 2008) New Revision: 5247 Modified: trunk/numpy/doc/HOWTO_BUILD_DOCS.txt trunk/numpy/doc/HOWTO_DOCUMENT.txt trunk/numpy/doc/example.py Log: Update documentation standard. Modified: trunk/numpy/doc/HOWTO_BUILD_DOCS.txt =================================================================== --- trunk/numpy/doc/HOWTO_BUILD_DOCS.txt 2008-06-01 00:53:50 UTC (rev 5246) +++ trunk/numpy/doc/HOWTO_BUILD_DOCS.txt 2008-06-03 08:50:08 UTC (rev 5247) @@ -2,6 +2,12 @@ Building the NumPy API and reference docs ========================================= +Using Sphinx_ +------------- +`Download `_ +the builder. Follow the instructions in ``README.txt``. + + Using Epydoc_ ------------- @@ -58,3 +64,8 @@ The output is placed in ``./html``, and may be viewed by loading the ``index.html`` file into your browser. + + + +.. _epydoc: http://epydoc.sourceforge.net/ +.. _sphinx: http://sphinx.pocoo.org Modified: trunk/numpy/doc/HOWTO_DOCUMENT.txt =================================================================== --- trunk/numpy/doc/HOWTO_DOCUMENT.txt 2008-06-01 00:53:50 UTC (rev 5246) +++ trunk/numpy/doc/HOWTO_DOCUMENT.txt 2008-06-03 08:50:08 UTC (rev 5247) @@ -25,15 +25,24 @@ * `pyflakes` easy_install pyflakes * `pep8.py `_ -For documentation purposes, use unabbreviated module names. If you -prefer the use of abbreviated module names in code (*not* the -docstrings), we suggest the import conventions used by NumPy itself:: +The following import conventions are used throughout the NumPy source +and documentation:: import numpy as np import scipy as sp import matplotlib as mpl import matplotlib.pyplot as plt +It is not necessary to do ``import numpy as np`` at the beginning of +an example. However, some sub-modules, such as ``fft``, are not +imported by default, and you have to include them explicitly:: + + import numpy.fft + +after which you may use it:: + + np.fft.fft2(...) + Docstring Standard ------------------ A documentation string (docstring) is a string that describes a module, @@ -65,8 +74,8 @@ A guiding principle is that human readers of the text are given precedence over contorting docstrings so our tools produce nice output. Rather than sacrificing the readability of the docstrings, we -have chosen to write pre-processors to assist tools like epydoc_ or -sphinx_ in their task. +have written pre-processors to assist tools like epydoc_ and sphinx_ in +their task. Status ------ @@ -177,11 +186,31 @@ See Also -------- - numpy.average : Weighted average - - Preferably, use the full namespace prefixes. For targets in the same - module as the documented object, the prefix can be omitted. + average : Weighted average + When referring to functions in the same sub-module, no prefix is + needed, and the tree is searched upwards for a match. + + Prefix functions from other sub-modules appropriately. E.g., + whilst documenting the ``random`` module, refer to a function in + ``fft`` by + + :: + + fft.fft2 : 2-D fast discrete Fourier transform + + When referring to an entirely different module:: + + scipy.random.norm : Random variates, PDFs, etc. + + Functions may be listed without descriptions:: + + See Also + -------- + func_a : Function a with its description. + func_b, func_c_, func_d + func_e + 8. 
**Notes** An optional section that provides additional information about the @@ -206,7 +235,7 @@ :: - The value of :math:`omega` is larger than 5. + The value of :math:`\omega` is larger than 5. Note that LaTeX is not particularly easy to read, so use equations sparingly. @@ -329,8 +358,9 @@ `An example `_ of the format shown here is available. Refer to `How to Build API/Reference -Documentation `_ on how to use epydoc_ or sphinx_ to -construct a manual and web page. +Documentation +`_ +on how to use epydoc_ or sphinx_ to construct a manual and web page. This document itself was written in ReStructuredText, and may be converted to HTML using:: Modified: trunk/numpy/doc/example.py =================================================================== --- trunk/numpy/doc/example.py 2008-06-01 00:53:50 UTC (rev 5246) +++ trunk/numpy/doc/example.py 2008-06-03 08:50:08 UTC (rev 5247) @@ -80,7 +80,9 @@ See Also -------- otherfunc : relationship (optional) - newfunc : relationship (optional) + newfunc : Relationship (optional), which could be fairly long, in which + case the line wraps here. + thirdfunc, fourthfunc, fifthfunc Notes ----- From numpy-svn at scipy.org Tue Jun 3 17:23:21 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 3 Jun 2008 16:23:21 -0500 (CDT) Subject: [Numpy-svn] r5248 - in trunk/numpy/ma: . tests Message-ID: <20080603212321.AC31539C053@scipy.org> Author: pierregm Date: 2008-06-03 16:23:15 -0500 (Tue, 03 Jun 2008) New Revision: 5248 Modified: trunk/numpy/ma/core.py trunk/numpy/ma/mrecords.py trunk/numpy/ma/tests/test_core.py trunk/numpy/ma/testutils.py Log: core: * use the "import numpy as np" convention * use np.function instead of (from)numeric.function * CHANGE : when using named fields, the fill_value is now a void-ndarray (and no longer a tuple) * _check_fill_value now checks that an existing fill_value is compatible with a new dtype (bug #806) * fix_invalid now accepts the mask keyword * MaskedArray.__new__ doesn't run _check_fill_value when the fill_value is None * add the astype method, to support the conversion of fill_value when needed. 
* arange/empty/empty_like/ones/zeros are now available through _convert2ma test_core: * modified test_filled_value to reflect that fill_value is a void-ndrecord when using named fields * added test_check_fill_value/test_check_fill_value_with_records testutils: * use the "import numpy as np" convention * assert_equal_records now uses getitem instead of getattr * assert_array_compare now calls numpy.testing.utils.assert_array_compare on filled data * the assert_xxx functions now accept the verbose keyword mrecords: * MaskedRecords inherit get_fill_value and set_fill_value from MaskedArray * In filled, force the filling value to be a void-ndarray Modified: trunk/numpy/ma/core.py =================================================================== --- trunk/numpy/ma/core.py 2008-06-03 08:50:08 UTC (rev 5247) +++ trunk/numpy/ma/core.py 2008-06-03 21:23:15 UTC (rev 5248) @@ -61,24 +61,23 @@ import cPickle import operator -import numpy -from numpy.core import bool_, complex_, float_, int_, object_, str_ +import numpy as np +from numpy import ndarray, dtype, typecodes, amax, amin, iscomplexobj,\ + bool_, complex_, float_, int_, object_, str_ +from numpy import array as narray + import numpy.core.umath as umath -import numpy.core.fromnumeric as fromnumeric -import numpy.core.numeric as numeric import numpy.core.numerictypes as ntypes -from numpy import bool_, dtype, typecodes, amax, amin, ndarray, iscomplexobj from numpy import expand_dims as n_expand_dims -from numpy import array as narray import warnings -MaskType = bool_ +MaskType = np.bool_ nomask = MaskType(0) divide_tolerance = 1.e-35 -numpy.seterr(all='ignore') +np.seterr(all='ignore') def doc_note(note): return "\nNotes\n-----\n%s" % note @@ -90,7 +89,7 @@ "Class for MA related errors." def __init__ (self, args=None): "Creates an exception." - Exception.__init__(self,args) + Exception.__init__(self, args) self.args = args def __str__(self): "Calculates the string representation." @@ -111,12 +110,12 @@ 'V' : '???', } max_filler = ntypes._minvals -max_filler.update([(k,-numpy.inf) for k in [numpy.float32, numpy.float64]]) +max_filler.update([(k, -np.inf) for k in [np.float32, np.float64]]) min_filler = ntypes._maxvals -min_filler.update([(k,numpy.inf) for k in [numpy.float32, numpy.float64]]) +min_filler.update([(k, +np.inf) for k in [np.float32, np.float64]]) if 'float128' in ntypes.typeDict: - max_filler.update([(numpy.float128,-numpy.inf)]) - min_filler.update([(numpy.float128, numpy.inf)]) + max_filler.update([(np.float128, -np.inf)]) + min_filler.update([(np.float128, +np.inf)]) def default_fill_value(obj): """Calculate the default fill value for the argument object. @@ -124,7 +123,7 @@ """ if hasattr(obj,'dtype'): defval = default_filler[obj.dtype.kind] - elif isinstance(obj, numeric.dtype): + elif isinstance(obj, np.dtype): defval = default_filler[obj.kind] elif isinstance(obj, float): defval = default_filler['f'] @@ -155,7 +154,7 @@ return min_filler[ntypes.typeDict['int_']] elif isinstance(obj, long): return min_filler[ntypes.typeDict['uint']] - elif isinstance(obj, numeric.dtype): + elif isinstance(obj, np.dtype): return min_filler[obj] else: raise TypeError, 'Unsuitable type for calculating minimum.' @@ -177,28 +176,43 @@ return max_filler[ntypes.typeDict['int_']] elif isinstance(obj, long): return max_filler[ntypes.typeDict['uint']] - elif isinstance(obj, numeric.dtype): + elif isinstance(obj, np.dtype): return max_filler[obj] else: raise TypeError, 'Unsuitable type for calculating minimum.' 
-def _check_fill_value(fill_value, dtype): - descr = numpy.dtype(dtype).descr +def _check_fill_value(fill_value, ndtype): + ndtype = np.dtype(ndtype) + nbfields = len(ndtype) if fill_value is None: - if len(descr) > 1: - fill_value = [default_fill_value(numeric.dtype(d[1])) - for d in descr] + if nbfields >= 1: + fill_value = np.array(tuple([default_fill_value(np.dtype(d)) + for (_, d) in ndtype.descr]), + dtype=ndtype) else: - fill_value = default_fill_value(dtype) + fill_value = default_fill_value(ndtype) + elif nbfields >= 1: + if isinstance(fill_value, ndarray): + try: + fill_value = np.array(fill_value, copy=False, dtype=ndtype) + except ValueError: + err_msg = "Unable to transform %s to dtype %s" + raise ValueError(err_msg % (fill_value,ndtype)) + else: + fval = np.resize(fill_value, nbfields) + fill_value = tuple([np.asarray(f).astype(d).item() + for (f, (_, d)) in zip(fval, ndtype.descr)]) + fill_value = np.array(fill_value, copy=False, dtype=ndtype) else: - fill_value = narray(fill_value).tolist() - fval = numpy.resize(fill_value, len(descr)) - if len(descr) > 1: - fill_value = [numpy.asarray(f).astype(d[1]).item() - for (f,d) in zip(fval, descr)] + if isinstance(fill_value, basestring) and (ndtype.char not in 'SV'): + fill_value = default_fill_value(ndtype) else: - fill_value = narray(fval, copy=False, dtype=dtype).item() + # In case we want to convert 1e+20 to int... + try: + fill_value = np.array(fill_value, copy=False, dtype=ndtype).item() + except OverflowError: + fill_value = default_fill_value(ndtype) return fill_value @@ -302,13 +316,13 @@ return a subclass of ndarray if approriate (True). """ - data = getattr(a, '_data', numpy.array(a, subok=subok)) + data = getattr(a, '_data', np.array(a, subok=subok)) if not subok: return data.view(ndarray) return data getdata = get_data -def fix_invalid(a, copy=True, fill_value=None): +def fix_invalid(a, mask=nomask, copy=True, fill_value=None): """Return (a copy of) a where invalid data (nan/inf) are masked and replaced by fill_value. @@ -329,9 +343,9 @@ b : MaskedArray """ - a = masked_array(a, copy=copy, subok=True) + a = masked_array(a, copy=copy, mask=mask, subok=True) #invalid = (numpy.isnan(a._data) | numpy.isinf(a._data)) - invalid = numpy.logical_not(numpy.isfinite(a._data)) + invalid = np.logical_not(np.isfinite(a._data)) if not invalid.any(): return a a._mask |= invalid @@ -439,16 +453,16 @@ "Execute the call behavior." # m = getmask(a) - d1 = get_data(a) + d1 = getdata(a) # if self.domain is not None: - dm = narray(self.domain(d1), copy=False) - m = numpy.logical_or(m, dm) + dm = np.array(self.domain(d1), copy=False) + m = np.logical_or(m, dm) # The following two lines control the domain filling methods. d1 = d1.copy() # We could use smart indexing : d1[dm] = self.fill ... - # ... but numpy.putmask looks more efficient, despite the copy. - numpy.putmask(d1, dm, self.fill) + # ... but np.putmask looks more efficient, despite the copy. + np.putmask(d1, dm, self.fill) # Take care of the masked singletong first ... if not m.ndim and m: return masked @@ -500,14 +514,14 @@ "Execute the call behavior." 
m = mask_or(getmask(a), getmask(b)) (d1, d2) = (get_data(a), get_data(b)) - result = self.f(d1, d2, *args, **kwargs).view(get_masked_subclass(a,b)) + result = self.f(d1, d2, *args, **kwargs).view(get_masked_subclass(a, b)) if result.size > 1: if m is not nomask: result._mask = make_mask_none(result.shape) result._mask.flat = m - if isinstance(a,MaskedArray): + if isinstance(a, MaskedArray): result._update_from(a) - if isinstance(b,MaskedArray): + if isinstance(b, MaskedArray): result._update_from(b) elif m: return masked @@ -554,7 +568,7 @@ m = umath.logical_or.outer(ma, mb) if (not m.ndim) and m: return masked - rcls = get_masked_subclass(a,b) + rcls = get_masked_subclass(a, b) # We could fill the arguments first, butis it useful ? # d = self.f.outer(filled(a, self.fillx), filled(b, self.filly)).view(rcls) d = self.f.outer(getdata(a), getdata(b)).view(rcls) @@ -614,16 +628,16 @@ if t.any(None): mb = mask_or(mb, t) # The following line controls the domain filling - d2 = numpy.where(t,self.filly,d2) + d2 = np.where(t,self.filly,d2) m = mask_or(ma, mb) if (not m.ndim) and m: return masked - result = self.f(d1, d2).view(get_masked_subclass(a,b)) + result = self.f(d1, d2).view(get_masked_subclass(a, b)) if result.ndim > 0: result._mask = m - if isinstance(a,MaskedArray): + if isinstance(a, MaskedArray): result._update_from(a) - if isinstance(b,MaskedArray): + if isinstance(b, MaskedArray): result._update_from(b) return result @@ -647,7 +661,7 @@ negative = _MaskedUnaryOperation(umath.negative) floor = _MaskedUnaryOperation(umath.floor) ceil = _MaskedUnaryOperation(umath.ceil) -around = _MaskedUnaryOperation(fromnumeric.round_) +around = _MaskedUnaryOperation(np.round_) logical_not = _MaskedUnaryOperation(umath.logical_not) # Domained unary ufuncs ....................................................... sqrt = _MaskedUnaryOperation(umath.sqrt, 0.0, @@ -723,7 +737,7 @@ """ m = getmask(a) if m is nomask: - m = make_mask_none(fromnumeric.shape(a)) + m = make_mask_none(np.shape(a)) return m def is_mask(m): @@ -776,16 +790,18 @@ else: return result -def make_mask_none(s): +def make_mask_none(newshape): """Return a mask of shape s, filled with False. Parameters ---------- - s : tuple + news : tuple A tuple indicating the shape of the final mask. + fieldnames: {None, string sequence}, optional + A list of field names, if needed. """ - result = numeric.zeros(s, dtype=MaskType) + result = np.zeros(newshape, dtype=MaskType) return result def mask_or (m1, m2, copy=False, shrink=True): @@ -834,7 +850,7 @@ """ cond = make_mask(condition) - a = narray(a, copy=copy, subok=True) + a = np.array(a, copy=copy, subok=True) if hasattr(a, '_mask'): cond = mask_or(cond, a._mask) cls = type(a) @@ -925,7 +941,7 @@ condition = umath.equal(x._data, value) mask = x._mask else: - condition = umath.equal(fromnumeric.asarray(x), value) + condition = umath.equal(np.asarray(x), value) mask = nomask mask = mask_or(mask, make_mask(condition, shrink=True)) return masked_array(x, mask=mask, copy=copy, fill_value=value) @@ -955,7 +971,7 @@ """ abs = umath.absolute xnew = filled(x, value) - if issubclass(xnew.dtype.type, numeric.floating): + if issubclass(xnew.dtype.type, np.floating): condition = umath.less_equal(abs(xnew-value), atol+rtol*abs(value)) mask = getattr(x, '_mask', nomask) else: @@ -969,8 +985,8 @@ preexisting mask is conserved. 
""" - a = narray(a, copy=copy, subok=True) - condition = ~(numpy.isfinite(a)) + a = np.array(a, copy=copy, subok=True) + condition = ~(np.isfinite(a)) if hasattr(a, '_mask'): condition = mask_or(condition, a._mask) cls = type(a) @@ -1054,7 +1070,7 @@ def getdoc(self): "Return the doc of the function (from the doc of the method)." methdoc = getattr(ndarray, self._name, None) - methdoc = getattr(numpy, self._name, methdoc) + methdoc = getattr(np, self._name, methdoc) if methdoc is not None: return methdoc.__doc__ # @@ -1084,7 +1100,7 @@ "Define an interator." def __init__(self, ma): self.ma = ma - self.ma_iter = numpy.asarray(ma).flat + self.ma_iter = np.asarray(ma).flat if ma._mask is nomask: self.maskiter = None @@ -1106,7 +1122,7 @@ return d -class MaskedArray(numeric.ndarray): +class MaskedArray(ndarray): """Arrays with possibly masked values. Masked values of True exclude the corresponding element from any computation. @@ -1151,11 +1167,11 @@ __array_priority__ = 15 _defaultmask = nomask _defaulthardmask = False - _baseclass = numeric.ndarray + _baseclass = ndarray def __new__(cls, data=None, mask=nomask, dtype=None, copy=False, subok=True, ndmin=0, fill_value=None, - keep_mask=True, hard_mask=False, flag=None,shrink=True, + keep_mask=True, hard_mask=False, flag=None, shrink=True, **options): """Create a new masked array from scratch. @@ -1181,29 +1197,34 @@ _sharedmask = True # Process mask ........... if mask is nomask: + # Erase the current mask ? if not keep_mask: + # With a reduced version if shrink: _data._mask = nomask + # With full version else: - _data._mask = make_mask_none(_data) + _data._mask = np.zeros(_data.shape, dtype=MaskType) if copy: _data._mask = _data._mask.copy() _data._sharedmask = False else: _data._sharedmask = True + # Case 2. : With a mask in input ........ else: - mask = narray(mask, dtype=MaskType, copy=copy) + mask = np.array(mask, dtype=MaskType, copy=copy) if mask.shape != _data.shape: (nd, nm) = (_data.size, mask.size) if nm == 1: - mask = numeric.resize(mask, _data.shape) + mask = np.resize(mask, _data.shape) elif nm == nd: - mask = fromnumeric.reshape(mask, _data.shape) + mask = np.reshape(mask, _data.shape) else: msg = "Mask and data not compatible: data size is %i, "+\ "mask size is %i." raise MAError, msg % (nd, nm) copy = True + # Set the mask to the new value if _data._mask is nomask: _data._mask = mask _data._sharedmask = not copy @@ -1216,8 +1237,10 @@ _data._sharedmask = False # Update fill_value....... if fill_value is None: - fill_value = getattr(data,'_fill_value', None) - _data._fill_value = _check_fill_value(fill_value, _data.dtype) + fill_value = getattr(data, '_fill_value', None) + # But don't run the check unless we have something to check.... + if fill_value is not None: + _data._fill_value = _check_fill_value(fill_value, _data.dtype) # Process extra options .. _data._hardmask = hard_mask _data._baseclass = _baseclass @@ -1227,15 +1250,15 @@ def _update_from(self, obj): """Copies some attributes of obj to self. 
""" - if obj is not None and isinstance(obj,ndarray): + if obj is not None and isinstance(obj, ndarray): _baseclass = type(obj) else: _baseclass = ndarray - _basedict = getattr(obj,'_basedict',getattr(obj,'__dict__',{})) + _basedict = getattr(obj, '_basedict', getattr(obj, '__dict__',{})) _dict = dict(_fill_value=getattr(obj, '_fill_value', None), _hardmask=getattr(obj, '_hardmask', False), _sharedmask=getattr(obj, '_sharedmask', False), - _baseclass=getattr(obj,'_baseclass',_baseclass), + _baseclass=getattr(obj,'_baseclass', _baseclass), _basedict=_basedict,) self.__dict__.update(_dict) self.__dict__.update(_basedict) @@ -1281,7 +1304,7 @@ # Domain not recognized, use fill_value instead fill_value = self.fill_value result = result.copy() - numpy.putmask(result, d, fill_value) + np.putmask(result, d, fill_value) # Update the mask if m is nomask: if d is not nomask: @@ -1297,6 +1320,24 @@ #.... return result #............................................. + def astype(self, newtype): + """Returns a copy of the array cast to newtype.""" + newtype = np.dtype(newtype) + output = self._data.astype(newtype).view(type(self)) + output._update_from(self) + names = output.dtype.names + if names is None: + output._mask = self._mask.astype(bool) + else: + if self._mask is nomask: + output._mask = nomask + else: + output._mask = self._mask.astype([(n,bool) for n in names]) + # Don't check _fill_value if it's None, that'll speed things up + if self._fill_value is not None: + output._fill_value = _check_fill_value(self._fill_value, newtype) + return output + #............................................. def __getitem__(self, indx): """x.__getitem__(y) <==> x[y] @@ -1312,10 +1353,10 @@ # But then we would have to modify __array_finalize__ to prevent the # mask of being reshaped if it hasn't been set up properly yet... # So it's easier to stick to the current version - m = self._mask + _mask = self._mask if not getattr(dout,'ndim', False): # Just a scalar............ - if m is not nomask and m[indx]: + if _mask is not nomask and _mask[indx]: return masked else: # Force dout to MA ........ @@ -1324,20 +1365,15 @@ dout._update_from(self) # Check the fill_value .... if isinstance(indx, basestring): - fvindx = list(self.dtype.names).index(indx) - dout._fill_value = self.fill_value[fvindx] + if self._fill_value is not None: + dout._fill_value = self._fill_value[indx] # Update the mask if needed - if m is not nomask: + if _mask is not nomask: if isinstance(indx, basestring): - dout._mask = m.reshape(dout.shape) + dout._mask = _mask.reshape(dout.shape) else: - dout._mask = ndarray.__getitem__(m, indx).reshape(dout.shape) + dout._mask = ndarray.__getitem__(_mask, indx).reshape(dout.shape) # Note: Don't try to check for m.any(), that'll take too long... -# mask = ndarray.__getitem__(m, indx).reshape(dout.shape) -# if self._shrinkmask and not m.any(): -# dout._mask = nomask -# else: -# dout._mask = mask return dout #........................ def __setitem__(self, indx, value): @@ -1354,7 +1390,7 @@ # msg = "Masked arrays must be filled before they can be used as indices!" 
# raise IndexError, msg if isinstance(indx, basestring): - ndarray.__setitem__(self._data,indx, getdata(value)) + ndarray.__setitem__(self._data, indx, getdata(value)) warnings.warn("MaskedArray.__setitem__ on fields: "\ "The mask is NOT affected!") return @@ -1362,19 +1398,19 @@ if value is masked: m = self._mask if m is nomask: - m = numpy.zeros(self.shape, dtype=MaskType) + m = np.zeros(self.shape, dtype=MaskType) m[indx] = True self._mask = m self._sharedmask = False return #.... - dval = narray(value, copy=False, dtype=self.dtype) + dval = np.array(value, copy=False, dtype=self.dtype) valmask = getmask(value) if self._mask is nomask: # Set the data, then the mask ndarray.__setitem__(self._data,indx,dval) if valmask is not nomask: - self._mask = numpy.zeros(self.shape, dtype=MaskType) + self._mask = np.zeros(self.shape, dtype=MaskType) self._mask[indx] = valmask elif not self._hardmask: # Unshare the mask if necessary to avoid propagation @@ -1392,7 +1428,7 @@ dindx[~mindx] = dval elif mindx is nomask: dindx = dval - ndarray.__setitem__(self._data,indx,dindx) + ndarray.__setitem__(self._data, indx, dindx) self._mask[indx] = mindx #............................................ def __getslice__(self, i, j): @@ -1438,7 +1474,7 @@ self.unshare_mask() self._mask.flat = mask if self._mask.shape: - self._mask = numeric.reshape(self._mask, self.shape) + self._mask = np.reshape(self._mask, self.shape) _set_mask = __setmask__ #.... def _get_mask(self): @@ -1527,7 +1563,7 @@ If value is None, use a default based on the data type. """ - self._fill_value = _check_fill_value(value,self.dtype) + self._fill_value = _check_fill_value(value, self.dtype) fill_value = property(fget=get_fill_value, fset=set_fill_value, doc="Filling value.") @@ -1560,21 +1596,21 @@ fill_value = self.fill_value # if self is masked_singleton: - result = numeric.asanyarray(fill_value) + result = np.asanyarray(fill_value) else: result = self._data.copy() try: - numpy.putmask(result, m, fill_value) + np.putmask(result, m, fill_value) except (TypeError, AttributeError): fill_value = narray(fill_value, dtype=object) d = result.astype(object) - result = fromnumeric.choose(m, (d, fill_value)) + result = np.choose(m, (d, fill_value)) except IndexError: #ok, if scalar if self._data.shape: raise elif m: - result = narray(fill_value, dtype=self.dtype) + result = np.array(fill_value, dtype=self.dtype) else: result = self._data return result @@ -1585,7 +1621,7 @@ """ data = ndarray.ravel(self._data) if self._mask is not nomask: - data = data.compress(numpy.logical_not(ndarray.ravel(self._mask))) + data = data.compress(np.logical_not(ndarray.ravel(self._mask))) return data @@ -1606,7 +1642,7 @@ # Get the basic components (_data, _mask) = (self._data, self._mask) # Force the condition to a regular ndarray (forget the missing values...) 
- condition = narray(condition, copy=False, subok=False) + condition = np.array(condition, copy=False, subok=False) # _new = _data.compress(condition, axis=axis, out=out).view(type(self)) _new._update_from(self) @@ -1741,7 +1777,7 @@ # The following 3 lines control the domain filling if dom_mask.any(): other_data = other_data.copy() - numpy.putmask(other_data, dom_mask, 1) + np.putmask(other_data, dom_mask, 1) ndarray.__idiv__(self._data, other_data) self._mask = mask_or(self._mask, new_mask) return self @@ -1752,28 +1788,28 @@ other_data = getdata(other) other_mask = getmask(other) ndarray.__ipow__(_data, other_data) - invalid = numpy.logical_not(numpy.isfinite(_data)) - new_mask = mask_or(other_mask,invalid) + invalid = np.logical_not(np.isfinite(_data)) + new_mask = mask_or(other_mask, invalid) self._mask = mask_or(self._mask, new_mask) # The following line is potentially problematic, as we change _data... - numpy.putmask(self._data,invalid,self.fill_value) + np.putmask(self._data,invalid,self.fill_value) return self #............................................ def __float__(self): "Convert to float." if self.size > 1: - raise TypeError,\ - "Only length-1 arrays can be converted to Python scalars" + raise TypeError("Only length-1 arrays can be converted "\ + "to Python scalars") elif self._mask: warnings.warn("Warning: converting a masked element to nan.") - return numpy.nan + return np.nan return float(self.item()) def __int__(self): "Convert to int." if self.size > 1: - raise TypeError,\ - "Only length-1 arrays can be converted to Python scalars" + raise TypeError("Only length-1 arrays can be converted "\ + "to Python scalars") elif self._mask: raise MAError, 'Cannot convert masked element to a Python int.' return int(self.item()) @@ -1823,9 +1859,9 @@ n = s[axis] t = list(s) del t[axis] - return numeric.ones(t) * n - n1 = numpy.size(m, axis) - n2 = m.astype(int_).sum(axis) + return np.ones(t) * n + n1 = np.size(m, axis) + n2 = m.astype(int).sum(axis) if axis is None: return (n1-n2) else: @@ -2419,7 +2455,7 @@ filler = maximum_fill_value(self) else: filler = fill_value - idx = numpy.indices(self.shape) + idx = np.indices(self.shape) idx[axis] = self.filled(filler).argsort(axis=axis,kind=kind,order=order) idx_l = idx.tolist() tmp_mask = self._mask[idx_l].flat @@ -2659,7 +2695,7 @@ isMA = isMaskedArray #backward compatibility # We define the masked singleton as a float for higher precedence... 
# Note that it can be tricky sometimes w/ type comparison -masked_singleton = MaskedArray(0, dtype=float_, mask=True) +masked_singleton = MaskedArray(0, dtype=np.float_, mask=True) masked = masked_singleton masked_array = MaskedArray @@ -2817,7 +2853,7 @@ try: return getattr(MaskedArray, self._methodname).__doc__ except: - return getattr(numpy, self._methodname).__doc__ + return getattr(np, self._methodname).__doc__ def __call__(self, a, *args, **params): if isinstance(a, MaskedArray): return getattr(a, self._methodname).__call__(*args, **params) @@ -2831,7 +2867,7 @@ try: return method(*args, **params) except SystemError: - return getattr(numpy,self._methodname).__call__(a, *args, **params) + return getattr(np,self._methodname).__call__(a, *args, **params) all = _frommethod('all') anomalies = anom = _frommethod('anom') @@ -2878,13 +2914,13 @@ # Get the result and view it as a (subclass of) MaskedArray result = umath.power(fa,fb).view(basetype) # Find where we're in trouble w/ NaNs and Infs - invalid = numpy.logical_not(numpy.isfinite(result.view(ndarray))) + invalid = np.logical_not(np.isfinite(result.view(ndarray))) # Retrieve some extra attributes if needed if isinstance(result,MaskedArray): result._update_from(a) # Add the initial mask if m is not nomask: - if numpy.isscalar(result): + if np.isscalar(result): return masked result._mask = m # Fix the invalid parts @@ -2905,7 +2941,7 @@ # if m.all(): # fa.flat = 1 # else: -# numpy.putmask(fa,m,1) +# np.putmask(fa,m,1) # return masked_array(umath.power(fa, fb), m) #.............................................................................. @@ -2953,7 +2989,7 @@ else: filler = fill_value # return - indx = numpy.indices(a.shape).tolist() + indx = np.indices(a.shape).tolist() indx[axis] = filled(a,filler).argsort(axis=axis,kind=kind,order=order) return a[indx] sort.__doc__ = MaskedArray.sort.__doc__ @@ -2962,13 +2998,13 @@ def compressed(x): """Return a 1-D array of all the non-masked data.""" if getmask(x) is nomask: - return numpy.asanyarray(x) + return np.asanyarray(x) else: return x.compressed() def concatenate(arrays, axis=0): "Concatenate the arrays along the given axis." - d = numpy.concatenate([getdata(a) for a in arrays], axis) + d = np.concatenate([getdata(a) for a in arrays], axis) rcls = get_masked_subclass(*arrays) data = d.view(rcls) # Check whether one of the arrays has a non-empty mask... @@ -2978,7 +3014,7 @@ else: return data # OK, so we have to concatenate the masks - dm = numpy.concatenate([getmaskarray(a) for a in arrays], axis) + dm = np.concatenate([getmaskarray(a) for a in arrays], axis) # If we decide to keep a '_shrinkmask' option, we want to check that ... # ... 
all of them are True, and then check for dm.any() # shrink = numpy.logical_or.reduce([getattr(a,'_shrinkmask',True) for a in arrays]) @@ -3060,21 +3096,21 @@ if getmask(a) is nomask: if valmask is not nomask: a._sharedmask = True - a.mask = numpy.zeros(a.shape, dtype=bool_) - numpy.putmask(a._mask, mask, valmask) + a.mask = np.zeros(a.shape, dtype=bool_) + np.putmask(a._mask, mask, valmask) elif a._hardmask: if valmask is not nomask: m = a._mask.copy() - numpy.putmask(m, mask, valmask) + np.putmask(m, mask, valmask) a.mask |= m else: if valmask is nomask: valmask = getmaskarray(values) - numpy.putmask(a._mask, mask, valmask) - numpy.putmask(a._data, mask, valdata) + np.putmask(a._mask, mask, valmask) + np.putmask(a._data, mask, valdata) return -def transpose(a,axes=None): +def transpose(a, axes=None): """Return a view of the array with dimensions permuted according to axes, as a masked array. @@ -3108,8 +3144,8 @@ # We can't use _frommethods here, as N.resize is notoriously whiny. m = getmask(x) if m is not nomask: - m = numpy.resize(m, new_shape) - result = numpy.resize(x, new_shape).view(get_masked_subclass(x)) + m = np.resize(m, new_shape) + result = np.resize(x, new_shape).view(get_masked_subclass(x)) if result.ndim: result._mask = m return result @@ -3118,18 +3154,18 @@ #................................................ def rank(obj): "maskedarray version of the numpy function." - return fromnumeric.rank(getdata(obj)) -rank.__doc__ = numpy.rank.__doc__ + return np.rank(getdata(obj)) +rank.__doc__ = np.rank.__doc__ # def shape(obj): "maskedarray version of the numpy function." - return fromnumeric.shape(getdata(obj)) -shape.__doc__ = numpy.shape.__doc__ + return np.shape(getdata(obj)) +shape.__doc__ = np.shape.__doc__ # def size(obj, axis=None): "maskedarray version of the numpy function." - return fromnumeric.size(getdata(obj), axis) -size.__doc__ = numpy.size.__doc__ + return np.size(getdata(obj), axis) +size.__doc__ = np.size.__doc__ #................................................ #####-------------------------------------------------------------------------- @@ -3159,26 +3195,26 @@ elif x is None or y is None: raise ValueError, "Either both or neither x and y should be given." # Get the condition ............... - fc = filled(condition, 0).astype(bool_) - notfc = numpy.logical_not(fc) + fc = filled(condition, 0).astype(MaskType) + notfc = np.logical_not(fc) # Get the data ...................................... 
xv = getdata(x) yv = getdata(y) if x is masked: ndtype = yv.dtype - xm = numpy.ones(fc.shape, dtype=MaskType) + xm = np.ones(fc.shape, dtype=MaskType) elif y is masked: ndtype = xv.dtype - ym = numpy.ones(fc.shape, dtype=MaskType) + ym = np.ones(fc.shape, dtype=MaskType) else: - ndtype = numpy.max([xv.dtype, yv.dtype]) + ndtype = np.max([xv.dtype, yv.dtype]) xm = getmask(x) - d = numpy.empty(fc.shape, dtype=ndtype).view(MaskedArray) - numpy.putmask(d._data, fc, xv.astype(ndtype)) - numpy.putmask(d._data, notfc, yv.astype(ndtype)) - d._mask = numpy.zeros(fc.shape, dtype=MaskType) - numpy.putmask(d._mask, fc, getmask(x)) - numpy.putmask(d._mask, notfc, getmask(y)) + d = np.empty(fc.shape, dtype=ndtype).view(MaskedArray) + np.putmask(d._data, fc, xv.astype(ndtype)) + np.putmask(d._data, notfc, yv.astype(ndtype)) + d._mask = np.zeros(fc.shape, dtype=MaskType) + np.putmask(d._mask, fc, getmask(x)) + np.putmask(d._mask, notfc, getmask(y)) d._mask |= getmaskarray(condition) if not d._mask.any(): d._mask = nomask @@ -3203,8 +3239,8 @@ c = filled(indices, 0) masks = [nmask(x) for x in t] a = [fmask(x) for x in t] - d = numpy.choose(c, a) - m = numpy.choose(c, masks) + d = np.choose(c, a) + m = np.choose(c, masks) m = make_mask(mask_or(m, getmask(indices)), copy=0, shrink=True) return masked_array(d, mask=m) @@ -3232,17 +3268,13 @@ """ if out is None: - return numpy.round_(a, decimals, out) + return np.round_(a, decimals, out) else: - numpy.round_(getdata(a), decimals, out) + np.round_(getdata(a), decimals, out) if hasattr(out, '_mask'): out._mask = getmask(a) return out -def arange(stop, start=None, step=1, dtype=None): - "maskedarray version of the numpy function." - return numpy.arange(stop, start, step, dtype).view(MaskedArray) -arange.__doc__ = numpy.arange.__doc__ def inner(a, b): "maskedarray version of the numpy function." @@ -3252,8 +3284,8 @@ fa.shape = (1,) if len(fb.shape) == 0: fb.shape = (1,) - return numpy.inner(fa, fb).view(MaskedArray) -inner.__doc__ = numpy.inner.__doc__ + return np.inner(fa, fb).view(MaskedArray) +inner.__doc__ = np.inner.__doc__ inner.__doc__ += doc_note("Masked values are replaced by 0.") innerproduct = inner @@ -3261,16 +3293,16 @@ "maskedarray version of the numpy function." fa = filled(a, 0).ravel() fb = filled(b, 0).ravel() - d = numeric.outer(fa, fb) + d = np.outer(fa, fb) ma = getmask(a) mb = getmask(b) if ma is nomask and mb is nomask: return masked_array(d) ma = getmaskarray(a) mb = getmaskarray(b) - m = make_mask(1-numeric.outer(1-ma, 1-mb), copy=0) + m = make_mask(1-np.outer(1-ma, 1-mb), copy=0) return masked_array(d, mask=m) -outer.__doc__ = numpy.outer.__doc__ +outer.__doc__ = np.outer.__doc__ outer.__doc__ += doc_note("Masked values are replaced by 0.") outerproduct = outer @@ -3311,7 +3343,7 @@ x = filled(array(d1, copy=0, mask=m), fill_value).astype(float) y = filled(array(d2, copy=0, mask=m), 1).astype(float) d = umath.less_equal(umath.absolute(x-y), atol + rtol * umath.absolute(y)) - return fromnumeric.alltrue(fromnumeric.ravel(d)) + return np.alltrue(np.ravel(d)) #.............................................................................. def asarray(a, dtype=None): @@ -3337,26 +3369,6 @@ return masked_array(a, dtype=dtype, copy=False, keep_mask=True, subok=True) -def empty(new_shape, dtype=float): - "maskedarray version of the numpy function." - return numpy.empty(new_shape, dtype).view(MaskedArray) -empty.__doc__ = numpy.empty.__doc__ - -def empty_like(a): - "maskedarray version of the numpy function." 
- return numpy.empty_like(a).view(MaskedArray) -empty_like.__doc__ = numpy.empty_like.__doc__ - -def ones(new_shape, dtype=float): - "maskedarray version of the numpy function." - return numpy.ones(new_shape, dtype).view(MaskedArray) -ones.__doc__ = numpy.ones.__doc__ - -def zeros(new_shape, dtype=float): - "maskedarray version of the numpy function." - return numpy.zeros(new_shape, dtype).view(MaskedArray) -zeros.__doc__ = numpy.zeros.__doc__ - #####-------------------------------------------------------------------------- #---- --- Pickling --- #####-------------------------------------------------------------------------- @@ -3406,7 +3418,7 @@ """ __doc__ = None def __init__(self, funcname): - self._func = getattr(numpy, funcname) + self._func = getattr(np, funcname) self.__doc__ = self.getdoc() def getdoc(self): "Return the doc of the function (from the doc of the method)." @@ -3414,10 +3426,15 @@ def __call__(self, a, *args, **params): return self._func.__call__(a, *args, **params).view(MaskedArray) +arange = _convert2ma('arange') +clip = np.clip +empty = _convert2ma('empty') +empty_like = _convert2ma('empty_like') frombuffer = _convert2ma('frombuffer') fromfunction = _convert2ma('fromfunction') identity = _convert2ma('identity') -indices = numpy.indices -clip = numpy.clip +indices = np.indices +ones = _convert2ma('ones') +zeros = _convert2ma('zeros') ############################################################################### Modified: trunk/numpy/ma/mrecords.py =================================================================== --- trunk/numpy/ma/mrecords.py 2008-06-03 08:50:08 UTC (rev 5247) +++ trunk/numpy/ma/mrecords.py 2008-06-03 21:23:15 UTC (rev 5248) @@ -217,29 +217,6 @@ return self._fieldmask.view((bool_, len(self.dtype))).all() mask = _mask = property(fget=_getmask, fset=_setmask) #...................................................... - def get_fill_value(self): - """Return the filling value. - - """ - if self._fill_value is None: - ddtype = self.dtype - fillval = _check_fill_value(None, ddtype) - self._fill_value = np.array(tuple(fillval), dtype=ddtype) - return self._fill_value - - def set_fill_value(self, value=None): - """Set the filling value to value. - - If value is None, use a default based on the data type. - - """ - ddtype = self.dtype - fillval = _check_fill_value(value, ddtype) - self._fill_value = np.array(tuple(fillval), dtype=ddtype) - - fill_value = property(fget=get_fill_value, fset=set_fill_value, - doc="Filling value.") - #...................................................... def __len__(self): "Returns the length" # We have more than one record @@ -306,9 +283,14 @@ # Case #1.: Basic field ............ base_fmask = self._fieldmask _names = self.dtype.names or [] + _localdict = self.__dict__ if attr in _names: if val is masked: - fval = self.fill_value[attr] + _fill_value = _localdict['_fill_value'] + if _fill_value is not None: + fval = _fill_value[attr] + else: + fval = None mval = True else: fval = filled(val) @@ -327,13 +309,16 @@ _data = self._data # We want a field ........ if isinstance(indx, basestring): - #NB: Make sure _sharedmask is True to propagate back to _fieldmask - #NB: Don't use _set_mask, there are some copies being made that break propagation - #NB: Don't force the mask to nomask, that wrecks easy masking + #!!!: Make sure _sharedmask is True to propagate back to _fieldmask + #!!!: Don't use _set_mask, there are some copies being made... 
+ #!!!: ...that break propagation + #!!!: Don't force the mask to nomask, that wrecks easy masking obj = _data[indx].view(MaskedArray) -# obj._set_mask(_fieldmask[indx]) obj._mask = _fieldmask[indx] obj._sharedmask = True + fval = _localdict['_fill_value'] + if fval is not None: + obj._fill_value = fval[indx] # Force to masked if the mask is True if not obj.ndim and obj._mask: return masked @@ -429,18 +414,19 @@ return d # if fill_value is None: - value = _check_fill_value(_localdict['_fill_value'],self.dtype) + value = _check_fill_value(_localdict['_fill_value'], d.dtype) else: value = fill_value if np.size(value) == 1: - value = [value,] * len(self.dtype) + value = np.array(tuple([value,] * len(d.dtype)), + dtype=d.dtype) # if self is masked: result = np.asanyarray(value) else: result = d.copy() - for (n, v) in zip(d.dtype.names, value): - np.putmask(np.asarray(result[n]), np.asarray(fm[n]), v) + for n in d.dtype.names: + np.putmask(np.asarray(result[n]), np.asarray(fm[n]), value[n]) return result #...................................................... def harden_mask(self): Modified: trunk/numpy/ma/tests/test_core.py =================================================================== --- trunk/numpy/ma/tests/test_core.py 2008-06-03 08:50:08 UTC (rev 5247) +++ trunk/numpy/ma/tests/test_core.py 2008-06-03 21:23:15 UTC (rev 5248) @@ -795,12 +795,12 @@ mtype = [('f',float_),('s','|S3')] x = array([(1,'a'),(2,'b'),(numpy.pi,'pi')], dtype=mtype) x.fill_value=999 - assert_equal(x.fill_value,[999.,'999']) + assert_equal(x.fill_value.item(),[999.,'999']) assert_equal(x['f'].fill_value, 999) assert_equal(x['s'].fill_value, '999') # x.fill_value=(9,'???') - assert_equal(x.fill_value, (9,'???')) + assert_equal(x.fill_value.item(), (9,'???')) assert_equal(x['f'].fill_value, 9) assert_equal(x['s'].fill_value, '???') # @@ -865,9 +865,90 @@ assert_equal(xf.dtype, float_) assert_equal(xs.data, ['A', 'b', 'pi']) assert_equal(xs.dtype, '|S3') + # + def test_check_fill_value(self): + "Test _check_fill_value" + _check_fill_value = numpy.ma.core._check_fill_value + # + fval = _check_fill_value(0,int) + assert_equal(fval, 0) + fval = _check_fill_value(None,int) + assert_equal(fval, default_fill_value(0)) + # + fval = _check_fill_value(0,"|S3") + assert_equal(fval, "0") + fval = _check_fill_value(None,"|S3") + assert_equal(fval, default_fill_value("|S3")) + # + fval = _check_fill_value(1e+20,int) + assert_equal(fval, default_fill_value(0)) + + def test_check_fill_value_with_fields(self): + "Tests _check_fill_value with records" + _check_fill_value = numpy.ma.core._check_fill_value + # + ndtype = [('a',int),('b',float),('c',"|S3")] + fval = _check_fill_value([-999,-999.9,"???"], ndtype) + assert(isinstance(fval,ndarray)) + assert_equal(fval.item(), [-999,-999.9,"???"]) + # + fval = _check_fill_value(None, ndtype) + assert(isinstance(fval,ndarray)) + assert_equal(fval.item(), [default_fill_value(0), + default_fill_value(0.), + default_fill_value("0")]) + # + fill_val = np.array((-999,-999.9,"???"),dtype=ndtype) + fval = _check_fill_value(fill_val, ndtype) + assert(isinstance(fval,ndarray)) + assert_equal(fval.item(), [-999,-999.9,"???"]) + # + fill_val = np.array((-999,-999.9,"???"), + dtype=[("A",int),("B",float),("C","|S3")]) + fval = _check_fill_value(fill_val, ndtype) + assert(isinstance(fval,ndarray)) + assert_equal(fval.item(), [-999,-999.9,"???"]) + # + fill_value = np.array((-999,-999.9,"???"), dtype=object) + fval = _check_fill_value(fill_val, ndtype) + assert(isinstance(fval,ndarray)) + 
assert_equal(fval.item(), [-999,-999.9,"???"]) + # + fill_value = np.array((-999,-999.9,"???")) + fval = _check_fill_value(fill_val, ndtype) + assert(isinstance(fval,ndarray)) + assert_equal(fval.item(), [-999,-999.9,"???"]) + # + ndtype = [("a",int)] + fval = _check_fill_value(-999, ndtype) + assert(isinstance(fval,ndarray)) + assert_equal(fval.item(), (-999,)) + # + def test_fillvalue_conversion(self): + "Tests the behavior of fill_value during conversion" + # We had a tailored comment to make sure special attributes are properly + # dealt with + a = array(['3', '4', '5']) + a._basedict.update(comment="updated!") + # + b = array(a, dtype=int) + assert_equal(b._data, [3,4,5]) + assert_equal(b.fill_value, default_fill_value(0)) + # + b = array(a, dtype=float) + assert_equal(b._data, [3,4,5]) + assert_equal(b.fill_value, default_fill_value(0.)) + # + b = a.astype(int) + assert_equal(b._data, [3,4,5]) + assert_equal(b.fill_value, default_fill_value(0)) + assert_equal(b._basedict['comment'], "updated!") + # + b = a.astype([('a','|S3')]) + assert_equal(b['a']._data, a._data) + assert_equal(b['a'].fill_value, a.fill_value) - #............................................................................... class TestUfuncs(NumpyTestCase): @@ -1478,7 +1559,7 @@ datatype = [('a',int_),('b',float_),('c','|S8')] a = masked_array([(1,1.1,'1.1'),(2,2.2,'2.2'),(3,3.3,'3.3')], dtype=datatype) - assert_equal(len(a.fill_value), len(datatype)) + assert_equal(len(a.fill_value.item()), len(datatype)) # b = empty_like(a) assert_equal(b.shape, a.shape) @@ -1590,4 +1671,4 @@ ############################################################################### #------------------------------------------------------------------------------ if __name__ == "__main__": - NumpyTest('numpy.ma.core').run() + NumpyTest().run() Modified: trunk/numpy/ma/testutils.py =================================================================== --- trunk/numpy/ma/testutils.py 2008-06-03 08:50:08 UTC (rev 5247) +++ trunk/numpy/ma/testutils.py 2008-06-03 21:23:15 UTC (rev 5248) @@ -10,16 +10,18 @@ __date__ = "$Date: 2007-11-13 10:01:14 +0200 (Tue, 13 Nov 2007) $" -import numpy as N -from numpy.core import ndarray -from numpy.core.numerictypes import float_ +import operator + +import numpy as np +from numpy import ndarray, float_ import numpy.core.umath as umath from numpy.testing import NumpyTest, NumpyTestCase +import numpy.testing.utils as utils from numpy.testing.utils import build_err_msg, rand import core from core import mask_or, getmask, getmaskarray, masked_array, nomask, masked -from core import filled, equal, less +from core import fix_invalid, filled, equal, less #------------------------------------------------------------------------------ def approx (a, b, fill_value=True, rtol=1.e-5, atol=1.e-8): @@ -35,12 +37,13 @@ d1 = filled(a) d2 = filled(b) if d1.dtype.char == "O" or d2.dtype.char == "O": - return N.equal(d1,d2).ravel() + return np.equal(d1,d2).ravel() x = filled(masked_array(d1, copy=False, mask=m), fill_value).astype(float_) y = filled(masked_array(d2, copy=False, mask=m), 1).astype(float_) - d = N.less_equal(umath.absolute(x-y), atol + rtol * umath.absolute(y)) + d = np.less_equal(umath.absolute(x-y), atol + rtol * umath.absolute(y)) return d.ravel() + def almost(a, b, decimal=6, fill_value=True): """Returns True if a and b are equal up to decimal places. If fill_value is True, masked values considered equal. 
Otherwise, masked values @@ -50,10 +53,10 @@ d1 = filled(a) d2 = filled(b) if d1.dtype.char == "O" or d2.dtype.char == "O": - return N.equal(d1,d2).ravel() + return np.equal(d1,d2).ravel() x = filled(masked_array(d1, copy=False, mask=m), fill_value).astype(float_) y = filled(masked_array(d2, copy=False, mask=m), 1).astype(float_) - d = N.around(N.abs(x-y),decimal) <= 10.0**(-decimal) + d = np.around(np.abs(x-y),decimal) <= 10.0**(-decimal) return d.ravel() @@ -69,11 +72,12 @@ """Asserts that two records are equal. Pretty crude for now.""" assert_equal(a.dtype, b.dtype) for f in a.dtype.names: - (af, bf) = (getattr(a,f), getattr(b,f)) + (af, bf) = (operator.getitem(a,f), operator.getitem(b,f)) if not (af is masked) and not (bf is masked): - assert_equal(getattr(a,f), getattr(b,f)) + assert_equal(operator.getitem(a,f), operator.getitem(b,f)) return + def assert_equal(actual,desired,err_msg=''): """Asserts that two items are equal. """ @@ -95,16 +99,18 @@ # Case #4. arrays or equivalent if ((actual is masked) and not (desired is masked)) or \ ((desired is masked) and not (actual is masked)): - msg = build_err_msg([actual, desired], err_msg, header='', names=('x', 'y')) + msg = build_err_msg([actual, desired], + err_msg, header='', names=('x', 'y')) raise ValueError(msg) - actual = N.array(actual, copy=False, subok=True) - desired = N.array(desired, copy=False, subok=True) - if actual.dtype.char in "OS" and desired.dtype.char in "OS": + actual = np.array(actual, copy=False, subok=True) + desired = np.array(desired, copy=False, subok=True) + if actual.dtype.char in "OSV" and desired.dtype.char in "OSV": return _assert_equal_on_sequences(actual.tolist(), desired.tolist(), err_msg='') return assert_array_equal(actual, desired, err_msg) -#............................. + + def fail_if_equal(actual,desired,err_msg='',): """Raises an assertion error if two items are equal. """ @@ -120,119 +126,91 @@ for k in range(len(desired)): fail_if_equal(actual[k], desired[k], 'item=%r\n%s' % (k,err_msg)) return - if isinstance(actual, N.ndarray) or isinstance(desired, N.ndarray): + if isinstance(actual, np.ndarray) or isinstance(desired, np.ndarray): return fail_if_array_equal(actual, desired, err_msg) msg = build_err_msg([actual, desired], err_msg) assert desired != actual, msg assert_not_equal = fail_if_equal -#............................ -def assert_almost_equal(actual,desired,decimal=7,err_msg=''): + + +def assert_almost_equal(actual, desired, decimal=7, err_msg='', verbose=True): """Asserts that two items are almost equal. The test is equivalent to abs(desired-actual) < 0.5 * 10**(-decimal) """ - if isinstance(actual, N.ndarray) or isinstance(desired, N.ndarray): - return assert_array_almost_equal(actual, desired, decimal, err_msg) - msg = build_err_msg([actual, desired], err_msg) + if isinstance(actual, np.ndarray) or isinstance(desired, np.ndarray): + return assert_array_almost_equal(actual, desired, decimal=decimal, + err_msg=err_msg, verbose=verbose) + msg = build_err_msg([actual, desired], + err_msg=err_msg, verbose=verbose) assert round(abs(desired - actual),decimal) == 0, msg -#............................ 
-def assert_array_compare(comparison, x, y, err_msg='', header='', + + +assert_close = assert_almost_equal + + +def assert_array_compare(comparison, x, y, err_msg='', verbose=True, header='', fill_value=True): """Asserts that a comparison relation between two masked arrays is satisfied elementwise.""" + # Fill the data first xf = filled(x) yf = filled(y) + # Allocate a common mask and refill m = mask_or(getmask(x), getmask(y)) - - x = masked_array(xf, copy=False, subok=False, mask=m).filled(fill_value) - y = masked_array(yf, copy=False, subok=False, mask=m).filled(fill_value) - + x = masked_array(xf, copy=False, mask=m) + y = masked_array(yf, copy=False, mask=m) if ((x is masked) and not (y is masked)) or \ ((y is masked) and not (x is masked)): - msg = build_err_msg([x, y], err_msg, header=header, names=('x', 'y')) + msg = build_err_msg([x, y], err_msg=err_msg, verbose=verbose, + header=header, names=('x', 'y')) raise ValueError(msg) + # OK, now run the basic tests on filled versions + return utils.assert_array_compare(comparison, + x.filled(fill_value), y.filled(fill_value), + err_msg=err_msg, + verbose=verbose, header=header) - if (x.dtype.char != "O") and (x.dtype.char != "S"): - x = x.astype(float_) - if isinstance(x, N.ndarray) and x.size > 1: - x[N.isnan(x)] = 0 - elif N.isnan(x): - x = 0 - if (y.dtype.char != "O") and (y.dtype.char != "S"): - y = y.astype(float_) - if isinstance(y, N.ndarray) and y.size > 1: - y[N.isnan(y)] = 0 - elif N.isnan(y): - y = 0 - try: - cond = (x.shape==() or y.shape==()) or x.shape == y.shape - if not cond: - msg = build_err_msg([x, y], - err_msg - + '\n(shapes %s, %s mismatch)' % (x.shape, - y.shape), - header=header, - names=('x', 'y')) - assert cond, msg - val = comparison(x,y) - if m is not nomask and fill_value: - val = masked_array(val, mask=m, copy=False) - if isinstance(val, bool): - cond = val - reduced = [0] - else: - reduced = val.ravel() - cond = reduced.all() - reduced = reduced.tolist() - if not cond: - match = 100-100.0*reduced.count(1)/len(reduced) - msg = build_err_msg([x, y], - err_msg - + '\n(mismatch %s%%)' % (match,), - header=header, - names=('x', 'y')) - assert cond, msg - except ValueError: - msg = build_err_msg([x, y], err_msg, header=header, names=('x', 'y')) - raise ValueError(msg) -#............................ -def assert_array_equal(x, y, err_msg=''): + +def assert_array_equal(x, y, err_msg='', verbose=True): """Checks the elementwise equality of two masked arrays.""" - assert_array_compare(equal, x, y, err_msg=err_msg, + assert_array_compare(equal, x, y, err_msg=err_msg, verbose=verbose, header='Arrays are not equal') -##............................ -def fail_if_array_equal(x, y, err_msg=''): + + +def fail_if_array_equal(x, y, err_msg='', verbose=True): "Raises an assertion error if two masked arrays are not equal (elementwise)." def compare(x,y): - - return (not N.alltrue(approx(x, y))) - assert_array_compare(compare, x, y, err_msg=err_msg, + return (not np.alltrue(approx(x, y))) + assert_array_compare(compare, x, y, err_msg=err_msg, verbose=verbose, header='Arrays are not equal') -#............................ -def assert_array_approx_equal(x, y, decimal=6, err_msg=''): + + +def assert_array_approx_equal(x, y, decimal=6, err_msg='', verbose=True): """Checks the elementwise equality of two masked arrays, up to a given number of decimals.""" def compare(x, y): "Returns the result of the loose comparison between x and y)." 
return approx(x,y, rtol=10.**-decimal) - assert_array_compare(compare, x, y, err_msg=err_msg, + assert_array_compare(compare, x, y, err_msg=err_msg, verbose=verbose, header='Arrays are not almost equal') -#............................ -def assert_array_almost_equal(x, y, decimal=6, err_msg=''): + + +def assert_array_almost_equal(x, y, decimal=6, err_msg='', verbose=True): """Checks the elementwise equality of two masked arrays, up to a given number of decimals.""" def compare(x, y): "Returns the result of the loose comparison between x and y)." return almost(x,y,decimal) - assert_array_compare(compare, x, y, err_msg=err_msg, + assert_array_compare(compare, x, y, err_msg=err_msg, verbose=verbose, header='Arrays are not almost equal') -#............................ -def assert_array_less(x, y, err_msg=''): + + +def assert_array_less(x, y, err_msg='', verbose=True): "Checks that x is smaller than y elementwise." - assert_array_compare(less, x, y, err_msg=err_msg, + assert_array_compare(less, x, y, err_msg=err_msg, verbose=verbose, header='Arrays are not less-ordered') -#............................ -assert_close = assert_almost_equal -#............................ + + def assert_mask_equal(m1, m2): """Asserts the equality of two masks.""" if m1 is nomask: @@ -240,6 +218,3 @@ if m2 is nomask: assert(m1 is nomask) assert_array_equal(m1, m2) - -if __name__ == '__main__': - pass From numpy-svn at scipy.org Tue Jun 3 17:24:29 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 3 Jun 2008 16:24:29 -0500 (CDT) Subject: [Numpy-svn] r5249 - trunk/numpy/ma Message-ID: <20080603212429.87FD039C053@scipy.org> Author: pierregm Date: 2008-06-03 16:24:24 -0500 (Tue, 03 Jun 2008) New Revision: 5249 Modified: trunk/numpy/ma/ Log: Property changes on: trunk/numpy/ma ___________________________________________________________________ Name: svn:ignore - core_tmp.py ma_old.py + core_tmp.py ma_old.py core_mod.py core_new.py mrecords_new.py From numpy-svn at scipy.org Tue Jun 3 18:36:24 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 3 Jun 2008 17:36:24 -0500 (CDT) Subject: [Numpy-svn] r5250 - trunk/numpy/doc Message-ID: <20080603223624.6125C39C053@scipy.org> Author: stefan Date: 2008-06-03 17:36:09 -0500 (Tue, 03 Jun 2008) New Revision: 5250 Modified: trunk/numpy/doc/HOWTO_DOCUMENT.txt Log: Update examples section. Modified: trunk/numpy/doc/HOWTO_DOCUMENT.txt =================================================================== --- trunk/numpy/doc/HOWTO_DOCUMENT.txt 2008-06-03 21:24:24 UTC (rev 5249) +++ trunk/numpy/doc/HOWTO_DOCUMENT.txt 2008-06-03 22:36:09 UTC (rev 5250) @@ -309,9 +309,10 @@ b - The examples may assume that ``import numpy`` is executed before - the example code in *numpy*, and ``import scipy`` in *scipy*, but - other modules used should be explicitly imported. + The examples may assume that ``import numpy as np`` is executed before + the example code in *numpy*, and ``import scipy as sp`` in *scipy*. + Additional examples may make use of *matplotlib* for plotting, but should + import it explicitly, e.g., ``import matplotlib.pyplot as plt``. 11. **Indexing tags*** From numpy-svn at scipy.org Tue Jun 3 19:14:19 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 3 Jun 2008 18:14:19 -0500 (CDT) Subject: [Numpy-svn] r5251 - in trunk/numpy/ma: . 
tests Message-ID: <20080603231419.464FC39C018@scipy.org> Author: pierregm Date: 2008-06-03 18:14:16 -0500 (Tue, 03 Jun 2008) New Revision: 5251 Modified: trunk/numpy/ma/core.py trunk/numpy/ma/tests/test_core.py Log: core * masked_values now accept a shrink argument * fixed the divide_tolerance to numpy.finfo(float).tiny (bug #807) * in MaskedArray.__idiv__, use np.where instead of np.putmask to mask the denominator Modified: trunk/numpy/ma/core.py =================================================================== --- trunk/numpy/ma/core.py 2008-06-03 22:36:09 UTC (rev 5250) +++ trunk/numpy/ma/core.py 2008-06-03 23:14:16 UTC (rev 5251) @@ -76,7 +76,7 @@ MaskType = np.bool_ nomask = MaskType(0) -divide_tolerance = 1.e-35 +divide_tolerance = np.finfo(float).tiny np.seterr(all='ignore') def doc_note(note): @@ -926,15 +926,22 @@ return masked_where(condition, x, copy=copy) # -def masked_object(x, value, copy=True): +def masked_object(x, value, copy=True, shrink=True): """Mask the array x where the data are exactly equal to value. This function is suitable only for object arrays: for floating point, please use ``masked_values`` instead. - Notes - ----- - The mask is set to `nomask` if posible. + Parameters + ---------- + x : array-like + Array to mask + value : var + Comparison value + copy : {True, False}, optional + Whether to return a copy of x. + shrink : {True, False}, optional + Whether to collapse a mask full of False to nomask """ if isMaskedArray(x): @@ -943,10 +950,10 @@ else: condition = umath.equal(np.asarray(x), value) mask = nomask - mask = mask_or(mask, make_mask(condition, shrink=True)) + mask = mask_or(mask, make_mask(condition, shrink=shrink)) return masked_array(x, mask=mask, copy=copy, fill_value=value) -def masked_values(x, value, rtol=1.e-5, atol=1.e-8, copy=True): +def masked_values(x, value, rtol=1.e-5, atol=1.e-8, copy=True, shrink=True): """Mask the array x where the data are approximately equal in value, i.e. @@ -961,12 +968,14 @@ Array to fill. value : float Masking value. - rtol : float + rtol : {float}, optional Tolerance parameter. - atol : float + atol : {float}, optional Tolerance parameter (1e-8). - copy : bool + copy : {True, False}, optional Whether to return a copy of x. 
+ shrink : {True, False}, optional + Whether to collapse a mask full of False to nomask """ abs = umath.absolute @@ -977,7 +986,7 @@ else: condition = umath.equal(xnew, value) mask = nomask - mask = mask_or(mask, make_mask(condition, shrink=True)) + mask = mask_or(mask, make_mask(condition, shrink=shrink)) return masked_array(xnew, mask=mask, copy=copy, fill_value=value) def masked_invalid(a, copy=True): @@ -1776,8 +1785,8 @@ new_mask = mask_or(other_mask, dom_mask) # The following 3 lines control the domain filling if dom_mask.any(): - other_data = other_data.copy() - np.putmask(other_data, dom_mask, 1) + (_, fval) = ufunc_fills[np.divide] + other_data = np.where(dom_mask, fval, other_data) ndarray.__idiv__(self._data, other_data) self._mask = mask_or(self._mask, new_mask) return self Modified: trunk/numpy/ma/tests/test_core.py =================================================================== --- trunk/numpy/ma/tests/test_core.py 2008-06-03 22:36:09 UTC (rev 5250) +++ trunk/numpy/ma/tests/test_core.py 2008-06-03 23:14:16 UTC (rev 5251) @@ -242,6 +242,11 @@ assert_equal(xm._mask, [1,1,1,0,0,1,1,0,0,0,1,1]) assert_equal(xm._data, [1/5.,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.]) + def test_inplace_arithmetixx(self): + tiny = numpy.finfo(float).tiny + a = array([tiny, 1./tiny, 0.]) + assert_equal(getmaskarray(a/2), [0,0,0]) + assert_equal(getmaskarray(2/a), [1,0,1]) #.......................... def test_scalararithmetic(self): From numpy-svn at scipy.org Tue Jun 3 19:41:22 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 3 Jun 2008 18:41:22 -0500 (CDT) Subject: [Numpy-svn] r5252 - trunk/numpy/ma/tests Message-ID: <20080603234122.9E90F39C02D@scipy.org> Author: pierregm Date: 2008-06-03 18:41:20 -0500 (Tue, 03 Jun 2008) New Revision: 5252 Modified: trunk/numpy/ma/tests/test_mrecords.py Log: use tempfile.mkstemp for the creation of temporary files Modified: trunk/numpy/ma/tests/test_mrecords.py =================================================================== --- trunk/numpy/ma/tests/test_mrecords.py 2008-06-03 23:14:16 UTC (rev 5251) +++ trunk/numpy/ma/tests/test_mrecords.py 2008-06-03 23:41:20 UTC (rev 5252) @@ -375,12 +375,12 @@ """ import os from datetime import datetime - fname = 'tmp%s' % datetime.now().strftime("%y%m%d%H%M%S%s") - f = open(fname, 'w') - f.write(fcontent) - f.close() - mrectxt = fromtextfile(fname,delimitor=',',varnames='ABCDEFG') - os.remove(fname) + import tempfile + (tmp_fd,tmp_fl) = tempfile.mkstemp() + os.write(tmp_fd, fcontent) + os.close(tmp_fd) + mrectxt = fromtextfile(tmp_fl, delimitor=',',varnames='ABCDEFG') + os.remove(tmp_fl) # assert(isinstance(mrectxt, MaskedRecords)) assert_equal(mrectxt.F, [1,1,1,1]) From numpy-svn at scipy.org Wed Jun 4 11:55:15 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 4 Jun 2008 10:55:15 -0500 (CDT) Subject: [Numpy-svn] r5253 - in trunk/numpy/ma: . tests Message-ID: <20080604155515.A7AA1C7C015@scipy.org> Author: pierregm Date: 2008-06-04 10:54:28 -0500 (Wed, 04 Jun 2008) New Revision: 5253 Modified: trunk/numpy/ma/core.py trunk/numpy/ma/tests/test_core.py Log: simplified MaskedArray.__setitem__ to fix setting object-ndarray elements Modified: trunk/numpy/ma/core.py =================================================================== --- trunk/numpy/ma/core.py 2008-06-03 23:41:20 UTC (rev 5252) +++ trunk/numpy/ma/core.py 2008-06-04 15:54:28 UTC (rev 5253) @@ -1413,25 +1413,26 @@ self._sharedmask = False return #.... 
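The practical effect of the ``divide_tolerance``/``__idiv__`` change and the ``test_inplace_arithmetixx`` test shown above is that divisions whose denominator falls within the tolerance come back masked instead of overflowing. A minimal sketch (the array values are made up for illustration)::

    import numpy as np
    import numpy.ma as ma

    tiny = np.finfo(float).tiny
    a = ma.array([tiny, 1. / tiny, 0.])
    print(ma.getmaskarray(a / 2))   # [False False False] -- nothing needs masking
    print(ma.getmaskarray(2 / a))   # [ True False  True] -- near-zero denominators are masked
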
- dval = np.array(value, copy=False, dtype=self.dtype) - valmask = getmask(value) +# dval = np.array(value, copy=False, dtype=self.dtype) + dval = value + mval = getmask(value) if self._mask is nomask: # Set the data, then the mask ndarray.__setitem__(self._data,indx,dval) - if valmask is not nomask: + if mval is not nomask: self._mask = np.zeros(self.shape, dtype=MaskType) - self._mask[indx] = valmask + self._mask[indx] = mval elif not self._hardmask: # Unshare the mask if necessary to avoid propagation self.unshare_mask() # Set the data, then the mask - ndarray.__setitem__(self._data,indx,dval) - self._mask[indx] = valmask + ndarray.__setitem__(self._data, indx, dval) + ndarray.__setitem__(self._mask, indx, mval) elif hasattr(indx, 'dtype') and (indx.dtype==bool_): indx = indx * umath.logical_not(self._mask) - ndarray.__setitem__(self._data,indx,dval) + ndarray.__setitem__(self._data, indx, dval) else: - mindx = mask_or(self._mask[indx], valmask, copy=True) + mindx = mask_or(self._mask[indx], mval, copy=True) dindx = self._data[indx] if dindx.size > 1: dindx[~mindx] = dval @@ -2469,7 +2470,7 @@ idx_l = idx.tolist() tmp_mask = self._mask[idx_l].flat tmp_data = self._data[idx_l].flat - self.flat = tmp_data + self._data.flat = tmp_data self._mask.flat = tmp_mask return Modified: trunk/numpy/ma/tests/test_core.py =================================================================== --- trunk/numpy/ma/tests/test_core.py 2008-06-03 23:41:20 UTC (rev 5252) +++ trunk/numpy/ma/tests/test_core.py 2008-06-04 15:54:28 UTC (rev 5253) @@ -631,6 +631,19 @@ ctest = masked_where(btest,atest) assert_equal(atest,ctest) #........................ + def test_set_oddities(self): + """Tests setting elements with object""" + a = empty(1,dtype=object) + x = (1,2,3,4,5) + a[0] = x + assert_equal(a[0], x) + assert(a[0] is x) + # + import datetime + dt = datetime.datetime.now() + a[0] = dt + assert(a[0] is dt) + #........................ def test_maskingfunctions(self): "Tests masking functions." x = array([1.,2.,3.,4.,5.]) From numpy-svn at scipy.org Thu Jun 5 13:40:18 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 5 Jun 2008 12:40:18 -0500 (CDT) Subject: [Numpy-svn] r5254 - trunk/numpy/testing Message-ID: <20080605174018.813D539C6AF@scipy.org> Author: dhuard Date: 2008-06-05 12:40:15 -0500 (Thu, 05 Jun 2008) New Revision: 5254 Modified: trunk/numpy/testing/utils.py Log: added verbose argument to assert_array_equal in assert_equal. Fixes ticket #810. 
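The ``__setitem__`` simplification and the ``test_set_oddities`` test above boil down to the following behaviour for object-dtype masked arrays, sketched here with a hypothetical tuple value::

    import numpy.ma as ma

    a = ma.empty(1, dtype=object)
    x = (1, 2, 3, 4, 5)
    a[0] = x              # the tuple is stored as a single object element
    print(a[0] is x)      # True -- it is not broadcast element by element
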
Modified: trunk/numpy/testing/utils.py =================================================================== --- trunk/numpy/testing/utils.py 2008-06-04 15:54:28 UTC (rev 5253) +++ trunk/numpy/testing/utils.py 2008-06-05 17:40:15 UTC (rev 5254) @@ -140,7 +140,7 @@ return from numpy.core import ndarray if isinstance(actual, ndarray) or isinstance(desired, ndarray): - return assert_array_equal(actual, desired, err_msg) + return assert_array_equal(actual, desired, err_msg, verbose) msg = build_err_msg([actual, desired], err_msg, verbose=verbose) assert desired == actual, msg From numpy-svn at scipy.org Thu Jun 5 19:27:56 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 5 Jun 2008 18:27:56 -0500 (CDT) Subject: [Numpy-svn] r5255 - in trunk/numpy/core: src tests Message-ID: <20080605232756.0C29339C5F8@scipy.org> Author: oliphant Date: 2008-06-05 18:27:52 -0500 (Thu, 05 Jun 2008) New Revision: 5255 Modified: trunk/numpy/core/src/arrayobject.c trunk/numpy/core/src/multiarraymodule.c trunk/numpy/core/tests/test_regression.py Log: Fix more in ticket #791. Modified: trunk/numpy/core/src/arrayobject.c =================================================================== --- trunk/numpy/core/src/arrayobject.c 2008-06-05 17:40:15 UTC (rev 5254) +++ trunk/numpy/core/src/arrayobject.c 2008-06-05 23:27:52 UTC (rev 5255) @@ -6641,6 +6641,26 @@ } } + +static int +_zerofill(PyArrayObject *ret) +{ + intp n; + + if (PyDataType_REFCHK(ret->descr)) { + PyObject *zero = PyInt_FromLong(0); + PyArray_FillObjectArray(ret, zero); + Py_DECREF(zero); + if (PyErr_Occurred()) {Py_DECREF(ret); return -1;} + } + else { + n = PyArray_NBYTES(ret); + memset(ret->data, 0, n); + return 0; + } +} + + /* Create a view of a complex array with an equivalent data-type except it is real instead of complex. 
*/ @@ -6722,29 +6742,26 @@ array_imag_get(PyArrayObject *self) { PyArrayObject *ret; - PyArray_Descr *type; if (PyArray_ISCOMPLEX(self)) { ret = _get_part(self, 1); - return (PyObject *) ret; } else { - type = self->descr; - Py_INCREF(type); - ret = (PyArrayObject *)PyArray_Zeros(self->nd, - self->dimensions, - type, - PyArray_ISFORTRAN(self)); + Py_INCREF(self->descr); + ret = (PyArrayObject *)PyArray_NewFromDescr(self->ob_type, + self->descr, + self->nd, + self->dimensions, + NULL, NULL, + PyArray_ISFORTRAN(self), + (PyObject *)self); + if (ret == NULL) return NULL; + + if (_zerofill(ret) < 0) return NULL; + ret->flags &= ~WRITEABLE; - if (PyArray_CheckExact(self)) - return (PyObject *)ret; - else { - PyObject *newret; - newret = PyArray_View(ret, NULL, self->ob_type); - Py_DECREF(ret); - return newret; - } } + return (PyObject *) ret; } static int Modified: trunk/numpy/core/src/multiarraymodule.c =================================================================== --- trunk/numpy/core/src/multiarraymodule.c 2008-06-05 17:40:15 UTC (rev 5254) +++ trunk/numpy/core/src/multiarraymodule.c 2008-06-05 23:27:52 UTC (rev 5255) @@ -1345,7 +1345,7 @@ self->dimensions, NULL, NULL, PyArray_ISFORTRAN(self), - NULL); + (PyObject *)self); if (out == NULL) goto fail; outgood = 1; } @@ -5886,7 +5886,6 @@ return ret; } - /* steal a reference */ /* accepts NULL type */ /*NUMPY_API @@ -5896,7 +5895,6 @@ PyArray_Zeros(int nd, intp *dims, PyArray_Descr *type, int fortran) { PyArrayObject *ret; - intp n; if (!type) type = PyArray_DescrFromType(PyArray_DEFAULT); ret = (PyArrayObject *)PyArray_NewFromDescr(&PyArray_Type, @@ -5906,16 +5904,7 @@ fortran, NULL); if (ret == NULL) return NULL; - if (PyDataType_REFCHK(type)) { - PyObject *zero = PyInt_FromLong(0); - PyArray_FillObjectArray(ret, zero); - Py_DECREF(zero); - if (PyErr_Occurred()) {Py_DECREF(ret); return NULL;} - } - else { - n = PyArray_NBYTES(ret); - memset(ret->data, 0, n); - } + if (_zerofill(ret) < 0) return NULL; return (PyObject *)ret; } Modified: trunk/numpy/core/tests/test_regression.py =================================================================== --- trunk/numpy/core/tests/test_regression.py 2008-06-05 17:40:15 UTC (rev 5254) +++ trunk/numpy/core/tests/test_regression.py 2008-06-05 23:27:52 UTC (rev 5255) @@ -1053,6 +1053,25 @@ except TypeError: pass + def check_attributes(self, level=rlevel): + """Ticket #791 + """ + import numpy as np + class TestArray(np.ndarray): + def __new__(cls, data, info): + result = np.array(data) + result = result.view(cls) + result.info = info + return result + def __array_finalize__(self, obj): + self.info = getattr(obj, 'info', '') + dat = TestArray([[1,2,3,4],[5,6,7,8]],'jubba') + assert dat.info == 'jubba' + assert dat.mean(1).info == 'jubba' + assert dat.std(1).info == 'jubba' + assert dat.clip(2,7).info == 'jubba' + assert dat.imag.info == 'jubba' + def check_recarray_tolist(self, level=rlevel): """Ticket #793, changeset r5215 From numpy-svn at scipy.org Fri Jun 6 22:17:22 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 6 Jun 2008 21:17:22 -0500 (CDT) Subject: [Numpy-svn] r5256 - in trunk/numpy/ma: . 
tests Message-ID: <20080607021722.699D239C5B9@scipy.org> Author: pierregm Date: 2008-06-06 21:17:17 -0500 (Fri, 06 Jun 2008) New Revision: 5256 Modified: trunk/numpy/ma/core.py trunk/numpy/ma/tests/test_core.py Log: * revamped choose to accept the out and mode keywords * revamped argmin/argmax to accept the out keyword * revamped all/any to accept the out keyword Modified: trunk/numpy/ma/core.py =================================================================== --- trunk/numpy/ma/core.py 2008-06-05 23:27:52 UTC (rev 5255) +++ trunk/numpy/ma/core.py 2008-06-07 02:17:17 UTC (rev 5256) @@ -1131,6 +1131,20 @@ return d +def _fill_output_mask(output, mask): + """Fills the mask of output (if any) with mask. + Private functions used for the methods accepting out as an argument.""" + if isinstance(output,MaskedArray): + outmask = getattr(output, '_mask', nomask) + if (outmask is nomask): + if mask is not nomask: + outmask = output._mask = make_mask_none(output.shape) + outmask.flat = mask + else: + outmask.flat = mask + return output + + class MaskedArray(ndarray): """Arrays with possibly masked values. Masked values of True exclude the corresponding element from any computation. @@ -1972,84 +1986,87 @@ return (self.ctypes.data, self._mask.ctypes.data) #............................................ def all(self, axis=None, out=None): - """Return True if all entries along the given axis are True, - False otherwise. Masked values are considered as True during - computation. + """a.all(axis=None, out=None) + + Check if all of the elements of `a` are true. - Parameter - ---------- - axis : int, optional - Axis along which the operation is performed. If None, - the operation is performed on a flatten array - out : {MaskedArray}, optional - Alternate optional output. If not None, out should be - a valid MaskedArray of the same shape as the output of - self._data.all(axis). + Performs a logical_and over the given axis and returns the result. + Masked values are considered as True during computation. + For convenience, the output array is masked where ALL the values along the + current axis are masked: if the output would have been a scalar and that + all the values are masked, then the output is `masked`. - Returns A masked array, where the mask is True if all data along - ------- - the axis are masked. + Parameters + ---------- + axis : {None, integer} + Axis to perform the operation over. + If None, perform over flattened array. + out : {None, array}, optional + Array into which the result can be placed. Its type is preserved + and it must be of the right shape to hold the output. - Notes - ----- - An exception is raised if ``out`` is not None and not of the - same type as self. 
+ See Also + -------- + all : equivalent function + + Example + ------- + >>> array([1,2,3]).all() + True + >>> a = array([1,2,3], mask=True) + >>> (a.all() is masked) + True """ + mask = self._mask.all(axis) if out is None: d = self.filled(True).all(axis=axis).view(type(self)) - if d.ndim > 0: - d.__setmask__(self._mask.all(axis)) + if d.ndim: + d.__setmask__(mask) + elif mask: + return masked return d - elif type(out) is not type(self): - raise TypeError("The external array should have " \ - "a type %s (got %s instead)" %\ - (type(self), type(out))) self.filled(True).all(axis=axis, out=out) - if out.ndim: - out.__setmask__(self._mask.all(axis)) + if isinstance(out, MaskedArray): + if out.ndim or mask: + out.__setmask__(mask) return out def any(self, axis=None, out=None): - """Returns True if at least one entry along the given axis is - True. + """a.any(axis=None, out=None) - Returns False if all entries are False. - Masked values are considered as True during computation. + Check if any of the elements of `a` are true. - Parameter - ---------- - axis : int, optional - Axis along which the operation is performed. - If None, the operation is performed on a flatten array - out : {MaskedArray}, optional - Alternate optional output. If not None, out should be - a valid MaskedArray of the same shape as the output of - self._data.all(axis). + Performs a logical_or over the given axis and returns the result. + Masked values are considered as False during computation. - Returns A masked array, where the mask is True if all data along - ------- - the axis are masked. + Parameters + ---------- + axis : {None, integer} + Axis to perform the operation over. + If None, perform over flattened array and return a scalar. + out : {None, array}, optional + Array into which the result can be placed. Its type is preserved + and it must be of the right shape to hold the output. - Notes - ----- - An exception is raised if ``out`` is not None and not of the - same type as self. + See Also + -------- + any : equivalent function """ + mask = self._mask.all(axis) if out is None: d = self.filled(False).any(axis=axis).view(type(self)) - if d.ndim > 0: - d.__setmask__(self._mask.all(axis)) + if d.ndim: + d.__setmask__(mask) + elif mask: + d = masked return d - elif type(out) is not type(self): - raise TypeError("The external array should have a type %s "\ - "(got %s instead)" %\ - (type(self), type(out))) self.filled(False).any(axis=axis, out=out) - if out.ndim: - out.__setmask__(self._mask.all(axis)) + if isinstance(out, MaskedArray): + if out.ndim or mask: + out.__setmask__(mask) return out @@ -2088,30 +2105,44 @@ D = self.diagonal(offset=offset, axis1=axis1, axis2=axis2) return D.astype(dtype).filled(0).sum(axis=None) #............................................ - def sum(self, axis=None, dtype=None): - """Sum the array over the given axis. + def sum(self, axis=None, dtype=None, out=None): + """Return the sum of the array elements over the given axis. +Masked elements are set to 0 internally. - Masked elements are set to 0 internally. + Parameters + ---------- + axis : {None, -1, int}, optional + Axis along which the sum is computed. The default + (`axis` = None) is to compute over the flattened array. + dtype : {None, dtype}, optional + Determines the type of the returned array and of the accumulator + where the elements are summed. If dtype has the value None and + the type of a is an integer type of precision less than the default + platform integer, then the default platform integer precision is + used. 
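The reduction semantics spelled out in the rewritten ``all``/``any`` docstrings above can be summarised in a short sketch (example data chosen only for illustration; ``masked`` is the numpy.ma constant)::

    import numpy.ma as ma

    a = ma.array([0, 1, 0], mask=[0, 1, 0])   # the only nonzero entry is masked
    print(a.any())                            # False -- masked entries count as False for any()
    b = ma.array([1, 0, 1], mask=[0, 1, 0])   # the only zero entry is masked
    print(b.all())                            # True  -- masked entries count as True for all()
    full = ma.array([1, 2, 3], mask=True)
    print(full.all() is ma.masked)            # True  -- a fully masked reduction returns masked
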
Otherwise, the dtype is the same as that of a. + out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : dtype, optional - Datatype for the intermediary computation. If not given, - the current dtype is used instead. """ - if self._mask is nomask: + _mask = ndarray.__getattribute__(self, '_mask') + if _mask is nomask: mask = nomask else: - mask = self._mask.all(axis) + mask = _mask.all(axis) if (not mask.ndim) and mask: + if out is not None: + out = masked return masked - result = self.filled(0).sum(axis, dtype=dtype).view(type(self)) - if result.ndim > 0: - result.__setmask__(mask) + if out is None: + result = self.filled(0).sum(axis, dtype=dtype).view(type(self)) + if result.ndim: + result.__setmask__(mask) + else: + result = self.filled(0).sum(axis, dtype=dtype, out=out) + _fill_output_mask(out, mask) return result def cumsum(self, axis=None, dtype=None): @@ -2361,49 +2392,71 @@ fill_value = default_fill_value(self) d = self.filled(fill_value).view(ndarray) return d.argsort(axis=axis, kind=kind, order=order) - #........................ - def argmin(self, axis=None, fill_value=None): - """Return an ndarray of indices for the minimum values of a - along the specified axis. - Masked values are treated as if they had the value fill_value. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - fill_value : {var}, optional - Value used to fill in the masked values. If None, the - output of minimum_fill_value(self._data) is used. + def argmin(self, axis=None, fill_value=None, out=None): + """a.argmin(axis=None, out=None) + Return array of indices to the minimum values along the given axis. + + Parameters + ---------- + axis : {None, integer} + If None, the index is into the flattened array, otherwise along + the specified axis + fill_value : {var}, optional + Value used to fill in the masked values. If None, the output of + minimum_fill_value(self._data) is used instead. + out : {None, array}, optional + Array into which the result can be placed. Its type is preserved + and it must be of the right shape to hold the output. + """ if fill_value is None: fill_value = minimum_fill_value(self) d = self.filled(fill_value).view(ndarray) - return d.argmin(axis) - #........................ - def argmax(self, axis=None, fill_value=None): - """Returns the array of indices for the maximum values of `a` - along the specified axis. + return d.argmin(axis, out=out) - Masked values are treated as if they had the value fill_value. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - fill_value : {var}, optional - Value used to fill in the masked values. If None, the - output of maximum_fill_value(self._data) is used. + def argmax(self, axis=None, fill_value=None, out=None): + """a.argmax(axis=None, out=None) + Returns array of indices of the maximum values along the given axis. + Masked values are treated as if they had the value fill_value. 
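The fill-value convention described in the ``argmin``/``argmax`` docstrings above works out as follows (a small made-up example)::

    import numpy.ma as ma

    a = ma.array([4, 9, 1, 7], mask=[0, 0, 1, 0])
    print(a.argmin())   # 0 -- the masked 1 is filled with a huge value first, so it never wins
    print(a.argmax())   # 1 -- for argmax the fill is a very small value instead
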
+ + Parameters + ---------- + axis : {None, integer} + If None, the index is into the flattened array, otherwise along + the specified axis + fill_value : {var}, optional + Value used to fill in the masked values. If None, the output of + maximum_fill_value(self._data) is used instead. + out : {None, array}, optional + Array into which the result can be placed. Its type is preserved + and it must be of the right shape to hold the output. + + Returns + ------- + index_array : {integer_array} + + Examples + -------- + >>> a = arange(6).reshape(2,3) + >>> a.argmax() + 5 + >>> a.argmax(0) + array([1, 1, 1]) + >>> a.argmax(1) + array([2, 2]) + """ if fill_value is None: fill_value = maximum_fill_value(self._data) d = self.filled(fill_value).view(ndarray) - return d.argmax(axis) + return d.argmax(axis, out=out) + def sort(self, axis=-1, kind='quicksort', order=None, endwith=True, fill_value=None): """Sort along the given axis. @@ -3230,30 +3283,72 @@ d._mask = nomask return d -def choose (indices, t, out=None, mode='raise'): - "Return array shaped like indices with elements chosen from t" - #!!!: implement options `out` and `mode`, if possible + test. +def choose (indices, choices, out=None, mode='raise'): + """ + choose(a, choices, out=None, mode='raise') + + Use an index array to construct a new array from a set of choices. + + Given an array of integers and a set of n choice arrays, this method + will create a new array that merges each of the choice arrays. Where a + value in `a` is i, the new array will have the value that choices[i] + contains in the same place. + + Parameters + ---------- + a : int array + This array must contain integers in [0, n-1], where n is the number + of choices. + choices : sequence of arrays + Choice arrays. The index array and all of the choices should be + broadcastable to the same shape. + out : array, optional + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + 'raise' : raise an error + 'wrap' : wrap around + 'clip' : clip to the range + + Returns + ------- + merged_array : array + + See Also + -------- + choose : equivalent function + + """ def fmask (x): "Returns the filled array, or True if masked." if x is masked: - return 1 + return True return filled(x) def nmask (x): "Returns the mask, True if ``masked``, False if ``nomask``." if x is masked: - return 1 - m = getmask(x) - if m is nomask: - return 0 - return m + return True + return getmask(x) + # Get the indices...... c = filled(indices, 0) - masks = [nmask(x) for x in t] - a = [fmask(x) for x in t] - d = np.choose(c, a) - m = np.choose(c, masks) - m = make_mask(mask_or(m, getmask(indices)), copy=0, shrink=True) - return masked_array(d, mask=m) + # Get the masks........ + masks = [nmask(x) for x in choices] + data = [fmask(x) for x in choices] + # Construct the mask + outputmask = np.choose(c, masks, mode=mode) + outputmask = make_mask(mask_or(outputmask, getmask(indices)), + copy=0, shrink=True) + # Get the choices...... + d = np.choose(c, data, mode=mode, out=out).view(MaskedArray) + if out is not None: + if isinstance(out, MaskedArray): + out.__setmask__(outputmask) + return out + d.__setmask__(outputmask) + return d + def round_(a, decimals=0, out=None): """Return a copy of a, rounded to 'decimals' places. 
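The revamped ``choose`` described above mirrors numpy's own ``choose`` while propagating the mask of the index array; a short usage sketch built from the same choice table as the tests below::

    import numpy.ma as ma

    choices = [[ 0,  1,  2,  3], [10, 11, 12, 13],
               [20, 21, 22, 23], [30, 31, 32, 33]]
    print(ma.choose([2, 3, 1, 0], choices))               # [20 31 12  3]
    print(ma.choose([2, 4, 1, 0], choices, mode='wrap'))  # [20  1 12  3] -- index 4 wraps to 0
    idx = ma.array([2, 4, 1, 0], mask=[1, 0, 0, 1])
    print(ma.choose(idx, choices, mode='wrap'))           # [--  1 12 --] -- masked indices stay masked
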
Modified: trunk/numpy/ma/tests/test_core.py =================================================================== --- trunk/numpy/ma/tests/test_core.py 2008-06-05 23:27:52 UTC (rev 5255) +++ trunk/numpy/ma/tests/test_core.py 2008-06-07 02:17:17 UTC (rev 5256) @@ -1,4 +1,4 @@ -# pylint: disable-msg=W0611, W0612, W0511,R0201 +# pylint: disable-msg=W0611, W0612, W0614, W0511,R0201 """Tests suite for MaskedArray & subclassing. :author: Pierre Gerard-Marchant @@ -1031,7 +1031,7 @@ #............................................................................... -class TestArrayMethods(NumpyTestCase): +class TestArrayMathMethods(NumpyTestCase): "Test class for miscellaneous MaskedArrays methods." def setUp(self): "Base data definition." @@ -1261,6 +1261,26 @@ assert_equal(mXsmall.any(0), numpy.matrix([True, True, False])) assert_equal(mXsmall.any(1), numpy.matrix([True, True, False]).T) + + def test_allany_oddities(self): + "Some fun with all and any" + store = empty(1, dtype=bool) + full = array([1,2,3], mask=True) + # + assert(full.all() is masked) + full.all(out=store) + assert(store) + assert(store._mask, True) + assert(store is not masked) + # + store = empty(1, dtype=bool) + assert(full.any() is masked) + full.any(out=store) + assert(not store) + assert(store._mask, True) + assert(store is not masked) + + def test_keepmask(self): "Tests the keep mask flag" x = masked_array([1,2,3], mask=[1,0,0]) @@ -1587,7 +1607,7 @@ assert_equal(b.shape, a.shape) assert_equal(b.fill_value, a.fill_value) -class TestArrayMethodsComplex(NumpyTestCase): +class TestArrayMathMethodsComplex(NumpyTestCase): "Test class for miscellaneous MaskedArrays methods." def setUp(self): "Base data definition." @@ -1685,7 +1705,53 @@ assert_almost_equal(x._data,y._data) + def test_choose(self): + "Test choose" + choices = [[0, 1, 2, 3], [10, 11, 12, 13], + [20, 21, 22, 23], [30, 31, 32, 33]] + chosen = choose([2, 3, 1, 0], choices) + assert_equal(chosen, array([20, 31, 12, 3])) + chosen = choose([2, 4, 1, 0], choices, mode='clip') + assert_equal(chosen, array([20, 31, 12, 3])) + chosen = choose([2, 4, 1, 0], choices, mode='wrap') + assert_equal(chosen, array([20, 1, 12, 3])) + # Check with some masked indices + indices_ = array([2, 4, 1, 0], mask=[1,0,0,1]) + chosen = choose(indices_, choices, mode='wrap') + assert_equal(chosen, array([99, 1, 12, 99])) + assert_equal(chosen.mask, [1,0,0,1]) + # Check with some masked choices + choices = array(choices, mask=[[0, 0, 0, 1], [1, 1, 0, 1], + [1, 0, 0, 0], [0, 0, 0, 0]]) + indices_ = [2, 3, 1, 0] + chosen = choose(indices_, choices, mode='wrap') + assert_equal(chosen, array([20, 31, 12, 3])) + assert_equal(chosen.mask, [1,0,0,1]) + + def test_choose_with_out(self): + "Test choose with an explicit out keyword" + choices = [[0, 1, 2, 3], [10, 11, 12, 13], + [20, 21, 22, 23], [30, 31, 32, 33]] + store = empty(4, dtype=int) + chosen = choose([2, 3, 1, 0], choices, out=store) + assert_equal(store, array([20, 31, 12, 3])) + assert(store is chosen) + # Check with some masked indices + out + store = empty(4, dtype=int) + indices_ = array([2, 3, 1, 0], mask=[1,0,0,1]) + chosen = choose(indices_, choices, mode='wrap', out=store) + assert_equal(store, array([99, 31, 12, 99])) + assert_equal(store.mask, [1,0,0,1]) + # Check with some masked choices + out ina ndarray ! 
+ choices = array(choices, mask=[[0, 0, 0, 1], [1, 1, 0, 1], + [1, 0, 0, 0], [0, 0, 0, 0]]) + indices_ = [2, 3, 1, 0] + store = empty(4, dtype=int).view(ndarray) + chosen = choose(indices_, choices, mode='wrap', out=store) + assert_equal(store, array([999999, 31, 12, 999999])) + + ############################################################################### #------------------------------------------------------------------------------ if __name__ == "__main__": From numpy-svn at scipy.org Sat Jun 7 01:08:46 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 7 Jun 2008 00:08:46 -0500 (CDT) Subject: [Numpy-svn] r5257 - trunk/numpy/core/tests Message-ID: <20080607050846.0F7A539C664@scipy.org> Author: christoph.weidemann Date: 2008-06-07 00:08:06 -0500 (Sat, 07 Jun 2008) New Revision: 5257 Modified: trunk/numpy/core/tests/test_regression.py Log: Testcases for ticket #791 Modified: trunk/numpy/core/tests/test_regression.py =================================================================== --- trunk/numpy/core/tests/test_regression.py 2008-06-07 02:17:17 UTC (rev 5256) +++ trunk/numpy/core/tests/test_regression.py 2008-06-07 05:08:06 UTC (rev 5257) @@ -1067,10 +1067,62 @@ self.info = getattr(obj, 'info', '') dat = TestArray([[1,2,3,4],[5,6,7,8]],'jubba') assert dat.info == 'jubba' + dat.resize((4,2)) + assert dat.info == 'jubba' + dat.sort() + assert dat.info == 'jubba' + dat.fill(2) + assert dat.info == 'jubba' + dat.put([2,3,4],[6,3,4]) + assert dat.info == 'jubba' + dat.setfield(4, np.int32,0) + assert dat.info == 'jubba' + dat.setflags() + assert dat.info == 'jubba' + assert dat.all(1).info == 'jubba' + assert dat.any(1).info == 'jubba' + assert dat.argmax(1).info == 'jubba' + assert dat.argmin(1).info == 'jubba' + assert dat.argsort(1).info == 'jubba' + assert dat.astype(TestArray).info == 'jubba' + assert dat.byteswap().info == 'jubba' + assert dat.clip(2,7).info == 'jubba' + assert dat.compress([0,1,1]).info == 'jubba' + assert dat.conj().info == 'jubba' + assert dat.conjugate().info == 'jubba' + assert dat.copy().info == 'jubba' + dat2 = TestArray([2, 3, 1, 0],'jubba') + choices = [[0, 1, 2, 3], [10, 11, 12, 13], + [20, 21, 22, 23], [30, 31, 32, 33]] + assert dat2.choose(choices).info == 'jubba' + assert dat.cumprod(1).info == 'jubba' + assert dat.cumsum(1).info == 'jubba' + assert dat.diagonal().info == 'jubba' + assert dat.flatten().info == 'jubba' + assert dat.getfield(np.int32,0).info == 'jubba' + assert dat.imag.info == 'jubba' + assert dat.max(1).info == 'jubba' assert dat.mean(1).info == 'jubba' + assert dat.min(1).info == 'jubba' + assert dat.newbyteorder().info == 'jubba' + assert dat.nonzero()[0].info == 'jubba' + assert dat.nonzero()[1].info == 'jubba' + assert dat.prod(1).info == 'jubba' + assert dat.ptp(1).info == 'jubba' + assert dat.ravel().info == 'jubba' + assert dat.real.info == 'jubba' + assert dat.repeat(2).info == 'jubba' + assert dat.reshape((2,4)).info == 'jubba' + assert dat.round().info == 'jubba' + assert dat.squeeze().info == 'jubba' assert dat.std(1).info == 'jubba' - assert dat.clip(2,7).info == 'jubba' - assert dat.imag.info == 'jubba' + assert dat.sum(1).info == 'jubba' + assert dat.swapaxes(0,1).info == 'jubba' + assert dat.take([2,3,5]).info == 'jubba' + assert dat.transpose().info == 'jubba' + assert dat.T.info == 'jubba' + assert dat.var(1).info == 'jubba' + assert dat.view(TestArray).info == 'jubba' def check_recarray_tolist(self, level=rlevel): From numpy-svn at scipy.org Sat Jun 7 11:58:07 2008 From: numpy-svn at scipy.org 
(numpy-svn at scipy.org) Date: Sat, 7 Jun 2008 10:58:07 -0500 (CDT) Subject: [Numpy-svn] r5258 - in trunk: . numpy/distutils/command numpy/fft numpy/lib numpy/linalg numpy/numarray numpy/random Message-ID: <20080607155807.ED742C7C060@scipy.org> Author: cdavid Date: 2008-06-07 10:57:45 -0500 (Sat, 07 Jun 2008) New Revision: 5258 Modified: trunk/ trunk/numpy/distutils/command/scons.py trunk/numpy/fft/SConstruct trunk/numpy/lib/SConstruct trunk/numpy/linalg/SConstruct trunk/numpy/numarray/SConstruct trunk/numpy/random/SConstruct Log: Merged revisions 5204-5257 via svnmerge from http://svn.scipy.org/svn/numpy/branches/cdavid ........ r5205 | cdavid | 2008-05-20 17:14:30 +0900 (Tue, 20 May 2008) | 3 lines Initialized merge tracking via "svnmerge" with revisions "1-5204" from http://svn.scipy.org/svn/numpy/trunk ........ r5206 | cdavid | 2008-05-20 17:17:27 +0900 (Tue, 20 May 2008) | 7 lines Current handling of bootstrapping is flawed: I should handle it at the distutils level, not at the scons level. This is the first step to detect bootstrapping at distutils level, and pass its state to scons through command line. ........ r5207 | cdavid | 2008-05-20 17:35:01 +0900 (Tue, 20 May 2008) | 1 line Fix typo when passing bootstrapping option to scons. ........ r5208 | cdavid | 2008-05-20 17:41:11 +0900 (Tue, 20 May 2008) | 5 lines Do not mess with __NUMPY_SETUP__ in scons scripts anymore: this is handled in numscons. ........ r5209 | cdavid | 2008-05-20 17:43:46 +0900 (Tue, 20 May 2008) | 1 line Forgot one file in lapack_lite when no LAPACK is available. ........ r5210 | cdavid | 2008-05-20 18:24:38 +0900 (Tue, 20 May 2008) | 1 line Handle fortran compiler on open-solaris ........ Property changes on: trunk ___________________________________________________________________ Name: svnmerge-integrated - /branches/aligned_alloca:1-5127 /branches/build_with_scons:1-4676 /branches/cdavid:1-5203 /branches/cleanconfig_rtm:1-4677 /branches/distutils-revamp:1-2752 /branches/distutils_scons_command:1-4619 /branches/multicore:1-3687 /branches/numpy.scons:1-4484 /trunk:1-2871 + /branches/aligned_alloca:1-5127 /branches/build_with_scons:1-4676 /branches/cdavid:1-5257 /branches/cleanconfig_rtm:1-4677 /branches/distutils-revamp:1-2752 /branches/distutils_scons_command:1-4619 /branches/multicore:1-3687 /branches/numpy.scons:1-4484 /trunk:1-2871 Modified: trunk/numpy/distutils/command/scons.py =================================================================== --- trunk/numpy/distutils/command/scons.py 2008-06-07 05:08:06 UTC (rev 5257) +++ trunk/numpy/distutils/command/scons.py 2008-06-07 15:57:45 UTC (rev 5258) @@ -85,6 +85,8 @@ return 'g77' elif compiler.compiler_type == 'gnu95': return 'gfortran' + elif compiler.compiler_type == 'sun': + return 'sunf77' else: # XXX: Just give up for now, and use generic fortran compiler return 'fortran' @@ -196,6 +198,15 @@ raise ValueError(msg) return common +def is_bootstrapping(): + import __builtin__ + try: + __builtin__.__NUMPY_SETUP__ + return True + except AttributeError: + return False + __NUMPY_SETUP__ = False + class scons(old_build_ext): # XXX: add an option to the scons command for configuration (auto/force/cache). description = "Scons builder" @@ -303,6 +314,8 @@ else: # nothing to do, just leave it here. return + + print "is bootstrapping ? %s" % is_bootstrapping() # XXX: when a scons script is missing, scons only prints warnings, and # does not return a failure (status is 0). We have to detect this from # distutils (this cannot work for recursive scons builds...) 
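The ``is_bootstrapping`` helper above probes for the ``__NUMPY_SETUP__`` flag that numpy's own ``setup.py`` sets in the builtin namespace while numpy is being built. A rough Python 3 rendering of the same check (``builtins`` standing in for the ``__builtin__`` module used here)::

    import builtins

    def is_bootstrapping():
        """Return True while numpy itself is being built by its setup.py."""
        try:
            builtins.__NUMPY_SETUP__   # set by setup.py during the build only
            return True
        except AttributeError:
            return False

    print(is_bootstrapping())   # False in an ordinary interpreter session
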
@@ -326,6 +339,11 @@ post_hooks = self.post_hooks pkg_names = self.pkg_names + if is_bootstrapping(): + bootstrap = 1 + else: + bootstrap = 0 + for sconscript, pre_hook, post_hook, pkg_name in zip(sconscripts, pre_hooks, post_hooks, pkg_names): @@ -364,6 +382,7 @@ elif int(self.silent) == 3: cmd.append('-s') cmd.append('silent=%d' % int(self.silent)) + cmd.append('bootstrapping=%d' % bootstrap) cmdstr = ' '.join(cmd) if int(self.silent) < 1: log.info("Executing scons command (pkg is %s): %s ", pkg_name, cmdstr) Modified: trunk/numpy/fft/SConstruct =================================================================== --- trunk/numpy/fft/SConstruct 2008-06-07 05:08:06 UTC (rev 5257) +++ trunk/numpy/fft/SConstruct 2008-06-07 15:57:45 UTC (rev 5258) @@ -1,8 +1,5 @@ -# Last Change: Thu Oct 18 09:00 PM 2007 J +# Last Change: Tue May 20 05:00 PM 2008 J # vim:syntax=python -import __builtin__ -__builtin__.__NUMPY_SETUP__ = True -from numpy.distutils.misc_util import get_numpy_include_dirs from numscons import GetNumpyEnvironment, scons_get_paths env = GetNumpyEnvironment(ARGUMENTS) Modified: trunk/numpy/lib/SConstruct =================================================================== --- trunk/numpy/lib/SConstruct 2008-06-07 05:08:06 UTC (rev 5257) +++ trunk/numpy/lib/SConstruct 2008-06-07 15:57:45 UTC (rev 5258) @@ -1,8 +1,5 @@ -# Last Change: Thu Oct 18 09:00 PM 2007 J +# Last Change: Tue May 20 05:00 PM 2008 J # vim:syntax=python -import __builtin__ -__builtin__.__NUMPY_SETUP__ = True -from numpy.distutils.misc_util import get_numpy_include_dirs from numscons import GetNumpyEnvironment, scons_get_paths env = GetNumpyEnvironment(ARGUMENTS) Modified: trunk/numpy/linalg/SConstruct =================================================================== --- trunk/numpy/linalg/SConstruct 2008-06-07 05:08:06 UTC (rev 5257) +++ trunk/numpy/linalg/SConstruct 2008-06-07 15:57:45 UTC (rev 5258) @@ -1,11 +1,7 @@ -# Last Change: Fri Nov 16 05:00 PM 2007 J +# Last Change: Tue May 20 05:00 PM 2008 J # vim:syntax=python import os.path -import __builtin__ -__builtin__.__NUMPY_SETUP__ = True - -from numpy.distutils.misc_util import get_numpy_include_dirs, get_mathlibs from numscons import GetNumpyEnvironment, scons_get_paths, \ scons_get_mathlib from numscons import CheckF77LAPACK @@ -27,7 +23,7 @@ sources = ['lapack_litemodule.c'] if not use_lapack: - sources.extend(['zlapack_lite.c', 'dlapack_lite.c', 'blas_lite.c', - 'dlamch.c', 'f2c_lite.c']) + sources.extend(['python_xerbla.c', 'zlapack_lite.c', 'dlapack_lite.c', + 'blas_lite.c', 'dlamch.c', 'f2c_lite.c']) lapack_lite = env.NumpyPythonExtension('lapack_lite', source = sources) Modified: trunk/numpy/numarray/SConstruct =================================================================== --- trunk/numpy/numarray/SConstruct 2008-06-07 05:08:06 UTC (rev 5257) +++ trunk/numpy/numarray/SConstruct 2008-06-07 15:57:45 UTC (rev 5258) @@ -1,8 +1,5 @@ -# Last Change: Fri Oct 19 09:00 AM 2007 J +# Last Change: Tue May 20 05:00 PM 2008 J # vim:syntax=python -import __builtin__ -__builtin__.__NUMPY_SETUP__ = True -from numpy.distutils.misc_util import get_numpy_include_dirs from numscons import GetNumpyEnvironment, scons_get_paths env = GetNumpyEnvironment(ARGUMENTS) Modified: trunk/numpy/random/SConstruct =================================================================== --- trunk/numpy/random/SConstruct 2008-06-07 05:08:06 UTC (rev 5257) +++ trunk/numpy/random/SConstruct 2008-06-07 15:57:45 UTC (rev 5258) @@ -1,11 +1,7 @@ -# Last Change: Tue Nov 13 11:00 PM 2007 J +# Last 
Change: Tue May 20 05:00 PM 2008 J # vim:syntax=python import os -import __builtin__ -__builtin__.__NUMPY_SETUP__ = True - -from numpy.distutils.misc_util import get_numpy_include_dirs, get_mathlibs from numscons import GetNumpyEnvironment, scons_get_paths, \ scons_get_mathlib From numpy-svn at scipy.org Sat Jun 7 18:43:06 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 7 Jun 2008 17:43:06 -0500 (CDT) Subject: [Numpy-svn] r5259 - trunk/numpy/core/src Message-ID: <20080607224306.2A4B339C2DF@scipy.org> Author: charris Date: 2008-06-07 17:43:03 -0500 (Sat, 07 Jun 2008) New Revision: 5259 Modified: trunk/numpy/core/src/arrayobject.c Log: Fix missing return value, closes ticket #813. Modified: trunk/numpy/core/src/arrayobject.c =================================================================== --- trunk/numpy/core/src/arrayobject.c 2008-06-07 15:57:45 UTC (rev 5258) +++ trunk/numpy/core/src/arrayobject.c 2008-06-07 22:43:03 UTC (rev 5259) @@ -6645,19 +6645,20 @@ static int _zerofill(PyArrayObject *ret) { - intp n; - if (PyDataType_REFCHK(ret->descr)) { PyObject *zero = PyInt_FromLong(0); PyArray_FillObjectArray(ret, zero); Py_DECREF(zero); - if (PyErr_Occurred()) {Py_DECREF(ret); return -1;} + if (PyErr_Occurred()) { + Py_DECREF(ret); + return -1; + } } else { - n = PyArray_NBYTES(ret); + intp n = PyArray_NBYTES(ret); memset(ret->data, 0, n); - return 0; - } + } + return 0; } From numpy-svn at scipy.org Sat Jun 7 23:58:00 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 7 Jun 2008 22:58:00 -0500 (CDT) Subject: [Numpy-svn] r5260 - in trunk/numpy/ma: . tests Message-ID: <20080608035800.849BE39C260@scipy.org> Author: pierregm Date: 2008-06-07 22:57:56 -0500 (Sat, 07 Jun 2008) New Revision: 5260 Modified: trunk/numpy/ma/core.py trunk/numpy/ma/tests/test_core.py Log: * revamped the functions min/max so that the methods are called * revamped the methods sum/prod/var/std/min/max/round to accept an explicit out argument * Force var to return masked when a masked scalar was returned Modified: trunk/numpy/ma/core.py =================================================================== --- trunk/numpy/ma/core.py 2008-06-07 22:43:03 UTC (rev 5259) +++ trunk/numpy/ma/core.py 2008-06-08 03:57:56 UTC (rev 5260) @@ -1131,20 +1131,6 @@ return d -def _fill_output_mask(output, mask): - """Fills the mask of output (if any) with mask. - Private functions used for the methods accepting out as an argument.""" - if isinstance(output,MaskedArray): - outmask = getattr(output, '_mask', nomask) - if (outmask is nomask): - if mask is not nomask: - outmask = output._mask = make_mask_none(output.shape) - outmask.flat = mask - else: - outmask.flat = mask - return output - - class MaskedArray(ndarray): """Arrays with possibly masked values. Masked values of True exclude the corresponding element from any computation. @@ -2087,7 +2073,8 @@ """ return narray(self.filled(0), copy=False).nonzero() - #............................................ + + def trace(self, offset=0, axis1=0, axis2=1, dtype=None, out=None): """a.trace(offset=0, axis1=0, axis2=1, dtype=None, out=None) @@ -2103,12 +2090,15 @@ return result.astype(dtype) else: D = self.diagonal(offset=offset, axis1=axis1, axis2=axis2) - return D.astype(dtype).filled(0).sum(axis=None) - #............................................ + return D.astype(dtype).filled(0).sum(axis=None, out=out) + + def sum(self, axis=None, dtype=None, out=None): - """Return the sum of the array elements over the given axis. 
-Masked elements are set to 0 internally. + """a.sum(axis=None, dtype=None, out=None) + Return the sum of the array elements over the given axis. + Masked elements are set to 0 internally. + Parameters ---------- axis : {None, -1, int}, optional @@ -2120,7 +2110,7 @@ the type of a is an integer type of precision less than the default platform integer, then the default platform integer precision is used. Otherwise, the dtype is the same as that of a. - out : ndarray, optional + out : {None, ndarray}, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type will be cast if necessary. @@ -2128,106 +2118,242 @@ """ _mask = ndarray.__getattribute__(self, '_mask') - if _mask is nomask: - mask = nomask - else: - mask = _mask.all(axis) - if (not mask.ndim) and mask: - if out is not None: - out = masked - return masked + newmask = _mask.all(axis=axis) + # No explicit output if out is None: result = self.filled(0).sum(axis, dtype=dtype).view(type(self)) if result.ndim: - result.__setmask__(mask) - else: - result = self.filled(0).sum(axis, dtype=dtype, out=out) - _fill_output_mask(out, mask) - return result + result.__setmask__(newmask) + elif newmask: + result = masked + return result + # Explicit output + result = self.filled(0).sum(axis, dtype=dtype, out=out) + if isinstance(out, MaskedArray): + outmask = getattr(out, '_mask', nomask) + if (outmask is nomask): + outmask = out._mask = make_mask_none(out.shape) + outmask.flat = newmask + return out - def cumsum(self, axis=None, dtype=None): - """Return the cumulative sum of the elements of the array - along the given axis. - Masked values are set to 0 internally. + def cumsum(self, axis=None, dtype=None, out=None): + """a.cumsum(axis=None, dtype=None, out=None) - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. If not - given, the current dtype is used instead. + Return the cumulative sum of the elements along the given axis. + The cumulative sum is calculated over the flattened array by + default, otherwise over the specified axis. + + Masked values are set to 0 internally during the computation. + However, their position is saved, and the result will be masked at + the same locations. + + Parameters + ---------- + axis : {None, -1, int}, optional + Axis along which the sum is computed. The default + (`axis` = None) is to compute over the flattened array. + dtype : {None, dtype}, optional + Determines the type of the returned array and of the accumulator + where the elements are summed. If dtype has the value None and + the type of a is an integer type of precision less than the default + platform integer, then the default platform integer precision is + used. Otherwise, the dtype is the same as that of a. + out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. + WARNING : The mask is lost if out is not a valid MaskedArray ! + + Returns + ------- + cumsum : ndarray. + A new array holding the result is returned unless ``out`` is + specified, in which case a reference to ``out`` is returned. 
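The behaviour of the rewritten ``sum`` above, including the new ``out`` handling, in a compact sketch (shapes and values invented for illustration)::

    import numpy.ma as ma

    a = ma.array([[1, 2, 3],
                  [4, 5, 6]], mask=[[0, 1, 0], [1, 1, 1]])
    print(a.sum(axis=1))      # [4 --] -- masked entries add 0; a fully masked row stays masked
    out = ma.empty(2, dtype=int)
    a.sum(axis=1, out=out)    # both the data and the mask are written into out
    print(out)                # [4 --]
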
+ + Example + ------- + >>> print array(arange(10),mask=[0,0,0,1,1,1,0,0,0,0]).cumsum() + [0 1 3 -- -- -- 9 16 24 33] + + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + """ - result = self.filled(0).cumsum(axis=axis, dtype=dtype).view(type(self)) - result.__setmask__(self.mask) + result = self.filled(0).cumsum(axis=axis, dtype=dtype, out=out) + if out is not None: + if isinstance(out, MaskedArray): + out.__setmask__(self.mask) + return out + result = result.view(type(self)) + result.__setmask__(self._mask) return result - def prod(self, axis=None, dtype=None): - """Return the product of the elements of the array along the - given axis. - Masked elements are set to 1 internally. + def prod(self, axis=None, dtype=None, out=None): + """a.prod(axis=None, dtype=None, out=None) - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. If not - given, the current dtype is used instead. + Return the product of the array elements over the given axis. + Masked elements are set to 1 internally for computation. + Parameters + ---------- + axis : {None, -1, int}, optional + Axis over which the product is taken. If None is used, then the + product is over all the array elements. + dtype : {None, dtype}, optional + Determines the type of the returned array and of the accumulator + where the elements are multiplied. If dtype has the value None and + the type of a is an integer type of precision less than the default + platform integer, then the default platform integer precision is + used. Otherwise, the dtype is the same as that of a. + out : {None, array}, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + + Returns + ------- + product_along_axis : {array, scalar}, see dtype parameter above. + Returns an array whose shape is the same as a with the specified + axis removed. Returns a 0d array when a is 1d or axis=None. + Returns a reference to the specified output array if specified. + + See Also + -------- + prod : equivalent function + + Examples + -------- + >>> prod([1.,2.]) + 2.0 + >>> prod([1.,2.], dtype=int32) + 2 + >>> prod([[1.,2.],[3.,4.]]) + 24.0 + >>> prod([[1.,2.],[3.,4.]], axis=1) + array([ 2., 12.]) + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + """ - if self._mask is nomask: - mask = nomask - else: - mask = self._mask.all(axis) - if (not mask.ndim) and mask: - return masked - result = self.filled(1).prod(axis=axis, dtype=dtype).view(type(self)) - if result.ndim: - result.__setmask__(mask) - return result + _mask = ndarray.__getattribute__(self, '_mask') + newmask = _mask.all(axis=axis) + # No explicit output + if out is None: + result = self.filled(1).prod(axis, dtype=dtype).view(type(self)) + if result.ndim: + result.__setmask__(newmask) + elif newmask: + result = masked + return result + # Explicit output + result = self.filled(1).prod(axis, dtype=dtype, out=out) + if isinstance(out,MaskedArray): + outmask = getattr(out, '_mask', nomask) + if (outmask is nomask): + outmask = out._mask = make_mask_none(out.shape) + outmask.flat = newmask + return out product = prod - def cumprod(self, axis=None, dtype=None): - """Return the cumulative product of the elements of the array - along the given axis. 
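The same filling convention applies to the ``prod`` and ``cumprod`` methods described here, with 1 as the neutral element instead of 0; for example (made-up values)::

    import numpy.ma as ma

    a = ma.array([2, 3, 4], mask=[0, 1, 0])
    print(a.prod())      # 8 -- the masked 3 is replaced by 1 before multiplying
    print(a.cumprod())   # [2 -- 8] -- cumulative product, masked where a is masked
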
+ def cumprod(self, axis=None, dtype=None, out=None): + """ + a.cumprod(axis=None, dtype=None, out=None) - Masked values are set to 1 internally. + Return the cumulative product of the elements along the given axis. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. If not - given, the current dtype is used instead. + The cumulative product is taken over the flattened array by + default, otherwise over the specified axis. - """ - result = self.filled(1).cumprod(axis=axis, dtype=dtype).view(type(self)) - result.__setmask__(self.mask) + Masked values are set to 1 internally during the computation. + However, their position is saved, and the result will be masked at + the same locations. + + Parameters + ---------- + axis : {None, -1, int}, optional + Axis along which the product is computed. The default + (`axis` = None) is to compute over the flattened array. + dtype : {None, dtype}, optional + Determines the type of the returned array and of the accumulator + where the elements are multiplied. If dtype has the value None and + the type of a is an integer type of precision less than the default + platform integer, then the default platform integer precision is + used. Otherwise, the dtype is the same as that of a. + out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. + WARNING : The mask is lost if out is not a valid MaskedArray ! + + Returns + ------- + cumprod : ndarray. + A new array holding the result is returned unless out is + specified, in which case a reference to out is returned. + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + """ + result = self.filled(1).cumprod(axis=axis, dtype=dtype, out=out) + if out is not None: + if isinstance(out, MaskedArray): + out.__setmask__(self._mask) + return out + result = result.view(type(self)) + result.__setmask__(self._mask) return result + def mean(self, axis=None, dtype=None, out=None): - """Average the array over the given axis. Equivalent to + """a.mean(axis=None, dtype=None, out=None) -> mean - a.sum(axis, dtype) / a.size(axis). + Returns the average of the array elements. The average is taken over the + flattened array by default, otherwise over the specified axis. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. If not - given, the current dtype is used instead. + Parameters + ---------- + axis : integer + Axis along which the means are computed. The default is + to compute the mean of the flattened array. + dtype : type + Type to use in computing the means. For arrays of + integer type the default is float32, for arrays of float types it + is the same as the array type. + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + Returns + ------- + mean : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. 
+ + See Also + -------- + var : variance + std : standard deviation + + Notes + ----- + The mean is the sum of the elements along the axis divided by the + number of elements. + + """ if self._mask is nomask: result = super(MaskedArray, self).mean(axis=axis, dtype=dtype) @@ -2236,7 +2362,13 @@ cnt = self.count(axis=axis) result = dsum*1./cnt if out is not None: - out.flat = result.ravel() + out.flat = result + if isinstance(out, MaskedArray): + outmask = getattr(out, '_mask', nomask) + if (outmask is nomask): + outmask = out._mask = make_mask_none(out.shape) + outmask.flat = getattr(result, '_mask', nomask) + return out return result def anom(self, axis=None, dtype=None): @@ -2259,87 +2391,149 @@ else: return (self - expand_dims(m,axis)) - def var(self, axis=None, dtype=None, ddof=0): - """Return the variance, a measure of the spread of a distribution. + def var(self, axis=None, dtype=None, out=None, ddof=0): + """a.var(axis=None, dtype=None, out=None, ddof=0) -> variance - The variance is the average of the squared deviations from the - mean, i.e. var = mean(abs(x - x.mean())**2). + Returns the variance of the array elements, a measure of the spread of a + distribution. The variance is computed for the flattened array by default, + otherwise over the specified axis. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. If not - given, the current dtype is used instead. + Parameters + ---------- + axis : integer + Axis along which the variance is computed. The default is to + compute the variance of the flattened array. + dtype : data-type + Type to use in computing the variance. For arrays of integer type + the default is float32, for arrays of float types it is the same as + the array type. + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + ddof : {0, integer}, + Means Delta Degrees of Freedom. The divisor used in calculation is + N - ddof. - Notes - ----- - The value returned is by default a biased estimate of the - true variance, since the mean is computed by dividing by N-ddof. - For the (more standard) unbiased estimate, use ddof=1 or. - Note that for complex numbers the absolute value is taken before - squaring, so that the result is always real and nonnegative. + Returns + ------- + variance : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. + See Also + -------- + std : standard deviation + mean: average + + Notes + ----- + The variance is the average of the squared deviations from the mean, + i.e. var = mean(abs(x - x.mean())**2). The mean is computed by + dividing by N-ddof, where N is the number of elements. The argument + ddof defaults to zero; for an unbiased estimate supply ddof=1. Note + that for complex numbers the absolute value is taken before squaring, + so that the result is always real and nonnegative. + """ + # Easy case: nomask, business as usual if self._mask is nomask: - #???: Do we keep super, or var _data and take a view ? - return super(MaskedArray, self).var(axis=axis, dtype=dtype, - ddof=ddof) + return self._data.var(axis=axis, dtype=dtype, out=out, ddof=ddof) + # Some data are masked, yay! 
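The relationship used by ``mean`` and ``anom`` above, namely the sum of the unmasked values divided by their count, in a small example (values invented for illustration)::

    import numpy.ma as ma

    a = ma.array([1., 2., 3., 4.], mask=[0, 0, 1, 0])
    print(a.count())   # 3 -- number of unmasked entries
    print(a.mean())    # 2.333... == a.sum() / a.count()
    print(a.anom())    # [-1.333... -0.333... -- 1.666...] -- deviations from that mean
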
+ cnt = self.count(axis=axis)-ddof + danom = self.anom(axis=axis, dtype=dtype) + if iscomplexobj(self): + danom = umath.absolute(danom)**2 else: - cnt = self.count(axis=axis)-ddof - danom = self.anom(axis=axis, dtype=dtype) - if iscomplexobj(self): - danom = umath.absolute(danom)**2 - else: - danom *= danom - dvar = narray(danom.sum(axis) / cnt).view(type(self)) - if axis is not None: - dvar._mask = mask_or(self._mask.all(axis), (cnt==1)) + danom *= danom + dvar = divide(danom.sum(axis), cnt).view(type(self)) + # Apply the mask if it's not a scalar + if dvar.ndim: + dvar._mask = mask_or(self._mask.all(axis), (cnt<=ddof)) dvar._update_from(self) - return dvar + elif getattr(dvar,'_mask', False): + # Make sure that masked is returned when the scalar is masked. + dvar = masked + if out is not None: + if isinstance(out, MaskedArray): + out.__setmask__(True) + else: + out.flat = np.nan + return out + # In case with have an explicit output + if out is not None: + # Set the data + out.flat = dvar + # Set the mask if needed + if isinstance(out, MaskedArray): + out.__setmask__(dvar.mask) + return out + return dvar - def std(self, axis=None, dtype=None, ddof=0): - """Return the standard deviation, a measure of the spread of a - distribution. + def std(self, axis=None, dtype=None, out=None, ddof=0): + """a.std(axis=None, dtype=None, out=None, ddof=0) - The standard deviation is the square root of the average of - the squared deviations from the mean, i.e. + Returns the standard deviation of the array elements, a measure of the + spread of a distribution. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. - std = sqrt(mean(abs(x - x.mean())**2)). + Parameters + ---------- + axis : integer + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : type + Type to use in computing the standard deviation. For arrays of + integer type the default is float32, for arrays of float types it + is the same as the array type. + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + ddof : {0, integer} + Means Delta Degrees of Freedom. The divisor used in calculations + is N-ddof. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. - If not given, the current dtype is used instead. + Returns + ------- + standard deviation : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. - Notes - ----- - The value returned is by default a biased estimate of the - true standard deviation, since the mean is computed by dividing - by N-ddof. For the more standard unbiased estimate, use ddof=1. - Note that for complex numbers the absolute value is taken before - squaring, so that the result is always real and nonnegative. - """ - dvar = self.var(axis,dtype,ddof=ddof) - if axis is not None or dvar is not masked: + See Also + -------- + var : variance + mean : average + + Notes + ----- + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e. var = sqrt(mean(abs(x - x.mean())**2)). 
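Numerically, the ``ddof`` handling described above amounts to dividing the summed squared anomalies by ``count - ddof``; a quick check on made-up data::

    import numpy.ma as ma

    a = ma.array([1., 2., 3., 4.], mask=[0, 0, 0, 1])
    print(a.var())         # 0.666... -- squared deviations of the 3 unmasked values, divided by 3
    print(a.var(ddof=1))   # 1.0     -- divisor is N - ddof = 2
    print(a.std(ddof=1))   # 1.0     -- square root of the ddof=1 variance
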
The + computed standard deviation is computed by dividing by the number of + elements, N-ddof. The option ddof defaults to zero, that is, a biased + estimate. Note that for complex numbers std takes the absolute value before + squaring, so that the result is always real and nonnegative. + + """ + dvar = self.var(axis=axis,dtype=dtype,out=out, ddof=ddof) + if dvar is not masked: dvar = sqrt(dvar) + if out is not None: + out **= 0.5 + return out return dvar #............................................ def round(self, decimals=0, out=None): - result = self._data.round(decimals).view(type(self)) + result = self._data.round(decimals=decimals, out=out).view(type(self)) result._mask = self._mask result._update_from(self) + # No explicit output: we're done if out is None: return result - out[:] = result - return + if isinstance(out, MaskedArray): + out.__setmask__(self._mask) + return out round.__doc__ = ndarray.round.__doc__ #............................................ @@ -2528,43 +2722,56 @@ return #............................................ - def min(self, axis=None, fill_value=None): - """Return the minimum of a along the given axis. + def min(self, axis=None, out=None, fill_value=None): + """a.min(axis=None, out=None, fill_value=None) - Masked values are filled with fill_value. + Return the minimum along a given axis. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - fill_value : {var}, optional - Value used to fill in the masked values. - If None, use the the output of minimum_fill_value(). + Parameters + ---------- + axis : {None, int}, optional + Axis along which to operate. By default, ``axis`` is None and the + flattened input is used. + out : array_like, optional + Alternative output array in which to place the result. Must + be of the same shape and buffer length as the expected output. + fill_value : {var}, optional + Value used to fill in the masked values. + If None, use the output of minimum_fill_value(). - """ - mask = self._mask - # Check all/nothing case ...... - if mask is nomask: - return super(MaskedArray, self).min(axis=axis) - elif (not mask.ndim) and mask: - return masked - # Get the mask ................ - if axis is None: - mask = umath.logical_and.reduce(mask.flat) - else: - mask = umath.logical_and.reduce(mask, axis=axis) - # Skip if all masked .......... - if not mask.ndim and mask: - return masked - # Get the fill value ........... + Returns + ------- + amin : array_like + New array holding the result. + If ``out`` was specified, ``out`` is returned. + + """ + _mask = ndarray.__getattribute__(self, '_mask') + newmask = _mask.all(axis=axis) if fill_value is None: fill_value = minimum_fill_value(self) - # Get the data ................ 
- result = self.filled(fill_value).min(axis=axis).view(type(self)) - if result.ndim > 0: - result._mask = mask - return result + # No explicit output + if out is None: + result = self.filled(fill_value).min(axis=axis, out=out).view(type(self)) + if result.ndim: + # Set the mask + result.__setmask__(newmask) + # Get rid of Infs + if newmask.ndim: + np.putmask(result, newmask, result.fill_value) + elif newmask: + result = masked + return result + # Explicit output + result = self.filled(fill_value).min(axis=axis, out=out) + if isinstance(out, MaskedArray): + outmask = getattr(out, '_mask', nomask) + if (outmask is nomask): + outmask = out._mask = make_mask_none(out.shape) + outmask.flat = newmask + else: + np.putmask(out, newmask, np.nan) + return out def mini(self, axis=None): if axis is None: @@ -2573,58 +2780,90 @@ return minimum.reduce(self, axis) #........................ - def max(self, axis=None, fill_value=None): - """Return the maximum/a along the given axis. + def max(self, axis=None, out=None, fill_value=None): + """a.max(axis=None, out=None, fill_value=None) - Masked values are filled with fill_value. + Return the maximum along a given axis. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - fill_value : {var}, optional - Value used to fill in the masked values. - If None, use the the output of maximum_fill_value(). + Parameters + ---------- + axis : {None, int}, optional + Axis along which to operate. By default, ``axis`` is None and the + flattened input is used. + out : array_like, optional + Alternative output array in which to place the result. Must + be of the same shape and buffer length as the expected output. + fill_value : {var}, optional + Value used to fill in the masked values. + If None, use the output of maximum_fill_value(). + + Returns + ------- + amax : array_like + New array holding the result. + If ``out`` was specified, ``out`` is returned. + """ - mask = self._mask - # Check all/nothing case ...... - if mask is nomask: - return super(MaskedArray, self).max(axis=axis) - elif (not mask.ndim) and mask: - return masked - # Check the mask .............. - if axis is None: - mask = umath.logical_and.reduce(mask.flat) - else: - mask = umath.logical_and.reduce(mask, axis=axis) - # Skip if all masked .......... - if not mask.ndim and mask: - return masked - # Get the fill value .......... + _mask = ndarray.__getattribute__(self, '_mask') + newmask = _mask.all(axis=axis) if fill_value is None: fill_value = maximum_fill_value(self) - # Get the data ................ - result = self.filled(fill_value).max(axis=axis).view(type(self)) - if result.ndim > 0: - result._mask = mask - return result - #........................ - def ptp(self, axis=None, fill_value=None): - """Return the visible data range (max-min) along the given axis. 
+ # No explicit output + if out is None: + result = self.filled(fill_value).max(axis=axis, out=out).view(type(self)) + if result.ndim: + # Set the mask + result.__setmask__(newmask) + # Get rid of Infs + if newmask.ndim: + np.putmask(result, newmask, result.fill_value) + elif newmask: + result = masked + return result + # Explicit output + result = self.filled(fill_value).max(axis=axis, out=out) + if isinstance(out, MaskedArray): + outmask = getattr(out, '_mask', nomask) + if (outmask is nomask): + outmask = out._mask = make_mask_none(out.shape) + outmask.flat = newmask + else: + np.putmask(out, newmask, np.nan) + return out - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - fill_value : {var}, optional - Value used to fill in the masked values. If None, the - maximum uses the maximum default, the minimum uses the - minimum default. + def ptp(self, axis=None, out=None, fill_value=None): + """a.ptp(axis=None, out=None) + Return (maximum - minimum) along the the given dimension + (i.e. peak-to-peak value). + + Parameters + ---------- + axis : {None, int}, optional + Axis along which to find the peaks. If None (default) the + flattened array is used. + out : array_like + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. + fill_value : {var}, optional + Value used to fill in the masked values. + + Returns + ------- + ptp : ndarray. + A new array holding the result, unless ``out`` was + specified, in which case a reference to ``out`` is returned. + + """ - return self.max(axis, fill_value) - self.min(axis, fill_value) + if out is None: + result = self.max(axis=axis, fill_value=fill_value) + result -= self.min(axis=axis, fill_value=fill_value) + return result + out.flat = self.max(axis=axis, out=out, fill_value=fill_value) + out -= self.min(axis=axis, fill_value=fill_value) + return out # Array methods --------------------------------------- copy = _arraymethod('copy') @@ -2864,35 +3103,32 @@ self.fill_value_func = maximum_fill_value #.......................................................... -def min(array, axis=None, out=None): - """Return the minima along the given axis. +def min(obj, axis=None, out=None, fill_value=None): + try: + return obj.min(axis=axis, fill_value=fill_value, out=out) + except (AttributeError, TypeError): + # If obj doesn't have a max method, + # ...or if the method doesn't accept a fill_value argument + return asanyarray(obj).min(axis=axis, fill_value=fill_value, out=out) +min.__doc__ = MaskedArray.min.__doc__ - If `axis` is None, applies to the flattened array. +def max(obj, axis=None, out=None, fill_value=None): + try: + return obj.max(axis=axis, fill_value=fill_value, out=out) + except (AttributeError, TypeError): + # If obj doesn't have a max method, + # ...or if the method doesn't accept a fill_value argument + return asanyarray(obj).max(axis=axis, fill_value=fill_value, out=out) +max.__doc__ = MaskedArray.max.__doc__ - """ - if out is not None: - raise TypeError("Output arrays Unsupported for masked arrays") - if axis is None: - return minimum(array) - else: - return minimum.reduce(array, axis) -min.__doc__ = MaskedArray.min.__doc__ -#............................ 
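The min/max semantics above can be summarised with a small example; ``fill_value`` only controls how masked entries are filled internally, so a fully masked slice comes back masked, or as NaN when written into a plain ndarray output. A sketch, assuming the interface shown in the diff::

    import numpy as np
    import numpy.ma as ma

    x = ma.array([[1, 2],
                  [3, 4]], mask=[[0, 1],
                                 [0, 1]])
    x.min(axis=0)       # [1, --]   (the second column is fully masked)
    x.max(axis=0)       # [3, --]

    # A plain ndarray as explicit output: masked slices are filled with
    # NaN, so a float output array is needed here.
    out = np.empty(2, dtype=float)
    x.min(axis=0, out=out)      # out == [1., nan]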
-def max(obj, axis=None, out=None): - if out is not None: - raise TypeError("Output arrays Unsupported for masked arrays") - if axis is None: - return maximum(obj) - else: - return maximum.reduce(obj, axis) -max.__doc__ = MaskedArray.max.__doc__ -#............................. -def ptp(obj, axis=None): +def ptp(obj, axis=None, out=None, fill_value=None): """a.ptp(axis=None) = a.max(axis)-a.min(axis)""" try: - return obj.max(axis)-obj.min(axis) - except AttributeError: - return max(obj, axis=axis) - min(obj, axis=axis) + return obj.ptp(axis, out=out, fill_value=fill_value) + except (AttributeError, TypeError): + # If obj doesn't have a max method, + # ...or if the method doesn't accept a fill_value argument + return asanyarray(obj).ptp(axis=axis, fill_value=fill_value, out=out) ptp.__doc__ = MaskedArray.ptp.__doc__ @@ -3265,13 +3501,13 @@ yv = getdata(y) if x is masked: ndtype = yv.dtype - xm = np.ones(fc.shape, dtype=MaskType) +# xm = np.ones(fc.shape, dtype=MaskType) elif y is masked: ndtype = xv.dtype - ym = np.ones(fc.shape, dtype=MaskType) +# ym = np.ones(fc.shape, dtype=MaskType) else: ndtype = np.max([xv.dtype, yv.dtype]) - xm = getmask(x) +# xm = getmask(x) d = np.empty(fc.shape, dtype=ndtype).view(MaskedArray) np.putmask(d._data, fc, xv.astype(ndtype)) np.putmask(d._data, notfc, yv.astype(ndtype)) Modified: trunk/numpy/ma/tests/test_core.py =================================================================== --- trunk/numpy/ma/tests/test_core.py 2008-06-07 22:43:03 UTC (rev 5259) +++ trunk/numpy/ma/tests/test_core.py 2008-06-08 03:57:56 UTC (rev 5260) @@ -1028,6 +1028,23 @@ assert_equal(amask.min(0), [5,6,7,8]) assert(amask.max(1)[0].mask) assert(amask.min(1)[0].mask) + #........................ + def test_minmax_funcs_with_out(self): + mask = numpy.random.rand(12).round() + xm = array(numpy.random.uniform(0,10,12),mask=mask) + xm.shape = (3,4) + for funcname in ('min', 'max'): + # Initialize + npfunc = getattr(numpy, funcname) + mafunc = getattr(coremodule, funcname) + # Use the np version + nout = np.empty((4,), dtype=int) + result = npfunc(xm,axis=0,out=nout) + assert(result is nout) + # Use the ma version + nout.fill(-999) + result = mafunc(xm,axis=0,out=nout) + assert(result is nout) #............................................................................... @@ -1607,6 +1624,43 @@ assert_equal(b.shape, a.shape) assert_equal(b.fill_value, a.fill_value) + + def test_varstd_specialcases(self): + "Test a special case for var" + nout = np.empty(1, dtype=float) + mout = empty(1, dtype=float) + # + x = array(arange(10), mask=True) + for methodname in ('var', 'std'): + method = getattr(x,methodname) + assert(method() is masked) + assert(method(0) is masked) + assert(method(-1) is masked) + # Using a masked array as explicit output + _ = method(out=mout) + assert(mout is not masked) + assert_equal(mout.mask, True) + # Using a ndarray as explicit output + _ = method(out=nout) + assert(np.isnan(nout)) + # + x = array(arange(10), mask=True) + x[-1] = 9 + for methodname in ('var', 'std'): + method = getattr(x,methodname) + assert(method(ddof=1) is masked) + assert(method(0, ddof=1) is masked) + assert(method(-1, ddof=1) is masked) + # Using a masked array as explicit output + _ = method(out=mout, ddof=1) + assert(mout is not masked) + assert_equal(mout.mask, True) + # Using a ndarray as explicit output + _ = method(out=nout, ddof=1) + assert(np.isnan(nout)) + + + class TestArrayMathMethodsComplex(NumpyTestCase): "Test class for miscellaneous MaskedArrays methods." 
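The tests above exercise the convention for explicit outputs when the result itself is masked: a MaskedArray output has its mask set, while a plain ndarray output is filled with NaN. A short sketch of that behaviour, assuming the methods shown in this revision::

    import numpy as np
    import numpy.ma as ma

    x = ma.array(np.arange(10), mask=True)      # everything masked

    mout = ma.empty(1, dtype=float)
    x.var(out=mout)
    mout.mask                   # array([ True])

    nout = np.empty(1, dtype=float)
    x.var(out=nout)
    np.isnan(nout)              # array([ True])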
def setUp(self): @@ -1751,7 +1805,72 @@ chosen = choose(indices_, choices, mode='wrap', out=store) assert_equal(store, array([999999, 31, 12, 999999])) + def test_functions_with_output(self): + xm = array(np.random.uniform(0,10,12)).reshape(3,4) + xm[:,0] = xm[0] = xm[-1,-1] = masked + # + funclist = ('sum','prod','var','std', 'max', 'min', 'ptp', 'mean', ) + # + for funcname in funclist: + npfunc = getattr(np, funcname) + xmmeth = getattr(xm, funcname) + + # A ndarray as explicit input + output = np.empty(4, dtype=float) + output.fill(-9999) + result = npfunc(xm, axis=0,out=output) + # ... the result should be the given output + assert(result is output) + assert_equal(result, xmmeth(axis=0, out=output)) + # + output = empty(4, dtype=int) + result = xmmeth(axis=0, out=output) + assert(result is output) + assert(output[0] is masked) + + def test_cumsumprod_with_output(self): + "Tests cumsum/cumprod w/ output" + xm = array(np.random.uniform(0,10,12)).reshape(3,4) + xm[:,0] = xm[0] = xm[-1,-1] = masked + # + funclist = ('cumsum','cumprod') + # + for funcname in funclist: + npfunc = getattr(np, funcname) + xmmeth = getattr(xm, funcname) + + # A ndarray as explicit input + output = np.empty((3,4), dtype=float) + output.fill(-9999) + result = npfunc(xm, axis=0,out=output) + # ... the result should be the given output + assert(result is output) + assert_equal(result, xmmeth(axis=0, out=output)) + # + output = empty((3,4), dtype=int) + result = xmmeth(axis=0, out=output) + assert(result is output) + + + def test_round_with_output(self): + "Testing round with an explicit output" + + xm = array(np.random.uniform(0,10,12)).reshape(3,4) + xm[:,0] = xm[0] = xm[-1,-1] = masked + + # A ndarray as explicit input + output = np.empty((3,4), dtype=float) + output.fill(-9999) + result = np.round(xm, decimals=2,out=output) + # ... the result should be the given output + assert(result is output) + assert_equal(result, xm.round(decimals=2, out=output)) + # + output = empty((3,4), dtype=float) + result = xm.round(decimals=2, out=output) + assert(result is output) + ############################################################################### #------------------------------------------------------------------------------ if __name__ == "__main__": From numpy-svn at scipy.org Sun Jun 8 05:20:06 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sun, 8 Jun 2008 04:20:06 -0500 (CDT) Subject: [Numpy-svn] r5261 - trunk/numpy/distutils/command Message-ID: <20080608092006.CF10139C689@scipy.org> Author: cdavid Date: 2008-06-08 04:20:03 -0500 (Sun, 08 Jun 2008) New Revision: 5261 Modified: trunk/numpy/distutils/command/scons.py Log: MSVC compiler does not have compiler_cxx member. Modified: trunk/numpy/distutils/command/scons.py =================================================================== --- trunk/numpy/distutils/command/scons.py 2008-06-08 03:57:56 UTC (rev 5260) +++ trunk/numpy/distutils/command/scons.py 2008-06-08 09:20:03 UTC (rev 5261) @@ -94,6 +94,9 @@ def dist2sconscxx(compiler): """This converts the name passed to distutils to scons name convention (C++ compiler). 
The argument should be a Compiler instance.""" + if compiler.compiler_type == 'msvc': + return compiler.compiler_type + return compiler.compiler_cxx[0] def get_compiler_executable(compiler): From numpy-svn at scipy.org Sun Jun 8 08:18:49 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sun, 8 Jun 2008 07:18:49 -0500 (CDT) Subject: [Numpy-svn] r5262 - in trunk/numpy/core: code_generators src tests Message-ID: <20080608121849.4B22439C720@scipy.org> Author: ptvirtan Date: 2008-06-08 07:18:37 -0500 (Sun, 08 Jun 2008) New Revision: 5262 Added: trunk/numpy/core/code_generators/docstrings.py Modified: trunk/numpy/core/code_generators/generate_umath.py trunk/numpy/core/src/ufuncobject.c trunk/numpy/core/tests/test_umath.py Log: Move umath docstrings to a separate file. Make the automatic ufunc signature compatible with the documentation standard. Added: trunk/numpy/core/code_generators/docstrings.py =================================================================== --- trunk/numpy/core/code_generators/docstrings.py 2008-06-08 09:20:03 UTC (rev 5261) +++ trunk/numpy/core/code_generators/docstrings.py 2008-06-08 12:18:37 UTC (rev 5262) @@ -0,0 +1,403 @@ +# Docstrings for generated ufuncs + +docdict = {} + +def get(name): + return docdict.get(name) + +def add_newdoc(place, name, doc): + docdict['.'.join((place, name))] = doc + + +add_newdoc('numpy.core.umath', 'absolute', + """ + Takes |x| elementwise. + + """) + +add_newdoc('numpy.core.umath', 'add', + """ + Adds the arguments elementwise. + + """) + +add_newdoc('numpy.core.umath', 'arccos', + """ + Inverse cosine elementwise. + + """) + +add_newdoc('numpy.core.umath', 'arccosh', + """ + Inverse hyperbolic cosine elementwise. + + """) + +add_newdoc('numpy.core.umath', 'arcsin', + """ + Inverse sine elementwise. + + """) + +add_newdoc('numpy.core.umath', 'arcsinh', + """ + Inverse hyperbolic sine elementwise. + + """) + +add_newdoc('numpy.core.umath', 'arctan', + """ + Inverse tangent elementwise. + + """) + +add_newdoc('numpy.core.umath', 'arctan2', + """ + A safe and correct arctan(x1/x2) + + """) + +add_newdoc('numpy.core.umath', 'arctanh', + """ + Inverse hyperbolic tangent elementwise. + + """) + +add_newdoc('numpy.core.umath', 'bitwise_and', + """ + Computes x1 & x2 elementwise. + + """) + +add_newdoc('numpy.core.umath', 'bitwise_or', + """ + Computes x1 | x2 elementwise. + + """) + +add_newdoc('numpy.core.umath', 'bitwise_xor', + """ + Computes x1 ^ x2 elementwise. + + """) + +add_newdoc('numpy.core.umath', 'ceil', + """ + Elementwise smallest integer >= x. + + """) + +add_newdoc('numpy.core.umath', 'conjugate', + """ + Takes the conjugate of x elementwise. + + """) + +add_newdoc('numpy.core.umath', 'cos', + """ + Cosine elementwise. + + """) + +add_newdoc('numpy.core.umath', 'cosh', + """ + Hyperbolic cosine elementwise. + + """) + +add_newdoc('numpy.core.umath', 'degrees', + """ + Converts angle from radians to degrees + + """) + +add_newdoc('numpy.core.umath', 'divide', + """ + Divides the arguments elementwise. + + """) + +add_newdoc('numpy.core.umath', 'equal', + """ + Returns elementwise x1 == x2 in a bool array + + """) + +add_newdoc('numpy.core.umath', 'exp', + """ + e**x elementwise. + + """) + +add_newdoc('numpy.core.umath', 'expm1', + """ + e**x-1 elementwise. + + """) + +add_newdoc('numpy.core.umath', 'fabs', + """ + Absolute values. 
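Each entry is registered under the dotted name built from its two arguments, so the generator can fetch it by fully qualified ufunc name; the C-level wrapper later prepends the call signature and a blank line (e.g. ``numpy.add.__doc__`` starts with ``'y = add(x1,x2)\n\n'``). A sketch of the lookup, assuming the new module is importable from the code_generators directory (the path below is illustrative)::

    import sys, os
    sys.path.insert(0, 'numpy/core/code_generators')   # illustrative path
    import docstrings

    # add_newdoc('numpy.core.umath', 'add', ...) stored the text under the
    # key 'numpy.core.umath.add', which is what generate_umath asks for:
    doc = docstrings.get('numpy.core.umath.add')
    assert doc is not None and 'elementwise' in doc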
+ + """) + +add_newdoc('numpy.core.umath', 'floor', + """ + Elementwise largest integer <= x + + """) + +add_newdoc('numpy.core.umath', 'floor_divide', + """ + Floor divides the arguments elementwise. + + """) + +add_newdoc('numpy.core.umath', 'fmod', + """ + Computes (C-like) x1 % x2 elementwise. + + """) + +add_newdoc('numpy.core.umath', 'greater', + """ + Returns elementwise x1 > x2 in a bool array. + + """) + +add_newdoc('numpy.core.umath', 'greater_equal', + """ + Returns elementwise x1 >= x2 in a bool array. + + """) + +add_newdoc('numpy.core.umath', 'hypot', + """ + sqrt(x1**2 + x2**2) elementwise + + """) + +add_newdoc('numpy.core.umath', 'invert', + """ + Computes ~x (bit inversion) elementwise. + + """) + +add_newdoc('numpy.core.umath', 'isfinite', + """ + Returns True where x is finite + + """) + +add_newdoc('numpy.core.umath', 'isinf', + """ + Returns True where x is +inf or -inf + + """) + +add_newdoc('numpy.core.umath', 'isnan', + """ + Returns True where x is Not-A-Number + + """) + +add_newdoc('numpy.core.umath', 'left_shift', + """ + Computes x1 << x2 (x1 shifted to left by x2 bits) elementwise. + + """) + +add_newdoc('numpy.core.umath', 'less', + """ + Returns elementwise x1 < x2 in a bool array. + + """) + +add_newdoc('numpy.core.umath', 'less_equal', + """ + Returns elementwise x1 <= x2 in a bool array + + """) + +add_newdoc('numpy.core.umath', 'log', + """ + Logarithm base e elementwise. + + """) + +add_newdoc('numpy.core.umath', 'log10', + """ + Logarithm base 10 elementwise. + + """) + +add_newdoc('numpy.core.umath', 'log1p', + """ + log(1+x) to base e elementwise. + + """) + +add_newdoc('numpy.core.umath', 'logical_and', + """ + Returns x1 and x2 elementwise. + + """) + +add_newdoc('numpy.core.umath', 'logical_not', + """ + Returns not x elementwise. + + """) + +add_newdoc('numpy.core.umath', 'logical_or', + """ + Returns x1 or x2 elementwise. + + """) + +add_newdoc('numpy.core.umath', 'logical_xor', + """ + Returns x1 xor x2 elementwise. + + """) + +add_newdoc('numpy.core.umath', 'maximum', + """ + Returns maximum (if x1 > x2: x1; else: x2) elementwise. + + """) + +add_newdoc('numpy.core.umath', 'minimum', + """ + Returns minimum (if x1 < x2: x1; else: x2) elementwise + + """) + +add_newdoc('numpy.core.umath', 'modf', + """ + Breaks x into fractional (y1) and integral (y2) parts. + + Each output has the same sign as the input. + + """) + +add_newdoc('numpy.core.umath', 'multiply', + """ + Multiplies the arguments elementwise. + + """) + +add_newdoc('numpy.core.umath', 'negative', + """ + Determines -x elementwise + + """) + +add_newdoc('numpy.core.umath', 'not_equal', + """ + Returns elementwise x1 |= x2 + + """) + +add_newdoc('numpy.core.umath', 'ones_like', + """ + Returns an array of ones of the shape and typecode of x. + + """) + +add_newdoc('numpy.core.umath', 'power', + """ + Computes x1**x2 elementwise. + + """) + +add_newdoc('numpy.core.umath', 'radians', + """ + Converts angle from degrees to radians + + """) + +add_newdoc('numpy.core.umath', 'reciprocal', + """ + Compute 1/x + + """) + +add_newdoc('numpy.core.umath', 'remainder', + """ + Computes x1-n*x2 where n is floor(x1 / x2) + + """) + +add_newdoc('numpy.core.umath', 'right_shift', + """ + Computes x1 >> x2 (x1 shifted to right by x2 bits) elementwise. 
+ + """) + +add_newdoc('numpy.core.umath', 'rint', + """ + Round x elementwise to the nearest integer, round halfway cases away from zero + + """) + +add_newdoc('numpy.core.umath', 'sign', + """ + Returns -1 if x < 0 and 0 if x==0 and 1 if x > 0 + + """) + +add_newdoc('numpy.core.umath', 'signbit', + """ + Returns True where signbit of x is set (x<0). + + """) + +add_newdoc('numpy.core.umath', 'sin', + """ + Sine elementwise. + + """) + +add_newdoc('numpy.core.umath', 'sinh', + """ + Hyperbolic sine elementwise. + + """) + +add_newdoc('numpy.core.umath', 'sqrt', + """ + Square-root elementwise. For real x, the domain is restricted to x>=0. + + """) + +add_newdoc('numpy.core.umath', 'square', + """ + Compute x**2. + + """) + +add_newdoc('numpy.core.umath', 'subtract', + """ + Subtracts the arguments elementwise. + + """) + +add_newdoc('numpy.core.umath', 'tan', + """ + Tangent elementwise. + + """) + +add_newdoc('numpy.core.umath', 'tanh', + """ + Hyperbolic tangent elementwise. + + """) + +add_newdoc('numpy.core.umath', 'true_divide', + """ + True divides the arguments elementwise. + + """) + Modified: trunk/numpy/core/code_generators/generate_umath.py =================================================================== --- trunk/numpy/core/code_generators/generate_umath.py 2008-06-08 09:20:03 UTC (rev 5261) +++ trunk/numpy/core/code_generators/generate_umath.py 2008-06-08 12:18:37 UTC (rev 5262) @@ -1,4 +1,8 @@ -import re +import re, textwrap +import sys, os +sys.path.insert(0, os.path.dirname(__file__)) +import docstrings +sys.path.pop(0) Zero = "PyUFunc_Zero" One = "PyUFunc_One" @@ -153,37 +157,37 @@ defdict = { 'add' : Ufunc(2, 1, Zero, - 'adds the arguments elementwise.', + docstrings.get('numpy.core.umath.add'), TD(noobj), TD(O, f='PyNumber_Add'), ), 'subtract' : Ufunc(2, 1, Zero, - 'subtracts the arguments elementwise.', + docstrings.get('numpy.core.umath.subtract'), TD(noobj), TD(O, f='PyNumber_Subtract'), ), 'multiply' : Ufunc(2, 1, One, - 'multiplies the arguments elementwise.', + docstrings.get('numpy.core.umath.multiply'), TD(noobj), TD(O, f='PyNumber_Multiply'), ), 'divide' : Ufunc(2, 1, One, - 'divides the arguments elementwise.', + docstrings.get('numpy.core.umath.divide'), TD(intfltcmplx), TD(O, f='PyNumber_Divide'), ), 'floor_divide' : Ufunc(2, 1, One, - 'floor divides the arguments elementwise.', + docstrings.get('numpy.core.umath.floor_divide'), TD(intfltcmplx), TD(O, f='PyNumber_FloorDivide'), ), 'true_divide' : Ufunc(2, 1, One, - 'true divides the arguments elementwise.', + docstrings.get('numpy.core.umath.true_divide'), TD('bBhH', out='f'), TD('iIlLqQ', out='d'), TD(flts+cmplx), @@ -191,346 +195,346 @@ ), 'conjugate' : Ufunc(1, 1, None, - 'takes the conjugate of x elementwise.', + docstrings.get('numpy.core.umath.conjugate'), TD(nobool_or_obj), TD(M, f='conjugate'), ), 'fmod' : Ufunc(2, 1, Zero, - 'computes (C-like) x1 % x2 elementwise.', + docstrings.get('numpy.core.umath.fmod'), TD(ints), TD(flts, f='fmod'), TD(M, f='fmod'), ), 'square' : Ufunc(1, 1, None, - 'compute x**2.', + docstrings.get('numpy.core.umath.square'), TD(nobool_or_obj), TD(O, f='Py_square'), ), 'reciprocal' : Ufunc(1, 1, None, - 'compute 1/x', + docstrings.get('numpy.core.umath.reciprocal'), TD(nobool_or_obj), TD(O, f='Py_reciprocal'), ), 'ones_like' : Ufunc(1, 1, None, - 'returns an array of ones of the shape and typecode of x.', + docstrings.get('numpy.core.umath.ones_like'), TD(nobool_or_obj), TD(O, f='Py_get_one'), ), 'power' : Ufunc(2, 1, One, - 'computes x1**x2 elementwise.', + 
docstrings.get('numpy.core.umath.power'), TD(ints), TD(inexact, f='pow'), TD(O, f='PyNumber_Power'), ), 'absolute' : Ufunc(1, 1, None, - 'takes |x| elementwise.', + docstrings.get('numpy.core.umath.absolute'), TD(nocmplx), TD(cmplx, out=('f', 'd', 'g')), TD(O, f='PyNumber_Absolute'), ), 'negative' : Ufunc(1, 1, None, - 'determines -x elementwise', + docstrings.get('numpy.core.umath.negative'), TD(nocmplx), TD(cmplx, f='neg'), TD(O, f='PyNumber_Negative'), ), 'sign' : Ufunc(1, 1, None, - 'returns -1 if x < 0 and 0 if x==0 and 1 if x > 0', + docstrings.get('numpy.core.umath.sign'), TD(nobool), ), 'greater' : Ufunc(2, 1, None, - 'returns elementwise x1 > x2 in a bool array.', + docstrings.get('numpy.core.umath.greater'), TD(all, out='?'), ), 'greater_equal' : Ufunc(2, 1, None, - 'returns elementwise x1 >= x2 in a bool array.', + docstrings.get('numpy.core.umath.greater_equal'), TD(all, out='?'), ), 'less' : Ufunc(2, 1, None, - 'returns elementwise x1 < x2 in a bool array.', + docstrings.get('numpy.core.umath.less'), TD(all, out='?'), ), 'less_equal' : Ufunc(2, 1, None, - 'returns elementwise x1 <= x2 in a bool array', + docstrings.get('numpy.core.umath.less_equal'), TD(all, out='?'), ), 'equal' : Ufunc(2, 1, None, - 'returns elementwise x1 == x2 in a bool array', + docstrings.get('numpy.core.umath.equal'), TD(all, out='?'), ), 'not_equal' : Ufunc(2, 1, None, - 'returns elementwise x1 |= x2', + docstrings.get('numpy.core.umath.not_equal'), TD(all, out='?'), ), 'logical_and' : Ufunc(2, 1, One, - 'returns x1 and x2 elementwise.', + docstrings.get('numpy.core.umath.logical_and'), TD(noobj, out='?'), TD(M, f='logical_and'), ), 'logical_not' : Ufunc(1, 1, None, - 'returns not x elementwise.', + docstrings.get('numpy.core.umath.logical_not'), TD(noobj, out='?'), TD(M, f='logical_not'), ), 'logical_or' : Ufunc(2, 1, Zero, - 'returns x1 or x2 elementwise.', + docstrings.get('numpy.core.umath.logical_or'), TD(noobj, out='?'), TD(M, f='logical_or'), ), 'logical_xor' : Ufunc(2, 1, None, - 'returns x1 xor x2 elementwise.', + docstrings.get('numpy.core.umath.logical_xor'), TD(noobj, out='?'), TD(M, f='logical_xor'), ), 'maximum' : Ufunc(2, 1, None, - 'returns maximum (if x1 > x2: x1; else: x2) elementwise.', + docstrings.get('numpy.core.umath.maximum'), TD(noobj), TD(O, f='_npy_ObjectMax') ), 'minimum' : Ufunc(2, 1, None, - 'returns minimum (if x1 < x2: x1; else: x2) elementwise', + docstrings.get('numpy.core.umath.minimum'), TD(noobj), TD(O, f='_npy_ObjectMin') ), 'bitwise_and' : Ufunc(2, 1, One, - 'computes x1 & x2 elementwise.', + docstrings.get('numpy.core.umath.bitwise_and'), TD(bints), TD(O, f='PyNumber_And'), ), 'bitwise_or' : Ufunc(2, 1, Zero, - 'computes x1 | x2 elementwise.', + docstrings.get('numpy.core.umath.bitwise_or'), TD(bints), TD(O, f='PyNumber_Or'), ), 'bitwise_xor' : Ufunc(2, 1, None, - 'computes x1 ^ x2 elementwise.', + docstrings.get('numpy.core.umath.bitwise_xor'), TD(bints), TD(O, f='PyNumber_Xor'), ), 'invert' : Ufunc(1, 1, None, - 'computes ~x (bit inversion) elementwise.', + docstrings.get('numpy.core.umath.invert'), TD(bints), TD(O, f='PyNumber_Invert'), ), 'left_shift' : Ufunc(2, 1, None, - 'computes x1 << x2 (x1 shifted to left by x2 bits) elementwise.', + docstrings.get('numpy.core.umath.left_shift'), TD(ints), TD(O, f='PyNumber_Lshift'), ), 'right_shift' : Ufunc(2, 1, None, - 'computes x1 >> x2 (x1 shifted to right by x2 bits) elementwise.', + docstrings.get('numpy.core.umath.right_shift'), TD(ints), TD(O, f='PyNumber_Rshift'), ), 'degrees' : Ufunc(1, 1, None, - 'converts 
angle from radians to degrees', + docstrings.get('numpy.core.umath.degrees'), TD(fltsM, f='degrees'), ), 'radians' : Ufunc(1, 1, None, - 'converts angle from degrees to radians', + docstrings.get('numpy.core.umath.radians'), TD(fltsM, f='radians'), ), 'arccos' : Ufunc(1, 1, None, - 'inverse cosine elementwise.', + docstrings.get('numpy.core.umath.arccos'), TD(inexact, f='acos'), TD(M, f='arccos'), ), 'arccosh' : Ufunc(1, 1, None, - 'inverse hyperbolic cosine elementwise.', + docstrings.get('numpy.core.umath.arccosh'), TD(inexact, f='acosh'), TD(M, f='arccosh'), ), 'arcsin' : Ufunc(1, 1, None, - 'inverse sine elementwise.', + docstrings.get('numpy.core.umath.arcsin'), TD(inexact, f='asin'), TD(M, f='arcsin'), ), 'arcsinh' : Ufunc(1, 1, None, - 'inverse hyperbolic sine elementwise.', + docstrings.get('numpy.core.umath.arcsinh'), TD(inexact, f='asinh'), TD(M, f='arcsinh'), ), 'arctan' : Ufunc(1, 1, None, - 'inverse tangent elementwise.', + docstrings.get('numpy.core.umath.arctan'), TD(inexact, f='atan'), TD(M, f='arctan'), ), 'arctanh' : Ufunc(1, 1, None, - 'inverse hyperbolic tangent elementwise.', + docstrings.get('numpy.core.umath.arctanh'), TD(inexact, f='atanh'), TD(M, f='arctanh'), ), 'cos' : Ufunc(1, 1, None, - 'cosine elementwise.', + docstrings.get('numpy.core.umath.cos'), TD(inexact, f='cos'), TD(M, f='cos'), ), 'sin' : Ufunc(1, 1, None, - 'sine elementwise.', + docstrings.get('numpy.core.umath.sin'), TD(inexact, f='sin'), TD(M, f='sin'), ), 'tan' : Ufunc(1, 1, None, - 'tangent elementwise.', + docstrings.get('numpy.core.umath.tan'), TD(inexact, f='tan'), TD(M, f='tan'), ), 'cosh' : Ufunc(1, 1, None, - 'hyperbolic cosine elementwise.', + docstrings.get('numpy.core.umath.cosh'), TD(inexact, f='cosh'), TD(M, f='cosh'), ), 'sinh' : Ufunc(1, 1, None, - 'hyperbolic sine elementwise.', + docstrings.get('numpy.core.umath.sinh'), TD(inexact, f='sinh'), TD(M, f='sinh'), ), 'tanh' : Ufunc(1, 1, None, - 'hyperbolic tangent elementwise.', + docstrings.get('numpy.core.umath.tanh'), TD(inexact, f='tanh'), TD(M, f='tanh'), ), 'exp' : Ufunc(1, 1, None, - 'e**x elementwise.', + docstrings.get('numpy.core.umath.exp'), TD(inexact, f='exp'), TD(M, f='exp'), ), 'expm1' : Ufunc(1, 1, None, - 'e**x-1 elementwise.', + docstrings.get('numpy.core.umath.expm1'), TD(inexact, f='expm1'), TD(M, f='expm1'), ), 'log' : Ufunc(1, 1, None, - 'logarithm base e elementwise.', + docstrings.get('numpy.core.umath.log'), TD(inexact, f='log'), TD(M, f='log'), ), 'log10' : Ufunc(1, 1, None, - 'logarithm base 10 elementwise.', + docstrings.get('numpy.core.umath.log10'), TD(inexact, f='log10'), TD(M, f='log10'), ), 'log1p' : Ufunc(1, 1, None, - 'log(1+x) to base e elementwise.', + docstrings.get('numpy.core.umath.log1p'), TD(inexact, f='log1p'), TD(M, f='log1p'), ), 'sqrt' : Ufunc(1, 1, None, - 'square-root elementwise. 
For real x, the domain is restricted to x>=0.', + docstrings.get('numpy.core.umath.sqrt'), TD(inexact, f='sqrt'), TD(M, f='sqrt'), ), 'ceil' : Ufunc(1, 1, None, - 'elementwise smallest integer >= x.', + docstrings.get('numpy.core.umath.ceil'), TD(flts, f='ceil'), TD(M, f='ceil'), ), 'fabs' : Ufunc(1, 1, None, - 'absolute values.', + docstrings.get('numpy.core.umath.fabs'), TD(flts, f='fabs'), TD(M, f='fabs'), ), 'floor' : Ufunc(1, 1, None, - 'elementwise largest integer <= x', + docstrings.get('numpy.core.umath.floor'), TD(flts, f='floor'), TD(M, f='floor'), ), 'rint' : Ufunc(1, 1, None, - 'round x elementwise to the nearest integer, round halfway cases away from zero', + docstrings.get('numpy.core.umath.rint'), TD(inexact, f='rint'), TD(M, f='rint'), ), 'arctan2' : Ufunc(2, 1, None, - 'a safe and correct arctan(x1/x2)', + docstrings.get('numpy.core.umath.arctan2'), TD(flts, f='atan2'), TD(M, f='arctan2'), ), 'remainder' : Ufunc(2, 1, None, - 'computes x1-n*x2 where n is floor(x1 / x2)', + docstrings.get('numpy.core.umath.remainder'), TD(intflt), TD(O, f='PyNumber_Remainder'), ), 'hypot' : Ufunc(2, 1, None, - 'sqrt(x1**2 + x2**2) elementwise', + docstrings.get('numpy.core.umath.hypot'), TD(flts, f='hypot'), TD(M, f='hypot'), ), 'isnan' : Ufunc(1, 1, None, - 'returns True where x is Not-A-Number', + docstrings.get('numpy.core.umath.isnan'), TD(inexact, out='?'), ), 'isinf' : Ufunc(1, 1, None, - 'returns True where x is +inf or -inf', + docstrings.get('numpy.core.umath.isinf'), TD(inexact, out='?'), ), 'isfinite' : Ufunc(1, 1, None, - 'returns True where x is finite', + docstrings.get('numpy.core.umath.isfinite'), TD(inexact, out='?'), ), 'signbit' : Ufunc(1, 1, None, - 'returns True where signbit of x is set (x<0).', + docstrings.get('numpy.core.umath.signbit'), TD(flts, out='?'), ), 'modf' : Ufunc(1, 2, None, - 'breaks x into fractional (y1) and integral (y2) parts.\\n\\n Each output has the same sign as the input.', + docstrings.get('numpy.core.umath.modf'), TD(flts), ), } @@ -667,6 +671,8 @@ for name in names: uf = funcdict[name] mlist = [] + docstring = textwrap.dedent(uf.docstring).strip() + docstring = docstring.encode('string-escape').replace(r'"', r'\"') mlist.append(\ r"""f = PyUFunc_FromFuncAndData(%s_functions, %s_data, %s_signatures, %d, %d, %d, %s, "%s", @@ -674,7 +680,7 @@ len(uf.type_descriptions), uf.nin, uf.nout, uf.identity, - name, uf.docstring)) + name, docstring)) mlist.append(r"""PyDict_SetItemString(dictionary, "%s", f);""" % name) mlist.append(r"""Py_DECREF(f);""") code3list.append('\n'.join(mlist)) Modified: trunk/numpy/core/src/ufuncobject.c =================================================================== --- trunk/numpy/core/src/ufuncobject.c 2008-06-08 09:20:03 UTC (rev 5261) +++ trunk/numpy/core/src/ufuncobject.c 2008-06-08 12:18:37 UTC (rev 5262) @@ -4007,7 +4007,7 @@ PyObject *outargs, *inargs, *doc; outargs = _makeargs(self->nout, "y"); inargs = _makeargs(self->nin, "x"); - doc = PyString_FromFormat("%s = %s(%s) %s", + doc = PyString_FromFormat("%s = %s(%s)\n\n%s", PyString_AS_STRING(outargs), self->name, PyString_AS_STRING(inargs), Modified: trunk/numpy/core/tests/test_umath.py =================================================================== --- trunk/numpy/core/tests/test_umath.py 2008-06-08 09:20:03 UTC (rev 5261) +++ trunk/numpy/core/tests/test_umath.py 2008-06-08 12:18:37 UTC (rev 5262) @@ -211,7 +211,7 @@ def check_attributes(self): add = ncu.add assert_equal(add.__name__, 'add') - assert_equal(add.__doc__, 'y = add(x1,x2) adds the arguments 
elementwise.') + assert add.__doc__.startswith('y = add(x1,x2)\n\n') self.failUnless(add.ntypes >= 18) # don't fail if types added self.failUnless('ii->i' in add.types) assert_equal(add.nin, 2) From numpy-svn at scipy.org Sun Jun 8 14:10:57 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sun, 8 Jun 2008 13:10:57 -0500 (CDT) Subject: [Numpy-svn] r5263 - trunk/numpy/ma Message-ID: <20080608181057.EF2AA39C742@scipy.org> Author: pierregm Date: 2008-06-08 13:10:55 -0500 (Sun, 08 Jun 2008) New Revision: 5263 Modified: trunk/numpy/ma/core.py Log: * make_mask_none now accepts a fields argument to construct record-like masks easily * revamped where Modified: trunk/numpy/ma/core.py =================================================================== --- trunk/numpy/ma/core.py 2008-06-08 12:18:37 UTC (rev 5262) +++ trunk/numpy/ma/core.py 2008-06-08 18:10:55 UTC (rev 5263) @@ -277,9 +277,9 @@ # Should we check for contiguity ? and a.flags['CONTIGUOUS']: return a elif isinstance(a, dict): - return narray(a, 'O') + return np.array(a, 'O') else: - return narray(a) + return np.array(a) #####-------------------------------------------------------------------------- def get_masked_subclass(*arrays): @@ -730,15 +730,15 @@ return getattr(a, '_mask', nomask) getmask = get_mask -def getmaskarray(a): - """Return the mask of a, if any, or a boolean array of the shape +def getmaskarray(arr): + """Return the mask of arr, if any, or a boolean array of the shape of a, full of False. """ - m = getmask(a) - if m is nomask: - m = make_mask_none(np.shape(a)) - return m + mask = getmask(arr) + if mask is nomask: + mask = make_mask_none(np.shape(arr), getdata(arr).dtype.names) + return mask def is_mask(m): """Return True if m is a legal mask. @@ -790,18 +790,21 @@ else: return result -def make_mask_none(newshape): +def make_mask_none(newshape, fields=None): """Return a mask of shape s, filled with False. Parameters ---------- news : tuple A tuple indicating the shape of the final mask. - fieldnames: {None, string sequence}, optional + fields: {None, string sequence}, optional A list of field names, if needed. """ - result = np.zeros(newshape, dtype=MaskType) + if not fields: + result = np.zeros(newshape, dtype=MaskType) + else: + result = np.zeros(newshape, dtype=[(n, MaskType) for n in fields]) return result def mask_or (m1, m2, copy=False, shrink=True): @@ -817,9 +820,9 @@ First mask. m2 : array_like Second mask - copy : bool + copy : {False, True}, optional Whether to return a copy. - shrink : bool + shrink : {True, False}, optional Whether to shrink m to nomask if all its values are False. 
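make_mask_none now builds a flexible-type mask when field names are given, which is what getmaskarray relies on for arrays with named fields; for example::

    from numpy.ma import make_mask_none

    make_mask_none((2,))
    # array([False, False], dtype=bool)

    make_mask_none((2,), fields=('a', 'b'))
    # array([(False, False), (False, False)],
    #       dtype=[('a', '|b1'), ('b', '|b1')])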
""" @@ -2865,6 +2868,7 @@ out -= self.min(axis=axis, fill_value=fill_value) return out + # Array methods --------------------------------------- copy = _arraymethod('copy') diagonal = _arraymethod('diagonal') @@ -3501,21 +3505,21 @@ yv = getdata(y) if x is masked: ndtype = yv.dtype -# xm = np.ones(fc.shape, dtype=MaskType) elif y is masked: ndtype = xv.dtype -# ym = np.ones(fc.shape, dtype=MaskType) else: ndtype = np.max([xv.dtype, yv.dtype]) -# xm = getmask(x) + # Construct an empty array and fill it d = np.empty(fc.shape, dtype=ndtype).view(MaskedArray) - np.putmask(d._data, fc, xv.astype(ndtype)) - np.putmask(d._data, notfc, yv.astype(ndtype)) - d._mask = np.zeros(fc.shape, dtype=MaskType) - np.putmask(d._mask, fc, getmask(x)) - np.putmask(d._mask, notfc, getmask(y)) - d._mask |= getmaskarray(condition) - if not d._mask.any(): + _data = d._data + np.putmask(_data, fc, xv.astype(ndtype)) + np.putmask(_data, notfc, yv.astype(ndtype)) + # Create an empty mask and fill it + _mask = d._mask = np.zeros(fc.shape, dtype=MaskType) + np.putmask(_mask, fc, getmask(x)) + np.putmask(_mask, notfc, getmask(y)) + _mask |= getmaskarray(condition) + if not _mask.any(): d._mask = nomask return d From numpy-svn at scipy.org Sun Jun 8 19:04:51 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sun, 8 Jun 2008 18:04:51 -0500 (CDT) Subject: [Numpy-svn] r5264 - in trunk/numpy/ma: . tests Message-ID: <20080608230451.389D639C414@scipy.org> Author: pierregm Date: 2008-06-08 18:04:42 -0500 (Sun, 08 Jun 2008) New Revision: 5264 Modified: trunk/numpy/ma/core.py trunk/numpy/ma/mrecords.py trunk/numpy/ma/tests/test_core.py trunk/numpy/ma/tests/test_extras.py trunk/numpy/ma/tests/test_mrecords.py Log: CHANGES: core: * When creating a masked array with named fields, the mask has now a flexible type [(n,bool) for n in fields], which allows individual fields to be masked. * When a masked array has named fields, setting the mask to a sequence of booleans will set the mask of all the fields of the corresponding record. * A new property, recordmask, returns either the mask (when no named fields) or a boolean array where values are True if all the fields of one record are masked, False otherwise. * A new private attribute, _isfield, has been introduced to keep track whether an array is a field of a record-like masked array or not, and make sure that the mask is properly propagated. * Setting an existing mask to nomask will only fill the mask with False, not transform it to nomask mrecords: * _fieldmask is now only a synonym for _mask, kept for convenience * revamped __getattribute__ to the example of numpy.core.records.recarray.__getattribute__ * __setslice__ and filled are now inhertied from MaskedArray tests * The tests in test_core have been reorganized to improve clarity and avoid duplication. * test_extras now uses the convention "import numpy as np" Modified: trunk/numpy/ma/core.py =================================================================== --- trunk/numpy/ma/core.py 2008-06-08 18:10:55 UTC (rev 5263) +++ trunk/numpy/ma/core.py 2008-06-08 23:04:42 UTC (rev 5264) @@ -1196,7 +1196,7 @@ DeprecationWarning) shrink = flag # Process data............ 
- _data = narray(data, dtype=dtype, copy=copy, subok=True, ndmin=ndmin) + _data = np.array(data, dtype=dtype, copy=copy, subok=True, ndmin=ndmin) _baseclass = getattr(data, '_baseclass', type(_data)) _basedict = getattr(data, '_basedict', getattr(data, '__dict__', {})) if not isinstance(data, MaskedArray) or not subok: @@ -1207,7 +1207,15 @@ if hasattr(data,'_mask') and not isinstance(data, ndarray): _data._mask = data._mask _sharedmask = True - # Process mask ........... + # Process mask ............................... + # Number of named fields (or zero if none) + names_ = _data.dtype.names or () + # Type of the mask + if names_: + mdtype = [(n, MaskType) for n in names_] + else: + mdtype = MaskType + # Case 1. : no mask in input ............ if mask is nomask: # Erase the current mask ? if not keep_mask: @@ -1216,7 +1224,7 @@ _data._mask = nomask # With full version else: - _data._mask = np.zeros(_data.shape, dtype=MaskType) + _data._mask = np.zeros(_data.shape, dtype=mdtype) if copy: _data._mask = _data._mask.copy() _data._sharedmask = False @@ -1224,7 +1232,14 @@ _data._sharedmask = True # Case 2. : With a mask in input ........ else: - mask = np.array(mask, dtype=MaskType, copy=copy) + # Read the mask with the current mdtype + try: + mask = np.array(mask, copy=copy, dtype=mdtype) + # Or assume it's a sequence of bool/int + except TypeError: + mask = np.array([tuple([m]*len(mdtype)) for m in mask], + dtype=mdtype) + # Make sure the mask and the data have the same shape if mask.shape != _data.shape: (nd, nm) = (_data.size, mask.size) if nm == 1: @@ -1245,7 +1260,11 @@ _data._mask = mask _data._sharedmask = not copy else: - _data._mask = umath.logical_or(mask, _data._mask) + if names_: + for n in names_: + _data._mask[n] |= mask[n] + else: + _data._mask = np.logical_or(mask, _data._mask) _data._sharedmask = False # Update fill_value....... if fill_value is None: @@ -1270,6 +1289,7 @@ _dict = dict(_fill_value=getattr(obj, '_fill_value', None), _hardmask=getattr(obj, '_hardmask', False), _sharedmask=getattr(obj, '_sharedmask', False), + _isfield=getattr(obj, '_isfield', False), _baseclass=getattr(obj,'_baseclass', _baseclass), _basedict=_basedict,) self.__dict__.update(_dict) @@ -1379,12 +1399,11 @@ if isinstance(indx, basestring): if self._fill_value is not None: dout._fill_value = self._fill_value[indx] + dout._isfield = True # Update the mask if needed if _mask is not nomask: - if isinstance(indx, basestring): - dout._mask = _mask.reshape(dout.shape) - else: - dout._mask = ndarray.__getitem__(_mask, indx).reshape(dout.shape) + dout._mask = _mask[indx] + dout._sharedmask = True # Note: Don't try to check for m.any(), that'll take too long... return dout #........................ @@ -1402,47 +1421,64 @@ # msg = "Masked arrays must be filled before they can be used as indices!" # raise IndexError, msg if isinstance(indx, basestring): - ndarray.__setitem__(self._data, indx, getdata(value)) - warnings.warn("MaskedArray.__setitem__ on fields: "\ - "The mask is NOT affected!") + ndarray.__setitem__(self._data, indx, value) + ndarray.__setitem__(self._mask, indx, getmask(value)) return - #.... + #........................................ +# ndgetattr = ndarray.__getattribute__ + _names = ndarray.__getattribute__(self,'dtype').names or () + _data = self._data + _mask = ndarray.__getattribute__(self,'_mask') + #........................................ 
if value is masked: - m = self._mask - if m is nomask: - m = np.zeros(self.shape, dtype=MaskType) - m[indx] = True - self._mask = m - self._sharedmask = False + # The mask wasn't set: create a full version... + if _mask is nomask: + _mask = self._mask = make_mask_none(self.shape, _names) + # Now, set the mask to its value. + if _names: + _mask[indx] = tuple([True,] * len(_names)) + else: + _mask[indx] = True + if not self._isfield: + self._sharedmask = False return - #.... -# dval = np.array(value, copy=False, dtype=self.dtype) + #........................................ + # Get the _data part of the new value dval = value - mval = getmask(value) - if self._mask is nomask: + # Get the _mask part of the new value + mval = getattr(value, '_mask', nomask) + if _names and mval is nomask: + mval = tuple([False] * len(_names)) + if _mask is nomask: # Set the data, then the mask - ndarray.__setitem__(self._data,indx,dval) + ndarray.__setitem__(_data, indx, dval) if mval is not nomask: - self._mask = np.zeros(self.shape, dtype=MaskType) - self._mask[indx] = mval + _mask = self._mask = make_mask_none(self.shape, _names) + ndarray.__setitem__(_mask, indx, mval) elif not self._hardmask: # Unshare the mask if necessary to avoid propagation - self.unshare_mask() + if not self._isfield: + self.unshare_mask() + _mask = ndarray.__getattribute__(self,'_mask') # Set the data, then the mask - ndarray.__setitem__(self._data, indx, dval) - ndarray.__setitem__(self._mask, indx, mval) - elif hasattr(indx, 'dtype') and (indx.dtype==bool_): - indx = indx * umath.logical_not(self._mask) - ndarray.__setitem__(self._data, indx, dval) + ndarray.__setitem__(_data, indx, dval) + ndarray.__setitem__(_mask, indx, mval) + elif hasattr(indx, 'dtype') and (indx.dtype==MaskType): + indx = indx * umath.logical_not(_mask) + ndarray.__setitem__(_data,indx,dval) else: - mindx = mask_or(self._mask[indx], mval, copy=True) + if _names: + err_msg = "Flexible 'hard' masks are not yet supported..." + raise NotImplementedError(err_msg) + mindx = mask_or(_mask[indx], mval, copy=True) dindx = self._data[indx] if dindx.size > 1: dindx[~mindx] = dval elif mindx is nomask: dindx = dval - ndarray.__setitem__(self._data, indx, dindx) - self._mask[indx] = mindx + ndarray.__setitem__(_data, indx, dindx) + _mask[indx] = mindx + return #............................................ def __getslice__(self, i, j): """x.__getslice__(i, j) <==> x[i:j] @@ -1466,28 +1502,57 @@ """Set the mask. """ - if mask is not nomask: - mask = narray(mask, copy=copy, dtype=MaskType) - # We could try to check whether shrinking is needed.. - # ... but we would waste some precious time -# if self._shrinkmask and not mask.any(): -# mask = nomask - if self._mask is nomask: - self._mask = mask - elif self._hardmask: - if mask is not nomask: - self._mask.__ior__(mask) - else: - # This one is tricky: if we set the mask that way, we may break the - # propagation. But if we don't, we end up with a mask full of False - # and a test on nomask fails... + names = ndarray.__getattribute__(self,'dtype').names + current_mask = ndarray.__getattribute__(self,'_mask') + if mask is masked: + mask = True + # Make sure the mask is set + if (current_mask is nomask): + # Just don't do anything is there's nothing to do... if mask is nomask: - self._mask = nomask + return + current_mask = self._mask = make_mask_none(self.shape, names) + # No named fields......... 
+ if names is None: + # Hardmask: don't unmask the data + if self._hardmask: + current_mask |= mask + # Softmask: set everything to False else: - self.unshare_mask() - self._mask.flat = mask - if self._mask.shape: - self._mask = np.reshape(self._mask, self.shape) + current_mask.flat = mask + # Named fields w/ ............ + else: + mdtype = current_mask.dtype + mask = np.array(mask, copy=False) + # Mask is a singleton + if not mask.ndim: + # It's a boolean : make a record + if mask.dtype.kind == 'b': + mask = np.array(tuple([mask.item()]*len(mdtype)), + dtype=mdtype) + # It's a record: make sure the dtype is correct + else: + mask = mask.astype(mdtype) + # Mask is a sequence + else: + # Make sure the new mask is a ndarray with the proper dtype + try: + mask = np.array(mask, copy=copy, dtype=mdtype) + # Or assume it's a sequence of bool/int + except TypeError: + mask = np.array([tuple([m]*len(mdtype)) for m in mask], + dtype=mdtype) + # Hardmask: don't unmask the data + if self._hardmask: + for n in names: + current_mask[n] |= mask[n] + # Softmask: set everything to False + else: + current_mask.flat = mask + # Reshape if needed + if current_mask.shape: + current_mask.shape = self.shape + return _set_mask = __setmask__ #.... def _get_mask(self): @@ -1498,6 +1563,26 @@ # return self._mask.reshape(self.shape) return self._mask mask = property(fget=_get_mask, fset=__setmask__, doc="Mask") + # + def _getrecordmask(self): + """Return the mask of the records. + A record is masked when all the fields are masked. + + """ + if self.dtype.names is None: + return self._mask + elif self.size > 1: + return self._mask.view((bool_, len(self.dtype))).all(1) + else: + return self._mask.view((bool_, len(self.dtype))).all() + + def _setrecordmask(self): + """Return the mask of the records. + A record is masked when all the fields are masked. + + """ + raise NotImplementedError("Coming soon: setting the mask per records!") + recordmask = property(fget=_getrecordmask) #............................................ def harden_mask(self): """Force the mask to hard. 
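Two consequences of the new __setmask__ and recordmask, sketched with the same kind of record array (illustrative values)::

    import numpy.ma as ma

    x = ma.array([(1, 1.0), (2, 2.0)],
                 dtype=[('a', int), ('b', float)])

    # A plain boolean sequence masks every field of the selected records.
    x.mask = [True, False]
    x.mask          # [(True, True), (False, False)]

    # recordmask reduces the flexible mask to one boolean per record,
    # True only where *all* fields are masked.
    x.recordmask    # [True, False]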
@@ -1602,14 +1687,22 @@ """ m = self._mask - if m is nomask or not m.any(): + if m is nomask: return self._data # if fill_value is None: fill_value = self.fill_value # if self is masked_singleton: - result = np.asanyarray(fill_value) + return np.asanyarray(fill_value) + # + if len(self.dtype): + result = self._data.copy() + for n in result.dtype.names: + field = result[n] + np.putmask(field, self._mask[n], self.fill_value[n]) + elif not m.any(): + return self._data else: result = self._data.copy() try: @@ -1682,11 +1775,14 @@ else: return str(self._data) # convert to object array to make filled work -#!!!: the two lines below seem more robust than the self._data.astype -# res = numeric.empty(self._data.shape, object_) -# numeric.putmask(res,~m,self._data) - res = self._data.astype("|O8") - res[m] = f + names = self.dtype.names + if names is None: + res = self._data.astype("|O8") + res[m] = f + else: + res = self._data.astype([(n,'|O8') for n in names]) + for field in names: + np.putmask(res[field], m[field], f) else: res = self.filled(self.fill_value) return str(res) @@ -3399,7 +3495,7 @@ if getmask(a) is nomask: if valmask is not nomask: a._sharedmask = True - a.mask = np.zeros(a.shape, dtype=bool_) + a._mask = make_mask_none(a.shape, a.dtype.names) np.putmask(a._mask, mask, valmask) elif a._hardmask: if valmask is not nomask: Modified: trunk/numpy/ma/mrecords.py =================================================================== --- trunk/numpy/ma/mrecords.py 2008-06-08 18:10:55 UTC (rev 5263) +++ trunk/numpy/ma/mrecords.py 2008-06-08 23:04:42 UTC (rev 5264) @@ -72,7 +72,7 @@ elif isinstance(names, str): new_names = names.split(',') else: - raise NameError, "illegal input names %s" % `names` + raise NameError("illegal input names %s" % `names`) nnames = len(new_names) if nnames < ndescr: new_names += default_names[nnames:] @@ -85,7 +85,7 @@ ndescr.append(t) else: ndescr.append((n,t[1])) - return numeric.dtype(ndescr) + return np.dtype(ndescr) def _get_fieldmask(self): @@ -125,7 +125,7 @@ mdtype = [(k,'|b1') for (k,_) in self.dtype.descr] if mask is nomask or not np.size(mask): if not keep_mask: - self._fieldmask = tuple([False]*len(mdtype)) + self._mask = tuple([False]*len(mdtype)) else: mask = np.array(mask, copy=copy) if mask.shape != self.shape: @@ -144,79 +144,40 @@ self._sharedmask = True else: if mask.dtype == mdtype: - _fieldmask = mask + _mask = mask else: - _fieldmask = np.array([tuple([m]*len(mdtype)) for m in mask], - dtype=mdtype) - self._fieldmask = _fieldmask + _mask = np.array([tuple([m]*len(mdtype)) for m in mask], + dtype=mdtype) + self._mask = _mask return self #...................................................... def __array_finalize__(self,obj): + MaskedArray._update_from(self,obj) # Make sure we have a _fieldmask by default .. 
_fieldmask = getattr(obj, '_fieldmask', None) if _fieldmask is None: mdescr = [(n,'|b1') for (n,_) in self.dtype.descr] - _mask = getattr(obj, '_mask', nomask) - if _mask is nomask: - _fieldmask = np.empty(self.shape, dtype=mdescr).view(recarray) - _fieldmask.flat = tuple([False]*len(mdescr)) + objmask = getattr(obj, '_mask', nomask) + if objmask is nomask: + _mask = np.empty(self.shape, dtype=mdescr).view(recarray) + _mask.flat = tuple([False]*len(mdescr)) else: - _fieldmask = narray([tuple([m]*len(mdescr)) for m in _mask], - dtype=mdescr).view(recarray) - # Update some of the attributes - if obj is not None: - _baseclass = getattr(obj,'_baseclass',type(obj)) + _mask = narray([tuple([m]*len(mdescr)) for m in objmask], + dtype=mdescr).view(recarray) else: - _baseclass = recarray - attrdict = dict(_fieldmask=_fieldmask, - _hardmask=getattr(obj,'_hardmask',False), - _fill_value=getattr(obj,'_fill_value',None), - _sharedmask=getattr(obj,'_sharedmask',False), - _baseclass=_baseclass) - self.__dict__.update(attrdict) - # Finalize as a regular maskedarray ..... - # Update special attributes ... - self._basedict = getattr(obj, '_basedict', getattr(obj,'__dict__',{})) - self.__dict__.update(self._basedict) + _mask = _fieldmask + # Update some of the attributes + _locdict = self.__dict__ + if _locdict['_baseclass'] == ndarray: + _locdict['_baseclass'] = recarray + _locdict.update(_mask=_mask, _fieldmask=_mask) return - #...................................................... + def _getdata(self): "Returns the data as a recarray." return ndarray.view(self,recarray) _data = property(fget=_getdata) - #...................................................... - def __setmask__(self, mask): - "Sets the mask and update the fieldmask." - names = self.dtype.names - fmask = self.__dict__['_fieldmask'] - # - if isinstance(mask,ndarray) and mask.dtype.names == names: - for n in names: - fmask[n] = mask[n].astype(bool) -# self.__dict__['_fieldmask'] = fmask.view(recarray) - return - newmask = make_mask(mask, copy=False) - if names is not None: - if self._hardmask: - for n in names: - fmask[n].__ior__(newmask) - else: - for n in names: - fmask[n].flat = newmask - return - _setmask = __setmask__ - # - def _getmask(self): - """Return the mask of the mrecord. - A record is masked when all the fields are masked. - """ - if self.size > 1: - return self._fieldmask.view((bool_, len(self.dtype))).all(1) - else: - return self._fieldmask.view((bool_, len(self.dtype))).all() - mask = _mask = property(fget=_getmask, fset=_setmask) - #...................................................... def __len__(self): "Returns the length" # We have more than one record @@ -224,88 +185,104 @@ return len(self._data) # We have only one record: return the nb of fields return len(self.dtype) - #...................................................... + def __getattribute__(self, attr): - "Returns the given attribute." try: - # Returns a generic attribute - return object.__getattribute__(self,attr) - except AttributeError: - # OK, so attr must be a field name + return object.__getattribute__(self, attr) + except AttributeError: # attr must be a fieldname pass - # Get the list of fields ...... 
- _names = self.dtype.names - if attr in _names: - _data = self._data - _mask = self._fieldmask -# obj = masked_array(_data.__getattribute__(attr), copy=False, -# mask=_mask.__getattribute__(attr)) - # Use a view in order to avoid the copy of the mask in MaskedArray.__new__ - obj = narray(_data.__getattribute__(attr), copy=False).view(MaskedArray) - obj._mask = _mask.__getattribute__(attr) - if not obj.ndim and obj._mask: - return masked - return obj - raise AttributeError,"No attribute '%s' !" % attr + fielddict = ndarray.__getattribute__(self,'dtype').fields + try: + res = fielddict[attr][:2] + except (TypeError, KeyError): + raise AttributeError, "record array has no attribute %s" % attr + # So far, so good... + _localdict = ndarray.__getattribute__(self,'__dict__') + _data = ndarray.view(self, _localdict['_baseclass']) + obj = _data.getfield(*res) + if obj.dtype.fields: + raise NotImplementedError("MaskedRecords is currently limited to"\ + "simple records...") + obj = obj.view(MaskedArray) + obj._baseclass = ndarray + obj._isfield = True + # Get some special attributes + _fill_value = _localdict.get('_fill_value', None) + _mask = _localdict.get('_mask', None) + # Reset the object's mask + if _mask is not None: + try: + obj._mask = _mask[attr] + except IndexError: + # Couldn't find a mask: use the default (nomask) + pass + # Reset the field values + if _fill_value is not None: + try: + obj._fill_value = _fill_value[attr] + except ValueError: + obj._fill_value = None + return obj + def __setattr__(self, attr, val): "Sets the attribute attr to the value val." -# newattr = attr not in self.__dict__ + # Should we call __setmask__ first ? + if attr in ['_mask','mask','_fieldmask','fieldmask']: + self.__setmask__(val) + return + # Create a shortcut (so that we don't have to call getattr all the time) + _localdict = self.__dict__ + # Check whether we're creating a new field + newattr = attr not in _localdict try: # Is attr a generic attribute ? ret = object.__setattr__(self, attr, val) except: # Not a generic attribute: exit if it's not a valid field - fielddict = self.dtype.names or {} + fielddict = ndarray.__getattribute__(self,'dtype').fields or {} if attr not in fielddict: exctype, value = sys.exc_info()[:2] raise exctype, value else: - if attr in ['_mask','fieldmask']: - self.__setmask__(val) - return # Get the list of names ...... - _names = self.dtype.names - if _names is None: - _names = [] - else: - _names = list(_names) + fielddict = ndarray.__getattribute__(self,'dtype').fields or {} # Check the attribute - self_dict = self.__dict__ - if attr not in _names+list(self_dict): +##### _localdict = self.__dict__ + if attr not in fielddict: return ret - if attr not in self_dict: # We just added this one + if newattr: # We just added this one try: # or this setattr worked on an internal # attribute. object.__delattr__(self, attr) except: return ret - # Case #1.: Basic field ............ 
- base_fmask = self._fieldmask - _names = self.dtype.names or [] - _localdict = self.__dict__ - if attr in _names: - if val is masked: - _fill_value = _localdict['_fill_value'] - if _fill_value is not None: - fval = _fill_value[attr] - else: - fval = None - mval = True + # Let's try to set the field + try: + res = fielddict[attr][:2] + except (TypeError,KeyError): + raise AttributeError, "record array has no attribute %s" % attr + # + if val is masked: + _fill_value = _localdict['_fill_value'] + if _fill_value is not None: + dval = _localdict['_fill_value'][attr] else: - fval = filled(val) - mval = getmaskarray(val) - if self._hardmask: - mval = mask_or(mval, base_fmask.__getattr__(attr)) - self._data.__setattr__(attr, fval) - base_fmask.__setattr__(attr, mval) - return - #............................................ + dval = val + mval = True + else: + dval = filled(val) + mval = getmaskarray(val) + obj = ndarray.__getattribute__(self,'_data').setfield(dval, *res) + _localdict['_mask'].__setitem__(attr, mval) + return obj + + def __getitem__(self, indx): """Returns all the fields sharing the same fieldname base. The fieldname base is either `_data` or `_mask`.""" _localdict = self.__dict__ - _fieldmask = _localdict['_fieldmask'] + _mask = _localdict['_fieldmask'] _data = self._data # We want a field ........ if isinstance(indx, basestring): @@ -314,7 +291,7 @@ #!!!: ...that break propagation #!!!: Don't force the mask to nomask, that wrecks easy masking obj = _data[indx].view(MaskedArray) - obj._mask = _fieldmask[indx] + obj._mask = _mask[indx] obj._sharedmask = True fval = _localdict['_fill_value'] if fval is not None: @@ -325,47 +302,17 @@ return obj # We want some elements .. # First, the data ........ - obj = narray(_data[indx], copy=False).view(mrecarray) - obj._fieldmask = narray(_fieldmask[indx], copy=False).view(recarray) + obj = np.array(_data[indx], copy=False).view(mrecarray) + obj._mask = np.array(_mask[indx], copy=False).view(recarray) return obj #.... def __setitem__(self, indx, value): "Sets the given record to value." MaskedArray.__setitem__(self, indx, value) if isinstance(indx, basestring): - self._fieldmask[indx] = ma.getmaskarray(value) + self._mask[indx] = ma.getmaskarray(value) - #............................................ - def __setslice__(self, i, j, value): - "Sets the slice described by [i,j] to `value`." - _localdict = self.__dict__ - d = self._data - m = _localdict['_fieldmask'] - names = self.dtype.names - if value is masked: - for n in names: - m[i:j][n] = True - elif not self._hardmask: - fval = filled(value) - mval = getmaskarray(value) - for n in names: - d[n][i:j] = fval - m[n][i:j] = mval - else: - mindx = getmaskarray(self)[i:j] - dval = np.asarray(value) - valmask = getmask(value) - if valmask is nomask: - for n in names: - mval = mask_or(m[n][i:j], valmask) - d[n][i:j][~mval] = value - elif valmask.size > 1: - for n in names: - mval = mask_or(m[n][i:j], valmask) - d[n][i:j][~mval] = dval[~mval] - m[n][i:j] = mask_or(m[n][i:j], mval) - self._fieldmask = m - #...................................................... + def __str__(self): "Calculates the string representation." if self.size > 1: @@ -394,55 +341,25 @@ return ndarray.view(self, obj) except TypeError: pass - dtype = np.dtype(obj) - if dtype.fields is None: - return self.__array__().view(dtype) + dtype_ = np.dtype(obj) + if dtype_.fields is None: + return self.__array__().view(dtype_) return ndarray.view(self, obj) - #...................................................... 
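
A minimal sketch of the field-access path above (not part of the patch): indexing a masked records array by field name yields a MaskedArray view carrying the matching column of the field mask. This assumes the ``numpy.ma.mrecords.fromarrays`` constructor and the hypothetical field names ``a`` and ``b``::

    import numpy.ma as ma
    from numpy.ma import mrecords

    a = ma.array([1, 2, 3], mask=[0, 1, 0])
    b = ma.array([1.1, 2.2, 3.3], mask=[0, 0, 1])
    # hypothetical field names, for illustration only
    rec = mrecords.fromarrays([a, b], names='a,b')
    # Field access returns a MaskedArray whose mask is the corresponding
    # column of the field mask, as in __getitem__ above.
    print(rec['a'])        # [1 -- 3]
    print(rec['a'].mask)   # [False  True False]
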
- def filled(self, fill_value=None): - """Returns an array of the same class as the _data part, where masked - values are filled with fill_value. - If fill_value is None, self.fill_value is used instead. - Subclassing is preserved. - - """ - _localdict = self.__dict__ - d = self._data - fm = _localdict['_fieldmask'] - if not np.asarray(fm, dtype=bool_).any(): - return d - # - if fill_value is None: - value = _check_fill_value(_localdict['_fill_value'], d.dtype) - else: - value = fill_value - if np.size(value) == 1: - value = np.array(tuple([value,] * len(d.dtype)), - dtype=d.dtype) - # - if self is masked: - result = np.asanyarray(value) - else: - result = d.copy() - for n in d.dtype.names: - np.putmask(np.asarray(result[n]), np.asarray(fm[n]), value[n]) - return result - #...................................................... def harden_mask(self): "Forces the mask to hard" self._hardmask = True def soften_mask(self): "Forces the mask to soft" self._hardmask = False - #...................................................... + def copy(self): """Returns a copy of the masked record.""" _localdict = self.__dict__ copied = self._data.copy().view(type(self)) copied._fieldmask = self._fieldmask.copy() return copied - #...................................................... + def tolist(self, fill_value=None): """Copy the data portion of the array to a hierarchical python list and returns that list. @@ -638,10 +555,10 @@ # Start the conversion loop ....... for f in arr: try: - val = int(f) + int(f) except ValueError: try: - val = float(f) + float(f) except ValueError: try: val = complex(f) Modified: trunk/numpy/ma/tests/test_core.py =================================================================== --- trunk/numpy/ma/tests/test_core.py 2008-06-08 18:10:55 UTC (rev 5263) +++ trunk/numpy/ma/tests/test_core.py 2008-06-08 23:04:42 UTC (rev 5264) @@ -1,4 +1,4 @@ -# pylint: disable-msg=W0611, W0612, W0614, W0511,R0201 +# pylint: disable-msg=W0401,W0511,W0611,W0612,W0614,R0201,E1102 """Tests suite for MaskedArray & subclassing. :author: Pierre Gerard-Marchant @@ -9,47 +9,77 @@ import types import warnings -import numpy +import numpy as np import numpy.core.fromnumeric as fromnumeric +from numpy import ndarray + + from numpy.testing import NumpyTest, NumpyTestCase from numpy.testing import set_local_path, restore_path from numpy.testing.utils import build_err_msg -from numpy import array as narray import numpy.ma.testutils -from numpy.ma.testutils import * +from numpy.ma.testutils import NumpyTestCase, \ + assert_equal, assert_array_equal, fail_if_equal, assert_not_equal, \ + assert_almost_equal, assert_mask_equal, assert_equal_records import numpy.ma.core as coremodule from numpy.ma.core import * -pi = numpy.pi +pi = np.pi set_local_path() from test_old_ma import * restore_path() #.............................................................................. -class TestMA(NumpyTestCase): +class TestMaskedArray(NumpyTestCase): "Base test class for MaskedArrays." + def __init__(self, *args, **kwds): NumpyTestCase.__init__(self, *args, **kwds) self.setUp() def setUp (self): "Base data definition." - x = narray([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) - y = narray([5.,0.,3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) + x = np.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) + y = np.array([5.,0.,3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) a10 = 10. 
m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 ,0, 1] xm = masked_array(x, mask=m1) ym = masked_array(y, mask=m2) - z = narray([-.5, 0., .5, .8]) + z = np.array([-.5, 0., .5, .8]) zm = masked_array(z, mask=[0,1,0,0]) - xf = numpy.where(m1, 1.e+20, x) + xf = np.where(m1, 1.e+20, x) xm.set_fill_value(1.e+20) self.d = (x, y, a10, m1, m2, xm, ym, z, zm, xf) - #........................ + + + def test_basicattributes(self): + "Tests some basic array attributes." + a = array([1,3,2]) + b = array([1,3,2], mask=[1,0,1]) + assert_equal(a.ndim, 1) + assert_equal(b.ndim, 1) + assert_equal(a.size, 3) + assert_equal(b.size, 3) + assert_equal(a.shape, (3,)) + assert_equal(b.shape, (3,)) + + + def test_basic0d(self): + "Checks masking a scalar" + x = masked_array(0) + assert_equal(str(x), '0') + x = masked_array(0,mask=True) + assert_equal(str(x), str(masked_print_option)) + x = masked_array(0, mask=False) + assert_equal(str(x), '0') + x = array(0, mask=1) + assert(x.filled().dtype is x.data.dtype) + + def test_basic1d(self): "Test of basic array creation and properties in 1 dimension." (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d @@ -58,7 +88,7 @@ assert((xm-ym).filled(0).any()) fail_if_equal(xm.mask.astype(int_), ym.mask.astype(int_)) s = x.shape - assert_equal(numpy.shape(xm), s) + assert_equal(np.shape(xm), s) assert_equal(xm.shape, s) assert_equal(xm.dtype, x.dtype) assert_equal(zm.dtype, z.dtype) @@ -67,7 +97,8 @@ assert_array_equal(xm, xf) assert_array_equal(filled(xm, 1.e20), xf) assert_array_equal(x, xm) - #........................ + + def test_basic2d(self): "Test of basic array creation and properties in 2 dimensions." (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d @@ -77,7 +108,7 @@ xm.shape = s ym.shape = s xf.shape = s - + # assert(not isMaskedArray(x)) assert(isMaskedArray(xm)) assert_equal(shape(xm), s) @@ -87,308 +118,27 @@ assert_equal(xm, xf) assert_equal(filled(xm, 1.e20), xf) assert_equal(x, xm) - #........................ - def test_basic_arithmetic (self): - "Test of basic arithmetic." - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - a2d = array([[1,2],[0,4]]) - a2dm = masked_array(a2d, [[0,0],[1,0]]) - assert_equal(a2d * a2d, a2d * a2dm) - assert_equal(a2d + a2d, a2d + a2dm) - assert_equal(a2d - a2d, a2d - a2dm) - for s in [(12,), (4,3), (2,6)]: - x = x.reshape(s) - y = y.reshape(s) - xm = xm.reshape(s) - ym = ym.reshape(s) - xf = xf.reshape(s) - assert_equal(-x, -xm) - assert_equal(x + y, xm + ym) - assert_equal(x - y, xm - ym) - assert_equal(x * y, xm * ym) - assert_equal(x / y, xm / ym) - assert_equal(a10 + y, a10 + ym) - assert_equal(a10 - y, a10 - ym) - assert_equal(a10 * y, a10 * ym) - assert_equal(a10 / y, a10 / ym) - assert_equal(x + a10, xm + a10) - assert_equal(x - a10, xm - a10) - assert_equal(x * a10, xm * a10) - assert_equal(x / a10, xm / a10) - assert_equal(x**2, xm**2) - assert_equal(abs(x)**2.5, abs(xm) **2.5) - assert_equal(x**y, xm**ym) - assert_equal(numpy.add(x,y), add(xm, ym)) - assert_equal(numpy.subtract(x,y), subtract(xm, ym)) - assert_equal(numpy.multiply(x,y), multiply(xm, ym)) - assert_equal(numpy.divide(x,y), divide(xm, ym)) - #........................ - def test_mixed_arithmetic(self): - "Tests mixed arithmetics." - na = narray([1]) - ma = array([1]) - self.failUnless(isinstance(na + ma, MaskedArray)) - self.failUnless(isinstance(ma + na, MaskedArray)) - #........................ 
- def test_inplace_arithmetic(self): - """Test of inplace operations and rich comparisons""" - # addition - x = arange(10) - y = arange(10) - xm = arange(10) - xm[2] = masked - x += 1 - assert_equal(x, y+1) - xm += 1 - assert_equal(xm, y+1) - # subtraction - x = arange(10) - xm = arange(10) - xm[2] = masked - x -= 1 - assert_equal(x, y-1) - xm -= 1 - assert_equal(xm, y-1) - # multiplication - x = arange(10)*1.0 - xm = arange(10)*1.0 - xm[2] = masked - x *= 2.0 - assert_equal(x, y*2) - xm *= 2.0 - assert_equal(xm, y*2) - # division - x = arange(10)*2 - xm = arange(10)*2 - xm[2] = masked - x /= 2 - assert_equal(x, y) - xm /= 2 - assert_equal(xm, y) - # division, pt 2 - x = arange(10)*1.0 - xm = arange(10)*1.0 - xm[2] = masked - x /= 2.0 - assert_equal(x, y/2.0) - xm /= arange(10) - assert_equal(xm, ones((10,))) - warnings.simplefilter('ignore', DeprecationWarning) - x = arange(10).astype(float_) - xm = arange(10) - xm[2] = masked - id1 = x.raw_data().ctypes.data - x += 1. - assert (id1 == x.raw_data().ctypes.data) - assert_equal(x, y+1.) - warnings.simplefilter('default', DeprecationWarning) - - # addition w/ array - x = arange(10, dtype=float_) - xm = arange(10, dtype=float_) - xm[2] = masked - m = xm.mask - a = arange(10, dtype=float_) - a[-1] = masked - x += a - xm += a - assert_equal(x,y+a) - assert_equal(xm,y+a) - assert_equal(xm.mask, mask_or(m,a.mask)) - # subtraction w/ array - x = arange(10, dtype=float_) - xm = arange(10, dtype=float_) - xm[2] = masked - m = xm.mask - a = arange(10, dtype=float_) - a[-1] = masked - x -= a - xm -= a - assert_equal(x,y-a) - assert_equal(xm,y-a) - assert_equal(xm.mask, mask_or(m,a.mask)) - # multiplication w/ array - x = arange(10, dtype=float_) - xm = arange(10, dtype=float_) - xm[2] = masked - m = xm.mask - a = arange(10, dtype=float_) - a[-1] = masked - x *= a - xm *= a - assert_equal(x,y*a) - assert_equal(xm,y*a) - assert_equal(xm.mask, mask_or(m,a.mask)) - # division w/ array - x = arange(10, dtype=float_) - xm = arange(10, dtype=float_) - xm[2] = masked - m = xm.mask - a = arange(10, dtype=float_) - a[-1] = masked - x /= a - xm /= a - assert_equal(x,y/a) - assert_equal(xm,y/a) - assert_equal(xm.mask, mask_or(mask_or(m,a.mask), (a==0))) - # + def test_concatenate_basic(self): + "Tests concatenations." (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - z = xm/ym - assert_equal(z._mask, [1,1,1,0,0,1,1,0,0,0,1,1]) - assert_equal(z._data, [0.2,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.]) - xm = xm.copy() - xm /= ym - assert_equal(xm._mask, [1,1,1,0,0,1,1,0,0,0,1,1]) - assert_equal(xm._data, [1/5.,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.]) + # basic concatenation + assert_equal(np.concatenate((x,y)), concatenate((xm,ym))) + assert_equal(np.concatenate((x,y)), concatenate((x,y))) + assert_equal(np.concatenate((x,y)), concatenate((xm,y))) + assert_equal(np.concatenate((x,y,x)), concatenate((x,ym,x))) - def test_inplace_arithmetixx(self): - tiny = numpy.finfo(float).tiny - a = array([tiny, 1./tiny, 0.]) - assert_equal(getmaskarray(a/2), [0,0,0]) - assert_equal(getmaskarray(2/a), [1,0,1]) - - #.......................... - def test_scalararithmetic(self): - "Tests some scalar arithmetics on MaskedArrays." 
- xm = array(0, mask=1) - assert((1/array(0)).mask) - assert((1 + xm).mask) - assert((-xm).mask) - assert((-xm).mask) - assert(maximum(xm, xm).mask) - assert(minimum(xm, xm).mask) - assert(xm.filled().dtype is xm.data.dtype) - x = array(0, mask=0) - assert_equal(x.filled().ctypes.data, x.ctypes.data) - assert_equal(str(xm), str(masked_print_option)) - # Make sure we don't lose the shape in some circumstances - xm = array((0,0))/0. - assert_equal(xm.shape,(2,)) - assert_equal(xm.mask,[1,1]) - #......................... - def test_basic_ufuncs (self): - "Test various functions such as sin, cos." - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - assert_equal(numpy.cos(x), cos(xm)) - assert_equal(numpy.cosh(x), cosh(xm)) - assert_equal(numpy.sin(x), sin(xm)) - assert_equal(numpy.sinh(x), sinh(xm)) - assert_equal(numpy.tan(x), tan(xm)) - assert_equal(numpy.tanh(x), tanh(xm)) - assert_equal(numpy.sqrt(abs(x)), sqrt(xm)) - assert_equal(numpy.log(abs(x)), log(xm)) - assert_equal(numpy.log10(abs(x)), log10(xm)) - assert_equal(numpy.exp(x), exp(xm)) - assert_equal(numpy.arcsin(z), arcsin(zm)) - assert_equal(numpy.arccos(z), arccos(zm)) - assert_equal(numpy.arctan(z), arctan(zm)) - assert_equal(numpy.arctan2(x, y), arctan2(xm, ym)) - assert_equal(numpy.absolute(x), absolute(xm)) - assert_equal(numpy.equal(x,y), equal(xm, ym)) - assert_equal(numpy.not_equal(x,y), not_equal(xm, ym)) - assert_equal(numpy.less(x,y), less(xm, ym)) - assert_equal(numpy.greater(x,y), greater(xm, ym)) - assert_equal(numpy.less_equal(x,y), less_equal(xm, ym)) - assert_equal(numpy.greater_equal(x,y), greater_equal(xm, ym)) - assert_equal(numpy.conjugate(x), conjugate(xm)) - #........................ - def test_count_func (self): - "Tests count" - ott = array([0.,1.,2.,3.], mask=[1,0,0,0]) - assert( isinstance(count(ott), int)) - assert_equal(3, count(ott)) - assert_equal(1, count(1)) - assert_equal(0, array(1,mask=[1])) - ott = ott.reshape((2,2)) - assert isinstance(count(ott,0), ndarray) - assert isinstance(count(ott), types.IntType) - assert_equal(3, count(ott)) - assert getmask(count(ott,0)) is nomask - assert_equal([1,2],count(ott,0)) - #........................ - def test_minmax_func (self): - "Tests minimum and maximum." - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - xr = numpy.ravel(x) #max doesn't work if shaped - xmr = ravel(xm) - assert_equal(max(xr), maximum(xmr)) #true because of careful selection of data - assert_equal(min(xr), minimum(xmr)) #true because of careful selection of data - # - assert_equal(minimum([1,2,3],[4,0,9]), [1,0,3]) - assert_equal(maximum([1,2,3],[4,0,9]), [4,2,9]) - x = arange(5) - y = arange(5) - 2 - x[3] = masked - y[0] = masked - assert_equal(minimum(x,y), where(less(x,y), x, y)) - assert_equal(maximum(x,y), where(greater(x,y), x, y)) - assert minimum(x) == 0 - assert maximum(x) == 4 - # - x = arange(4).reshape(2,2) - x[-1,-1] = masked - assert_equal(maximum(x), 2) - - def test_minmax_methods(self): - "Additional tests on max/min" - (_, _, _, _, _, xm, _, _, _, _) = self.d - xm.shape = (xm.size,) - assert_equal(xm.max(), 10) - assert(xm[0].max() is masked) - assert(xm[0].max(0) is masked) - assert(xm[0].max(-1) is masked) - assert_equal(xm.min(), -10.) - assert(xm[0].min() is masked) - assert(xm[0].min(0) is masked) - assert(xm[0].min(-1) is masked) - assert_equal(xm.ptp(), 20.) 
- assert(xm[0].ptp() is masked) - assert(xm[0].ptp(0) is masked) - assert(xm[0].ptp(-1) is masked) - # - x = array([1,2,3], mask=True) - assert(x.min() is masked) - assert(x.max() is masked) - assert(x.ptp() is masked) - #........................ - def test_addsumprod (self): - "Tests add, sum, product." - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - assert_equal(numpy.add.reduce(x), add.reduce(x)) - assert_equal(numpy.add.accumulate(x), add.accumulate(x)) - assert_equal(4, sum(array(4),axis=0)) - assert_equal(4, sum(array(4), axis=0)) - assert_equal(numpy.sum(x,axis=0), sum(x,axis=0)) - assert_equal(numpy.sum(filled(xm,0),axis=0), sum(xm,axis=0)) - assert_equal(numpy.sum(x,0), sum(x,0)) - assert_equal(numpy.product(x,axis=0), product(x,axis=0)) - assert_equal(numpy.product(x,0), product(x,0)) - assert_equal(numpy.product(filled(xm,1),axis=0), product(xm,axis=0)) - s = (3,4) - x.shape = y.shape = xm.shape = ym.shape = s - if len(s) > 1: - assert_equal(numpy.concatenate((x,y),1), concatenate((xm,ym),1)) - assert_equal(numpy.add.reduce(x,1), add.reduce(x,1)) - assert_equal(numpy.sum(x,1), sum(x,1)) - assert_equal(numpy.product(x,1), product(x,1)) - #......................... - def test_concat(self): + def test_concatenate_alongaxis(self): "Tests concatenations." (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - # basic concatenation - assert_equal(numpy.concatenate((x,y)), concatenate((xm,ym))) - assert_equal(numpy.concatenate((x,y)), concatenate((x,y))) - assert_equal(numpy.concatenate((x,y)), concatenate((xm,y))) - assert_equal(numpy.concatenate((x,y,x)), concatenate((x,ym,x))) # Concatenation along an axis s = (3,4) x.shape = y.shape = xm.shape = ym.shape = s - assert_equal(xm.mask, numpy.reshape(m1, s)) - assert_equal(ym.mask, numpy.reshape(m2, s)) + assert_equal(xm.mask, np.reshape(m1, s)) + assert_equal(ym.mask, np.reshape(m2, s)) xmym = concatenate((xm,ym),1) - assert_equal(numpy.concatenate((x,y),1), xmym) - assert_equal(numpy.concatenate((xm.mask,ym.mask),1), xmym._mask) + assert_equal(np.concatenate((x,y),1), xmym) + assert_equal(np.concatenate((xm.mask,ym.mask),1), xmym._mask) # x=zeros(2) y=array(ones(2),mask=[False,True]) @@ -399,16 +149,75 @@ assert_array_equal(z,[1,1,0,0]) assert_array_equal(z.mask,[False,True,False,False]) - #........................ + def test_creation_ndmin(self): + "Check the use of ndmin" + x = array([1,2,3],mask=[1,0,0], ndmin=2) + assert_equal(x.shape,(1,3)) + assert_equal(x._data,[[1,2,3]]) + assert_equal(x._mask,[[1,0,0]]) + + def test_creation_maskcreation(self): + "Tests how masks are initialized at the creation of Maskedarrays." + data = arange(24, dtype=float_) + data[[3,6,15]] = masked + dma_1 = MaskedArray(data) + assert_equal(dma_1.mask, data.mask) + dma_2 = MaskedArray(dma_1) + assert_equal(dma_2.mask, dma_1.mask) + dma_3 = MaskedArray(dma_1, mask=[1,0,0,0]*6) + fail_if_equal(dma_3.mask, dma_1.mask) + + def test_asarray(self): + (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d + xm.fill_value = -9999 + xmm = asarray(xm) + assert_equal(xmm._data, xm._data) + assert_equal(xmm._mask, xm._mask) + assert_equal(xmm.fill_value, xm.fill_value) + + def test_fix_invalid(self): + "Checks fix_invalid." 
+ data = masked_array(np.sqrt([-1., 0., 1.]), mask=[0,0,1]) + data_fixed = fix_invalid(data) + assert_equal(data_fixed._data, [data.fill_value, 0., 1.]) + assert_equal(data_fixed._mask, [1., 0., 1.]) + + def test_maskedelement(self): + "Test of masked element" + x = arange(6) + x[1] = masked + assert(str(masked) == '--') + assert(x[1] is masked) + assert_equal(filled(x[1], 0), 0) + # don't know why these should raise an exception... + #self.failUnlessRaises(Exception, lambda x,y: x+y, masked, masked) + #self.failUnlessRaises(Exception, lambda x,y: x+y, masked, 2) + #self.failUnlessRaises(Exception, lambda x,y: x+y, masked, xx) + #self.failUnlessRaises(Exception, lambda x,y: x+y, xx, masked) + + def test_set_element_as_object(self): + """Tests setting elements with object""" + a = empty(1,dtype=object) + x = (1,2,3,4,5) + a[0] = x + assert_equal(a[0], x) + assert(a[0] is x) + # + import datetime + dt = datetime.datetime.now() + a[0] = dt + assert(a[0] is dt) + + def test_indexing(self): "Tests conversions and indexing" - x1 = numpy.array([1,2,4,3]) + x1 = np.array([1,2,4,3]) x2 = array(x1, mask=[1,0,0,0]) x3 = array(x1, mask=[0,1,0,1]) x4 = array(x1) # test conversion to strings junk, garbage = str(x2), repr(x2) - assert_equal(numpy.sort(x1),sort(x2,endwith=False)) + assert_equal(np.sort(x1),sort(x2,endwith=False)) # tests of indexing assert type(x2[1]) is type(x1[1]) assert x1[1] == x2[1] @@ -435,21 +244,22 @@ x4[:] = masked_array([1,2,3,4],[0,1,1,0]) assert allequal(getmask(x4), array([0,1,1,0])) assert allequal(x4, array([1,2,3,4])) - x1 = numpy.arange(5)*1.0 + x1 = np.arange(5)*1.0 x2 = masked_values(x1, 3.0) assert_equal(x1,x2) assert allequal(array([0,0,0,1,0],MaskType), x2.mask) #FIXME: Well, eh, fill_value is now a property assert_equal(3.0, x2.fill_value()) assert_equal(3.0, x2.fill_value) x1 = array([1,'hello',2,3],object) - x2 = numpy.array([1,'hello',2,3],object) + x2 = np.array([1,'hello',2,3],object) s1 = x1[1] s2 = x2[1] assert_equal(type(s2), str) assert_equal(type(s1), str) assert_equal(s1, s2) assert x1[1:1].shape == (0,) - #........................ + + def test_copy(self): "Tests of some subtle points of copying and sizing." n = [0,0,1,0,0] @@ -460,7 +270,7 @@ assert(m is not m3) warnings.simplefilter('ignore', DeprecationWarning) - x1 = numpy.arange(5) + x1 = np.arange(5) y1 = array(x1, mask=m) #assert( y1._data is x1) assert_equal(y1._data.__array_interface__, x1.__array_interface__) @@ -515,48 +325,56 @@ y = masked_array(x, copy=True) assert_not_equal(y._data.ctypes.data, x._data.ctypes.data) assert_not_equal(y._mask.ctypes.data, x._mask.ctypes.data) - #........................ 
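
A small usage sketch of the copy semantics checked by ``test_copy`` above (illustrative only)::

    import numpy.ma as ma

    x = ma.array([1, 2, 3], mask=[0, 1, 0])
    y = ma.array(x)              # default copy=False: the data buffer is shared
    z = ma.array(x, copy=True)   # copy=True: independent data and mask buffers
    x[0] = 99
    print(y[0])   # 99 -- view of the same data
    print(z[0])   # 1  -- unaffected copy
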
- def test_where(self): - "Test the where function" - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - d = where(xm>2,xm,-9) - assert_equal(d, [-9.,-9.,-9.,-9., -9., 4., -9., -9., 10., -9., -9., 3.]) - assert_equal(d._mask, xm._mask) - d = where(xm>2,-9,ym) - assert_equal(d, [5.,0.,3., 2., -1.,-9.,-9., -10., -9., 1., 0., -9.]) - assert_equal(d._mask, [1,0,1,0,0,0,1,0,0,0,0,0]) - d = where(xm>2, xm, masked) - assert_equal(d, [-9.,-9.,-9.,-9., -9., 4., -9., -9., 10., -9., -9., 3.]) - tmp = xm._mask.copy() - tmp[(xm<=2).filled(True)] = True - assert_equal(d._mask, tmp) + + + def test_pickling(self): + "Tests pickling" + import cPickle + a = arange(10) + a[::3] = masked + a.fill_value = 999 + a_pickled = cPickle.loads(a.dumps()) + assert_equal(a_pickled._mask, a._mask) + assert_equal(a_pickled._data, a._data) + assert_equal(a_pickled.fill_value, 999) # - ixm = xm.astype(int_) - d = where(ixm>2, ixm, masked) - assert_equal(d, [-9,-9,-9,-9, -9, 4, -9, -9, 10, -9, -9, 3]) - assert_equal(d.dtype, ixm.dtype) + a = array(np.matrix(range(10)), mask=[1,0,1,0,0]*2) + a_pickled = cPickle.loads(a.dumps()) + assert_equal(a_pickled._mask, a._mask) + assert_equal(a_pickled, a) + assert(isinstance(a_pickled._data,np.matrix)) + + + def test_single_element_subscript(self): + "Tests single element subscripts of Maskedarrays." + a = array([1,3,2]) + b = array([1,3,2], mask=[1,0,1]) + assert_equal(a[0].shape, ()) + assert_equal(b[0].shape, ()) + assert_equal(b[1].shape, ()) + + + def test_topython(self): + "Tests some communication issues with Python." + assert_equal(1, int(array(1))) + assert_equal(1.0, float(array(1))) + assert_equal(1, int(array([[[1]]]))) + assert_equal(1.0, float(array([[1]]))) + self.assertRaises(TypeError, float, array([1,1])) # - x = arange(10) - x[3] = masked - c = x >= 8 - z = where(c , x, masked) - assert z.dtype is x.dtype - assert z[3] is masked - assert z[4] is masked - assert z[7] is masked - assert z[8] is not masked - assert z[9] is not masked - assert_equal(x,z) + warnings.simplefilter('ignore',UserWarning) + assert np.isnan(float(array([1],mask=[1]))) + warnings.simplefilter('default',UserWarning) # - z = where(c , masked, x) - assert z.dtype is x.dtype - assert z[3] is masked - assert z[4] is not masked - assert z[7] is not masked - assert z[8] is masked - assert z[9] is masked + a = array([1,2,3],mask=[1,0,0]) + self.assertRaises(TypeError, lambda:float(a)) + assert_equal(float(a[-1]), 3.) + assert(np.isnan(float(a[0]))) + self.assertRaises(TypeError, int, a) + assert_equal(int(a[-1]), 3) + self.assertRaises(MAError, lambda:int(a[0])) - #........................ + def test_oddfeatures_1(self): "Test of other odd features" x = arange(20) @@ -568,7 +386,7 @@ assert_equal(z.imag, 10*x) assert_equal((z*conjugate(z)).real, 101*x*x) z.imag[...] = 0.0 - + # x = arange(10) x[3] = masked assert str(x[3]) == str(masked) @@ -584,8 +402,8 @@ assert z[8] is masked assert z[9] is masked assert_equal(x,z) - # - #........................ + + def test_oddfeatures_2(self): "Tests some more features." 
x = array([1.,2.,3.,4.,5.]) @@ -600,22 +418,8 @@ assert z[1] is not masked assert z[2] is masked # - x = arange(6) - x[5] = masked - y = arange(6)*10 - y[2] = masked - c = array([1,1,1,0,0,0], mask=[1,0,0,0,0,0]) - cm = c.filled(1) - z = where(c,x,y) - zm = where(cm,x,y) - assert_equal(z, zm) - assert getmask(zm) is nomask - assert_equal(zm, [0,1,2,30,40,50]) - z = where(c, masked, 1) - assert_equal(z, [99,99,99,1,1,1]) - z = where(c, 1, masked) - assert_equal(z, [99, 1, 1, 99, 99, 99]) - #........................ + + def test_oddfeatures_3(self): """Tests some generic features.""" atest = array([10], mask=True) @@ -624,57 +428,251 @@ atest[idx] = btest[idx] assert_equal(atest,[20]) #........................ - def test_oddfeatures_4(self): - """Tests some generic features.""" - atest = ones((10,10,10), dtype=float_) - btest = zeros(atest.shape, MaskType) - ctest = masked_where(btest,atest) - assert_equal(atest,ctest) - #........................ - def test_set_oddities(self): - """Tests setting elements with object""" - a = empty(1,dtype=object) - x = (1,2,3,4,5) - a[0] = x - assert_equal(a[0], x) - assert(a[0] is x) + +#------------------------------------------------------------------------------ + +class TestMaskedArrayArithmetic(NumpyTestCase): + "Base test class for MaskedArrays." + + def setUp (self): + "Base data definition." + x = np.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) + y = np.array([5.,0.,3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) + a10 = 10. + m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] + m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 ,0, 1] + xm = masked_array(x, mask=m1) + ym = masked_array(y, mask=m2) + z = np.array([-.5, 0., .5, .8]) + zm = masked_array(z, mask=[0,1,0,0]) + xf = np.where(m1, 1.e+20, x) + xm.set_fill_value(1.e+20) + self.d = (x, y, a10, m1, m2, xm, ym, z, zm, xf) + + + def test_basic_arithmetic (self): + "Test of basic arithmetic." + (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d + a2d = array([[1,2],[0,4]]) + a2dm = masked_array(a2d, [[0,0],[1,0]]) + assert_equal(a2d * a2d, a2d * a2dm) + assert_equal(a2d + a2d, a2d + a2dm) + assert_equal(a2d - a2d, a2d - a2dm) + for s in [(12,), (4,3), (2,6)]: + x = x.reshape(s) + y = y.reshape(s) + xm = xm.reshape(s) + ym = ym.reshape(s) + xf = xf.reshape(s) + assert_equal(-x, -xm) + assert_equal(x + y, xm + ym) + assert_equal(x - y, xm - ym) + assert_equal(x * y, xm * ym) + assert_equal(x / y, xm / ym) + assert_equal(a10 + y, a10 + ym) + assert_equal(a10 - y, a10 - ym) + assert_equal(a10 * y, a10 * ym) + assert_equal(a10 / y, a10 / ym) + assert_equal(x + a10, xm + a10) + assert_equal(x - a10, xm - a10) + assert_equal(x * a10, xm * a10) + assert_equal(x / a10, xm / a10) + assert_equal(x**2, xm**2) + assert_equal(abs(x)**2.5, abs(xm) **2.5) + assert_equal(x**y, xm**ym) + assert_equal(np.add(x,y), add(xm, ym)) + assert_equal(np.subtract(x,y), subtract(xm, ym)) + assert_equal(np.multiply(x,y), multiply(xm, ym)) + assert_equal(np.divide(x,y), divide(xm, ym)) + + def test_mixed_arithmetic(self): + "Tests mixed arithmetics." + na = np.array([1]) + ma = array([1]) + self.failUnless(isinstance(na + ma, MaskedArray)) + self.failUnless(isinstance(ma + na, MaskedArray)) + + + def test_limits_arithmetic(self): + tiny = np.finfo(float).tiny + a = array([tiny, 1./tiny, 0.]) + assert_equal(getmaskarray(a/2), [0,0,0]) + assert_equal(getmaskarray(2/a), [1,0,1]) + + def test_masked_singleton_arithmetic(self): + "Tests some scalar arithmetics on MaskedArrays." 
+ # Masked singleton should remain masked no matter what + xm = array(0, mask=1) + assert((1/array(0)).mask) + assert((1 + xm).mask) + assert((-xm).mask) + assert(maximum(xm, xm).mask) + assert(minimum(xm, xm).mask) + + def test_arithmetic_with_masked_singleton(self): + "Checks that there's no collapsing to masked" + x = masked_array([1,2]) + y = x * masked + assert_equal(y.shape, x.shape) + assert_equal(y._mask, [True, True]) + y = x[0] * masked + assert y is masked + y = x + masked + assert_equal(y.shape, x.shape) + assert_equal(y._mask, [True, True]) + + + + def test_scalar_arithmetic(self): + x = array(0, mask=0) + assert_equal(x.filled().ctypes.data, x.ctypes.data) + # Make sure we don't lose the shape in some circumstances + xm = array((0,0))/0. + assert_equal(xm.shape,(2,)) + assert_equal(xm.mask,[1,1]) + + def test_basic_ufuncs (self): + "Test various functions such as sin, cos." + (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d + assert_equal(np.cos(x), cos(xm)) + assert_equal(np.cosh(x), cosh(xm)) + assert_equal(np.sin(x), sin(xm)) + assert_equal(np.sinh(x), sinh(xm)) + assert_equal(np.tan(x), tan(xm)) + assert_equal(np.tanh(x), tanh(xm)) + assert_equal(np.sqrt(abs(x)), sqrt(xm)) + assert_equal(np.log(abs(x)), log(xm)) + assert_equal(np.log10(abs(x)), log10(xm)) + assert_equal(np.exp(x), exp(xm)) + assert_equal(np.arcsin(z), arcsin(zm)) + assert_equal(np.arccos(z), arccos(zm)) + assert_equal(np.arctan(z), arctan(zm)) + assert_equal(np.arctan2(x, y), arctan2(xm, ym)) + assert_equal(np.absolute(x), absolute(xm)) + assert_equal(np.equal(x,y), equal(xm, ym)) + assert_equal(np.not_equal(x,y), not_equal(xm, ym)) + assert_equal(np.less(x,y), less(xm, ym)) + assert_equal(np.greater(x,y), greater(xm, ym)) + assert_equal(np.less_equal(x,y), less_equal(xm, ym)) + assert_equal(np.greater_equal(x,y), greater_equal(xm, ym)) + assert_equal(np.conjugate(x), conjugate(xm)) + + + def test_count_func (self): + "Tests count" + ott = array([0.,1.,2.,3.], mask=[1,0,0,0]) + assert( isinstance(count(ott), int)) + assert_equal(3, count(ott)) + assert_equal(1, count(1)) + assert_equal(0, array(1,mask=[1])) + ott = ott.reshape((2,2)) + assert isinstance(count(ott,0), ndarray) + assert isinstance(count(ott), types.IntType) + assert_equal(3, count(ott)) + assert getmask(count(ott,0)) is nomask + assert_equal([1,2],count(ott,0)) + + def test_minmax_func (self): + "Tests minimum and maximum." 
+ (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d + xr = np.ravel(x) #max doesn't work if shaped + xmr = ravel(xm) + assert_equal(max(xr), maximum(xmr)) #true because of careful selection of data + assert_equal(min(xr), minimum(xmr)) #true because of careful selection of data # - import datetime - dt = datetime.datetime.now() - a[0] = dt - assert(a[0] is dt) + assert_equal(minimum([1,2,3],[4,0,9]), [1,0,3]) + assert_equal(maximum([1,2,3],[4,0,9]), [4,2,9]) + x = arange(5) + y = arange(5) - 2 + x[3] = masked + y[0] = masked + assert_equal(minimum(x,y), where(less(x,y), x, y)) + assert_equal(maximum(x,y), where(greater(x,y), x, y)) + assert minimum(x) == 0 + assert maximum(x) == 4 + # + x = arange(4).reshape(2,2) + x[-1,-1] = masked + assert_equal(maximum(x), 2) + + + def test_minmax_funcs_with_output(self): + "Tests the min/max functions with explicit outputs" + mask = np.random.rand(12).round() + xm = array(np.random.uniform(0,10,12),mask=mask) + xm.shape = (3,4) + for funcname in ('min', 'max'): + # Initialize + npfunc = getattr(np, funcname) + mafunc = getattr(coremodule, funcname) + # Use the np version + nout = np.empty((4,), dtype=int) + result = npfunc(xm,axis=0,out=nout) + assert(result is nout) + # Use the ma version + nout.fill(-999) + result = mafunc(xm,axis=0,out=nout) + assert(result is nout) + + + def test_minmax_methods(self): + "Additional tests on max/min" + (_, _, _, _, _, xm, _, _, _, _) = self.d + xm.shape = (xm.size,) + assert_equal(xm.max(), 10) + assert(xm[0].max() is masked) + assert(xm[0].max(0) is masked) + assert(xm[0].max(-1) is masked) + assert_equal(xm.min(), -10.) + assert(xm[0].min() is masked) + assert(xm[0].min(0) is masked) + assert(xm[0].min(-1) is masked) + assert_equal(xm.ptp(), 20.) + assert(xm[0].ptp() is masked) + assert(xm[0].ptp(0) is masked) + assert(xm[0].ptp(-1) is masked) + # + x = array([1,2,3], mask=True) + assert(x.min() is masked) + assert(x.max() is masked) + assert(x.ptp() is masked) #........................ - def test_maskingfunctions(self): - "Tests masking functions." - x = array([1.,2.,3.,4.,5.]) - x[2] = masked - assert_equal(masked_where(greater(x, 2), x), masked_greater(x,2)) - assert_equal(masked_where(greater_equal(x, 2), x), masked_greater_equal(x,2)) - assert_equal(masked_where(less(x, 2), x), masked_less(x,2)) - assert_equal(masked_where(less_equal(x, 2), x), masked_less_equal(x,2)) - assert_equal(masked_where(not_equal(x, 2), x), masked_not_equal(x,2)) - assert_equal(masked_where(equal(x, 2), x), masked_equal(x,2)) - assert_equal(masked_where(not_equal(x,2), x), masked_not_equal(x,2)) - assert_equal(masked_inside(range(5), 1, 3), [0, 199, 199, 199, 4]) - assert_equal(masked_outside(range(5), 1, 3),[199,1,2,3,199]) - assert_equal(masked_inside(array(range(5), mask=[1,0,0,0,0]), 1, 3).mask, [1,1,1,1,0]) - assert_equal(masked_outside(array(range(5), mask=[0,1,0,0,0]), 1, 3).mask, [1,1,0,0,1]) - assert_equal(masked_equal(array(range(5), mask=[1,0,0,0,0]), 2).mask, [1,0,1,0,0]) - assert_equal(masked_not_equal(array([2,2,1,2,1], mask=[1,0,0,0,0]), 2).mask, [1,0,1,0,1]) - assert_equal(masked_where([1,1,0,0,0], [1,2,3,4,5]), [99,99,3,4,5]) - #........................ + def test_addsumprod (self): + "Tests add, sum, product." 
+ (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d + assert_equal(np.add.reduce(x), add.reduce(x)) + assert_equal(np.add.accumulate(x), add.accumulate(x)) + assert_equal(4, sum(array(4),axis=0)) + assert_equal(4, sum(array(4), axis=0)) + assert_equal(np.sum(x,axis=0), sum(x,axis=0)) + assert_equal(np.sum(filled(xm,0),axis=0), sum(xm,axis=0)) + assert_equal(np.sum(x,0), sum(x,0)) + assert_equal(np.product(x,axis=0), product(x,axis=0)) + assert_equal(np.product(x,0), product(x,0)) + assert_equal(np.product(filled(xm,1),axis=0), product(xm,axis=0)) + s = (3,4) + x.shape = y.shape = xm.shape = ym.shape = s + if len(s) > 1: + assert_equal(np.concatenate((x,y),1), concatenate((xm,ym),1)) + assert_equal(np.add.reduce(x,1), add.reduce(x,1)) + assert_equal(np.sum(x,1), sum(x,1)) + assert_equal(np.product(x,1), product(x,1)) + + + + def test_TakeTransposeInnerOuter(self): "Test of take, transpose, inner, outer products" x = arange(24) - y = numpy.arange(24) + y = np.arange(24) x[5:6] = masked x = x.reshape(2,3,4) y = y.reshape(2,3,4) - assert_equal(numpy.transpose(y,(2,0,1)), transpose(x,(2,0,1))) - assert_equal(numpy.take(y, (2,0,1), 1), take(x, (2,0,1), 1)) - assert_equal(numpy.inner(filled(x,0),filled(y,0)), + assert_equal(np.transpose(y,(2,0,1)), transpose(x,(2,0,1))) + assert_equal(np.take(y, (2,0,1), 1), take(x, (2,0,1), 1)) + assert_equal(np.inner(filled(x,0),filled(y,0)), inner(x, y)) - assert_equal(numpy.outer(filled(x,0),filled(y,0)), + assert_equal(np.outer(filled(x,0),filled(y,0)), outer(x, y)) y = array(['abc', 1, 'def', 2, 3], object) y[2] = masked @@ -682,166 +680,8 @@ assert t[0] == 'abc' assert t[1] == 2 assert t[2] == 3 - #....................... - def test_maskedelement(self): - "Test of masked element" - x = arange(6) - x[1] = masked - assert(str(masked) == '--') - assert(x[1] is masked) - assert_equal(filled(x[1], 0), 0) - # don't know why these should raise an exception... - #self.failUnlessRaises(Exception, lambda x,y: x+y, masked, masked) - #self.failUnlessRaises(Exception, lambda x,y: x+y, masked, 2) - #self.failUnlessRaises(Exception, lambda x,y: x+y, masked, xx) - #self.failUnlessRaises(Exception, lambda x,y: x+y, xx, masked) - #........................ - def test_scalar(self): - "Checks masking a scalar" - x = masked_array(0) - assert_equal(str(x), '0') - x = masked_array(0,mask=True) - assert_equal(str(x), str(masked_print_option)) - x = masked_array(0, mask=False) - assert_equal(str(x), '0') - #........................ - def test_usingmasked(self): - "Checks that there's no collapsing to masked" - x = masked_array([1,2]) - y = x * masked - assert_equal(y.shape, x.shape) - assert_equal(y._mask, [True, True]) - y = x[0] * masked - assert y is masked - y = x + masked - assert_equal(y.shape, x.shape) - assert_equal(y._mask, [True, True]) - #........................ - def test_topython(self): - "Tests some communication issues with Python." - assert_equal(1, int(array(1))) - assert_equal(1.0, float(array(1))) - assert_equal(1, int(array([[[1]]]))) - assert_equal(1.0, float(array([[1]]))) - self.assertRaises(TypeError, float, array([1,1])) - warnings.simplefilter('ignore',UserWarning) - assert numpy.isnan(float(array([1],mask=[1]))) - warnings.simplefilter('default',UserWarning) - # - a = array([1,2,3],mask=[1,0,0]) - self.assertRaises(TypeError, lambda:float(a)) - assert_equal(float(a[-1]), 3.) - assert(numpy.isnan(float(a[0]))) - self.assertRaises(TypeError, int, a) - assert_equal(int(a[-1]), 3) - self.assertRaises(MAError, lambda:int(a[0])) - #........................ 
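
A sketch of the convention exercised by ``test_TakeTransposeInnerOuter`` above: ``ma.inner`` and ``ma.outer`` treat masked entries as zeros, so they agree with the plain numpy functions applied to the zero-filled data (illustrative only)::

    import numpy as np
    import numpy.ma as ma

    x = ma.array([1., 2., 3.], mask=[0, 1, 0])
    y = ma.array([4., 5., 6.], mask=[0, 0, 1])
    print(ma.inner(x, y))                       # 4.0
    print(np.inner(x.filled(0), y.filled(0)))   # 4.0
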
- def test_arraymethods(self): - "Tests some MaskedArray methods." - a = array([1,3,2]) - b = array([1,3,2], mask=[1,0,1]) - assert_equal(a.any(), a.data.any()) - assert_equal(a.all(), a.data.all()) - assert_equal(a.argmax(), a.data.argmax()) - assert_equal(a.argmin(), a.data.argmin()) - assert_equal(a.choose(0,1,2,3,4), a.data.choose(0,1,2,3,4)) - assert_equal(a.compress([1,0,1]), a.data.compress([1,0,1])) - assert_equal(a.conj(), a.data.conj()) - assert_equal(a.conjugate(), a.data.conjugate()) - # - m = array([[1,2],[3,4]]) - assert_equal(m.diagonal(), m.data.diagonal()) - assert_equal(a.sum(), a.data.sum()) - assert_equal(a.take([1,2]), a.data.take([1,2])) - assert_equal(m.transpose(), m.data.transpose()) - #........................ - def test_basicattributes(self): - "Tests some basic array attributes." - a = array([1,3,2]) - b = array([1,3,2], mask=[1,0,1]) - assert_equal(a.ndim, 1) - assert_equal(b.ndim, 1) - assert_equal(a.size, 3) - assert_equal(b.size, 3) - assert_equal(a.shape, (3,)) - assert_equal(b.shape, (3,)) - #........................ - def test_single_element_subscript(self): - "Tests single element subscripts of Maskedarrays." - a = array([1,3,2]) - b = array([1,3,2], mask=[1,0,1]) - assert_equal(a[0].shape, ()) - assert_equal(b[0].shape, ()) - assert_equal(b[1].shape, ()) - #........................ - def test_maskcreation(self): - "Tests how masks are initialized at the creation of Maskedarrays." - data = arange(24, dtype=float_) - data[[3,6,15]] = masked - dma_1 = MaskedArray(data) - assert_equal(dma_1.mask, data.mask) - dma_2 = MaskedArray(dma_1) - assert_equal(dma_2.mask, dma_1.mask) - dma_3 = MaskedArray(dma_1, mask=[1,0,0,0]*6) - fail_if_equal(dma_3.mask, dma_1.mask) - - def test_pickling(self): - "Tests pickling" - import cPickle - a = arange(10) - a[::3] = masked - a.fill_value = 999 - a_pickled = cPickle.loads(a.dumps()) - assert_equal(a_pickled._mask, a._mask) - assert_equal(a_pickled._data, a._data) - assert_equal(a_pickled.fill_value, 999) - # - a = array(numpy.matrix(range(10)), mask=[1,0,1,0,0]*2) - a_pickled = cPickle.loads(a.dumps()) - assert_equal(a_pickled._mask, a._mask) - assert_equal(a_pickled, a) - assert(isinstance(a_pickled._data,numpy.matrix)) - # - def test_fillvalue(self): - "Having fun with the fill_value" - data = masked_array([1,2,3],fill_value=-999) - series = data[[0,2,1]] - assert_equal(series._fill_value, data._fill_value) - # - mtype = [('f',float_),('s','|S3')] - x = array([(1,'a'),(2,'b'),(numpy.pi,'pi')], dtype=mtype) - x.fill_value=999 - assert_equal(x.fill_value.item(),[999.,'999']) - assert_equal(x['f'].fill_value, 999) - assert_equal(x['s'].fill_value, '999') - # - x.fill_value=(9,'???') - assert_equal(x.fill_value.item(), (9,'???')) - assert_equal(x['f'].fill_value, 9) - assert_equal(x['s'].fill_value, '???') - # - x = array([1,2,3.1]) - x.fill_value = 999 - assert_equal(numpy.asarray(x.fill_value).dtype, float_) - assert_equal(x.fill_value, 999.) - # - def test_asarray(self): - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - xm.fill_value = -9999 - xmm = asarray(xm) - assert_equal(xmm._data, xm._data) - assert_equal(xmm._mask, xm._mask) - assert_equal(xmm.fill_value, xm.fill_value) - # - def test_fix_invalid(self): - "Checks fix_invalid." 
- data = masked_array(numpy.sqrt([-1., 0., 1.]), mask=[0,0,1]) - data_fixed = fix_invalid(data) - assert_equal(data_fixed._data, [data.fill_value, 0., 1.]) - assert_equal(data_fixed._mask, [1., 0., 1.]) - # def test_imag_real(self): "Check complex" xx = array([1+10j,20+2j], mask=[1,0]) @@ -851,42 +691,136 @@ assert_equal(xx.real,[1,20]) assert_equal(xx.real.filled(), [1e+20,20]) assert_equal(xx.real.dtype, xx._data.real.dtype) - # - def test_ndmin(self): - "Check the use of ndmin" - x = array([1,2,3],mask=[1,0,0], ndmin=2) - assert_equal(x.shape,(1,3)) - assert_equal(x._data,[[1,2,3]]) - assert_equal(x._mask,[[1,0,0]]) - # - def test_record(self): - "Check record access" - mtype = [('f',float_),('s','|S3')] - x = array([(1,'a'),(2,'b'),(numpy.pi,'pi')], dtype=mtype) - x[1] = masked + + + def test_methods_with_output(self): + xm = array(np.random.uniform(0,10,12)).reshape(3,4) + xm[:,0] = xm[0] = xm[-1,-1] = masked # - (xf, xs) = (x['f'], x['s']) - assert_equal(xf.data, [1,2,numpy.pi]) - assert_equal(xf.mask, [0,1,0]) - assert_equal(xf.dtype, float_) - assert_equal(xs.data, ['a', 'b', 'pi']) - assert_equal(xs.mask, [0,1,0]) - assert_equal(xs.dtype, '|S3') + funclist = ('sum','prod','var','std', 'max', 'min', 'ptp', 'mean',) + # + for funcname in funclist: + npfunc = getattr(np, funcname) + xmmeth = getattr(xm, funcname) + + # A ndarray as explicit input + output = np.empty(4, dtype=float) + output.fill(-9999) + result = npfunc(xm, axis=0,out=output) + # ... the result should be the given output + assert(result is output) + assert_equal(result, xmmeth(axis=0, out=output)) + # + output = empty(4, dtype=int) + result = xmmeth(axis=0, out=output) + assert(result is output) + assert(output[0] is masked) + +#------------------------------------------------------------------------------ + +class TestMaskedArrayAttributes(NumpyTestCase): + + + def test_keepmask(self): + "Tests the keep mask flag" + x = masked_array([1,2,3], mask=[1,0,0]) + mx = masked_array(x) + assert_equal(mx.mask, x.mask) + mx = masked_array(x, mask=[0,1,0], keep_mask=False) + assert_equal(mx.mask, [0,1,0]) + mx = masked_array(x, mask=[0,1,0], keep_mask=True) + assert_equal(mx.mask, [1,1,0]) + # We default to true + mx = masked_array(x, mask=[0,1,0]) + assert_equal(mx.mask, [1,1,0]) + + def test_hardmask(self): + "Test hard_mask" + d = arange(5) + n = [0,0,0,1,1] + m = make_mask(n) + xh = array(d, mask = m, hard_mask=True) + # We need to copy, to avoid updating d in xh! 
+ xs = array(d, mask = m, hard_mask=False, copy=True) + xh[[1,4]] = [10,40] + xs[[1,4]] = [10,40] + assert_equal(xh._data, [0,10,2,3,4]) + assert_equal(xs._data, [0,10,2,3,40]) + #assert_equal(xh.mask.ctypes.data, m.ctypes.data) + assert_equal(xs.mask, [0,0,0,1,0]) + assert(xh._hardmask) + assert(not xs._hardmask) + xh[1:4] = [10,20,30] + xs[1:4] = [10,20,30] + assert_equal(xh._data, [0,10,20,3,4]) + assert_equal(xs._data, [0,10,20,30,40]) + #assert_equal(xh.mask.ctypes.data, m.ctypes.data) + assert_equal(xs.mask, nomask) + xh[0] = masked + xs[0] = masked + assert_equal(xh.mask, [1,0,0,1,1]) + assert_equal(xs.mask, [1,0,0,0,0]) + xh[:] = 1 + xs[:] = 1 + assert_equal(xh._data, [0,1,1,3,4]) + assert_equal(xs._data, [1,1,1,1,1]) + assert_equal(xh.mask, [1,0,0,1,1]) + assert_equal(xs.mask, nomask) + # Switch to soft mask + xh.soften_mask() + xh[:] = arange(5) + assert_equal(xh._data, [0,1,2,3,4]) + assert_equal(xh.mask, nomask) + # Switch back to hard mask + xh.harden_mask() + xh[xh<3] = masked + assert_equal(xh._data, [0,1,2,3,4]) + assert_equal(xh._mask, [1,1,1,0,0]) + xh[filled(xh>1,False)] = 5 + assert_equal(xh._data, [0,1,2,5,5]) + assert_equal(xh._mask, [1,1,1,0,0]) + # + xh = array([[1,2],[3,4]], mask = [[1,0],[0,0]], hard_mask=True) + xh[0] = 0 + assert_equal(xh._data, [[1,0],[3,4]]) + assert_equal(xh._mask, [[1,0],[0,0]]) + xh[-1,-1] = 5 + assert_equal(xh._data, [[1,0],[3,5]]) + assert_equal(xh._mask, [[1,0],[0,0]]) + xh[filled(xh<5,False)] = 2 + assert_equal(xh._data, [[1,2],[2,5]]) + assert_equal(xh._mask, [[1,0],[0,0]]) + # + "Another test of hardmask" + d = arange(5) + n = [0,0,0,1,1] + m = make_mask(n) + xh = array(d, mask = m, hard_mask=True) + xh[4:5] = 999 + #assert_equal(xh.mask.ctypes.data, m.ctypes.data) + xh[0:1] = 999 + assert_equal(xh._data,[999,1,2,3,4]) + + def test_smallmask(self): + "Checks the behaviour of _smallmask" + a = arange(10) + a[1] = masked + a[1] = 1 + assert_equal(a._mask, nomask) + a = arange(10) + a._smallmask = False + a[1] = masked + a[1] = 1 + assert_equal(a._mask, zeros(10)) + + +#------------------------------------------------------------------------------ + +class TestFillingValues(NumpyTestCase): # - def test_set_records(self): - "Check setting an element of a record)" - mtype = [('f',float_),('s','|S3')] - x = array([(1,'a'),(2,'b'),(numpy.pi,'pi')], dtype=mtype) - x[0] = (10,'A') - (xf, xs) = (x['f'], x['s']) - assert_equal(xf.data, [10,2,numpy.pi]) - assert_equal(xf.dtype, float_) - assert_equal(xs.data, ['A', 'b', 'pi']) - assert_equal(xs.dtype, '|S3') - # - def test_check_fill_value(self): + def test_check_on_scalar(self): "Test _check_fill_value" - _check_fill_value = numpy.ma.core._check_fill_value + _check_fill_value = np.ma.core._check_fill_value # fval = _check_fill_value(0,int) assert_equal(fval, 0) @@ -900,33 +834,34 @@ # fval = _check_fill_value(1e+20,int) assert_equal(fval, default_fill_value(0)) - - def test_check_fill_value_with_fields(self): + + + def test_check_on_fields(self): "Tests _check_fill_value with records" - _check_fill_value = numpy.ma.core._check_fill_value - # + _check_fill_value = np.ma.core._check_fill_value ndtype = [('a',int),('b',float),('c',"|S3")] + # A check on a list should return a single record fval = _check_fill_value([-999,-999.9,"???"], ndtype) assert(isinstance(fval,ndarray)) assert_equal(fval.item(), [-999,-999.9,"???"]) - # + # A check on Non should output the defaults fval = _check_fill_value(None, ndtype) assert(isinstance(fval,ndarray)) assert_equal(fval.item(), [default_fill_value(0), 
default_fill_value(0.), default_fill_value("0")]) - # + #.....Using a flexi-ndarray as fill_value should work fill_val = np.array((-999,-999.9,"???"),dtype=ndtype) fval = _check_fill_value(fill_val, ndtype) assert(isinstance(fval,ndarray)) assert_equal(fval.item(), [-999,-999.9,"???"]) - # + #.....Using a flexi-ndarray w/ a different type shouldn't matter fill_val = np.array((-999,-999.9,"???"), dtype=[("A",int),("B",float),("C","|S3")]) fval = _check_fill_value(fill_val, ndtype) assert(isinstance(fval,ndarray)) assert_equal(fval.item(), [-999,-999.9,"???"]) - # + #.....Using an object-array shouldn't matter either fill_value = np.array((-999,-999.9,"???"), dtype=object) fval = _check_fill_value(fill_val, ndtype) assert(isinstance(fval,ndarray)) @@ -936,12 +871,13 @@ fval = _check_fill_value(fill_val, ndtype) assert(isinstance(fval,ndarray)) assert_equal(fval.item(), [-999,-999.9,"???"]) - # + #.....One-field-only flexi-ndarray should work as well ndtype = [("a",int)] fval = _check_fill_value(-999, ndtype) assert(isinstance(fval,ndarray)) assert_equal(fval.item(), (-999,)) - # + + def test_fillvalue_conversion(self): "Tests the behavior of fill_value during conversion" # We had a tailored comment to make sure special attributes are properly @@ -967,8 +903,31 @@ assert_equal(b['a'].fill_value, a.fill_value) -#............................................................................... + def test_fillvalue(self): + "Yet more fun with the fill_value" + data = masked_array([1,2,3],fill_value=-999) + series = data[[0,2,1]] + assert_equal(series._fill_value, data._fill_value) + # + mtype = [('f',float_),('s','|S3')] + x = array([(1,'a'),(2,'b'),(np.pi,'pi')], dtype=mtype) + x.fill_value=999 + assert_equal(x.fill_value.item(),[999.,'999']) + assert_equal(x['f'].fill_value, 999) + assert_equal(x['s'].fill_value, '999') + # + x.fill_value=(9,'???') + assert_equal(x.fill_value.item(), (9,'???')) + assert_equal(x['f'].fill_value, 9) + assert_equal(x['s'].fill_value, '???') + # + x = array([1,2,3.1]) + x.fill_value = 999 + assert_equal(np.asarray(x.fill_value).dtype, float_) + assert_equal(x.fill_value, 999.) +#------------------------------------------------------------------------------ + class TestUfuncs(NumpyTestCase): "Test class for the application of ufuncs on MaskedArrays." def setUp(self): @@ -997,7 +956,6 @@ 'less', 'greater', 'logical_and', 'logical_or', 'logical_xor', ]: - #print f try: uf = getattr(umath, f) except AttributeError: @@ -1028,31 +986,147 @@ assert_equal(amask.min(0), [5,6,7,8]) assert(amask.max(1)[0].mask) assert(amask.min(1)[0].mask) - #........................ - def test_minmax_funcs_with_out(self): - mask = numpy.random.rand(12).round() - xm = array(numpy.random.uniform(0,10,12),mask=mask) - xm.shape = (3,4) - for funcname in ('min', 'max'): - # Initialize - npfunc = getattr(numpy, funcname) - mafunc = getattr(coremodule, funcname) - # Use the np version - nout = np.empty((4,), dtype=int) - result = npfunc(xm,axis=0,out=nout) - assert(result is nout) - # Use the ma version - nout.fill(-999) - result = mafunc(xm,axis=0,out=nout) - assert(result is nout) -#............................................................................... 
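
A sketch of the explicit-output behaviour covered by the min/max tests above: when an ndarray is passed as ``out``, that same array is filled and returned (illustrative only)::

    import numpy as np
    import numpy.ma as ma

    xm = ma.array(np.arange(12.).reshape(3, 4),
                  mask=[[0, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0]])
    nout = np.empty(4)
    result = np.min(xm, axis=0, out=nout)
    print(result is nout)   # True
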
-class TestArrayMathMethods(NumpyTestCase): +#------------------------------------------------------------------------------ + +class TestMaskedArrayInPlaceArithmetics(NumpyTestCase): + "Test MaskedArray Arithmetics" + + def setUp(self): + x = arange(10) + y = arange(10) + xm = arange(10) + xm[2] = masked + self.intdata = (x, y, xm) + self.floatdata = (x.astype(float), y.astype(float), xm.astype(float)) + + def test_inplace_addition_scalar(self): + """Test of inplace additions""" + (x, y, xm) = self.intdata + xm[2] = masked + x += 1 + assert_equal(x, y+1) + xm += 1 + assert_equal(xm, y+1) + # + warnings.simplefilter('ignore', DeprecationWarning) + (x, _, xm) = self.floatdata + id1 = x.raw_data().ctypes.data + x += 1. + assert (id1 == x.raw_data().ctypes.data) + assert_equal(x, y+1.) + warnings.simplefilter('default', DeprecationWarning) + + def test_inplace_addition_array(self): + """Test of inplace additions""" + (x, y, xm) = self.intdata + m = xm.mask + a = arange(10, dtype=float) + a[-1] = masked + x += a + xm += a + assert_equal(x,y+a) + assert_equal(xm,y+a) + assert_equal(xm.mask, mask_or(m,a.mask)) + + def test_inplace_subtraction_scalar(self): + """Test of inplace subtractions""" + (x, y, xm) = self.intdata + x -= 1 + assert_equal(x, y-1) + xm -= 1 + assert_equal(xm, y-1) + + def test_inplace_subtraction_array(self): + """Test of inplace subtractions""" + (x, y, xm) = self.floatdata + m = xm.mask + a = arange(10, dtype=float_) + a[-1] = masked + x -= a + xm -= a + assert_equal(x,y-a) + assert_equal(xm,y-a) + assert_equal(xm.mask, mask_or(m,a.mask)) + + def test_inplace_multiplication_scalar(self): + """Test of inplace multiplication""" + (x, y, xm) = self.floatdata + x *= 2.0 + assert_equal(x, y*2) + xm *= 2.0 + assert_equal(xm, y*2) + + def test_inplace_multiplication_array(self): + """Test of inplace multiplication""" + (x, y, xm) = self.floatdata + m = xm.mask + a = arange(10, dtype=float_) + a[-1] = masked + x *= a + xm *= a + assert_equal(x,y*a) + assert_equal(xm,y*a) + assert_equal(xm.mask, mask_or(m,a.mask)) + + def test_inplace_division_scalar_int(self): + """Test of inplace division""" + (x, y, xm) = self.intdata + x = arange(10)*2 + xm = arange(10)*2 + xm[2] = masked + x /= 2 + assert_equal(x, y) + xm /= 2 + assert_equal(xm, y) + + def test_inplace_division_scalar_float(self): + """Test of inplace division""" + (x, y, xm) = self.floatdata + x /= 2.0 + assert_equal(x, y/2.0) + xm /= arange(10) + assert_equal(xm, ones((10,))) + + def test_inplace_division_array_float(self): + """Test of inplace division""" + (x, y, xm) = self.floatdata + m = xm.mask + a = arange(10, dtype=float_) + a[-1] = masked + x /= a + xm /= a + assert_equal(x,y/a) + assert_equal(xm,y/a) + assert_equal(xm.mask, mask_or(mask_or(m,a.mask), (a==0))) + + def test_inplace_division_misc(self): + # + x = np.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) + y = np.array([5.,0.,3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) + m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] + m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 ,0, 1] + xm = masked_array(x, mask=m1) + ym = masked_array(y, mask=m2) + # + z = xm/ym + assert_equal(z._mask, [1,1,1,0,0,1,1,0,0,0,1,1]) + assert_equal(z._data, [0.2,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.]) + # + xm = xm.copy() + xm /= ym + assert_equal(xm._mask, [1,1,1,0,0,1,1,0,0,0,1,1]) + assert_equal(xm._data, [1/5.,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.]) + + +#------------------------------------------------------------------------------ + +class TestMaskedArrayMethods(NumpyTestCase): 
"Test class for miscellaneous MaskedArrays methods." def setUp(self): "Base data definition." - x = numpy.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, + x = np.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, 8.43 , 7.78 , 9.865, 5.878, 8.979, 4.732, 3.012, 6.022, 5.095, 3.116, 5.238, 3.957, 6.04 , 9.63 , 7.712, 3.382, 4.489, 6.479, @@ -1061,7 +1135,7 @@ X = x.reshape(6,6) XX = x.reshape(3,2,2,3) - m = numpy.array([0, 1, 0, 1, 0, 0, + m = np.array([0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, @@ -1071,7 +1145,7 @@ mX = array(data=X,mask=m.reshape(X.shape)) mXX = array(data=XX,mask=m.reshape(XX.shape)) - m2 = numpy.array([1, 1, 0, 1, 0, 0, + m2 = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, @@ -1082,78 +1156,103 @@ m2XX = array(data=XX,mask=m2.reshape(XX.shape)) self.d = (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) - #------------------------------------------------------ - def test_trace(self): - "Tests trace on MaskedArrays." - (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d - mXdiag = mX.diagonal() - assert_equal(mX.trace(), mX.diagonal().compressed().sum()) - assert_almost_equal(mX.trace(), - X.trace() - sum(mXdiag.mask*X.diagonal(),axis=0)) + def test_generic_methods(self): + "Tests some MaskedArray methods." + a = array([1,3,2]) + b = array([1,3,2], mask=[1,0,1]) + assert_equal(a.any(), a.data.any()) + assert_equal(a.all(), a.data.all()) + assert_equal(a.argmax(), a.data.argmax()) + assert_equal(a.argmin(), a.data.argmin()) + assert_equal(a.choose(0,1,2,3,4), a.data.choose(0,1,2,3,4)) + assert_equal(a.compress([1,0,1]), a.data.compress([1,0,1])) + assert_equal(a.conj(), a.data.conj()) + assert_equal(a.conjugate(), a.data.conjugate()) + # + m = array([[1,2],[3,4]]) + assert_equal(m.diagonal(), m.data.diagonal()) + assert_equal(a.sum(), a.data.sum()) + assert_equal(a.take([1,2]), a.data.take([1,2])) + assert_equal(m.transpose(), m.data.transpose()) - def test_clip(self): - "Tests clip on MaskedArrays." - (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d - clipped = mx.clip(2,8) - assert_equal(clipped.mask,mx.mask) - assert_equal(clipped.data,x.clip(2,8)) - assert_equal(clipped.data,mx.data.clip(2,8)) - def test_ptp(self): - "Tests ptp on MaskedArrays." 
- (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d - (n,m) = X.shape - assert_equal(mx.ptp(),mx.compressed().ptp()) - rows = numpy.zeros(n,numpy.float_) - cols = numpy.zeros(m,numpy.float_) - for k in range(m): - cols[k] = mX[:,k].compressed().ptp() - for k in range(n): - rows[k] = mX[k].compressed().ptp() - assert_equal(mX.ptp(0),cols) - assert_equal(mX.ptp(1),rows) + def test_allany(self): + """Checks the any/all methods/functions.""" + x = np.array([[ 0.13, 0.26, 0.90], + [ 0.28, 0.33, 0.63], + [ 0.31, 0.87, 0.70]]) + m = np.array([[ True, False, False], + [False, False, False], + [True, True, False]], dtype=np.bool_) + mx = masked_array(x, mask=m) + xbig = np.array([[False, False, True], + [False, False, True], + [False, True, True]], dtype=np.bool_) + mxbig = (mx > 0.5) + mxsmall = (mx < 0.5) + # + assert (mxbig.all()==False) + assert (mxbig.any()==True) + assert_equal(mxbig.all(0),[False, False, True]) + assert_equal(mxbig.all(1), [False, False, True]) + assert_equal(mxbig.any(0),[False, False, True]) + assert_equal(mxbig.any(1), [True, True, True]) + # + assert (mxsmall.all()==False) + assert (mxsmall.any()==True) + assert_equal(mxsmall.all(0), [True, True, False]) + assert_equal(mxsmall.all(1), [False, False, False]) + assert_equal(mxsmall.any(0), [True, True, False]) + assert_equal(mxsmall.any(1), [True, True, False]) - def test_swapaxes(self): - "Tests swapaxes on MaskedArrays." - (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d - mXswapped = mX.swapaxes(0,1) - assert_equal(mXswapped[-1],mX[:,-1]) - mXXswapped = mXX.swapaxes(0,2) - assert_equal(mXXswapped.shape,(2,2,3,3)) - def test_cumsumprod(self): - "Tests cumsum & cumprod on MaskedArrays." - (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d - mXcp = mX.cumsum(0) - assert_equal(mXcp.data,mX.filled(0).cumsum(0)) - mXcp = mX.cumsum(1) - assert_equal(mXcp.data,mX.filled(0).cumsum(1)) + def test_allany_onmatrices(self): + x = np.array([[ 0.13, 0.26, 0.90], + [ 0.28, 0.33, 0.63], + [ 0.31, 0.87, 0.70]]) + X = np.matrix(x) + m = np.array([[ True, False, False], + [False, False, False], + [True, True, False]], dtype=np.bool_) + mX = masked_array(X, mask=m) + mXbig = (mX > 0.5) + mXsmall = (mX < 0.5) # - mXcp = mX.cumprod(0) - assert_equal(mXcp.data,mX.filled(1).cumprod(0)) - mXcp = mX.cumprod(1) - assert_equal(mXcp.data,mX.filled(1).cumprod(1)) + assert (mXbig.all()==False) + assert (mXbig.any()==True) + assert_equal(mXbig.all(0), np.matrix([False, False, True])) + assert_equal(mXbig.all(1), np.matrix([False, False, True]).T) + assert_equal(mXbig.any(0), np.matrix([False, False, True])) + assert_equal(mXbig.any(1), np.matrix([ True, True, True]).T) + # + assert (mXsmall.all()==False) + assert (mXsmall.any()==True) + assert_equal(mXsmall.all(0), np.matrix([True, True, False])) + assert_equal(mXsmall.all(1), np.matrix([False, False, False]).T) + assert_equal(mXsmall.any(0), np.matrix([True, True, False])) + assert_equal(mXsmall.any(1), np.matrix([True, True, False]).T) - def test_varstd(self): - "Tests var & std on MaskedArrays." 
- (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d - assert_almost_equal(mX.var(axis=None),mX.compressed().var()) - assert_almost_equal(mX.std(axis=None),mX.compressed().std()) - assert_almost_equal(mX.std(axis=None,ddof=1), - mX.compressed().std(ddof=1)) - assert_almost_equal(mX.var(axis=None,ddof=1), - mX.compressed().var(ddof=1)) - assert_equal(mXX.var(axis=3).shape,XX.var(axis=3).shape) - assert_equal(mX.var().shape,X.var().shape) - (mXvar0,mXvar1) = (mX.var(axis=0), mX.var(axis=1)) - assert_almost_equal(mX.var(axis=None,ddof=2),mX.compressed().var(ddof=2)) - assert_almost_equal(mX.std(axis=None,ddof=2),mX.compressed().std(ddof=2)) - for k in range(6): - assert_almost_equal(mXvar1[k],mX[k].compressed().var()) - assert_almost_equal(mXvar0[k],mX[:,k].compressed().var()) - assert_almost_equal(numpy.sqrt(mXvar0[k]), mX[:,k].compressed().std()) - def test_argmin(self): + def test_allany_oddities(self): + "Some fun with all and any" + store = empty(1, dtype=bool) + full = array([1,2,3], mask=True) + # + assert(full.all() is masked) + full.all(out=store) + assert(store) + assert(store._mask, True) + assert(store is not masked) + # + store = empty(1, dtype=bool) + assert(full.any() is masked) + full.any(out=store) + assert(not store) + assert(store._mask, True) + assert(store is not masked) + + + def test_argmax_argmin(self): "Tests argmin & argmax on MaskedArrays." (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d # @@ -1176,6 +1275,90 @@ assert_equal(mX.argmax(1), [2,4,1,1,4,1]) assert_equal(m2X.argmax(1), [2,4,1,1,1,1]) + + def test_clip(self): + "Tests clip on MaskedArrays." + x = np.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, + 8.43 , 7.78 , 9.865, 5.878, 8.979, 4.732, + 3.012, 6.022, 5.095, 3.116, 5.238, 3.957, + 6.04 , 9.63 , 7.712, 3.382, 4.489, 6.479, + 7.189, 9.645, 5.395, 4.961, 9.894, 2.893, + 7.357, 9.828, 6.272, 3.758, 6.693, 0.993]) + m = np.array([0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, + 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, + 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0]) + mx = array(x,mask=m) + clipped = mx.clip(2,8) + assert_equal(clipped.mask,mx.mask) + assert_equal(clipped.data,x.clip(2,8)) + assert_equal(clipped.data,mx.data.clip(2,8)) + + + def test_compress(self): + "test compress" + a = masked_array([1., 2., 3., 4., 5.], fill_value=9999) + condition = (a > 1.5) & (a < 3.5) + assert_equal(a.compress(condition),[2.,3.]) + # + a[[2,3]] = masked + b = a.compress(condition) + assert_equal(b._data,[2.,3.]) + assert_equal(b._mask,[0,1]) + assert_equal(b.fill_value,9999) + assert_equal(b,a[condition]) + # + condition = (a<4.) 
+ b = a.compress(condition) + assert_equal(b._data,[1.,2.,3.]) + assert_equal(b._mask,[0,0,1]) + assert_equal(b.fill_value,9999) + assert_equal(b,a[condition]) + # + a = masked_array([[10,20,30],[40,50,60]], mask=[[0,0,1],[1,0,0]]) + b = a.compress(a.ravel() >= 22) + assert_equal(b._data, [30, 40, 50, 60]) + assert_equal(b._mask, [1,1,0,0]) + # + x = np.array([3,1,2]) + b = a.compress(x >= 2, axis=1) + assert_equal(b._data, [[10,30],[40,60]]) + assert_equal(b._mask, [[0,1],[1,0]]) + + + def test_compressed(self): + "Tests compressed" + a = array([1,2,3,4],mask=[0,0,0,0]) + b = a.compressed() + assert_equal(b, a) + a[0] = masked + b = a.compressed() + assert_equal(b, [2,3,4]) + # + a = array(np.matrix([1,2,3,4]), mask=[0,0,0,0]) + b = a.compressed() + assert_equal(b,a) + assert(isinstance(b,np.matrix)) + a[0,0] = masked + b = a.compressed() + assert_equal(b, [[2,3,4]]) + + + def test_empty(self): + "Tests empty/like" + datatype = [('a',int_),('b',float_),('c','|S8')] + a = masked_array([(1,1.1,'1.1'),(2,2.2,'2.2'),(3,3.3,'3.3')], + dtype=datatype) + assert_equal(len(a.fill_value.item()), len(datatype)) + # + b = empty_like(a) + assert_equal(b.shape, a.shape) + assert_equal(b.fill_value, a.fill_value) + # + b = empty(len(a), dtype=datatype) + assert_equal(b.shape, a.shape) + assert_equal(b.fill_value, a.fill_value) + + def test_put(self): "Tests put." d = arange(5) @@ -1207,6 +1390,7 @@ assert_array_equal(x, [0,1,2,3,4,5,6,7,8,9,]) assert_equal(x.mask, [1,0,0,0,1,1,0,0,0,0]) + def test_put_hardmask(self): "Tests put on hardmask" d = arange(5) @@ -1216,184 +1400,76 @@ xh.put([4,2,0,1,3],[1,2,3,4,5]) assert_equal(xh._data, [3,4,2,4,5]) - def test_take(self): - "Tests take" - x = masked_array([10,20,30,40],[0,1,0,1]) - assert_equal(x.take([0,0,3]), masked_array([10, 10, 40], [0,0,1]) ) - assert_equal(x.take([0,0,3]), x[[0,0,3]]) - assert_equal(x.take([[0,1],[0,1]]), - masked_array([[10,20],[10,20]], [[0,1],[0,1]]) ) - # - x = array([[10,20,30],[40,50,60]], mask=[[0,0,1],[1,0,0,]]) - assert_equal(x.take([0,2], axis=1), - array([[10,30],[40,60]], mask=[[0,1],[1,0]])) - assert_equal(take(x, [0,2], axis=1), - array([[10,30],[40,60]], mask=[[0,1],[1,0]])) - #........................ 
- def test_anyall(self): - """Checks the any/all methods/functions.""" - x = numpy.array([[ 0.13, 0.26, 0.90], - [ 0.28, 0.33, 0.63], - [ 0.31, 0.87, 0.70]]) - m = numpy.array([[ True, False, False], - [False, False, False], - [True, True, False]], dtype=numpy.bool_) - mx = masked_array(x, mask=m) - xbig = numpy.array([[False, False, True], - [False, False, True], - [False, True, True]], dtype=numpy.bool_) - mxbig = (mx > 0.5) - mxsmall = (mx < 0.5) - # - assert (mxbig.all()==False) - assert (mxbig.any()==True) - assert_equal(mxbig.all(0),[False, False, True]) - assert_equal(mxbig.all(1), [False, False, True]) - assert_equal(mxbig.any(0),[False, False, True]) - assert_equal(mxbig.any(1), [True, True, True]) - # - assert (mxsmall.all()==False) - assert (mxsmall.any()==True) - assert_equal(mxsmall.all(0), [True, True, False]) - assert_equal(mxsmall.all(1), [False, False, False]) - assert_equal(mxsmall.any(0), [True, True, False]) - assert_equal(mxsmall.any(1), [True, True, False]) - # - X = numpy.matrix(x) - mX = masked_array(X, mask=m) - mXbig = (mX > 0.5) - mXsmall = (mX < 0.5) - # - assert (mXbig.all()==False) - assert (mXbig.any()==True) - assert_equal(mXbig.all(0), numpy.matrix([False, False, True])) - assert_equal(mXbig.all(1), numpy.matrix([False, False, True]).T) - assert_equal(mXbig.any(0), numpy.matrix([False, False, True])) - assert_equal(mXbig.any(1), numpy.matrix([ True, True, True]).T) - # - assert (mXsmall.all()==False) - assert (mXsmall.any()==True) - assert_equal(mXsmall.all(0), numpy.matrix([True, True, False])) - assert_equal(mXsmall.all(1), numpy.matrix([False, False, False]).T) - assert_equal(mXsmall.any(0), numpy.matrix([True, True, False])) - assert_equal(mXsmall.any(1), numpy.matrix([True, True, False]).T) + def test_putmask(self): + x = arange(6)+1 + mx = array(x, mask=[0,0,0,1,1,1]) + mask = [0,0,1,0,0,1] + # w/o mask, w/o masked values + xx = x.copy() + putmask(xx, mask, 99) + assert_equal(xx, [1,2,99,4,5,99]) + # w/ mask, w/o masked values + mxx = mx.copy() + putmask(mxx, mask, 99) + assert_equal(mxx._data, [1,2,99,4,5,99]) + assert_equal(mxx._mask, [0,0,0,1,1,0]) + # w/o mask, w/ masked values + values = array([10,20,30,40,50,60],mask=[1,1,1,0,0,0]) + xx = x.copy() + putmask(xx, mask, values) + assert_equal(xx._data, [1,2,30,4,5,60]) + assert_equal(xx._mask, [0,0,1,0,0,0]) + # w/ mask, w/ masked values + mxx = mx.copy() + putmask(mxx, mask, values) + assert_equal(mxx._data, [1,2,30,4,5,60]) + assert_equal(mxx._mask, [0,0,1,1,1,0]) + # w/ mask, w/ masked values + hardmask + mxx = mx.copy() + mxx.harden_mask() + putmask(mxx, mask, values) + assert_equal(mxx, [1,2,30,4,5,60]) - def test_allany_oddities(self): - "Some fun with all and any" - store = empty(1, dtype=bool) - full = array([1,2,3], mask=True) - # - assert(full.all() is masked) - full.all(out=store) - assert(store) - assert(store._mask, True) - assert(store is not masked) - # - store = empty(1, dtype=bool) - assert(full.any() is masked) - full.any(out=store) - assert(not store) - assert(store._mask, True) - assert(store is not masked) + def test_ravel(self): + "Tests ravel" + a = array([[1,2,3,4,5]], mask=[[0,1,0,0,0]]) + aravel = a.ravel() + assert_equal(a._mask.shape, a.shape) + a = array([0,0], mask=[1,1]) + aravel = a.ravel() + assert_equal(a._mask.shape, a.shape) + a = array(np.matrix([1,2,3,4,5]), mask=[[0,1,0,0,0]]) + aravel = a.ravel() + assert_equal(a.shape,(1,5)) + assert_equal(a._mask.shape, a.shape) + # Checs that small_mask is preserved + a = array([1,2,3,4],mask=[0,0,0,0],shrink=False) + 
assert_equal(a.ravel()._mask, [0,0,0,0]) + # Test that the fill_value is preserved + a.fill_value = -99 + a.shape = (2,2) + ar = a.ravel() + assert_equal(ar._mask, [0,0,0,0]) + assert_equal(ar._data, [1,2,3,4]) + assert_equal(ar.fill_value, -99) - def test_keepmask(self): - "Tests the keep mask flag" - x = masked_array([1,2,3], mask=[1,0,0]) - mx = masked_array(x) - assert_equal(mx.mask, x.mask) - mx = masked_array(x, mask=[0,1,0], keep_mask=False) - assert_equal(mx.mask, [0,1,0]) - mx = masked_array(x, mask=[0,1,0], keep_mask=True) - assert_equal(mx.mask, [1,1,0]) - # We default to true - mx = masked_array(x, mask=[0,1,0]) - assert_equal(mx.mask, [1,1,0]) - def test_hardmask(self): - "Test hard_mask" - d = arange(5) - n = [0,0,0,1,1] - m = make_mask(n) - xh = array(d, mask = m, hard_mask=True) - # We need to copy, to avoid updating d in xh! - xs = array(d, mask = m, hard_mask=False, copy=True) - xh[[1,4]] = [10,40] - xs[[1,4]] = [10,40] - assert_equal(xh._data, [0,10,2,3,4]) - assert_equal(xs._data, [0,10,2,3,40]) - #assert_equal(xh.mask.ctypes.data, m.ctypes.data) - assert_equal(xs.mask, [0,0,0,1,0]) - assert(xh._hardmask) - assert(not xs._hardmask) - xh[1:4] = [10,20,30] - xs[1:4] = [10,20,30] - assert_equal(xh._data, [0,10,20,3,4]) - assert_equal(xs._data, [0,10,20,30,40]) - #assert_equal(xh.mask.ctypes.data, m.ctypes.data) - assert_equal(xs.mask, nomask) - xh[0] = masked - xs[0] = masked - assert_equal(xh.mask, [1,0,0,1,1]) - assert_equal(xs.mask, [1,0,0,0,0]) - xh[:] = 1 - xs[:] = 1 - assert_equal(xh._data, [0,1,1,3,4]) - assert_equal(xs._data, [1,1,1,1,1]) - assert_equal(xh.mask, [1,0,0,1,1]) - assert_equal(xs.mask, nomask) - # Switch to soft mask - xh.soften_mask() - xh[:] = arange(5) - assert_equal(xh._data, [0,1,2,3,4]) - assert_equal(xh.mask, nomask) - # Switch back to hard mask - xh.harden_mask() - xh[xh<3] = masked - assert_equal(xh._data, [0,1,2,3,4]) - assert_equal(xh._mask, [1,1,1,0,0]) - xh[filled(xh>1,False)] = 5 - assert_equal(xh._data, [0,1,2,5,5]) - assert_equal(xh._mask, [1,1,1,0,0]) - # - xh = array([[1,2],[3,4]], mask = [[1,0],[0,0]], hard_mask=True) - xh[0] = 0 - assert_equal(xh._data, [[1,0],[3,4]]) - assert_equal(xh._mask, [[1,0],[0,0]]) - xh[-1,-1] = 5 - assert_equal(xh._data, [[1,0],[3,5]]) - assert_equal(xh._mask, [[1,0],[0,0]]) - xh[filled(xh<5,False)] = 2 - assert_equal(xh._data, [[1,2],[2,5]]) - assert_equal(xh._mask, [[1,0],[0,0]]) - # - "Another test of hardmask" - d = arange(5) - n = [0,0,0,1,1] - m = make_mask(n) - xh = array(d, mask = m, hard_mask=True) - xh[4:5] = 999 - #assert_equal(xh.mask.ctypes.data, m.ctypes.data) - xh[0:1] = 999 - assert_equal(xh._data,[999,1,2,3,4]) + def test_reshape(self): + "Tests reshape" + x = arange(4) + x[0] = masked + y = x.reshape(2,2) + assert_equal(y.shape, (2,2,)) + assert_equal(y._mask.shape, (2,2,)) + assert_equal(x.shape, (4,)) + assert_equal(x._mask.shape, (4,)) - def test_smallmask(self): - "Checks the behaviour of _smallmask" - a = arange(10) - a[1] = masked - a[1] = 1 - assert_equal(a._mask, nomask) - a = arange(10) - a._smallmask = False - a[1] = masked - a[1] = 1 - assert_equal(a._mask, zeros(10)) - def test_sort(self): "Test sort" - x = array([1,4,2,3],mask=[0,1,0,0],dtype=numpy.uint8) + x = array([1,4,2,3],mask=[0,1,0,0],dtype=np.uint8) # sortedx = sort(x) assert_equal(sortedx._data,[1,2,3,4]) @@ -1407,7 +1483,7 @@ assert_equal(x._data,[1,2,3,4]) assert_equal(x._mask,[0,0,0,1]) # - x = array([1,4,2,3],mask=[0,1,0,0],dtype=numpy.uint8) + x = array([1,4,2,3],mask=[0,1,0,0],dtype=np.uint8) 
x.sort(endwith=False) assert_equal(x._data, [4,1,2,3]) assert_equal(x._mask, [1,0,0,0]) @@ -1416,14 +1492,15 @@ sortedx = sort(x) assert(not isinstance(sorted, MaskedArray)) # - x = array([0,1,-1,-2,2], mask=nomask, dtype=numpy.int8) + x = array([0,1,-1,-2,2], mask=nomask, dtype=np.int8) sortedx = sort(x, endwith=False) assert_equal(sortedx._data, [-2,-1,0,1,2]) - x = array([0,1,-1,-2,2], mask=[0,1,0,0,1], dtype=numpy.int8) + x = array([0,1,-1,-2,2], mask=[0,1,0,0,1], dtype=np.int8) sortedx = sort(x, endwith=False) assert_equal(sortedx._data, [1,2,-2,-1,0]) assert_equal(sortedx._mask, [1,1,0,0,0]) + def test_sort_2d(self): "Check sort of 2D array." # 2D array w/o mask @@ -1465,60 +1542,59 @@ assert_equal(am, an) - def test_ravel(self): - "Tests ravel" - a = array([[1,2,3,4,5]], mask=[[0,1,0,0,0]]) - aravel = a.ravel() - assert_equal(a._mask.shape, a.shape) - a = array([0,0], mask=[1,1]) - aravel = a.ravel() - assert_equal(a._mask.shape, a.shape) - a = array(numpy.matrix([1,2,3,4,5]), mask=[[0,1,0,0,0]]) - aravel = a.ravel() - assert_equal(a.shape,(1,5)) - assert_equal(a._mask.shape, a.shape) - # Checs that small_mask is preserved - a = array([1,2,3,4],mask=[0,0,0,0],shrink=False) - assert_equal(a.ravel()._mask, [0,0,0,0]) - # Test that the fill_value is preserved - a.fill_value = -99 - a.shape = (2,2) - ar = a.ravel() - assert_equal(ar._mask, [0,0,0,0]) - assert_equal(ar._data, [1,2,3,4]) - assert_equal(ar.fill_value, -99) + def test_squeeze(self): + "Check squeeze" + data = masked_array([[1,2,3]]) + assert_equal(data.squeeze(), [1,2,3]) + data = masked_array([[1,2,3]], mask=[[1,1,1]]) + assert_equal(data.squeeze(), [1,2,3]) + assert_equal(data.squeeze()._mask, [1,1,1]) + data = masked_array([[1]], mask=True) + assert(data.squeeze() is masked) - def test_reshape(self): - "Tests reshape" - x = arange(4) - x[0] = masked - y = x.reshape(2,2) - assert_equal(y.shape, (2,2,)) - assert_equal(y._mask.shape, (2,2,)) - assert_equal(x.shape, (4,)) - assert_equal(x._mask.shape, (4,)) - def test_compressed(self): - "Tests compressed" - a = array([1,2,3,4],mask=[0,0,0,0]) - b = a.compressed() - assert_equal(b, a) - a[0] = masked - b = a.compressed() - assert_equal(b, [2,3,4]) + def test_swapaxes(self): + "Tests swapaxes on MaskedArrays." 
+ x = np.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, + 8.43 , 7.78 , 9.865, 5.878, 8.979, 4.732, + 3.012, 6.022, 5.095, 3.116, 5.238, 3.957, + 6.04 , 9.63 , 7.712, 3.382, 4.489, 6.479, + 7.189, 9.645, 5.395, 4.961, 9.894, 2.893, + 7.357, 9.828, 6.272, 3.758, 6.693, 0.993]) + m = np.array([0, 1, 0, 1, 0, 0, + 1, 0, 1, 1, 0, 1, + 0, 0, 0, 1, 0, 1, + 0, 0, 0, 1, 1, 1, + 1, 0, 0, 1, 0, 0, + 0, 0, 1, 0, 1, 0]) + mX = array(x,mask=m).reshape(6,6) + mXX = mX.reshape(3,2,2,3) # - a = array(numpy.matrix([1,2,3,4]), mask=[0,0,0,0]) - b = a.compressed() - assert_equal(b,a) - assert(isinstance(b,numpy.matrix)) - a[0,0] = masked - b = a.compressed() - assert_equal(b, [[2,3,4]]) + mXswapped = mX.swapaxes(0,1) + assert_equal(mXswapped[-1],mX[:,-1]) + + mXXswapped = mXX.swapaxes(0,2) + assert_equal(mXXswapped.shape,(2,2,3,3)) + def test_take(self): + "Tests take" + x = masked_array([10,20,30,40],[0,1,0,1]) + assert_equal(x.take([0,0,3]), masked_array([10, 10, 40], [0,0,1]) ) + assert_equal(x.take([0,0,3]), x[[0,0,3]]) + assert_equal(x.take([[0,1],[0,1]]), + masked_array([[10,20],[10,20]], [[0,1],[0,1]]) ) + # + x = array([[10,20,30],[40,50,60]], mask=[[0,0,1],[1,0,0,]]) + assert_equal(x.take([0,2], axis=1), + array([[10,30],[40,60]], mask=[[0,1],[1,0]])) + assert_equal(take(x, [0,2], axis=1), + array([[10,30],[40,60]], mask=[[0,1],[1,0]])) + + def test_tolist(self): "Tests to list" - x = array(numpy.arange(12)) + x = array(np.arange(12)) x[[1,-2]] = masked xlist = x.tolist() assert(xlist[1] is None) @@ -1539,92 +1615,124 @@ assert_equal(x.tolist(), [(1,1.1,'one'),(2,2.2,'two'),(None,None,None)]) - def test_squeeze(self): - "Check squeeze" - data = masked_array([[1,2,3]]) - assert_equal(data.squeeze(), [1,2,3]) - data = masked_array([[1,2,3]], mask=[[1,1,1]]) - assert_equal(data.squeeze(), [1,2,3]) - assert_equal(data.squeeze()._mask, [1,1,1]) - data = masked_array([[1]], mask=True) - assert(data.squeeze() is masked) +#------------------------------------------------------------------------------ - def test_putmask(self): - x = arange(6)+1 - mx = array(x, mask=[0,0,0,1,1,1]) - mask = [0,0,1,0,0,1] - # w/o mask, w/o masked values - xx = x.copy() - putmask(xx, mask, 99) - assert_equal(xx, [1,2,99,4,5,99]) - # w/ mask, w/o masked values - mxx = mx.copy() - putmask(mxx, mask, 99) - assert_equal(mxx._data, [1,2,99,4,5,99]) - assert_equal(mxx._mask, [0,0,0,1,1,0]) - # w/o mask, w/ masked values - values = array([10,20,30,40,50,60],mask=[1,1,1,0,0,0]) - xx = x.copy() - putmask(xx, mask, values) - assert_equal(xx._data, [1,2,30,4,5,60]) - assert_equal(xx._mask, [0,0,1,0,0,0]) - # w/ mask, w/ masked values - mxx = mx.copy() - putmask(mxx, mask, values) - assert_equal(mxx._data, [1,2,30,4,5,60]) - assert_equal(mxx._mask, [0,0,1,1,1,0]) - # w/ mask, w/ masked values + hardmask - mxx = mx.copy() - mxx.harden_mask() - putmask(mxx, mask, values) - assert_equal(mxx, [1,2,30,4,5,60]) - def test_compress(self): - "test compress" - a = masked_array([1., 2., 3., 4., 5.], fill_value=9999) - condition = (a > 1.5) & (a < 3.5) - assert_equal(a.compress(condition),[2.,3.]) +class TestMaskArrayMathMethod(NumpyTestCase): + + def setUp(self): + "Base data definition." 
+ x = np.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, + 8.43 , 7.78 , 9.865, 5.878, 8.979, 4.732, + 3.012, 6.022, 5.095, 3.116, 5.238, 3.957, + 6.04 , 9.63 , 7.712, 3.382, 4.489, 6.479, + 7.189, 9.645, 5.395, 4.961, 9.894, 2.893, + 7.357, 9.828, 6.272, 3.758, 6.693, 0.993]) + X = x.reshape(6,6) + XX = x.reshape(3,2,2,3) + + m = np.array([0, 1, 0, 1, 0, 0, + 1, 0, 1, 1, 0, 1, + 0, 0, 0, 1, 0, 1, + 0, 0, 0, 1, 1, 1, + 1, 0, 0, 1, 0, 0, + 0, 0, 1, 0, 1, 0]) + mx = array(data=x,mask=m) + mX = array(data=X,mask=m.reshape(X.shape)) + mXX = array(data=XX,mask=m.reshape(XX.shape)) + + m2 = np.array([1, 1, 0, 1, 0, 0, + 1, 1, 1, 1, 0, 1, + 0, 0, 1, 1, 0, 1, + 0, 0, 0, 1, 1, 1, + 1, 0, 0, 1, 1, 0, + 0, 0, 1, 0, 1, 1]) + m2x = array(data=x,mask=m2) + m2X = array(data=X,mask=m2.reshape(X.shape)) + m2XX = array(data=XX,mask=m2.reshape(XX.shape)) + self.d = (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) + + + def test_cumsumprod(self): + "Tests cumsum & cumprod on MaskedArrays." + (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d + mXcp = mX.cumsum(0) + assert_equal(mXcp.data,mX.filled(0).cumsum(0)) + mXcp = mX.cumsum(1) + assert_equal(mXcp.data,mX.filled(0).cumsum(1)) # - a[[2,3]] = masked - b = a.compress(condition) - assert_equal(b._data,[2.,3.]) - assert_equal(b._mask,[0,1]) - assert_equal(b.fill_value,9999) - assert_equal(b,a[condition]) + mXcp = mX.cumprod(0) + assert_equal(mXcp.data,mX.filled(1).cumprod(0)) + mXcp = mX.cumprod(1) + assert_equal(mXcp.data,mX.filled(1).cumprod(1)) + + + def test_cumsumprod_with_output(self): + "Tests cumsum/cumprod w/ output" + xm = array(np.random.uniform(0,10,12)).reshape(3,4) + xm[:,0] = xm[0] = xm[-1,-1] = masked # - condition = (a<4.) - b = a.compress(condition) - assert_equal(b._data,[1.,2.,3.]) - assert_equal(b._mask,[0,0,1]) - assert_equal(b.fill_value,9999) - assert_equal(b,a[condition]) - # - a = masked_array([[10,20,30],[40,50,60]], mask=[[0,0,1],[1,0,0]]) - b = a.compress(a.ravel() >= 22) - assert_equal(b._data, [30, 40, 50, 60]) - assert_equal(b._mask, [1,1,0,0]) - # - x = numpy.array([3,1,2]) - b = a.compress(x >= 2, axis=1) - assert_equal(b._data, [[10,30],[40,60]]) - assert_equal(b._mask, [[0,1],[1,0]]) - # - def test_empty(self): - "Tests empty/like" - datatype = [('a',int_),('b',float_),('c','|S8')] - a = masked_array([(1,1.1,'1.1'),(2,2.2,'2.2'),(3,3.3,'3.3')], - dtype=datatype) - assert_equal(len(a.fill_value.item()), len(datatype)) - # - b = empty_like(a) - assert_equal(b.shape, a.shape) - assert_equal(b.fill_value, a.fill_value) - # - b = empty(len(a), dtype=datatype) - assert_equal(b.shape, a.shape) - assert_equal(b.fill_value, a.fill_value) + for funcname in ('cumsum','cumprod'): + npfunc = getattr(np, funcname) + xmmeth = getattr(xm, funcname) + + # A ndarray as explicit input + output = np.empty((3,4), dtype=float) + output.fill(-9999) + result = npfunc(xm, axis=0,out=output) + # ... the result should be the given output + assert(result is output) + assert_equal(result, xmmeth(axis=0, out=output)) + # + output = empty((3,4), dtype=int) + result = xmmeth(axis=0, out=output) + assert(result is output) + def test_ptp(self): + "Tests ptp on MaskedArrays." 
+ (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d + (n,m) = X.shape + assert_equal(mx.ptp(),mx.compressed().ptp()) + rows = np.zeros(n,np.float_) + cols = np.zeros(m,np.float_) + for k in range(m): + cols[k] = mX[:,k].compressed().ptp() + for k in range(n): + rows[k] = mX[k].compressed().ptp() + assert_equal(mX.ptp(0),cols) + assert_equal(mX.ptp(1),rows) + + + def test_trace(self): + "Tests trace on MaskedArrays." + (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d + mXdiag = mX.diagonal() + assert_equal(mX.trace(), mX.diagonal().compressed().sum()) + assert_almost_equal(mX.trace(), + X.trace() - sum(mXdiag.mask*X.diagonal(),axis=0)) + + + def test_varstd(self): + "Tests var & std on MaskedArrays." + (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d + assert_almost_equal(mX.var(axis=None),mX.compressed().var()) + assert_almost_equal(mX.std(axis=None),mX.compressed().std()) + assert_almost_equal(mX.std(axis=None,ddof=1), + mX.compressed().std(ddof=1)) + assert_almost_equal(mX.var(axis=None,ddof=1), + mX.compressed().var(ddof=1)) + assert_equal(mXX.var(axis=3).shape,XX.var(axis=3).shape) + assert_equal(mX.var().shape,X.var().shape) + (mXvar0,mXvar1) = (mX.var(axis=0), mX.var(axis=1)) + assert_almost_equal(mX.var(axis=None,ddof=2),mX.compressed().var(ddof=2)) + assert_almost_equal(mX.std(axis=None,ddof=2),mX.compressed().std(ddof=2)) + for k in range(6): + assert_almost_equal(mXvar1[k],mX[k].compressed().var()) + assert_almost_equal(mXvar0[k],mX[:,k].compressed().var()) + assert_almost_equal(np.sqrt(mXvar0[k]), mX[:,k].compressed().std()) + + def test_varstd_specialcases(self): "Test a special case for var" nout = np.empty(1, dtype=float) @@ -1659,13 +1767,13 @@ _ = method(out=nout, ddof=1) assert(np.isnan(nout)) +#------------------------------------------------------------------------------ - -class TestArrayMathMethodsComplex(NumpyTestCase): +class TestMaskedArrayMathMethodsComplex(NumpyTestCase): "Test class for miscellaneous MaskedArrays methods." def setUp(self): "Base data definition." - x = numpy.array([ 8.375j, 7.545j, 8.828j, 8.5j , 1.757j, 5.928, + x = np.array([ 8.375j, 7.545j, 8.828j, 8.5j , 1.757j, 5.928, 8.43 , 7.78 , 9.865, 5.878, 8.979, 4.732, 3.012, 6.022, 5.095, 3.116, 5.238, 3.957, 6.04 , 9.63 , 7.712, 3.382, 4.489, 6.479j, @@ -1674,7 +1782,7 @@ X = x.reshape(6,6) XX = x.reshape(3,2,2,3) - m = numpy.array([0, 1, 0, 1, 0, 0, + m = np.array([0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, @@ -1684,7 +1792,7 @@ mX = array(data=X,mask=m.reshape(X.shape)) mXX = array(data=XX,mask=m.reshape(XX.shape)) - m2 = numpy.array([1, 1, 0, 1, 0, 0, + m2 = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, @@ -1709,19 +1817,65 @@ for k in range(6): assert_almost_equal(mXvar1[k],mX[k].compressed().var()) assert_almost_equal(mXvar0[k],mX[:,k].compressed().var()) - assert_almost_equal(numpy.sqrt(mXvar0[k]), mX[:,k].compressed().std()) + assert_almost_equal(np.sqrt(mXvar0[k]), mX[:,k].compressed().std()) -#.............................................................................. -class TestMiscFunctions(NumpyTestCase): +#------------------------------------------------------------------------------ + +class TestMaskedArrayFunctions(NumpyTestCase): "Test class for miscellaneous functions." # - def test_masked_where(self): + def setUp(self): + x = np.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) + y = np.array([5.,0.,3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) + a10 = 10. 
+ m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] + m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 ,0, 1] + xm = masked_array(x, mask=m1) + ym = masked_array(y, mask=m2) + z = np.array([-.5, 0., .5, .8]) + zm = masked_array(z, mask=[0,1,0,0]) + xf = np.where(m1, 1.e+20, x) + xm.set_fill_value(1.e+20) + self.info = (xm, ym) + + # + def test_masked_where_bool(self): x = [1,2] y = masked_where(False,x) assert_equal(y,[1,2]) assert_equal(y[1],2) - # + + def test_masked_where_condition(self): + "Tests masking functions." + x = array([1.,2.,3.,4.,5.]) + x[2] = masked + assert_equal(masked_where(greater(x, 2), x), masked_greater(x,2)) + assert_equal(masked_where(greater_equal(x, 2), x), masked_greater_equal(x,2)) + assert_equal(masked_where(less(x, 2), x), masked_less(x,2)) + assert_equal(masked_where(less_equal(x, 2), x), masked_less_equal(x,2)) + assert_equal(masked_where(not_equal(x, 2), x), masked_not_equal(x,2)) + assert_equal(masked_where(equal(x, 2), x), masked_equal(x,2)) + assert_equal(masked_where(not_equal(x,2), x), masked_not_equal(x,2)) + assert_equal(masked_where([1,1,0,0,0], [1,2,3,4,5]), [99,99,3,4,5]) + + def test_masked_where_oddities(self): + """Tests some generic features.""" + atest = ones((10,10,10), dtype=float_) + btest = zeros(atest.shape, MaskType) + ctest = masked_where(btest,atest) + assert_equal(atest,ctest) + + + def test_masked_otherfunctions(self): + assert_equal(masked_inside(range(5), 1, 3), [0, 199, 199, 199, 4]) + assert_equal(masked_outside(range(5), 1, 3),[199,1,2,3,199]) + assert_equal(masked_inside(array(range(5), mask=[1,0,0,0,0]), 1, 3).mask, [1,1,1,1,0]) + assert_equal(masked_outside(array(range(5), mask=[0,1,0,0,0]), 1, 3).mask, [1,1,0,0,1]) + assert_equal(masked_equal(array(range(5), mask=[1,0,0,0,0]), 2).mask, [1,0,1,0,0]) + assert_equal(masked_not_equal(array([2,2,1,2,1], mask=[1,0,0,0,0]), 2).mask, [1,0,1,0,1]) + + def test_round(self): a = array([1.23456, 2.34567, 3.45678, 4.56789, 5.67890], mask=[0,1,0,0,0]) @@ -1731,12 +1885,45 @@ b = empty_like(a) a.round(out=b) assert_equal(b, [1., 2., 3., 5., 6.]) - # + + x = array([1.,2.,3.,4.,5.]) + c = array([1,1,1,0,0]) + x[2] = masked + z = where(c, x, -x) + assert_equal(z, [1.,2.,0., -4., -5]) + c[0] = masked + z = where(c, x, -x) + assert_equal(z, [1.,2.,0., -4., -5]) + assert z[0] is masked + assert z[1] is not masked + assert z[2] is masked + + + def test_round_with_output(self): + "Testing round with an explicit output" + + xm = array(np.random.uniform(0,10,12)).reshape(3,4) + xm[:,0] = xm[0] = xm[-1,-1] = masked + + # A ndarray as explicit input + output = np.empty((3,4), dtype=float) + output.fill(-9999) + result = np.round(xm, decimals=2,out=output) + # ... the result should be the given output + assert(result is output) + assert_equal(result, xm.round(decimals=2, out=output)) + # + output = empty((3,4), dtype=float) + result = xm.round(decimals=2, out=output) + assert(result is output) + + def test_identity(self): a = identity(5) assert(isinstance(a, MaskedArray)) - assert_equal(a, numpy.identity(5)) - # + assert_equal(a, np.identity(5)) + + def test_power(self): x = -1.1 assert_almost_equal(power(x,2.), 1.21) @@ -1759,6 +1946,89 @@ assert_almost_equal(x._data,y._data) + def test_where(self): + "Test the where function" + x = np.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) + y = np.array([5.,0.,3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) + a10 = 10. 
+ m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] + m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 ,0, 1] + xm = masked_array(x, mask=m1) + ym = masked_array(y, mask=m2) + z = np.array([-.5, 0., .5, .8]) + zm = masked_array(z, mask=[0,1,0,0]) + xf = np.where(m1, 1.e+20, x) + xm.set_fill_value(1.e+20) + + d = where(xm>2,xm,-9) + assert_equal(d, [-9.,-9.,-9.,-9., -9., 4., -9., -9., 10., -9., -9., 3.]) + assert_equal(d._mask, xm._mask) + d = where(xm>2,-9,ym) + assert_equal(d, [5.,0.,3., 2., -1.,-9.,-9., -10., -9., 1., 0., -9.]) + assert_equal(d._mask, [1,0,1,0,0,0,1,0,0,0,0,0]) + d = where(xm>2, xm, masked) + assert_equal(d, [-9.,-9.,-9.,-9., -9., 4., -9., -9., 10., -9., -9., 3.]) + tmp = xm._mask.copy() + tmp[(xm<=2).filled(True)] = True + assert_equal(d._mask, tmp) + # + ixm = xm.astype(int_) + d = where(ixm>2, ixm, masked) + assert_equal(d, [-9,-9,-9,-9, -9, 4, -9, -9, 10, -9, -9, 3]) + assert_equal(d.dtype, ixm.dtype) + + def test_where_with_masked_choice(self): + x = arange(10) + x[3] = masked + c = x >= 8 + # Set False to masked + z = where(c , x, masked) + assert z.dtype is x.dtype + assert z[3] is masked + assert z[4] is masked + assert z[7] is masked + assert z[8] is not masked + assert z[9] is not masked + assert_equal(x,z) + # Set True to masked + z = where(c , masked, x) + assert z.dtype is x.dtype + assert z[3] is masked + assert z[4] is not masked + assert z[7] is not masked + assert z[8] is masked + assert z[9] is masked + + def test_where_with_masked_condition(self): + x = array([1.,2.,3.,4.,5.]) + c = array([1,1,1,0,0]) + x[2] = masked + z = where(c, x, -x) + assert_equal(z, [1.,2.,0., -4., -5]) + c[0] = masked + z = where(c, x, -x) + assert_equal(z, [1.,2.,0., -4., -5]) + assert z[0] is masked + assert z[1] is not masked + assert z[2] is masked + # + x = arange(1,6) + x[-1] = masked + y = arange(1,6)*10 + y[2] = masked + c = array([1,1,1,0,0], mask=[1,0,0,0,0]) + cm = c.filled(1) + z = where(c,x,y) + zm = where(cm,x,y) + assert_equal(z, zm) + assert getmask(zm) is nomask + assert_equal(zm, [1,2,3,40,50]) + z = where(c, masked, 1) + assert_equal(z, [99,99,99,1,1]) + z = where(c, 1, masked) + assert_equal(z, [99, 1, 1, 99, 99]) + + def test_choose(self): "Test choose" choices = [[0, 1, 2, 3], [10, 11, 12, 13], @@ -1805,71 +2075,78 @@ chosen = choose(indices_, choices, mode='wrap', out=store) assert_equal(store, array([999999, 31, 12, 999999])) - def test_functions_with_output(self): - xm = array(np.random.uniform(0,10,12)).reshape(3,4) - xm[:,0] = xm[0] = xm[-1,-1] = masked - # - funclist = ('sum','prod','var','std', 'max', 'min', 'ptp', 'mean', ) - # - for funcname in funclist: - npfunc = getattr(np, funcname) - xmmeth = getattr(xm, funcname) - - # A ndarray as explicit input - output = np.empty(4, dtype=float) - output.fill(-9999) - result = npfunc(xm, axis=0,out=output) - # ... 
the result should be the given output - assert(result is output) - assert_equal(result, xmmeth(axis=0, out=output)) - # - output = empty(4, dtype=int) - result = xmmeth(axis=0, out=output) - assert(result is output) - assert(output[0] is masked) +#------------------------------------------------------------------------------ - def test_cumsumprod_with_output(self): - "Tests cumsum/cumprod w/ output" - xm = array(np.random.uniform(0,10,12)).reshape(3,4) - xm[:,0] = xm[0] = xm[-1,-1] = masked - # - funclist = ('cumsum','cumprod') - # - for funcname in funclist: - npfunc = getattr(np, funcname) - xmmeth = getattr(xm, funcname) - - # A ndarray as explicit input - output = np.empty((3,4), dtype=float) - output.fill(-9999) - result = npfunc(xm, axis=0,out=output) - # ... the result should be the given output - assert(result is output) - assert_equal(result, xmmeth(axis=0, out=output)) - # - output = empty((3,4), dtype=int) - result = xmmeth(axis=0, out=output) - assert(result is output) +class TestMaskedFields(NumpyTestCase): + # + def setUp(self): + ilist = [1,2,3,4,5] + flist = [1.1,2.2,3.3,4.4,5.5] + slist = ['one','two','three','four','five'] + ddtype = [('a',int),('b',float),('c','|S8')] + mdtype = [('a',bool),('b',bool),('c',bool)] + mask = [0,1,0,0,1] + base = array(zip(ilist,flist,slist), mask=mask, dtype=ddtype) + self.data = dict(base=base, mask=mask, ddtype=ddtype, mdtype=mdtype) + def test_set_records_masks(self): + base = self.data['base'] + mdtype = self.data['mdtype'] + # Set w/ nomask or masked + base.mask = nomask + assert_equal_records(base._mask, np.zeros(base.shape, dtype=mdtype)) + base.mask = masked + assert_equal_records(base._mask, np.ones(base.shape, dtype=mdtype)) + # Set w/ simple boolean + base.mask = False + assert_equal_records(base._mask, np.zeros(base.shape, dtype=mdtype)) + base.mask = True + assert_equal_records(base._mask, np.ones(base.shape, dtype=mdtype)) + # Set w/ list + base.mask = [0,0,0,1,1] + assert_equal_records(base._mask, + np.array([(x,x,x) for x in [0,0,0,1,1]], + dtype=mdtype)) - def test_round_with_output(self): - "Testing round with an explicit output" + def test_set_record_element(self): + "Check setting an element of a record)" + base = self.data['base'] + (base_a, base_b, base_c) = (base['a'], base['b'], base['c']) + base[0] = (np.pi, np.pi, 'pi') - xm = array(np.random.uniform(0,10,12)).reshape(3,4) - xm[:,0] = xm[0] = xm[-1,-1] = masked + assert_equal(base_a.dtype, int) + assert_equal(base_a.data, [3,2,3,4,5]) - # A ndarray as explicit input - output = np.empty((3,4), dtype=float) - output.fill(-9999) - result = np.round(xm, decimals=2,out=output) - # ... 
the result should be the given output - assert(result is output) - assert_equal(result, xm.round(decimals=2, out=output)) + assert_equal(base_b.dtype, float) + assert_equal(base_b.data, [np.pi, 2.2, 3.3, 4.4, 5.5]) + + assert_equal(base_c.dtype, '|S8') + assert_equal(base_c.data, ['pi','two','three','four','five']) + + def test_set_record_slice(self): + base = self.data['base'] + (base_a, base_b, base_c) = (base['a'], base['b'], base['c']) + base[:3] = (np.pi, np.pi, 'pi') + + assert_equal(base_a.dtype, int) + assert_equal(base_a.data, [3,3,3,4,5]) + + assert_equal(base_b.dtype, float) + assert_equal(base_b.data, [np.pi, np.pi, np.pi, 4.4, 5.5]) + + assert_equal(base_c.dtype, '|S8') + assert_equal(base_c.data, ['pi','pi','pi','four','five']) + + def test_mask_element(self): + "Check record access" + base = self.data['base'] + (base_a, base_b, base_c) = (base['a'], base['b'], base['c']) + base[0] = masked # - output = empty((3,4), dtype=float) - result = xm.round(decimals=2, out=output) - assert(result is output) + for n in ('a','b','c'): + assert_equal(base[n].mask, [1,1,0,0,1]) + assert_equal(base[n].data, base.data[n]) ############################################################################### #------------------------------------------------------------------------------ Modified: trunk/numpy/ma/tests/test_extras.py =================================================================== --- trunk/numpy/ma/tests/test_extras.py 2008-06-08 18:10:55 UTC (rev 5263) +++ trunk/numpy/ma/tests/test_extras.py 2008-06-08 23:04:42 UTC (rev 5264) @@ -11,7 +11,7 @@ __revision__ = "$Revision: 3473 $" __date__ = '$Date: 2007-10-29 17:18:13 +0200 (Mon, 29 Oct 2007) $' -import numpy as N +import numpy as np from numpy.testing import NumpyTest, NumpyTestCase from numpy.testing.utils import build_err_msg @@ -52,12 +52,15 @@ assert_equal(average(x, axis=0), 2.5) assert_equal(average(x, axis=0, weights=w1), 2.5) y = array([arange(6, dtype=float_), 2.0*arange(6)]) - assert_equal(average(y, None), N.add.reduce(N.arange(6))*3./12.) - assert_equal(average(y, axis=0), N.arange(6) * 3./2.) - assert_equal(average(y, axis=1), [average(x,axis=0), average(x,axis=0) * 2.0]) + assert_equal(average(y, None), np.add.reduce(np.arange(6))*3./12.) + assert_equal(average(y, axis=0), np.arange(6) * 3./2.) + assert_equal(average(y, axis=1), + [average(x,axis=0), average(x,axis=0) * 2.0]) assert_equal(average(y, None, weights=w2), 20./6.) - assert_equal(average(y, axis=0, weights=w2), [0.,1.,2.,3.,4.,10.]) - assert_equal(average(y, axis=1), [average(x,axis=0), average(x,axis=0) * 2.0]) + assert_equal(average(y, axis=0, weights=w2), + [0.,1.,2.,3.,4.,10.]) + assert_equal(average(y, axis=1), + [average(x,axis=0), average(x,axis=0) * 2.0]) m1 = zeros(6) m2 = [0,0,1,1,0,0] m3 = [[0,0,1,1,0,0],[0,1,1,1,1,0]] @@ -115,26 +118,26 @@ "Tests mr_ on 2D arrays." 
a_1 = rand(5,5) a_2 = rand(5,5) - m_1 = N.round_(rand(5,5),0) - m_2 = N.round_(rand(5,5),0) + m_1 = np.round_(rand(5,5),0) + m_2 = np.round_(rand(5,5),0) b_1 = masked_array(a_1,mask=m_1) b_2 = masked_array(a_2,mask=m_2) d = mr_['1',b_1,b_2] # append columns assert(d.shape == (5,10)) assert_array_equal(d[:,:5],b_1) assert_array_equal(d[:,5:],b_2) - assert_array_equal(d.mask, N.r_['1',m_1,m_2]) + assert_array_equal(d.mask, np.r_['1',m_1,m_2]) d = mr_[b_1,b_2] assert(d.shape == (10,5)) assert_array_equal(d[:5,:],b_1) assert_array_equal(d[5:,:],b_2) - assert_array_equal(d.mask, N.r_[m_1,m_2]) + assert_array_equal(d.mask, np.r_[m_1,m_2]) class TestNotMasked(NumpyTestCase): "Tests notmasked_edges and notmasked_contiguous." def check_edges(self): "Tests unmasked_edges" - a = masked_array(N.arange(24).reshape(3,8), + a = masked_array(np.arange(24).reshape(3,8), mask=[[0,0,0,0,1,1,1,0], [1,1,1,1,1,1,1,1], [0,0,0,0,0,0,1,0],]) @@ -151,7 +154,7 @@ def check_contiguous(self): "Tests notmasked_contiguous" - a = masked_array(N.arange(24).reshape(3,8), + a = masked_array(np.arange(24).reshape(3,8), mask=[[0,0,0,0,1,1,1,1], [1,1,1,1,1,1,1,1], [0,0,0,0,0,0,1,0],]) @@ -176,7 +179,7 @@ "Tests 2D functions" def check_compress2d(self): "Tests compress2d" - x = array(N.arange(9).reshape(3,3), mask=[[1,0,0],[0,0,0],[0,0,0]]) + x = array(np.arange(9).reshape(3,3), mask=[[1,0,0],[0,0,0],[0,0,0]]) assert_equal(compress_rowcols(x), [[4,5],[7,8]] ) assert_equal(compress_rowcols(x,0), [[3,4,5],[6,7,8]] ) assert_equal(compress_rowcols(x,1), [[1,2],[4,5],[7,8]] ) @@ -195,7 +198,7 @@ # def check_mask_rowcols(self): "Tests mask_rowcols." - x = array(N.arange(9).reshape(3,3), mask=[[1,0,0],[0,0,0],[0,0,0]]) + x = array(np.arange(9).reshape(3,3), mask=[[1,0,0],[0,0,0],[0,0,0]]) assert_equal(mask_rowcols(x).mask, [[1,1,1],[1,0,0],[1,0,0]] ) assert_equal(mask_rowcols(x,0).mask, [[1,1,1],[0,0,0],[0,0,0]] ) assert_equal(mask_rowcols(x,1).mask, [[1,0,0],[1,0,0],[1,0,0]] ) @@ -208,13 +211,16 @@ assert_equal(mask_rowcols(x,0).mask, [[1,1,1],[1,1,1],[0,0,0]] ) assert_equal(mask_rowcols(x,1,).mask, [[1,1,0],[1,1,0],[1,1,0]] ) x = array(x._data, mask=[[1,0,0],[0,1,0],[0,0,1]]) - assert(mask_rowcols(x).all()) - assert(mask_rowcols(x,0).all()) - assert(mask_rowcols(x,1).all()) + assert(mask_rowcols(x).all() is masked) + assert(mask_rowcols(x,0).all() is masked) + assert(mask_rowcols(x,1).all() is masked) + assert(mask_rowcols(x).mask.all()) + assert(mask_rowcols(x,0).mask.all()) + assert(mask_rowcols(x,1).mask.all()) # def test_dot(self): "Tests dot product" - n = N.arange(1,7) + n = np.arange(1,7) # m = [1,0,0,0,0,0] a = masked_array(n, mask=m).reshape(2,3) @@ -224,9 +230,9 @@ c = dot(b,a,True) assert_equal(c.mask, [[1,1,1],[1,0,0],[1,0,0]]) c = dot(a,b,False) - assert_equal(c, N.dot(a.filled(0), b.filled(0))) + assert_equal(c, np.dot(a.filled(0), b.filled(0))) c = dot(b,a,False) - assert_equal(c, N.dot(b.filled(0), a.filled(0))) + assert_equal(c, np.dot(b.filled(0), a.filled(0))) # m = [0,0,0,0,0,1] a = masked_array(n, mask=m).reshape(2,3) @@ -236,10 +242,10 @@ c = dot(b,a,True) assert_equal(c.mask, [[0,0,1],[0,0,1],[1,1,1]]) c = dot(a,b,False) - assert_equal(c, N.dot(a.filled(0), b.filled(0))) + assert_equal(c, np.dot(a.filled(0), b.filled(0))) assert_equal(c, dot(a,b)) c = dot(b,a,False) - assert_equal(c, N.dot(b.filled(0), a.filled(0))) + assert_equal(c, np.dot(b.filled(0), a.filled(0))) # m = [0,0,0,0,0,0] a = masked_array(n, mask=m).reshape(2,3) @@ -254,37 +260,37 @@ c = dot(a,b,True) assert_equal(c.mask,[[1,1],[0,0]]) c = 
dot(a,b,False) - assert_equal(c, N.dot(a.filled(0),b.filled(0))) + assert_equal(c, np.dot(a.filled(0),b.filled(0))) c = dot(b,a,True) assert_equal(c.mask,[[1,0,0],[1,0,0],[1,0,0]]) c = dot(b,a,False) - assert_equal(c, N.dot(b.filled(0),a.filled(0))) + assert_equal(c, np.dot(b.filled(0),a.filled(0))) # a = masked_array(n, mask=[0,0,0,0,0,1]).reshape(2,3) b = masked_array(n, mask=[0,0,0,0,0,0]).reshape(3,2) c = dot(a,b,True) assert_equal(c.mask,[[0,0],[1,1]]) c = dot(a,b) - assert_equal(c, N.dot(a.filled(0),b.filled(0))) + assert_equal(c, np.dot(a.filled(0),b.filled(0))) c = dot(b,a,True) assert_equal(c.mask,[[0,0,1],[0,0,1],[0,0,1]]) c = dot(b,a,False) - assert_equal(c, N.dot(b.filled(0), a.filled(0))) + assert_equal(c, np.dot(b.filled(0), a.filled(0))) # a = masked_array(n, mask=[0,0,0,0,0,1]).reshape(2,3) b = masked_array(n, mask=[0,0,1,0,0,0]).reshape(3,2) c = dot(a,b,True) assert_equal(c.mask,[[1,0],[1,1]]) c = dot(a,b,False) - assert_equal(c, N.dot(a.filled(0),b.filled(0))) + assert_equal(c, np.dot(a.filled(0),b.filled(0))) c = dot(b,a,True) assert_equal(c.mask,[[0,0,1],[1,1,1],[0,0,1]]) c = dot(b,a,False) - assert_equal(c, N.dot(b.filled(0),a.filled(0))) + assert_equal(c, np.dot(b.filled(0),a.filled(0))) def test_ediff1d(self): "Tests mediff1d" - x = masked_array(N.arange(5), mask=[1,0,0,0,1]) + x = masked_array(np.arange(5), mask=[1,0,0,0,1]) difx_d = (x._data[1:]-x._data[:-1]) difx_m = (x._mask[1:]-x._mask[:-1]) dx = ediff1d(x) @@ -292,29 +298,29 @@ assert_equal(dx._mask, difx_m) # dx = ediff1d(x, to_begin=masked) - assert_equal(dx._data, N.r_[0,difx_d]) - assert_equal(dx._mask, N.r_[1,difx_m]) + assert_equal(dx._data, np.r_[0,difx_d]) + assert_equal(dx._mask, np.r_[1,difx_m]) dx = ediff1d(x, to_begin=[1,2,3]) - assert_equal(dx._data, N.r_[[1,2,3],difx_d]) - assert_equal(dx._mask, N.r_[[0,0,0],difx_m]) + assert_equal(dx._data, np.r_[[1,2,3],difx_d]) + assert_equal(dx._mask, np.r_[[0,0,0],difx_m]) # dx = ediff1d(x, to_end=masked) - assert_equal(dx._data, N.r_[difx_d,0]) - assert_equal(dx._mask, N.r_[difx_m,1]) + assert_equal(dx._data, np.r_[difx_d,0]) + assert_equal(dx._mask, np.r_[difx_m,1]) dx = ediff1d(x, to_end=[1,2,3]) - assert_equal(dx._data, N.r_[difx_d,[1,2,3]]) - assert_equal(dx._mask, N.r_[difx_m,[0,0,0]]) + assert_equal(dx._data, np.r_[difx_d,[1,2,3]]) + assert_equal(dx._mask, np.r_[difx_m,[0,0,0]]) # dx = ediff1d(x, to_end=masked, to_begin=masked) - assert_equal(dx._data, N.r_[0,difx_d,0]) - assert_equal(dx._mask, N.r_[1,difx_m,1]) + assert_equal(dx._data, np.r_[0,difx_d,0]) + assert_equal(dx._mask, np.r_[1,difx_m,1]) dx = ediff1d(x, to_end=[1,2,3], to_begin=masked) - assert_equal(dx._data, N.r_[0,difx_d,[1,2,3]]) - assert_equal(dx._mask, N.r_[1,difx_m,[0,0,0]]) + assert_equal(dx._data, np.r_[0,difx_d,[1,2,3]]) + assert_equal(dx._mask, np.r_[1,difx_m,[0,0,0]]) # dx = ediff1d(x._data, to_end=masked, to_begin=masked) - assert_equal(dx._data, N.r_[0,difx_d,0]) - assert_equal(dx._mask, N.r_[1,0,0,0,0,1]) + assert_equal(dx._data, np.r_[0,difx_d,0]) + assert_equal(dx._mask, np.r_[1,0,0,0,0,1]) class TestApplyAlongAxis(NumpyTestCase): "Tests 2D functions" Modified: trunk/numpy/ma/tests/test_mrecords.py =================================================================== --- trunk/numpy/ma/tests/test_mrecords.py 2008-06-08 18:10:55 UTC (rev 5263) +++ trunk/numpy/ma/tests/test_mrecords.py 2008-06-08 23:04:42 UTC (rev 5264) @@ -46,7 +46,8 @@ "Test creation by view" base = self.base mbase = base.view(mrecarray) - assert_equal(mbase._mask, base._mask) + assert_equal(mbase.recordmask, 
base.recordmask) + assert_equal_records(mbase._mask, base._mask) assert isinstance(mbase._data, recarray) assert_equal_records(mbase._data, base._data.view(recarray)) for field in ('a','b','c'): @@ -66,14 +67,18 @@ assert isinstance(mbase_first, mrecarray) assert_equal(mbase_first.dtype, mbase.dtype) assert_equal(mbase_first.tolist(), (1,1.1,'one')) - assert_equal(mbase_first.mask, nomask) + # Used to be mask, now it's recordmask + assert_equal(mbase_first.recordmask, nomask) + # _fieldmask and _mask should be the same thing assert_equal(mbase_first._fieldmask.item(), (False, False, False)) + assert_equal(mbase_first._mask.item(), (False, False, False)) assert_equal(mbase_first['a'], mbase['a'][0]) mbase_last = mbase[-1] assert isinstance(mbase_last, mrecarray) assert_equal(mbase_last.dtype, mbase.dtype) assert_equal(mbase_last.tolist(), (None,None,None)) - assert_equal(mbase_last.mask, True) + # Used to be mask, now it's recordmask + assert_equal(mbase_last.recordmask, True) assert_equal(mbase_last._fieldmask.item(), (True, True, True)) assert_equal(mbase_last['a'], mbase['a'][-1]) assert (mbase_last['a'] is masked) @@ -81,7 +86,11 @@ mbase_sl = mbase[:2] assert isinstance(mbase_sl, mrecarray) assert_equal(mbase_sl.dtype, mbase.dtype) - assert_equal(mbase_sl._mask, [0,1]) + # Used to be mask, now it's recordmask + assert_equal(mbase_sl.recordmask, [0,1]) + assert_equal_records(mbase_sl.mask, + np.array([(False,False,False),(True,True,True)], + dtype=mbase._mask.dtype)) assert_equal_records(mbase_sl, base[:2].view(mrecarray)) for field in ('a','b','c'): assert_equal(getattr(mbase_sl,field), base[:2][field]) @@ -100,13 +109,16 @@ mbase.a = 1 assert_equal(mbase['a']._data, [1]*5) assert_equal(ma.getmaskarray(mbase['a']), [0]*5) - assert_equal(mbase._mask, [False]*5) + # Use to be _mask, now it's recordmask + assert_equal(mbase.recordmask, [False]*5) assert_equal(mbase._fieldmask.tolist(), np.array([(0,0,0),(0,1,1),(0,0,0),(0,0,0),(0,1,1)], dtype=bool)) # Set a field to mask ........................ mbase.c = masked + # Use to be mask, and now it's still mask ! assert_equal(mbase.c.mask, [1]*5) + assert_equal(mbase.c.recordmask, [1]*5) assert_equal(ma.getmaskarray(mbase['c']), [1]*5) assert_equal(ma.getdata(mbase['c']), ['N/A']*5) assert_equal(mbase._fieldmask.tolist(), @@ -201,10 +213,11 @@ assert_equal(mbase._fieldmask.tolist(), np.array([(0,0,0),(1,1,1),(0,0,0),(1,1,1),(1,1,1)], dtype=bool)) - assert_equal(mbase._mask, [0,1,0,1,1]) + # Used to be mask, now it's recordmask! + assert_equal(mbase.recordmask, [0,1,0,1,1]) # Set slices ................................. 
mbase = base.view(mrecarray).copy() - mbase[:2] = 5 + mbase[:2] = (5,5,5) assert_equal(mbase.a._data, [5,5,3,4,5]) assert_equal(mbase.a._mask, [0,0,0,0,1]) assert_equal(mbase.b._data, [5.,5.,3.3,4.4,5.5]) @@ -226,13 +239,29 @@ base = self.base.copy() mbase = base.view(mrecarray) mbase.harden_mask() - mbase[-2:] = 5 - assert_equal(mbase.a._data, [1,2,3,5,5]) - assert_equal(mbase.b._data, [1.1,2.2,3.3,5,5.5]) - assert_equal(mbase.c._data, ['one','two','three','5','five']) - assert_equal(mbase.a._mask, [0,1,0,0,1]) - assert_equal(mbase.b._mask, mbase.a._mask) - assert_equal(mbase.b._mask, mbase.c._mask) + try: + mbase[-2:] = (5,5,5) + assert_equal(mbase.a._data, [1,2,3,5,5]) + assert_equal(mbase.b._data, [1.1,2.2,3.3,5,5.5]) + assert_equal(mbase.c._data, ['one','two','three','5','five']) + assert_equal(mbase.a._mask, [0,1,0,0,1]) + assert_equal(mbase.b._mask, mbase.a._mask) + assert_equal(mbase.b._mask, mbase.c._mask) + except NotImplementedError: + # OK, not implemented yet... + pass + except AssertionError: + raise + else: + raise Exception("Flexible hard masks should be supported !") + # Not using a tuple should crash + try: + mbase[-2:] = 3 + except (NotImplementedError, TypeError): + pass + else: + raise TypeError("Should have expected a readable buffer object!") + def test_hardmask(self): "Test hardmask" @@ -241,11 +270,13 @@ mbase.harden_mask() assert(mbase._hardmask) mbase._mask = nomask - assert_equal(mbase._mask, [0,1,0,0,1]) + assert_equal_records(mbase._mask, base._mask) mbase.soften_mask() assert(not mbase._hardmask) mbase._mask = nomask # So, the mask of a field is no longer set to nomask... + assert_equal_records(mbase._mask, + ma.make_mask_none(base.shape,base.dtype.names)) assert(ma.make_mask(mbase['b']._mask) is nomask) assert_equal(mbase['a']._mask,mbase['b']._mask) # From numpy-svn at scipy.org Wed Jun 11 14:38:37 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 11 Jun 2008 13:38:37 -0500 (CDT) Subject: [Numpy-svn] r5265 - trunk/numpy/doc Message-ID: <20080611183837.D3B2CC7C097@scipy.org> Author: stefan Date: 2008-06-11 13:38:20 -0500 (Wed, 11 Jun 2008) New Revision: 5265 Modified: trunk/numpy/doc/HOWTO_DOCUMENT.txt Log: How to use variables in math markup. Modified: trunk/numpy/doc/HOWTO_DOCUMENT.txt =================================================================== --- trunk/numpy/doc/HOWTO_DOCUMENT.txt 2008-06-08 23:04:42 UTC (rev 5264) +++ trunk/numpy/doc/HOWTO_DOCUMENT.txt 2008-06-11 18:38:20 UTC (rev 5265) @@ -237,6 +237,12 @@ The value of :math:`\omega` is larger than 5. + Variable names are displayed in typewriter font, obtained by using + ``\mathtt{var}``:: + + We square the input parameter `alpha` to obtain + :math:`\mathtt{alpha}^2`. + Note that LaTeX is not particularly easy to read, so use equations sparingly. From numpy-svn at scipy.org Thu Jun 12 01:45:31 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 00:45:31 -0500 (CDT) Subject: [Numpy-svn] r5266 - trunk/numpy/ma Message-ID: <20080612054531.A6F97C7C01F@scipy.org> Author: cdavid Date: 2008-06-12 00:45:18 -0500 (Thu, 12 Jun 2008) New Revision: 5266 Modified: trunk/numpy/ma/ Log: Ignore python and vim junk in numpy/ma. 
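
[Illustrative note, not part of the commit] The property listing that follows records the new ``svn:ignore`` value for ``numpy/ma``. Subversion stores this property as a list of glob patterns, conventionally one pattern per line, and it can be read back from a working copy with ``svn propget``. Assuming the patterns shown in the listing (the one-per-line layout is an assumption, since the mail archive flattens whitespace), the stored value would look roughly like::

    $ svn propget svn:ignore numpy/ma
    core_tmp.py
    ma_old.py
    core_mod.py
    core_new.py
    mrecords_new.py
    *.pyc
    *.swp
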
Property changes on: trunk/numpy/ma ___________________________________________________________________ Name: svn:ignore - core_tmp.py ma_old.py core_mod.py core_new.py mrecords_new.py + core_tmp.py ma_old.py core_mod.py core_new.py mrecords_new.py *.pyc *.swp From numpy-svn at scipy.org Thu Jun 12 02:35:26 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 01:35:26 -0500 (CDT) Subject: [Numpy-svn] r5267 - trunk/numpy/distutils/command Message-ID: <20080612063526.3EDFA39C443@scipy.org> Author: cdavid Date: 2008-06-12 01:35:22 -0500 (Thu, 12 Jun 2008) New Revision: 5267 Modified: trunk/numpy/distutils/command/scons.py Log: scons command: set distutils libdir relatively to build directory. Modified: trunk/numpy/distutils/command/scons.py =================================================================== --- trunk/numpy/distutils/command/scons.py 2008-06-12 05:45:18 UTC (rev 5266) +++ trunk/numpy/distutils/command/scons.py 2008-06-12 06:35:22 UTC (rev 5267) @@ -38,6 +38,14 @@ from numscons import get_scons_path return get_scons_path() +def get_distutils_libdir(cmd): + """Returns the path where distutils install libraries, relatively to the + scons build directory.""" + from numscons import get_scons_build_dir + scdir = get_scons_build_dir() + n = scdir.count(os.sep) + return pjoin(os.sep.join([os.pardir for i in range(n+1)]), cmd.build_lib) + def get_python_exec_invoc(): """This returns the python executable from which this file is invocated.""" # Do we need to take into account the PYTHONPATH, in a cross platform way, @@ -361,7 +369,7 @@ cmd.append('pkg_name="%s"' % pkg_name) #cmd.append('distutils_libdir=%s' % protect_path(pjoin(self.build_lib, # pdirname(sconscript)))) - cmd.append('distutils_libdir=%s' % protect_path(pjoin(self.build_lib))) + cmd.append('distutils_libdir=%s' % protect_path(get_distutils_libdir(self))) if not self._bypass_distutils_cc: cmd.append('cc_opt=%s' % self.scons_compiler) From numpy-svn at scipy.org Thu Jun 12 03:20:32 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 02:20:32 -0500 (CDT) Subject: [Numpy-svn] r5268 - trunk/numpy/core Message-ID: <20080612072032.5BFE439C24F@scipy.org> Author: cdavid Date: 2008-06-12 02:20:28 -0500 (Thu, 12 Jun 2008) New Revision: 5268 Modified: trunk/numpy/core/scons_support.py Log: Remove distutils_dirs_emitter hacks: no need anymore since we use variant_dir. Modified: trunk/numpy/core/scons_support.py =================================================================== --- trunk/numpy/core/scons_support.py 2008-06-12 06:35:22 UTC (rev 5267) +++ trunk/numpy/core/scons_support.py 2008-06-12 07:20:28 UTC (rev 5268) @@ -1,4 +1,4 @@ -#! Last Change: Mon Apr 21 07:00 PM 2008 J +#! 
Last Change: Thu Jun 12 02:00 PM 2008 J """Code to support special facilities to scons which are only useful for numpy.core, hence not put into numpy.distutils.scons""" @@ -16,10 +16,6 @@ from numscons.numdist import process_c_str as process_str from numscons.core.utils import rsplit, isstring -try: - from numscons import distutils_dirs_emitter -except ImportError: - raise ImportError("You need numscons >= 0.5.2") import SCons.Node import SCons @@ -189,16 +185,14 @@ return nosmp == 1 array_api_gen_bld = Builder(action = Action(do_generate_numpy_api, '$ARRAPIGENCOMSTR'), - emitter = [generate_api_emitter, - distutils_dirs_emitter]) + emitter = generate_api_emitter) + ufunc_api_gen_bld = Builder(action = Action(do_generate_ufunc_api, '$UFUNCAPIGENCOMSTR'), - emitter = [generate_api_emitter, - distutils_dirs_emitter]) + emitter = generate_api_emitter) template_bld = Builder(action = Action(generate_from_template, '$TEMPLATECOMSTR'), - emitter = [generate_from_template_emitter, - distutils_dirs_emitter]) + emitter = generate_from_template_emitter) umath_bld = Builder(action = Action(generate_umath, '$UMATHCOMSTR'), - emitter = [generate_umath_emitter, distutils_dirs_emitter]) + emitter = generate_umath_emitter) From numpy-svn at scipy.org Thu Jun 12 03:23:39 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 02:23:39 -0500 (CDT) Subject: [Numpy-svn] r5269 - trunk/numpy/core Message-ID: <20080612072339.2776839C24F@scipy.org> Author: cdavid Date: 2008-06-12 02:23:31 -0500 (Thu, 12 Jun 2008) New Revision: 5269 Added: trunk/numpy/core/SConscript Removed: trunk/numpy/core/SConstruct Log: variant_dir: Rename SConscript for numpy.core. Copied: trunk/numpy/core/SConscript (from rev 5266, trunk/numpy/core/SConstruct) Deleted: trunk/numpy/core/SConstruct =================================================================== --- trunk/numpy/core/SConstruct 2008-06-12 07:20:28 UTC (rev 5268) +++ trunk/numpy/core/SConstruct 2008-06-12 07:23:31 UTC (rev 5269) @@ -1,269 +0,0 @@ -# Last Change: Mon Apr 21 07:00 PM 2008 J -# vim:syntax=python -import os -import sys -from os.path import join as pjoin, basename as pbasename, dirname as pdirname -from copy import deepcopy - -from numscons import get_python_inc, get_pythonlib_dir -from numscons import GetNumpyEnvironment -from numscons import CheckCBLAS -from numscons import write_info - -from scons_support import CheckBrokenMathlib, define_no_smp, \ - check_mlib, check_mlibs, is_npy_no_signal -from scons_support import array_api_gen_bld, ufunc_api_gen_bld, template_bld, \ - umath_bld - - -env = GetNumpyEnvironment(ARGUMENTS) -env.Append(CPPPATH = [get_python_inc()]) -if os.name == 'nt': - # NT needs the pythonlib to run any code importing Python.h, including - # simple code using only typedef and so on, so we need it for configuration - # checks - env.AppendUnique(LIBPATH = [get_pythonlib_dir()]) - -#======================= -# Starting Configuration -#======================= -config = env.NumpyConfigure(custom_tests = {'CheckBrokenMathlib' : CheckBrokenMathlib, - 'CheckCBLAS' : CheckCBLAS}, config_h = pjoin(env['build_dir'], 'config.h')) - -# numpyconfig_sym will keep the values of some configuration variables, the one -# needed for the public numpy API. - -# Convention: list of tuples (definition, value). 
value: -# - 0: #undef definition -# - 1: #define definition -# - string: #define definition value -numpyconfig_sym = [] - -#--------------- -# Checking Types -#--------------- -if not config.CheckHeader("Python.h"): - raise RuntimeError("Error: Python.h header is not found (or cannot be " -"compiled). On linux, check that you have python-dev/python-devel packages. On" -" windows, check \ that you have the platform SDK.") - -def check_type(type, include = None): - st = config.CheckTypeSize(type, includes = include) - type = type.replace(' ', '_') - if st: - numpyconfig_sym.append(('SIZEOF_%s' % type.upper(), '%d' % st)) - else: - numpyconfig_sym.append(('SIZEOF_%s' % type.upper(), 0)) - -for type in ('short', 'int', 'long', 'float', 'double', 'long double'): - check_type(type) - -for type in ('Py_intptr_t',): - check_type(type, include = "#include \n") - -# We check declaration AND type because that's how distutils does it. -if config.CheckDeclaration('PY_LONG_LONG', includes = '#include \n'): - st = config.CheckTypeSize('PY_LONG_LONG', - includes = '#include \n') - assert not st == 0 - numpyconfig_sym.append(('DEFINE_NPY_SIZEOF_LONGLONG', - '#define NPY_SIZEOF_LONGLONG %d' % st)) - numpyconfig_sym.append(('DEFINE_NPY_SIZEOF_PY_LONG_LONG', - '#define NPY_SIZEOF_PY_LONG_LONG %d' % st)) -else: - numpyconfig_sym.append(('DEFINE_NPY_SIZEOF_LONGLONG', '')) - numpyconfig_sym.append(('DEFINE_NPY_SIZEOF_PY_LONG_LONG', '')) - -if not config.CheckDeclaration('CHAR_BIT', includes= '#include \n'): - raise RuntimeError(\ -"""Config wo CHAR_BIT is not supported with scons: please contact the -maintainer (cdavid)""") - -#---------------------- -# Checking signal stuff -#---------------------- -if is_npy_no_signal(): - numpyconfig_sym.append(('DEFINE_NPY_NO_SIGNAL', '#define NPY_NO_SIGNAL\n')) - config.Define('__NPY_PRIVATE_NO_SIGNAL', - comment = "define to 1 to disable SMP support ") -else: - numpyconfig_sym.append(('DEFINE_NPY_NO_SIGNAL', '')) - -#--------------------- -# Checking SMP option -#--------------------- -if define_no_smp(): - nosmp = 1 -else: - nosmp = 0 -numpyconfig_sym.append(('NPY_NO_SMP', nosmp)) - -#---------------------- -# Checking the mathlib -#---------------------- -mlibs = [[], ['m'], ['cpml']] -mathlib = os.environ.get('MATHLIB') -if mathlib: - mlibs.insert(0, mathlib) - -mlib = check_mlibs(config, mlibs) - -# XXX: this is ugly: mathlib has nothing to do in a public header file -numpyconfig_sym.append(('MATHLIB', ','.join(mlib))) - -#---------------------------------- -# Checking the math funcs available -#---------------------------------- -# Function to check: -mfuncs = ('expl', 'expf', 'log1p', 'expm1', 'asinh', 'atanhf', 'atanhl', - 'isnan', 'isinf', 'rint') - -# Set value to 1 for each defined function (in math lib) -mfuncs_defined = dict([(f, 0) for f in mfuncs]) - -# TODO: checklib vs checkfunc ? -def check_func(f): - """Check that f is available in mlib, and add the symbol appropriately. 
""" - st = config.CheckDeclaration(f, language = 'C', includes = "#include ") - if st: - st = config.CheckFunc(f, language = 'C') - if st: - mfuncs_defined[f] = 1 - else: - mfuncs_defined[f] = 0 - -for f in mfuncs: - check_func(f) - -if mfuncs_defined['expl'] == 1: - config.Define('HAVE_LONGDOUBLE_FUNCS', - comment = 'Define to 1 if long double funcs are available') -if mfuncs_defined['expf'] == 1: - config.Define('HAVE_FLOAT_FUNCS', - comment = 'Define to 1 if long double funcs are available') -if mfuncs_defined['asinh'] == 1: - config.Define('HAVE_INVERSE_HYPERBOLIC', - comment = 'Define to 1 if inverse hyperbolic funcs are '\ - 'available') -if mfuncs_defined['atanhf'] == 1: - config.Define('HAVE_INVERSE_HYPERBOLIC_FLOAT', - comment = 'Define to 1 if inverse hyperbolic float funcs '\ - 'are available') -if mfuncs_defined['atanhl'] == 1: - config.Define('HAVE_INVERSE_HYPERBOLIC_LONGDOUBLE', - comment = 'Define to 1 if inverse hyperbolic long double '\ - 'funcs are available') - -#------------------------------------------------------- -# Define the function PyOS_ascii_strod if not available -#------------------------------------------------------- -if not config.CheckDeclaration('PyOS_ascii_strtod', - includes = "#include "): - if config.CheckFunc('strtod'): - config.Define('PyOS_ascii_strtod', 'strtod', - "Define to a function to use as a replacement for "\ - "PyOS_ascii_strtod if not available in python header") - -#------------------------------------ -# DISTUTILS Hack on AMD64 on windows -#------------------------------------ -# XXX: this is ugly -if sys.platform=='win32' or os.name=='nt': - from distutils.msvccompiler import get_build_architecture - a = get_build_architecture() - print 'BUILD_ARCHITECTURE: %r, os.name=%r, sys.platform=%r' % \ - (a, os.name, sys.platform) - if a == 'AMD64': - distutils_use_sdk = 1 - config.Define('DISTUTILS_USE_SDK', distutils_use_sdk, - "define to 1 to disable SMP support ") - -#-------------- -# Checking Blas -#-------------- -if config.CheckCBLAS(): - build_blasdot = 1 -else: - build_blasdot = 0 - -config.Finish() -write_info(env) - -#========== -# Build -#========== - -#--------------------------------------- -# Generate the public configuration file -#--------------------------------------- -config_dict = {} -# XXX: this is ugly, make the API for config.h and numpyconfig.h similar -for key, value in numpyconfig_sym: - config_dict['@%s@' % key] = str(value) -env['SUBST_DICT'] = config_dict - -include_dir = 'include/numpy' -env.SubstInFile(pjoin(env['build_dir'], 'numpyconfig.h'), - pjoin(env['src_dir'], include_dir, 'numpyconfig.h.in')) - -env['CONFIG_H_GEN'] = numpyconfig_sym - -#--------------------------- -# Builder for generated code -#--------------------------- -env.Append(BUILDERS = {'GenerateMultiarrayApi' : array_api_gen_bld, - 'GenerateUfuncApi' : ufunc_api_gen_bld, - 'GenerateFromTemplate' : template_bld, - 'GenerateUmath' : umath_bld}) - -#------------------------ -# Generate generated code -#------------------------ -scalartypes_src = env.GenerateFromTemplate(pjoin('src', 'scalartypes.inc.src')) -arraytypes_src = env.GenerateFromTemplate(pjoin('src', 'arraytypes.inc.src')) -sortmodule_src = env.GenerateFromTemplate(pjoin('src', '_sortmodule.c.src')) -umathmodule_src = env.GenerateFromTemplate(pjoin('src', 'umathmodule.c.src')) -scalarmathmodule_src = env.GenerateFromTemplate( - pjoin('src', 'scalarmathmodule.c.src')) - -umath = env.GenerateUmath('__umath_generated', - pjoin('code_generators', 'generate_umath.py')) - -multiarray_api = 
env.GenerateMultiarrayApi('multiarray_api', - [ pjoin('code_generators', 'numpy_api_order.txt')]) - -ufunc_api = env.GenerateUfuncApi('ufunc_api', - pjoin('code_generators', 'ufunc_api_order.txt')) - -env.Append(CPPPATH = [pjoin(env['src_dir'], 'include'), env['build_dir']]) - -#----------------- -# Build multiarray -#----------------- -multiarray_src = [pjoin('src', 'multiarraymodule.c')] -multiarray = env.NumpyPythonExtension('multiarray', source = multiarray_src) - -#------------------ -# Build sort module -#------------------ -sort = env.NumpyPythonExtension('_sort', source = sortmodule_src) - -#------------------- -# Build umath module -#------------------- -umathmodule = env.NumpyPythonExtension('umath', source = umathmodule_src) - -#------------------------ -# Build scalarmath module -#------------------------ -scalarmathmodule = env.NumpyPythonExtension('scalarmath', - source = scalarmathmodule_src) - -#---------------------- -# Build _dotblas module -#---------------------- -if build_blasdot: - dotblas_src = [pjoin('blasdot', i) for i in ['_dotblas.c']] - blasenv = env.Clone() - blasenv.Append(CPPPATH = pjoin(env['src_dir'], 'blasdot')) - dotblas = blasenv.NumpyPythonExtension('_dotblas', source = dotblas_src) From numpy-svn at scipy.org Thu Jun 12 03:24:14 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 02:24:14 -0500 (CDT) Subject: [Numpy-svn] r5270 - trunk/numpy/core Message-ID: <20080612072414.264A839C5F4@scipy.org> Author: cdavid Date: 2008-06-12 02:24:09 -0500 (Thu, 12 Jun 2008) New Revision: 5270 Added: trunk/numpy/core/SConstruct Log: Add boilerplate SConstruct to set variant dir transparantly. Added: trunk/numpy/core/SConstruct =================================================================== --- trunk/numpy/core/SConstruct 2008-06-12 07:23:31 UTC (rev 5269) +++ trunk/numpy/core/SConstruct 2008-06-12 07:24:09 UTC (rev 5270) @@ -0,0 +1,2 @@ +from numscons import GetInitEnvironment +GetInitEnvironment(ARGUMENTS).DistutilsSConscript('SConscript') From numpy-svn at scipy.org Thu Jun 12 03:28:31 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 02:28:31 -0500 (CDT) Subject: [Numpy-svn] r5271 - trunk/numpy/core Message-ID: <20080612072831.0064C39C24F@scipy.org> Author: cdavid Date: 2008-06-12 02:28:27 -0500 (Thu, 12 Jun 2008) New Revision: 5271 Modified: trunk/numpy/core/SConscript Log: Adapt SConscript to new architecture for build dir. Modified: trunk/numpy/core/SConscript =================================================================== --- trunk/numpy/core/SConscript 2008-06-12 07:24:09 UTC (rev 5270) +++ trunk/numpy/core/SConscript 2008-06-12 07:28:27 UTC (rev 5271) @@ -1,4 +1,4 @@ -# Last Change: Mon Apr 21 07:00 PM 2008 J +# Last Change: Thu Jun 12 04:00 PM 2008 J # vim:syntax=python import os import sys @@ -28,7 +28,7 @@ # Starting Configuration #======================= config = env.NumpyConfigure(custom_tests = {'CheckBrokenMathlib' : CheckBrokenMathlib, - 'CheckCBLAS' : CheckCBLAS}, config_h = pjoin(env['build_dir'], 'config.h')) + 'CheckCBLAS' : CheckCBLAS}, config_h = pjoin('config.h')) # numpyconfig_sym will keep the values of some configuration variables, the one # needed for the public numpy API. 
@@ -203,8 +203,7 @@ env['SUBST_DICT'] = config_dict include_dir = 'include/numpy' -env.SubstInFile(pjoin(env['build_dir'], 'numpyconfig.h'), - pjoin(env['src_dir'], include_dir, 'numpyconfig.h.in')) +env.SubstInFile(pjoin(include_dir, 'numpyconfig.h'), pjoin(include_dir, 'numpyconfig.h.in')) env['CONFIG_H_GEN'] = numpyconfig_sym @@ -235,28 +234,28 @@ ufunc_api = env.GenerateUfuncApi('ufunc_api', pjoin('code_generators', 'ufunc_api_order.txt')) -env.Append(CPPPATH = [pjoin(env['src_dir'], 'include'), env['build_dir']]) +env.Append(CPPPATH = ['include', '.']) #----------------- # Build multiarray #----------------- multiarray_src = [pjoin('src', 'multiarraymodule.c')] -multiarray = env.NumpyPythonExtension('multiarray', source = multiarray_src) +multiarray = env.DistutilsPythonExtension('multiarray', source = multiarray_src) #------------------ # Build sort module #------------------ -sort = env.NumpyPythonExtension('_sort', source = sortmodule_src) +sort = env.DistutilsPythonExtension('_sort', source = sortmodule_src) #------------------- # Build umath module #------------------- -umathmodule = env.NumpyPythonExtension('umath', source = umathmodule_src) +umathmodule = env.DistutilsPythonExtension('umath', source = umathmodule_src) #------------------------ # Build scalarmath module #------------------------ -scalarmathmodule = env.NumpyPythonExtension('scalarmath', +scalarmathmodule = env.DistutilsPythonExtension('scalarmath', source = scalarmathmodule_src) #---------------------- @@ -265,5 +264,5 @@ if build_blasdot: dotblas_src = [pjoin('blasdot', i) for i in ['_dotblas.c']] blasenv = env.Clone() - blasenv.Append(CPPPATH = pjoin(env['src_dir'], 'blasdot')) - dotblas = blasenv.NumpyPythonExtension('_dotblas', source = dotblas_src) + blasenv.Append(CPPPATH = pjoin('blasdot')) + dotblas = blasenv.DistutilsPythonExtension('_dotblas', source = dotblas_src) From numpy-svn at scipy.org Thu Jun 12 03:43:31 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 02:43:31 -0500 (CDT) Subject: [Numpy-svn] r5272 - trunk/numpy/core Message-ID: <20080612074331.C6B8C39C24F@scipy.org> Author: cdavid Date: 2008-06-12 02:43:27 -0500 (Thu, 12 Jun 2008) New Revision: 5272 Modified: trunk/numpy/core/setupscons.py Log: Adapt numpyconfig.h location in setup.py file. Modified: trunk/numpy/core/setupscons.py =================================================================== --- trunk/numpy/core/setupscons.py 2008-06-12 07:28:27 UTC (rev 5271) +++ trunk/numpy/core/setupscons.py 2008-06-12 07:43:27 UTC (rev 5272) @@ -50,7 +50,7 @@ # XXX: I really have to think about how to communicate path info # between scons and distutils, and set the options at one single # location. - target = join(scons_build_dir, local_dir, 'numpyconfig.h') + target = join(scons_build_dir, local_dir, 'include/numpy/numpyconfig.h') incl_dir = os.path.dirname(target) if incl_dir not in config.numpy_include_dirs: config.numpy_include_dirs.append(incl_dir) From numpy-svn at scipy.org Thu Jun 12 04:59:26 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 03:59:26 -0500 (CDT) Subject: [Numpy-svn] r5273 - trunk/numpy/distutils/command Message-ID: <20080612085926.EA82039C5C4@scipy.org> Author: cdavid Date: 2008-06-12 03:59:20 -0500 (Thu, 12 Jun 2008) New Revision: 5273 Modified: trunk/numpy/distutils/command/scons.py Log: When src_dir is not null, takes it into account to retrieve distutils libdir. 
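The patch below makes get_distutils_libdir() express the distutils build_lib directory relative to the place the SConscript runs from: it counts how deep the scons variant directory is nested and walks that many levels back up. A minimal sketch of that path arithmetic, with purely illustrative paths (this is not the numpy.distutils code itself):

    import os
    from os.path import join as pjoin, dirname as pdirname

    def relative_libdir(scons_build_dir, sconscript_path, build_lib):
        # e.g. scons_build_dir='build/scons', sconscript_path='numpy/core/SConscript'
        scdir = pjoin(scons_build_dir, pdirname(sconscript_path))  # build/scons/numpy/core
        n = scdir.count(os.sep)                                    # 3 separators, so 4 levels up
        return pjoin(os.sep.join([os.pardir] * (n + 1)), build_lib)

    print(relative_libdir('build/scons', 'numpy/core/SConscript',
                          'build/lib.linux-i686-2.5'))
    # -> ../../../../build/lib.linux-i686-2.5 (with a POSIX path separator)
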
Modified: trunk/numpy/distutils/command/scons.py =================================================================== --- trunk/numpy/distutils/command/scons.py 2008-06-12 07:43:27 UTC (rev 5272) +++ trunk/numpy/distutils/command/scons.py 2008-06-12 08:59:20 UTC (rev 5273) @@ -38,11 +38,11 @@ from numscons import get_scons_path return get_scons_path() -def get_distutils_libdir(cmd): +def get_distutils_libdir(cmd, sconscript_path): """Returns the path where distutils install libraries, relatively to the scons build directory.""" from numscons import get_scons_build_dir - scdir = get_scons_build_dir() + scdir = pjoin(get_scons_build_dir(), pdirname(sconscript_path)) n = scdir.count(os.sep) return pjoin(os.sep.join([os.pardir for i in range(n+1)]), cmd.build_lib) @@ -369,7 +369,8 @@ cmd.append('pkg_name="%s"' % pkg_name) #cmd.append('distutils_libdir=%s' % protect_path(pjoin(self.build_lib, # pdirname(sconscript)))) - cmd.append('distutils_libdir=%s' % protect_path(get_distutils_libdir(self))) + cmd.append('distutils_libdir=%s' % + protect_path(get_distutils_libdir(self, sconscript))) if not self._bypass_distutils_cc: cmd.append('cc_opt=%s' % self.scons_compiler) From numpy-svn at scipy.org Thu Jun 12 05:01:25 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 04:01:25 -0500 (CDT) Subject: [Numpy-svn] r5274 - trunk/numpy/lib Message-ID: <20080612090125.D82D3C7C016@scipy.org> Author: cdavid Date: 2008-06-12 04:01:13 -0500 (Thu, 12 Jun 2008) New Revision: 5274 Added: trunk/numpy/lib/SConscript trunk/numpy/lib/SConstruct Removed: trunk/numpy/lib/SConstruct Log: Adapt numpy.lib to new scons build_dir behavior. Copied: trunk/numpy/lib/SConscript (from rev 5266, trunk/numpy/lib/SConstruct) Deleted: trunk/numpy/lib/SConstruct =================================================================== --- trunk/numpy/lib/SConstruct 2008-06-12 08:59:20 UTC (rev 5273) +++ trunk/numpy/lib/SConstruct 2008-06-12 09:01:13 UTC (rev 5274) @@ -1,9 +0,0 @@ -# Last Change: Tue May 20 05:00 PM 2008 J -# vim:syntax=python -from numscons import GetNumpyEnvironment, scons_get_paths - -env = GetNumpyEnvironment(ARGUMENTS) -env.Append(CPPPATH = scons_get_paths(env['include_bootstrap'])) - -_compiled_base = env.NumpyPythonExtension('_compiled_base', - source = ['src/_compiled_base.c']) Copied: trunk/numpy/lib/SConstruct (from rev 5270, trunk/numpy/core/SConstruct) From numpy-svn at scipy.org Thu Jun 12 05:48:47 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 04:48:47 -0500 (CDT) Subject: [Numpy-svn] r5275 - trunk/numpy/distutils/command Message-ID: <20080612094847.1673339C3D5@scipy.org> Author: cdavid Date: 2008-06-12 04:48:42 -0500 (Thu, 12 Jun 2008) New Revision: 5275 Modified: trunk/numpy/distutils/command/scons.py Log: Set numpy include path relatively to top setup callee when bootstrapping. Modified: trunk/numpy/distutils/command/scons.py =================================================================== --- trunk/numpy/distutils/command/scons.py 2008-06-12 09:01:13 UTC (rev 5274) +++ trunk/numpy/distutils/command/scons.py 2008-06-12 09:48:42 UTC (rev 5275) @@ -10,7 +10,6 @@ from numpy.distutils.fcompiler import FCompiler from numpy.distutils.exec_command import find_executable from numpy.distutils import log -from numpy.distutils.misc_util import get_numpy_include_dirs def get_scons_build_dir(): """Return the top path where everything produced by scons will be put. 
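Bootstrapping here means building numpy's own extension modules, which need the numpy headers before numpy is installed. Those header directories are passed to scons through the include_bootstrap command-line variable, which the subpackage SConscripts in this thread read back with env.Append(CPPPATH = scons_get_paths(env['include_bootstrap'])); the patch below additionally rewrites each directory relative to the SConscript's variant directory, using the same up-walking trick as get_distutils_libdir above. A rough sketch of how the option string is assembled (dirl_to_str here only mimics the documented contract of the helper in scons.py, and the directory names are illustrative):

    import os

    def dirl_to_str(dirlist):
        # concatenate the directories with the path separator,
        # as the docstring of the real helper describes
        return os.pathsep.join(dirlist)

    incdirs = ['numpy/core/include', 'build/scons/numpy/core/include/numpy']
    print('include_bootstrap=%s' % dirl_to_str(incdirs))
    # -> include_bootstrap=numpy/core/include:build/scons/numpy/core/include/numpy
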
@@ -57,6 +56,21 @@ import sys return sys.executable +def get_numpy_include_dirs(sconscript_path): + """Return include dirs for numpy. + + The paths are relatively to the setup.py script path.""" + from numpy.distutils.misc_util import get_numpy_include_dirs as _incdir + from numscons import get_scons_build_dir + scdir = pjoin(get_scons_build_dir(), pdirname(sconscript_path)) + n = scdir.count(os.sep) + + dirs = _incdir() + rdirs = [] + for d in dirs: + rdirs.append(pjoin(os.sep.join([os.pardir for i in range(n+1)]), d)) + return rdirs + def dirl_to_str(dirlist): """Given a list of directories, returns a string where the paths are concatenated by the path separator. @@ -387,7 +401,7 @@ cmd.append('cxx_opt=%s' % dist2sconscxx(self.cxxcompiler)) cmd.append('cxx_opt_path=%s' % protect_path(get_cxx_tool_path(self.cxxcompiler))) - cmd.append('include_bootstrap=%s' % dirl_to_str(get_numpy_include_dirs())) + cmd.append('include_bootstrap=%s' % dirl_to_str(get_numpy_include_dirs(sconscript))) if self.silent: if int(self.silent) == 2: cmd.append('-Q') From numpy-svn at scipy.org Thu Jun 12 05:49:20 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 04:49:20 -0500 (CDT) Subject: [Numpy-svn] r5276 - trunk/numpy/lib Message-ID: <20080612094920.8899D39C2F9@scipy.org> Author: cdavid Date: 2008-06-12 04:49:16 -0500 (Thu, 12 Jun 2008) New Revision: 5276 Modified: trunk/numpy/lib/SConscript Log: Adapat numpy.lib scons build to new build_dir conventions. Modified: trunk/numpy/lib/SConscript =================================================================== --- trunk/numpy/lib/SConscript 2008-06-12 09:48:42 UTC (rev 5275) +++ trunk/numpy/lib/SConscript 2008-06-12 09:49:16 UTC (rev 5276) @@ -1,9 +1,9 @@ -# Last Change: Tue May 20 05:00 PM 2008 J +# Last Change: Thu Jun 12 06:00 PM 2008 J # vim:syntax=python from numscons import GetNumpyEnvironment, scons_get_paths env = GetNumpyEnvironment(ARGUMENTS) env.Append(CPPPATH = scons_get_paths(env['include_bootstrap'])) -_compiled_base = env.NumpyPythonExtension('_compiled_base', +_compiled_base = env.DistutilsPythonExtension('_compiled_base', source = ['src/_compiled_base.c']) From numpy-svn at scipy.org Thu Jun 12 05:55:37 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 04:55:37 -0500 (CDT) Subject: [Numpy-svn] r5277 - trunk/numpy/numarray Message-ID: <20080612095537.F275539C2F9@scipy.org> Author: cdavid Date: 2008-06-12 04:55:30 -0500 (Thu, 12 Jun 2008) New Revision: 5277 Added: trunk/numpy/numarray/SConscript trunk/numpy/numarray/SConstruct Removed: trunk/numpy/numarray/SConstruct Log: Adapt numpy.numarray to new build dir convention. 
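This commit and the ones that follow (numarray, fft, linalg, random) apply the same two-file pattern that numpy.core and numpy.lib received above: a boilerplate SConstruct whose only job is to set up the variant (build) directory, and the real build description in an SConscript written with paths relative to itself. A sketch of the migrated layout for a hypothetical subpackage (the contents mirror the diffs in this thread; '_example' is a made-up extension name, and ARGUMENTS is the global that scons provides to build scripts):

    # SConstruct -- boilerplate only, identical for every subpackage
    from numscons import GetInitEnvironment
    GetInitEnvironment(ARGUMENTS).DistutilsSConscript('SConscript')

    # SConscript -- the actual build logic; DistutilsPythonExtension
    # replaces the old NumpyPythonExtension builder
    from numscons import GetNumpyEnvironment, scons_get_paths

    env = GetNumpyEnvironment(ARGUMENTS)
    env.Append(CPPPATH = scons_get_paths(env['include_bootstrap']))
    _example = env.DistutilsPythonExtension('_example', source = ['src/_example.c'])
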
Copied: trunk/numpy/numarray/SConscript (from rev 5266, trunk/numpy/numarray/SConstruct) =================================================================== --- trunk/numpy/numarray/SConstruct 2008-06-12 05:45:18 UTC (rev 5266) +++ trunk/numpy/numarray/SConscript 2008-06-12 09:55:30 UTC (rev 5277) @@ -0,0 +1,9 @@ +# Last Change: Thu Jun 12 06:00 PM 2008 J +# vim:syntax=python +from numscons import GetNumpyEnvironment, scons_get_paths + +env = GetNumpyEnvironment(ARGUMENTS) +env.Append(CPPPATH = scons_get_paths(env['include_bootstrap'])) +env.Append(CPPPATH = ['numpy']) + +_capi = env.DistutilsPythonExtension('_capi', source = ['_capi.c']) Deleted: trunk/numpy/numarray/SConstruct =================================================================== --- trunk/numpy/numarray/SConstruct 2008-06-12 09:49:16 UTC (rev 5276) +++ trunk/numpy/numarray/SConstruct 2008-06-12 09:55:30 UTC (rev 5277) @@ -1,9 +0,0 @@ -# Last Change: Tue May 20 05:00 PM 2008 J -# vim:syntax=python -from numscons import GetNumpyEnvironment, scons_get_paths - -env = GetNumpyEnvironment(ARGUMENTS) -env.Append(CPPPATH = scons_get_paths(env['include_bootstrap'])) -env.Append(CPPPATH = env['src_dir']) - -_capi = env.NumpyPythonExtension('_capi', source = ['_capi.c']) Copied: trunk/numpy/numarray/SConstruct (from rev 5274, trunk/numpy/lib/SConstruct) From numpy-svn at scipy.org Thu Jun 12 05:57:02 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 04:57:02 -0500 (CDT) Subject: [Numpy-svn] r5278 - trunk/numpy/fft Message-ID: <20080612095702.3269239C2F9@scipy.org> Author: cdavid Date: 2008-06-12 04:56:55 -0500 (Thu, 12 Jun 2008) New Revision: 5278 Added: trunk/numpy/fft/SConscript trunk/numpy/fft/SConstruct Removed: trunk/numpy/fft/SConstruct Log: Adapt numpy.fft to new build dir conventions. 
Copied: trunk/numpy/fft/SConscript (from rev 5266, trunk/numpy/fft/SConstruct) =================================================================== --- trunk/numpy/fft/SConstruct 2008-06-12 05:45:18 UTC (rev 5266) +++ trunk/numpy/fft/SConscript 2008-06-12 09:56:55 UTC (rev 5278) @@ -0,0 +1,10 @@ +# Last Change: Thu Jun 12 06:00 PM 2008 J +# vim:syntax=python +from numscons import GetNumpyEnvironment, scons_get_paths + +env = GetNumpyEnvironment(ARGUMENTS) +env.Append(CPPPATH = scons_get_paths(env['include_bootstrap'])) + +fftpack_lite = env.DistutilsPythonExtension('fftpack_lite', + source = ['fftpack_litemodule.c', + 'fftpack.c']) Deleted: trunk/numpy/fft/SConstruct =================================================================== --- trunk/numpy/fft/SConstruct 2008-06-12 09:55:30 UTC (rev 5277) +++ trunk/numpy/fft/SConstruct 2008-06-12 09:56:55 UTC (rev 5278) @@ -1,10 +0,0 @@ -# Last Change: Tue May 20 05:00 PM 2008 J -# vim:syntax=python -from numscons import GetNumpyEnvironment, scons_get_paths - -env = GetNumpyEnvironment(ARGUMENTS) -env.Append(CPPPATH = scons_get_paths(env['include_bootstrap'])) - -fftpack_lite = env.NumpyPythonExtension('fftpack_lite', - source = ['fftpack_litemodule.c', - 'fftpack.c']) Copied: trunk/numpy/fft/SConstruct (from rev 5277, trunk/numpy/numarray/SConstruct) From numpy-svn at scipy.org Thu Jun 12 06:00:48 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 05:00:48 -0500 (CDT) Subject: [Numpy-svn] r5279 - trunk/numpy/linalg Message-ID: <20080612100048.54D5D39C2F9@scipy.org> Author: cdavid Date: 2008-06-12 05:00:37 -0500 (Thu, 12 Jun 2008) New Revision: 5279 Added: trunk/numpy/linalg/SConscript trunk/numpy/linalg/SConstruct Removed: trunk/numpy/linalg/SConstruct Log: adapt numpy.linalg to new scons build_dir architecture. 
Copied: trunk/numpy/linalg/SConscript (from rev 5266, trunk/numpy/linalg/SConstruct) =================================================================== --- trunk/numpy/linalg/SConstruct 2008-06-12 05:45:18 UTC (rev 5266) +++ trunk/numpy/linalg/SConscript 2008-06-12 10:00:37 UTC (rev 5279) @@ -0,0 +1,29 @@ +# Last Change: Thu Jun 12 06:00 PM 2008 J +# vim:syntax=python +import os.path + +from numscons import GetNumpyEnvironment, scons_get_paths, \ + scons_get_mathlib +from numscons import CheckF77LAPACK +from numscons import write_info + +env = GetNumpyEnvironment(ARGUMENTS) +env.Append(CPPPATH = scons_get_paths(env['include_bootstrap'])) + +config = env.NumpyConfigure(custom_tests = + {'CheckLAPACK' : CheckF77LAPACK}) + +use_lapack = config.CheckLAPACK() + +mlib = scons_get_mathlib(env) +env.AppendUnique(LIBS = mlib) + +config.Finish() +write_info(env) + +sources = ['lapack_litemodule.c'] +if not use_lapack: + sources.extend(['python_xerbla.c', 'zlapack_lite.c', 'dlapack_lite.c', + 'blas_lite.c', 'dlamch.c', 'f2c_lite.c']) +lapack_lite = env.DistutilsPythonExtension('lapack_lite', source = sources) + Deleted: trunk/numpy/linalg/SConstruct =================================================================== --- trunk/numpy/linalg/SConstruct 2008-06-12 09:56:55 UTC (rev 5278) +++ trunk/numpy/linalg/SConstruct 2008-06-12 10:00:37 UTC (rev 5279) @@ -1,29 +0,0 @@ -# Last Change: Tue May 20 05:00 PM 2008 J -# vim:syntax=python -import os.path - -from numscons import GetNumpyEnvironment, scons_get_paths, \ - scons_get_mathlib -from numscons import CheckF77LAPACK -from numscons import write_info - -env = GetNumpyEnvironment(ARGUMENTS) -env.Append(CPPPATH = scons_get_paths(env['include_bootstrap'])) - -config = env.NumpyConfigure(custom_tests = - {'CheckLAPACK' : CheckF77LAPACK}) - -use_lapack = config.CheckLAPACK() - -mlib = scons_get_mathlib(env) -env.AppendUnique(LIBS = mlib) - -config.Finish() -write_info(env) - -sources = ['lapack_litemodule.c'] -if not use_lapack: - sources.extend(['python_xerbla.c', 'zlapack_lite.c', 'dlapack_lite.c', - 'blas_lite.c', 'dlamch.c', 'f2c_lite.c']) -lapack_lite = env.NumpyPythonExtension('lapack_lite', source = sources) - Copied: trunk/numpy/linalg/SConstruct (from rev 5277, trunk/numpy/numarray/SConstruct) From numpy-svn at scipy.org Thu Jun 12 06:05:18 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 05:05:18 -0500 (CDT) Subject: [Numpy-svn] r5280 - trunk/numpy/random Message-ID: <20080612100518.E2AA639C3AB@scipy.org> Author: cdavid Date: 2008-06-12 05:05:12 -0500 (Thu, 12 Jun 2008) New Revision: 5280 Added: trunk/numpy/random/SConscript trunk/numpy/random/SConstruct Removed: trunk/numpy/random/SConstruct Log: adapt numpy.random to new scons build_dir architecture. Copied: trunk/numpy/random/SConscript (from rev 5266, trunk/numpy/random/SConstruct) =================================================================== --- trunk/numpy/random/SConstruct 2008-06-12 05:45:18 UTC (rev 5266) +++ trunk/numpy/random/SConscript 2008-06-12 10:05:12 UTC (rev 5280) @@ -0,0 +1,46 @@ +# Last Change: Thu Jun 12 06:00 PM 2008 J +# vim:syntax=python +import os + +from numscons import GetNumpyEnvironment, scons_get_paths, \ + scons_get_mathlib + +def CheckWincrypt(context): + from copy import deepcopy + src = """\ +/* check to see if _WIN32 is defined */ +int main(int argc, char *argv[]) +{ +#ifdef _WIN32 + return 0; +#else + return 1; +#endif +} +""" + + context.Message("Checking if using wincrypt ... 
") + st = context.env.TryRun(src, '.C') + if st[0] == 0: + context.Result('No') + else: + context.Result('Yes') + return st[0] + +env = GetNumpyEnvironment(ARGUMENTS) +env.Append(CPPPATH = scons_get_paths(env['include_bootstrap'])) + +mlib = scons_get_mathlib(env) +env.AppendUnique(LIBS = mlib) + +# On windows, see if we should use Advapi32 +if os.name == 'nt': + config = env.NumpyConfigure(custom_tests = {'CheckWincrypt' : CheckWincrypt}) + if config.CheckWincrypt: + config.env.AppendUnique(LIBS = 'Advapi32') + +sources = [os.path.join('mtrand', x) for x in + ['mtrand.c', 'randomkit.c', 'initarray.c', 'distributions.c']] + +# XXX: Pyrex dependency +mtrand = env.DistutilsPythonExtension('mtrand', source = sources) Deleted: trunk/numpy/random/SConstruct =================================================================== --- trunk/numpy/random/SConstruct 2008-06-12 10:00:37 UTC (rev 5279) +++ trunk/numpy/random/SConstruct 2008-06-12 10:05:12 UTC (rev 5280) @@ -1,46 +0,0 @@ -# Last Change: Tue May 20 05:00 PM 2008 J -# vim:syntax=python -import os - -from numscons import GetNumpyEnvironment, scons_get_paths, \ - scons_get_mathlib - -def CheckWincrypt(context): - from copy import deepcopy - src = """\ -/* check to see if _WIN32 is defined */ -int main(int argc, char *argv[]) -{ -#ifdef _WIN32 - return 0; -#else - return 1; -#endif -} -""" - - context.Message("Checking if using wincrypt ... ") - st = context.env.TryRun(src, '.C') - if st[0] == 0: - context.Result('No') - else: - context.Result('Yes') - return st[0] - -env = GetNumpyEnvironment(ARGUMENTS) -env.Append(CPPPATH = scons_get_paths(env['include_bootstrap'])) - -mlib = scons_get_mathlib(env) -env.AppendUnique(LIBS = mlib) - -# On windows, see if we should use Advapi32 -if os.name == 'nt': - config = env.NumpyConfigure(custom_tests = {'CheckWincrypt' : CheckWincrypt}) - if config.CheckWincrypt: - config.env.AppendUnique(LIBS = 'Advapi32') - -sources = [os.path.join('mtrand', x) for x in - ['mtrand.c', 'randomkit.c', 'initarray.c', 'distributions.c']] - -# XXX: Pyrex dependency -mtrand = env.NumpyPythonExtension('mtrand', source = sources) Copied: trunk/numpy/random/SConstruct (from rev 5277, trunk/numpy/numarray/SConstruct) From numpy-svn at scipy.org Thu Jun 12 06:56:31 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 05:56:31 -0500 (CDT) Subject: [Numpy-svn] r5281 - trunk/numpy/distutils/command Message-ID: <20080612105631.1E6CB39C3AB@scipy.org> Author: cdavid Date: 2008-06-12 05:56:20 -0500 (Thu, 12 Jun 2008) New Revision: 5281 Modified: trunk/numpy/distutils/command/scons.py Log: Make sure we are using numscons 0.8.0 or above. Modified: trunk/numpy/distutils/command/scons.py =================================================================== --- trunk/numpy/distutils/command/scons.py 2008-06-12 10:05:12 UTC (rev 5280) +++ trunk/numpy/distutils/command/scons.py 2008-06-12 10:56:20 UTC (rev 5281) @@ -336,6 +336,16 @@ raise RuntimeError("importing numscons failed (error was %s), using " \ "scons within distutils is not possible without " "this package " % str(e)) + + try: + from numscons import get_version + if get_version() < '0.8.0': + raise ValueError() + except ImportError, ValueError: + raise RuntimeError("You need numscons >= 0.8.0 to build numpy "\ + "with numscons (imported numscons path " \ + "is %s)." % numscons.__file__) + else: # nothing to do, just leave it here. 
return From numpy-svn at scipy.org Thu Jun 12 11:16:43 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 12 Jun 2008 10:16:43 -0500 (CDT) Subject: [Numpy-svn] r5282 - trunk/numpy/distutils/command Message-ID: <20080612151643.B8B0AC7C05C@scipy.org> Author: cdavid Date: 2008-06-12 10:16:07 -0500 (Thu, 12 Jun 2008) New Revision: 5282 Modified: trunk/numpy/distutils/command/scons.py Log: Do not fail scons command when cxx compiler is not available. Modified: trunk/numpy/distutils/command/scons.py =================================================================== --- trunk/numpy/distutils/command/scons.py 2008-06-12 10:56:20 UTC (rev 5281) +++ trunk/numpy/distutils/command/scons.py 2008-06-12 15:16:07 UTC (rev 5282) @@ -323,7 +323,10 @@ cxxcompiler.customize(self.distribution, need_cxx = 1) cxxcompiler.customize_cmd(self) self.cxxcompiler = cxxcompiler.cxx_compiler() - #print self.cxxcompiler.compiler_cxx[0] + try: + get_cxx_tool_path(self.cxxcompiler) + except DistutilsSetupError: + self.cxxcompiler = None if self.package_list: self.package_list = parse_package_list(self.package_list) From numpy-svn at scipy.org Sat Jun 14 02:06:39 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 14 Jun 2008 01:06:39 -0500 (CDT) Subject: [Numpy-svn] r5283 - trunk/numpy/core Message-ID: <20080614060639.1A76F39C68E@scipy.org> Author: cdavid Date: 2008-06-14 01:06:13 -0500 (Sat, 14 Jun 2008) New Revision: 5283 Modified: trunk/numpy/core/SConscript Log: Fix dotblas compilation on mac os X: scons scanner is not smart enough to interpret #include CPP_MACRO. Modified: trunk/numpy/core/SConscript =================================================================== --- trunk/numpy/core/SConscript 2008-06-12 15:16:07 UTC (rev 5282) +++ trunk/numpy/core/SConscript 2008-06-14 06:06:13 UTC (rev 5283) @@ -263,6 +263,5 @@ #---------------------- if build_blasdot: dotblas_src = [pjoin('blasdot', i) for i in ['_dotblas.c']] - blasenv = env.Clone() - blasenv.Append(CPPPATH = pjoin('blasdot')) - dotblas = blasenv.DistutilsPythonExtension('_dotblas', source = dotblas_src) + dotblas = env.DistutilsPythonExtension('_dotblas', source = dotblas_src) + env.Depends(dotblas, pjoin("blasdot", "cblas.h")) From numpy-svn at scipy.org Mon Jun 16 13:29:36 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 16 Jun 2008 12:29:36 -0500 (CDT) Subject: [Numpy-svn] r5284 - in trunk/numpy/ma: . 
tests Message-ID: <20080616172936.47E1F39C57F@scipy.org> Author: pierregm Date: 2008-06-16 12:29:28 -0500 (Mon, 16 Jun 2008) New Revision: 5284 Modified: trunk/numpy/ma/core.py trunk/numpy/ma/tests/test_core.py Log: core.MaskedArray.__new__ * Force a mask to be created from a list of masked arrays when mask=nomask and keep_mask=True Modified: trunk/numpy/ma/core.py =================================================================== --- trunk/numpy/ma/core.py 2008-06-14 06:06:13 UTC (rev 5283) +++ trunk/numpy/ma/core.py 2008-06-16 17:29:28 UTC (rev 5284) @@ -1225,11 +1225,19 @@ # With full version else: _data._mask = np.zeros(_data.shape, dtype=mdtype) - if copy: - _data._mask = _data._mask.copy() - _data._sharedmask = False + # Check whether we missed something + elif isinstance(data, (tuple,list)): + mask = np.array([getmaskarray(m) for m in data], dtype=mdtype) + # Force shrinking of the mask if needed (and possible) + if (mdtype == MaskType) and mask.any(): + _data._mask = mask + _data._sharedmask = False else: - _data._sharedmask = True + if copy: + _data._mask = _data._mask.copy() + _data._sharedmask = False + else: + _data._sharedmask = True # Case 2. : With a mask in input ........ else: # Read the mask with the current mdtype Modified: trunk/numpy/ma/tests/test_core.py =================================================================== --- trunk/numpy/ma/tests/test_core.py 2008-06-14 06:06:13 UTC (rev 5283) +++ trunk/numpy/ma/tests/test_core.py 2008-06-16 17:29:28 UTC (rev 5284) @@ -167,6 +167,18 @@ dma_3 = MaskedArray(dma_1, mask=[1,0,0,0]*6) fail_if_equal(dma_3.mask, dma_1.mask) + def test_creation_with_list_of_maskedarrays(self): + "Tests creaating a masked array from alist of masked arrays." + x = array(np.arange(5), mask=[1,0,0,0,0]) + data = array((x,x[::-1])) + assert_equal(data, [[0,1,2,3,4],[4,3,2,1,0]]) + assert_equal(data._mask, [[1,0,0,0,0],[0,0,0,0,1]]) + # + x.mask = nomask + data = array((x,x[::-1])) + assert_equal(data, [[0,1,2,3,4],[4,3,2,1,0]]) + assert(data.mask is nomask) + def test_asarray(self): (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d xm.fill_value = -9999 From numpy-svn at scipy.org Mon Jun 16 20:08:34 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 16 Jun 2008 19:08:34 -0500 (CDT) Subject: [Numpy-svn] r5285 - trunk Message-ID: <20080617000834.6E4BD39C889@scipy.org> Author: jarrod.millman Date: 2008-06-16 19:08:31 -0500 (Mon, 16 Jun 2008) New Revision: 5285 Modified: trunk/THANKS.txt Log: t Modified: trunk/THANKS.txt =================================================================== --- trunk/THANKS.txt 2008-06-16 17:29:28 UTC (rev 5284) +++ trunk/THANKS.txt 2008-06-17 00:08:31 UTC (rev 5285) @@ -1,3 +1,4 @@ + Travis Oliphant for the majority of code adaptation Jim Hugunin, Paul Dubois, Konrad Hinsen, David Ascher, and many others for Numeric on which the code is based. 
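The numpy.ma change in r5284 above can be exercised directly; a small usage example mirroring the new test in test_core.py (it needs a numpy that includes that revision):

    import numpy as np
    import numpy.ma as ma

    x = ma.array(np.arange(5), mask=[1, 0, 0, 0, 0])
    data = ma.array((x, x[::-1]))
    print(data.mask)              # rows [1 0 0 0 0] and [0 0 0 0 1], as asserted in the test

    # when the inputs carry no mask, none is forced onto the result
    x.mask = ma.nomask
    data = ma.array((x, x[::-1]))
    print(data.mask is ma.nomask) # True
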
From numpy-svn at scipy.org Mon Jun 16 20:12:59 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 16 Jun 2008 19:12:59 -0500 (CDT) Subject: [Numpy-svn] r5286 - trunk Message-ID: <20080617001259.0E28A39C4B4@scipy.org> Author: alan.mcintyre Date: 2008-06-16 19:11:02 -0500 (Mon, 16 Jun 2008) New Revision: 5286 Modified: trunk/THANKS.txt Log: test Modified: trunk/THANKS.txt =================================================================== --- trunk/THANKS.txt 2008-06-17 00:08:31 UTC (rev 5285) +++ trunk/THANKS.txt 2008-06-17 00:11:02 UTC (rev 5286) @@ -1,4 +1,3 @@ - Travis Oliphant for the majority of code adaptation Jim Hugunin, Paul Dubois, Konrad Hinsen, David Ascher, and many others for Numeric on which the code is based. From numpy-svn at scipy.org Mon Jun 16 20:24:13 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 16 Jun 2008 19:24:13 -0500 (CDT) Subject: [Numpy-svn] r5287 - in trunk: . numpy numpy/core numpy/core/tests numpy/distutils numpy/distutils/tests numpy/distutils/tests/f2py_ext/tests numpy/distutils/tests/f2py_f90_ext/tests numpy/distutils/tests/gen_ext/tests numpy/distutils/tests/pyrex_ext/tests numpy/distutils/tests/swig_ext/tests numpy/doc numpy/f2py/lib/parser numpy/f2py/lib/tests numpy/f2py/tests/array_from_pyobj/tests numpy/fft numpy/fft/tests numpy/lib numpy/lib/tests numpy/linalg numpy/linalg/tests numpy/ma numpy/ma/tests numpy/numarray numpy/oldnumeric numpy/oldnumeric/tests numpy/random numpy/random/tests numpy/testing numpy/testing/tests numpy/tests Message-ID: <20080617002413.60F3539C67A@scipy.org> Author: alan.mcintyre Date: 2008-06-16 19:23:20 -0500 (Mon, 16 Jun 2008) New Revision: 5287 Added: trunk/numpy/testing/decorators.py trunk/numpy/testing/nosetester.py trunk/numpy/testing/nulltester.py trunk/numpy/testing/pkgtester.py Removed: trunk/numpy/testing/info.py trunk/numpy/testing/parametric.py Modified: trunk/README.txt trunk/numpy/__init__.py trunk/numpy/core/__init__.py trunk/numpy/core/tests/test_defmatrix.py trunk/numpy/core/tests/test_errstate.py trunk/numpy/core/tests/test_memmap.py trunk/numpy/core/tests/test_multiarray.py trunk/numpy/core/tests/test_numeric.py trunk/numpy/core/tests/test_numerictypes.py trunk/numpy/core/tests/test_records.py trunk/numpy/core/tests/test_regression.py trunk/numpy/core/tests/test_scalarmath.py trunk/numpy/core/tests/test_ufunc.py trunk/numpy/core/tests/test_umath.py trunk/numpy/core/tests/test_unicode.py trunk/numpy/distutils/__init__.py trunk/numpy/distutils/tests/f2py_ext/tests/test_fib2.py trunk/numpy/distutils/tests/f2py_f90_ext/tests/test_foo.py trunk/numpy/distutils/tests/gen_ext/tests/test_fib3.py trunk/numpy/distutils/tests/pyrex_ext/tests/test_primes.py trunk/numpy/distutils/tests/swig_ext/tests/test_example.py trunk/numpy/distutils/tests/swig_ext/tests/test_example2.py trunk/numpy/distutils/tests/test_fcompiler_gnu.py trunk/numpy/distutils/tests/test_misc_util.py trunk/numpy/doc/DISTUTILS.txt trunk/numpy/f2py/lib/parser/test_Fortran2003.py trunk/numpy/f2py/lib/parser/test_parser.py trunk/numpy/f2py/lib/tests/test_derived_scalar.py trunk/numpy/f2py/lib/tests/test_module_module.py trunk/numpy/f2py/lib/tests/test_module_scalar.py trunk/numpy/f2py/lib/tests/test_scalar_function_in.py trunk/numpy/f2py/lib/tests/test_scalar_in_out.py trunk/numpy/f2py/tests/array_from_pyobj/tests/test_array_from_pyobj.py trunk/numpy/fft/__init__.py trunk/numpy/fft/tests/test_fftpack.py trunk/numpy/fft/tests/test_helper.py trunk/numpy/lib/__init__.py trunk/numpy/lib/tests/test__datasource.py 
trunk/numpy/lib/tests/test_arraysetops.py trunk/numpy/lib/tests/test_financial.py trunk/numpy/lib/tests/test_format.py trunk/numpy/lib/tests/test_function_base.py trunk/numpy/lib/tests/test_getlimits.py trunk/numpy/lib/tests/test_index_tricks.py trunk/numpy/lib/tests/test_io.py trunk/numpy/lib/tests/test_machar.py trunk/numpy/lib/tests/test_polynomial.py trunk/numpy/lib/tests/test_regression.py trunk/numpy/lib/tests/test_shape_base.py trunk/numpy/lib/tests/test_twodim_base.py trunk/numpy/lib/tests/test_type_check.py trunk/numpy/lib/tests/test_ufunclike.py trunk/numpy/linalg/__init__.py trunk/numpy/linalg/tests/test_linalg.py trunk/numpy/linalg/tests/test_regression.py trunk/numpy/ma/__init__.py trunk/numpy/ma/tests/test_core.py trunk/numpy/ma/tests/test_extras.py trunk/numpy/ma/tests/test_mrecords.py trunk/numpy/ma/tests/test_old_ma.py trunk/numpy/ma/tests/test_subclassing.py trunk/numpy/ma/testutils.py trunk/numpy/numarray/__init__.py trunk/numpy/oldnumeric/__init__.py trunk/numpy/oldnumeric/tests/test_oldnumeric.py trunk/numpy/random/__init__.py trunk/numpy/random/tests/test_random.py trunk/numpy/testing/__init__.py trunk/numpy/testing/numpytest.py trunk/numpy/testing/tests/test_utils.py trunk/numpy/testing/utils.py trunk/numpy/tests/test_ctypeslib.py Log: Switched to use nose to run tests. Added test and bench functions to all modules. Modified: trunk/README.txt =================================================================== --- trunk/README.txt 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/README.txt 2008-06-17 00:23:20 UTC (rev 5287) @@ -9,11 +9,16 @@ If fast BLAS and LAPACK cannot be found, then a slower default version is used. After installation, tests can be run (from outside the source -directory) with +directory) with: python -c 'import numpy; numpy.test()' -The most current development version is always available from our +Please note that you must have the 'nose' test framework installed in order to +run the tests. 
More information about nose is available here: + +http://somethingaboutorange.com/mrl/projects/nose/ + +The most current development version of NumPy is always available from our subversion repository: http://svn.scipy.org/svn/numpy/trunk Modified: trunk/numpy/__init__.py =================================================================== --- trunk/numpy/__init__.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/__init__.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -94,8 +94,11 @@ __all__ = ['add_newdocs'] pkgload.__doc__ = PackageLoader.__call__.__doc__ - import testing - from testing import ScipyTest, NumpyTest + + from testing.pkgtester import Tester + test = Tester().test + bench = Tester().bench + import core from core import * import lib @@ -113,15 +116,8 @@ from core import round, abs, max, min __all__.extend(['__version__', 'pkgload', 'PackageLoader', - 'ScipyTest', 'NumpyTest', 'show_config']) + 'show_config']) __all__.extend(core.__all__) __all__.extend(lib.__all__) __all__.extend(['linalg', 'fft', 'random', 'ctypeslib']) - def test(*args, **kw): - import os, sys - print 'Numpy is installed in %s' % (os.path.split(__file__)[0],) - print 'Numpy version %s' % (__version__,) - print 'Python version %s' % (sys.version.replace('\n', '',),) - return NumpyTest().test(*args, **kw) - test.__doc__ = NumpyTest.test.__doc__ Modified: trunk/numpy/core/__init__.py =================================================================== --- trunk/numpy/core/__init__.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/core/__init__.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -31,7 +31,6 @@ __all__ += char.__all__ - -def test(level=1, verbosity=1): - from numpy.testing import NumpyTest - return NumpyTest().test(level, verbosity) +from numpy.testing.pkgtester import Tester +test = Tester().test +bench = Tester().bench Modified: trunk/numpy/core/tests/test_defmatrix.py =================================================================== --- trunk/numpy/core/tests/test_defmatrix.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/core/tests/test_defmatrix.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -1,12 +1,12 @@ +import sys from numpy.testing import * set_package_path() -import numpy.core;reload(numpy.core) from numpy.core import * import numpy as np restore_path() -class TestCtor(NumpyTestCase): - def check_basic(self): +class TestCtor(TestCase): + def test_basic(self): A = array([[1,2],[3,4]]) mA = matrix(A) assert all(mA.A == A) @@ -24,8 +24,9 @@ mvec = matrix(vec) assert mvec.shape == (1,5) -class TestProperties(NumpyTestCase): - def check_sum(self): + +class TestProperties(TestCase): + def test_sum(self): """Test whether matrix.sum(axis=1) preserves orientation. Fails in NumPy <= 0.9.6.2127. 
""" @@ -40,7 +41,7 @@ assert_array_equal(sum1, M.sum(axis=1)) assert sumall == M.sum() - def check_basic(self): + def test_basic(self): import numpy.linalg as linalg A = array([[1., 2.], @@ -57,7 +58,7 @@ assert all(array(transpose(B) == mB.T)) assert all(array(conjugate(transpose(B)) == mB.H)) - def check_comparisons(self): + def test_comparisons(self): A = arange(100).reshape(10,10) mA = matrix(A) mB = matrix(A) + 0.1 @@ -81,19 +82,20 @@ assert not all(abs(mA) > 0) assert all(abs(mB > 0)) - def check_asmatrix(self): + def test_asmatrix(self): A = arange(100).reshape(10,10) mA = asmatrix(A) A[0,0] = -10 assert A[0,0] == mA[0,0] - def check_noaxis(self): + def test_noaxis(self): A = matrix([[1,0],[0,1]]) assert A.sum() == matrix(2) assert A.mean() == matrix(0.5) -class TestCasting(NumpyTestCase): - def check_basic(self): + +class TestCasting(TestCase): + def test_basic(self): A = arange(100).reshape(10,10) mA = matrix(A) @@ -110,8 +112,9 @@ assert mC.dtype.type == complex128 assert all(mA != mB) -class TestAlgebra(NumpyTestCase): - def check_basic(self): + +class TestAlgebra(TestCase): + def test_basic(self): import numpy.linalg as linalg A = array([[1., 2.], @@ -133,8 +136,9 @@ assert allclose((mA + mA).A, (A + A)) assert allclose((3*mA).A, (3*A)) -class TestMatrixReturn(NumpyTestCase): - def check_instance_methods(self): + +class TestMatrixReturn(TestCase): + def test_instance_methods(self): a = matrix([1.0], dtype='f8') methodargs = { 'astype' : ('intc',), @@ -172,33 +176,35 @@ assert type(c) is matrix assert type(d) is matrix -class TestIndexing(NumpyTestCase): - def check_basic(self): + +class TestIndexing(TestCase): + def test_basic(self): x = asmatrix(zeros((3,2),float)) y = zeros((3,1),float) y[:,0] = [0.8,0.2,0.3] x[:,1] = y>0.5 assert_equal(x, [[0,1],[0,0],[0,0]]) -class TestNewScalarIndexing(NumpyTestCase): + +class TestNewScalarIndexing(TestCase): def setUp(self): self.a = matrix([[1, 2],[3,4]]) - def check_dimesions(self): + def test_dimesions(self): a = self.a x = a[0] assert_equal(x.ndim, 2) - def check_array_from_matrix_list(self): + def test_array_from_matrix_list(self): a = self.a x = array([a, a]) assert_equal(x.shape, [2,2,2]) - def check_array_to_list(self): + def test_array_to_list(self): a = self.a assert_equal(a.tolist(),[[1, 2], [3, 4]]) - def check_fancy_indexing(self): + def test_fancy_indexing(self): a = self.a x = a[1, [0,1,0]] assert isinstance(x, matrix) @@ -216,30 +222,36 @@ ## assert_equal(x[0].shape,(1,3)) ## assert_equal(x[:,0].shape,(2,1)) -## x = matrix(0) -## assert_equal(x[0,0],0) -## assert_equal(x[0],0) -## assert_equal(x[:,0].shape,x.shape) + def test_matrix_element(self): + x = matrix([[1,2,3],[4,5,6]]) + assert_equal(x[0][0].shape,(1,3)) + assert_equal(x[0].shape,(1,3)) + assert_equal(x[:,0].shape,(2,1)) - def check_scalar_indexing(self): + x = matrix(0) + assert_equal(x[0,0],0) + assert_equal(x[0],0) + assert_equal(x[:,0].shape,x.shape) + + def test_scalar_indexing(self): x = asmatrix(zeros((3,2),float)) assert_equal(x[0,0],x[0][0]) - def check_row_column_indexing(self): + def test_row_column_indexing(self): x = asmatrix(np.eye(2)) assert_array_equal(x[0,:],[[1,0]]) assert_array_equal(x[1,:],[[0,1]]) assert_array_equal(x[:,0],[[1],[0]]) assert_array_equal(x[:,1],[[0],[1]]) - def check_boolean_indexing(self): + def test_boolean_indexing(self): A = arange(6) A.shape = (3,2) x = asmatrix(A) assert_array_equal(x[:,array([True,False])],x[:,0]) assert_array_equal(x[array([True,False,False]),:],x[0,:]) - def check_list_indexing(self): + def 
test_list_indexing(self): A = arange(6) A.shape = (3,2) x = asmatrix(A) @@ -247,6 +259,5 @@ assert_array_equal(x[[2,1,0],:],x[::-1,:]) - if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/core/tests/test_errstate.py =================================================================== --- trunk/numpy/core/tests/test_errstate.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/core/tests/test_errstate.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -12,11 +12,7 @@ from numpy.random import rand, randint from numpy.testing import * - - -class TestErrstate(NumpyTestCase): - - +class TestErrstate(TestCase): def test_invalid(self): with errstate(all='raise', under='ignore'): a = -arange(3) @@ -57,6 +53,5 @@ """ -if __name__ == '__main__': - from numpy.testing import * - NumpyTest().run() +if __name__ == "__main__": + nose.run(argv=['', __file__]) Modified: trunk/numpy/core/tests/test_memmap.py =================================================================== --- trunk/numpy/core/tests/test_memmap.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/core/tests/test_memmap.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -1,13 +1,12 @@ from tempfile import NamedTemporaryFile, mktemp import os +import warnings from numpy.core import memmap from numpy import arange, allclose from numpy.testing import * -import warnings - -class TestMemmap(NumpyTestCase): +class TestMemmap(TestCase): def setUp(self): self.tmpfp = NamedTemporaryFile(prefix='mmap') self.shape = (3,4) @@ -46,5 +45,6 @@ fp.sync() warnings.simplefilter('default', DeprecationWarning) -if __name__ == '__main__': - NumpyTest().run() + +if __name__ == "__main__": + nose.run(argv=['', __file__]) Modified: trunk/numpy/core/tests/test_multiarray.py =================================================================== --- trunk/numpy/core/tests/test_multiarray.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/core/tests/test_multiarray.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -1,15 +1,14 @@ import tempfile - +import sys import numpy as np from numpy.testing import * from numpy.core import * - -class TestFlags(NumpyTestCase): +class TestFlags(TestCase): def setUp(self): self.a = arange(10) - def check_writeable(self): + def test_writeable(self): mydict = locals() self.a.flags.writeable = False self.assertRaises(RuntimeError, runstring, 'self.a[0] = 3', mydict) @@ -17,7 +16,7 @@ self.a[0] = 5 self.a[0] = 0 - def check_otherflags(self): + def test_otherflags(self): assert_equal(self.a.flags.carray, True) assert_equal(self.a.flags.farray, False) assert_equal(self.a.flags.behaved, True) @@ -29,13 +28,13 @@ assert_equal(self.a.flags.updateifcopy, False) -class TestAttributes(NumpyTestCase): +class TestAttributes(TestCase): def setUp(self): self.one = arange(10) self.two = arange(20).reshape(4,5) self.three = arange(60,dtype=float64).reshape(2,5,6) - def check_attributes(self): + def test_attributes(self): assert_equal(self.one.shape, (10,)) assert_equal(self.two.shape, (4,5)) assert_equal(self.three.shape, (2,5,6)) @@ -56,7 +55,7 @@ assert_equal(self.two.itemsize, self.two.dtype.itemsize) assert_equal(self.two.base, arange(20)) - def check_dtypeattr(self): + def test_dtypeattr(self): assert_equal(self.one.dtype, dtype(int_)) assert_equal(self.three.dtype, dtype(float_)) assert_equal(self.one.dtype.char, 'l') @@ -65,7 +64,7 @@ assert_equal(self.one.dtype.str[1], 'i') assert_equal(self.three.dtype.str[1], 'f') - def check_stridesattr(self): + def test_stridesattr(self): x = self.one def make_array(size, offset, 
strides): return ndarray([size], buffer=x, dtype=int, @@ -79,7 +78,7 @@ #self.failUnlessRaises(ValueError, lambda: ndarray([1], strides=4)) - def check_set_stridesattr(self): + def test_set_stridesattr(self): x = self.one def make_array(size, offset, strides): try: @@ -94,7 +93,7 @@ self.failUnlessRaises(ValueError, make_array, 8, 3, 1) #self.failUnlessRaises(ValueError, make_array, 8, 3, 0) - def check_fill(self): + def test_fill(self): for t in "?bhilqpBHILQPfdgFDGO": x = empty((3,2,1), t) y = empty((3,2,1), t) @@ -106,82 +105,85 @@ x.fill(x[0]) assert_equal(x['f1'][1], x['f1'][0]) -class TestDtypedescr(NumpyTestCase): - def check_construction(self): + +class TestDtypedescr(TestCase): + def test_construction(self): d1 = dtype('i4') assert_equal(d1, dtype(int32)) d2 = dtype('f8') assert_equal(d2, dtype(float64)) -class TestFromstring(NumpyTestCase): - def check_binary(self): + +class TestFromstring(TestCase): + def test_binary(self): a = fromstring('\x00\x00\x80?\x00\x00\x00@\x00\x00@@\x00\x00\x80@',dtype=' g2, [g1[i] > g2[i] for i in [0,1,2]]) - def check_mixed(self): + def test_mixed(self): g1 = array(["spam","spa","spammer","and eggs"]) g2 = "spam" assert_array_equal(g1 == g2, [x == g2 for x in g1]) @@ -575,7 +582,7 @@ assert_array_equal(g1 >= g2, [x >= g2 for x in g1]) - def check_unicode(self): + def test_unicode(self): g1 = array([u"This",u"is",u"example"]) g2 = array([u"This",u"was",u"example"]) assert_array_equal(g1 == g2, [g1[i] == g2[i] for i in [0,1,2]]) @@ -586,8 +593,8 @@ assert_array_equal(g1 > g2, [g1[i] > g2[i] for i in [0,1,2]]) -class TestArgmax(NumpyTestCase): - def check_all(self): +class TestArgmax(TestCase): + def test_all(self): a = np.random.normal(0,1,(4,5,6,7,8)) for i in xrange(a.ndim): amax = a.max(i) @@ -596,13 +603,15 @@ axes.remove(i) assert all(amax == aargmax.choose(*a.transpose(i,*axes))) -class TestNewaxis(NumpyTestCase): - def check_basic(self): + +class TestNewaxis(TestCase): + def test_basic(self): sk = array([0,-0.1,0.1]) res = 250*sk[:,newaxis] assert_almost_equal(res.ravel(),250*sk) -class TestClip(NumpyTestCase): + +class TestClip(TestCase): def _check_range(self,x,cmin,cmax): assert np.all(x >= cmin) assert np.all(x <= cmax) @@ -636,7 +645,7 @@ self._check_range(x,expected_min,expected_max) return x - def check_basic(self): + def test_basic(self): for inplace in [False, True]: self._clip_type('float',1024,-12.8,100.2, inplace=inplace) self._clip_type('float',1024,0,0, inplace=inplace) @@ -647,13 +656,13 @@ x = self._clip_type('uint',1024,-120,100,expected_min=0, inplace=inplace) x = self._clip_type('uint',1024,0,0, inplace=inplace) - def check_record_array(self): + def test_record_array(self): rec = np.array([(-5, 2.0, 3.0), (5.0, 4.0, 3.0)], dtype=[('x', '= 3) @@ -662,24 +671,24 @@ x = val.clip(max=4) assert np.all(x <= 4) -class TestPutmask(ParametricTestCase): + +class TestPutmask(TestCase): def tst_basic(self,x,T,mask,val): np.putmask(x,mask,val) assert np.all(x[mask] == T(val)) assert x.dtype == T - def testip_types(self): + def test_ip_types(self): unchecked_types = [str, unicode, np.void, object] x = np.random.random(1000)*100 mask = x < 40 - tests = [] for val in [-100,0,15]: for types in np.sctypes.itervalues(): - tests.extend([(self.tst_basic,x.copy().astype(T),T,mask,val) - for T in types if T not in unchecked_types]) - return tests + for T in types: + if T not in unchecked_types: + yield self.tst_basic,x.copy().astype(T),T,mask,val def test_mask_size(self): self.failUnlessRaises(ValueError, np.putmask, @@ -690,8 +699,9 @@ 
np.putmask(x,[True,False,True],-1) assert_array_equal(x,[-1,2,-1]) - def testip_byteorder(self): - return [(self.tst_byteorder,dtype) for dtype in ('>i4','i4','i4','i4','']: for dtype in [float,int,np.complex]: dt = np.dtype(dtype).newbyteorder(byteorder) x = (np.random.random((4,7))*5).astype(dt) buf = x.tostring() - tests.append((self.tst_basic,buf,x.flat,{'dtype':dt})) - return tests + yield self.tst_basic,buf,x.flat,{'dtype':dt} -class TestResize(NumpyTestCase): + +class TestResize(TestCase): def test_basic(self): x = np.eye(3) x.resize((5,5)) @@ -827,13 +840,15 @@ y = x self.failUnlessRaises(ValueError,x.resize,(5,1)) -class TestRecord(NumpyTestCase): + +class TestRecord(TestCase): def test_field_rename(self): dt = np.dtype([('f',float),('i',int)]) dt.names = ['p','q'] assert_equal(dt.names,['p','q']) -class TestView(NumpyTestCase): + +class TestView(TestCase): def test_basic(self): x = np.array([(1,2,3,4),(5,6,7,8)],dtype=[('r',np.int8),('g',np.int8), ('b',np.int8),('a',np.int8)]) @@ -857,7 +872,8 @@ assert(isinstance(y,np.matrix)) assert_equal(y.dtype, np.dtype('0],a[1][V>0],a[2][V>0]]) == a[:,V>0]).all() -class TestBinaryRepr(NumpyTestCase): + +class TestBinaryRepr(TestCase): def test_zero(self): assert_equal(binary_repr(0),'0') @@ -252,6 +255,7 @@ assert_equal(binary_repr(-1), '-1') assert_equal(binary_repr(-1, width=8), '11111111') + def assert_array_strict_equal(x, y): assert_array_equal(x, y) # Check flags @@ -260,7 +264,7 @@ assert x.dtype.isnative == y.dtype.isnative -class TestClip(NumpyTestCase): +class TestClip(TestCase): def setUp(self): self.nr = 5 self.nc = 3 @@ -509,7 +513,7 @@ ac = self.clip(a,m,M) assert_array_strict_equal(ac, act) - def test_type_cast_04(self): + def test_type_cast_05(self): "Test native int32 with double arrays min/max." a = self._generate_int_data(self.nr, self.nc) m = -0.5 @@ -518,7 +522,7 @@ act = self.clip(a, m * zeros(a.shape), M) assert_array_strict_equal(ac, act) - def test_type_cast_05(self): + def test_type_cast_06(self): "Test native with NON native scalar min/max." a = self._generate_data(self.nr, self.nc) m = 0.5 @@ -528,7 +532,7 @@ ac = self.fastclip(a, m_s, M) assert_array_strict_equal(ac, act) - def test_type_cast_06(self): + def test_type_cast_07(self): "Test NON native with native array min/max." a = self._generate_data(self.nr, self.nc) m = -0.5 * ones(a.shape) @@ -539,7 +543,7 @@ ac = self.fastclip(a_s, m, M) assert_array_strict_equal(ac, act) - def test_type_cast_07(self): + def test_type_cast_08(self): "Test NON native with native scalar min/max." a = self._generate_data(self.nr, self.nc) m = -0.5 @@ -550,7 +554,7 @@ act = a_s.clip(m, M) assert_array_strict_equal(ac, act) - def test_type_cast_08(self): + def test_type_cast_09(self): "Test native with NON native array min/max." 
a = self._generate_data(self.nr, self.nc) m = -0.5 * ones(a.shape) @@ -561,7 +565,7 @@ act = self.clip(a, m_s, M) assert_array_strict_equal(ac, act) - def test_type_cast_09(self): + def test_type_cast_10(self): """Test native int32 with float min/max and float out for output argument.""" a = self._generate_int_data(self.nr, self.nc) b = zeros(a.shape, dtype = float32) @@ -571,7 +575,7 @@ ac = self.fastclip(a, m , M, out = b) assert_array_strict_equal(ac, act) - def test_type_cast_10(self): + def test_type_cast_11(self): "Test non native with native scalar, min/max, out non native" a = self._generate_non_native_data(self.nr, self.nc) b = a.copy() @@ -583,7 +587,7 @@ self.clip(a, m, M, out = bt) assert_array_strict_equal(b, bt) - def test_type_cast_11(self): + def test_type_cast_12(self): "Test native int32 input and min/max and float out" a = self._generate_int_data(self.nr, self.nc) b = zeros(a.shape, dtype = float32) @@ -681,7 +685,7 @@ self.assert_(a2 is a) -class test_allclose_inf(ParametricTestCase): +class test_allclose_inf(TestCase): rtol = 1e-5 atol = 1e-8 @@ -691,7 +695,7 @@ def tst_not_allclose(self,x,y): assert not allclose(x,y), "%s and %s shouldn't be close" % (x,y) - def testip_allclose(self): + def test_ip_allclose(self): """Parametric test factory.""" arr = array([100,1000]) aran = arange(125).reshape((5,5,5)) @@ -709,7 +713,7 @@ for (x,y) in data: yield (self.tst_allclose,x,y) - def testip_not_allclose(self): + def test_ip_not_allclose(self): """Parametric test factory.""" aran = arange(125).reshape((5,5,5)) @@ -737,7 +741,8 @@ assert_array_equal(x,array([inf,1])) assert_array_equal(y,array([0,inf])) -class TestStdVar(NumpyTestCase): + +class TestStdVar(TestCase): def setUp(self): self.A = array([1,-1,1,-1]) self.real_var = 1 @@ -745,25 +750,27 @@ def test_basic(self): assert_almost_equal(var(self.A),self.real_var) assert_almost_equal(std(self.A)**2,self.real_var) + def test_ddof1(self): - assert_almost_equal(var(self.A,ddof=1),self.real_var*len(self.A)/float(len(self.A)-1)) - assert_almost_equal(std(self.A,ddof=1)**2,self.real_var*len(self.A)/float(len(self.A)-1)) + assert_almost_equal(var(self.A,ddof=1), + self.real_var*len(self.A)/float(len(self.A)-1)) + assert_almost_equal(std(self.A,ddof=1)**2, + self.real_var*len(self.A)/float(len(self.A)-1)) + def test_ddof2(self): - assert_almost_equal(var(self.A,ddof=2),self.real_var*len(self.A)/float(len(self.A)-2)) - assert_almost_equal(std(self.A,ddof=2)**2,self.real_var*len(self.A)/float(len(self.A)-2)) + assert_almost_equal(var(self.A,ddof=2), + self.real_var*len(self.A)/float(len(self.A)-2)) + assert_almost_equal(std(self.A,ddof=2)**2, + self.real_var*len(self.A)/float(len(self.A)-2)) -class TestStdVarComplex(NumpyTestCase): + +class TestStdVarComplex(TestCase): def test_basic(self): A = array([1,1.j,-1,-1.j]) real_var = 1 assert_almost_equal(var(A),real_var) assert_almost_equal(std(A)**2,real_var) -import sys -if sys.version_info[:2] >= (2, 5): - set_local_path() - from test_errstate import * - restore_path() -if __name__ == '__main__': - NumpyTest().run() +if __name__ == "__main__": + nose.run(argv=['', __file__]) Modified: trunk/numpy/core/tests/test_numerictypes.py =================================================================== --- trunk/numpy/core/tests/test_numerictypes.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/core/tests/test_numerictypes.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -3,7 +3,6 @@ import numpy from numpy import zeros, ones, array - # This is the structure of the table used for plain objects: 
# # +-+-+-+ @@ -102,7 +101,7 @@ class create_zeros: """Check the creation of heterogeneous arrays zero-valued""" - def check_zeros0D(self): + def test_zeros0D(self): """Check creation of 0-dimensional objects""" h = zeros((), dtype=self._descr) self.assert_(normalize_descr(self._descr) == h.dtype.descr) @@ -112,7 +111,7 @@ # A small check that data is ok assert_equal(h['z'], zeros((), dtype='u1')) - def check_zerosSD(self): + def test_zerosSD(self): """Check creation of single-dimensional objects""" h = zeros((2,), dtype=self._descr) self.assert_(normalize_descr(self._descr) == h.dtype.descr) @@ -122,7 +121,7 @@ # A small check that data is ok assert_equal(h['z'], zeros((2,), dtype='u1')) - def check_zerosMD(self): + def test_zerosMD(self): """Check creation of multi-dimensional objects""" h = zeros((2,3), dtype=self._descr) self.assert_(normalize_descr(self._descr) == h.dtype.descr) @@ -133,11 +132,11 @@ assert_equal(h['z'], zeros((2,3), dtype='u1')) -class test_create_zeros_plain(create_zeros, NumpyTestCase): +class test_create_zeros_plain(create_zeros, TestCase): """Check the creation of heterogeneous arrays zero-valued (plain)""" _descr = Pdescr -class test_create_zeros_nested(create_zeros, NumpyTestCase): +class test_create_zeros_nested(create_zeros, TestCase): """Check the creation of heterogeneous arrays zero-valued (nested)""" _descr = Ndescr @@ -145,7 +144,7 @@ class create_values: """Check the creation of heterogeneous arrays with values""" - def check_tuple(self): + def test_tuple(self): """Check creation from tuples""" h = array(self._buffer, dtype=self._descr) self.assert_(normalize_descr(self._descr) == h.dtype.descr) @@ -154,7 +153,7 @@ else: self.assert_(h.shape == ()) - def check_list_of_tuple(self): + def test_list_of_tuple(self): """Check creation from list of tuples""" h = array([self._buffer], dtype=self._descr) self.assert_(normalize_descr(self._descr) == h.dtype.descr) @@ -163,7 +162,7 @@ else: self.assert_(h.shape == (1,)) - def check_list_of_list_of_tuple(self): + def test_list_of_list_of_tuple(self): """Check creation from list of list of tuples""" h = array([[self._buffer]], dtype=self._descr) self.assert_(normalize_descr(self._descr) == h.dtype.descr) @@ -173,25 +172,25 @@ self.assert_(h.shape == (1,1)) -class test_create_values_plain_single(create_values, NumpyTestCase): +class test_create_values_plain_single(create_values, TestCase): """Check the creation of heterogeneous arrays (plain, single row)""" _descr = Pdescr multiple_rows = 0 _buffer = PbufferT[0] -class test_create_values_plain_multiple(create_values, NumpyTestCase): +class test_create_values_plain_multiple(create_values, TestCase): """Check the creation of heterogeneous arrays (plain, multiple rows)""" _descr = Pdescr multiple_rows = 1 _buffer = PbufferT -class test_create_values_nested_single(create_values, NumpyTestCase): +class test_create_values_nested_single(create_values, TestCase): """Check the creation of heterogeneous arrays (nested, single row)""" _descr = Ndescr multiple_rows = 0 _buffer = NbufferT[0] -class test_create_values_nested_multiple(create_values, NumpyTestCase): +class test_create_values_nested_multiple(create_values, TestCase): """Check the creation of heterogeneous arrays (nested, multiple rows)""" _descr = Ndescr multiple_rows = 1 @@ -205,7 +204,7 @@ class read_values_plain: """Check the reading of values in heterogeneous arrays (plain)""" - def check_access_fields(self): + def test_access_fields(self): h = array(self._buffer, dtype=self._descr) if not self.multiple_rows: 
self.assert_(h.shape == ()) @@ -222,13 +221,13 @@ self._buffer[1][2]], dtype='u1')) -class test_read_values_plain_single(read_values_plain, NumpyTestCase): +class test_read_values_plain_single(read_values_plain, TestCase): """Check the creation of heterogeneous arrays (plain, single row)""" _descr = Pdescr multiple_rows = 0 _buffer = PbufferT[0] -class test_read_values_plain_multiple(read_values_plain, NumpyTestCase): +class test_read_values_plain_multiple(read_values_plain, TestCase): """Check the values of heterogeneous arrays (plain, multiple rows)""" _descr = Pdescr multiple_rows = 1 @@ -238,7 +237,7 @@ """Check the reading of values in heterogeneous arrays (nested)""" - def check_access_top_fields(self): + def test_access_top_fields(self): """Check reading the top fields of a nested array""" h = array(self._buffer, dtype=self._descr) if not self.multiple_rows: @@ -256,7 +255,7 @@ self._buffer[1][5]], dtype='u1')) - def check_nested1_acessors(self): + def test_nested1_acessors(self): """Check reading the nested fields of a nested array (1st level)""" h = array(self._buffer, dtype=self._descr) if not self.multiple_rows: @@ -286,7 +285,7 @@ self._buffer[1][3][1]], dtype='c16')) - def check_nested2_acessors(self): + def test_nested2_acessors(self): """Check reading the nested fields of a nested array (2nd level)""" h = array(self._buffer, dtype=self._descr) if not self.multiple_rows: @@ -304,7 +303,7 @@ self._buffer[1][1][2][3]], dtype='u4')) - def check_nested1_descriptor(self): + def test_nested1_descriptor(self): """Check access nested descriptors of a nested array (1st level)""" h = array(self._buffer, dtype=self._descr) self.assert_(h.dtype['Info']['value'].name == 'complex128') @@ -312,53 +311,49 @@ self.assert_(h.dtype['info']['Name'].name == 'unicode256') self.assert_(h.dtype['info']['Value'].name == 'complex128') - def check_nested2_descriptor(self): + def test_nested2_descriptor(self): """Check access nested descriptors of a nested array (2nd level)""" h = array(self._buffer, dtype=self._descr) self.assert_(h.dtype['Info']['Info2']['value'].name == 'void256') self.assert_(h.dtype['Info']['Info2']['z3'].name == 'void64') -class test_read_values_nested_single(read_values_nested, NumpyTestCase): +class test_read_values_nested_single(read_values_nested, TestCase): """Check the values of heterogeneous arrays (nested, single row)""" _descr = Ndescr multiple_rows = False _buffer = NbufferT[0] -class test_read_values_nested_multiple(read_values_nested, NumpyTestCase): +class test_read_values_nested_multiple(read_values_nested, TestCase): """Check the values of heterogeneous arrays (nested, multiple rows)""" _descr = Ndescr multiple_rows = True _buffer = NbufferT -class TestEmptyField(NumpyTestCase): - def check_assign(self): +class TestEmptyField(TestCase): + def test_assign(self): a = numpy.arange(10, dtype=numpy.float32) a.dtype = [("int", "<0i4"),("float", "<2f4")] assert(a['int'].shape == (5,0)) assert(a['float'].shape == (5,2)) -class TestCommonType(NumpyTestCase): - def check_scalar_loses1(self): +class TestCommonType(TestCase): + def test_scalar_loses1(self): res = numpy.find_common_type(['f4','f4','i4'],['f8']) assert(res == 'f4') - def check_scalar_loses2(self): + def test_scalar_loses2(self): res = numpy.find_common_type(['f4','f4'],['i8']) assert(res == 'f4') - def check_scalar_wins(self): + def test_scalar_wins(self): res = numpy.find_common_type(['f4','f4','i4'],['c8']) assert(res == 'c8') - def check_scalar_wins2(self): + def test_scalar_wins2(self): res = 
numpy.find_common_type(['u4','i4','i4'],['f4']) assert(res == 'f8') - def check_scalar_wins3(self): # doesn't go up to 'f16' on purpose + def test_scalar_wins3(self): # doesn't go up to 'f16' on purpose res = numpy.find_common_type(['u8','i8','i8'],['f8']) assert(res == 'f8') - - - - if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/core/tests/test_records.py =================================================================== --- trunk/numpy/core/tests/test_records.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/core/tests/test_records.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -1,29 +1,33 @@ - +from os import path from numpy.testing import * set_package_path() -from os import path -import numpy.core;reload(numpy.core) +import numpy.core +reload(numpy.core) +import numpy from numpy.core import * restore_path() -class TestFromrecords(NumpyTestCase): - def check_fromrecords(self): - r = rec.fromrecords([[456,'dbe',1.2],[2,'de',1.3]],names='col1,col2,col3') +class TestFromrecords(TestCase): + def test_fromrecords(self): + r = rec.fromrecords([[456,'dbe',1.2],[2,'de',1.3]], + names='col1,col2,col3') assert_equal(r[0].item(),(456, 'dbe', 1.2)) - def check_method_array(self): + def test_method_array(self): r = rec.array('abcdefg'*100,formats='i2,a3,i4',shape=3,byteorder='big') assert_equal(r[1].item(),(25444, 'efg', 1633837924)) - def check_method_array2(self): - r=rec.array([(1,11,'a'),(2,22,'b'),(3,33,'c'),(4,44,'d'),(5,55,'ex'),(6,66,'f'),(7,77,'g')],formats='u1,f4,a1') + def test_method_array2(self): + r=rec.array([(1,11,'a'),(2,22,'b'),(3,33,'c'),(4,44,'d'),(5,55,'ex'), + (6,66,'f'),(7,77,'g')],formats='u1,f4,a1') assert_equal(r[1].item(),(2, 22.0, 'b')) - def check_recarray_slices(self): - r=rec.array([(1,11,'a'),(2,22,'b'),(3,33,'c'),(4,44,'d'),(5,55,'ex'),(6,66,'f'),(7,77,'g')],formats='u1,f4,a1') + def test_recarray_slices(self): + r=rec.array([(1,11,'a'),(2,22,'b'),(3,33,'c'),(4,44,'d'),(5,55,'ex'), + (6,66,'f'),(7,77,'g')],formats='u1,f4,a1') assert_equal(r[1::2][1].item(),(4, 44.0, 'd')) - def check_recarray_fromarrays(self): + def test_recarray_fromarrays(self): x1 = array([1,2,3,4]) x2 = array(['a','dd','xyz','12']) x3 = array([1.1,2,3,4]) @@ -32,14 +36,14 @@ x1[1] = 34 assert_equal(r.a,array([1,2,3,4])) - def check_recarray_fromfile(self): + def test_recarray_fromfile(self): data_dir = path.join(path.dirname(__file__),'data') filename = path.join(data_dir,'recarray_from_file.fits') fd = open(filename) fd.seek(2880*2) r = rec.fromfile(fd, formats='f8,i4,a5', shape=3, byteorder='big') - def check_recarray_from_obj(self): + def test_recarray_from_obj(self): count = 10 a = zeros(count, dtype='O') b = zeros(count, dtype='f8') @@ -54,7 +58,7 @@ assert(mine.data1[i]==0.0) assert(mine.data2[i]==0.0) - def check_recarray_from_names(self): + def test_recarray_from_names(self): ra = rec.array([ (1, 'abc', 3.7000002861022949, 0), (2, 'xy', 6.6999998092651367, 1), @@ -70,7 +74,7 @@ for k in xrange(len(ra)): assert ra[k].item() == pa[k].item() - def check_recarray_conflict_fields(self): + def test_recarray_conflict_fields(self): ra = rec.array([(1,'abc',2.3),(2,'xyz',4.2), (3,'wrs',1.3)], names='field, shape, mean') @@ -85,7 +89,7 @@ assert_array_equal(ra['field'], [[5,5,5]]) assert callable(ra.field) -class TestRecord(NumpyTestCase): +class TestRecord(TestCase): def setUp(self): self.data = rec.fromrecords([(1,2,3),(4,5,6)], dtype=[("col1", "= rc) -class TestRegression(NumpyTestCase): - def check_invalid_round(self,level=rlevel): +class 
TestRegression(TestCase): + def test_invalid_round(self,level=rlevel): """Ticket #3""" v = 4.7599999999999998 assert_array_equal(np.array([v]),np.array(v)) - def check_mem_empty(self,level=rlevel): + def test_mem_empty(self,level=rlevel): """Ticket #7""" np.empty((1,),dtype=[('x',np.int64)]) - def check_pickle_transposed(self,level=rlevel): + def test_pickle_transposed(self,level=rlevel): """Ticket #16""" a = np.transpose(np.array([[2,9],[7,0],[3,8]])) f = StringIO() @@ -44,43 +42,43 @@ f.close() assert_array_equal(a,b) - def check_masked_array_create(self,level=rlevel): + def test_masked_array_create(self,level=rlevel): """Ticket #17""" x = np.ma.masked_array([0,1,2,3,0,4,5,6],mask=[0,0,0,1,1,1,0,0]) assert_array_equal(np.ma.nonzero(x),[[1,2,6,7]]) - def check_poly1d(self,level=rlevel): + def test_poly1d(self,level=rlevel): """Ticket #28""" assert_equal(np.poly1d([1]) - np.poly1d([1,0]), np.poly1d([-1,1])) - def check_typeNA(self,level=rlevel): + def test_typeNA(self,level=rlevel): """Ticket #31""" assert_equal(np.typeNA[np.int64],'Int64') assert_equal(np.typeNA[np.uint64],'UInt64') - def check_dtype_names(self,level=rlevel): + def test_dtype_names(self,level=rlevel): """Ticket #35""" dt = np.dtype([(('name','label'),np.int32,3)]) - def check_reduce(self,level=rlevel): + def test_reduce(self,level=rlevel): """Ticket #40""" assert_almost_equal(np.add.reduce([1.,.5],dtype=None), 1.5) - def check_zeros_order(self,level=rlevel): + def test_zeros_order(self,level=rlevel): """Ticket #43""" np.zeros([3], int, 'C') np.zeros([3], order='C') np.zeros([3], int, order='C') - def check_sort_bigendian(self,level=rlevel): + def test_sort_bigendian(self,level=rlevel): """Ticket #47""" a = np.linspace(0, 10, 11) c = a.astype(np.dtype('f8') b = np.arange(10.,dtype=' 0.5)) assert(np.all(b[yb] > 0.5)) - def check_mem_dot(self,level=rlevel): + def test_mem_dot(self,level=rlevel): """Ticket #106""" x = np.random.randn(0,1) y = np.random.randn(10,1) z = np.dot(x, np.transpose(y)) - def check_arange_endian(self,level=rlevel): + def test_arange_endian(self,level=rlevel): """Ticket #111""" ref = np.arange(10) x = np.arange(10,dtype=' 8: # a = np.exp(np.array([1000],dtype=np.longfloat)) # assert(str(a)[1:9] == str(a[0])[:8]) - def check_argmax(self,level=rlevel): + def test_argmax(self,level=rlevel): """Ticket #119""" a = np.random.normal(0,1,(4,5,6,7,8)) for i in xrange(a.ndim): aargmax = a.argmax(i) - def check_matrix_properties(self,level=rlevel): + def test_matrix_properties(self,level=rlevel): """Ticket #125""" a = np.matrix([1.0],dtype=float) assert(type(a.real) is np.matrix) @@ -252,34 +250,34 @@ assert(type(c) is np.matrix) assert(type(d) is np.matrix) - def check_mem_divmod(self,level=rlevel): + def test_mem_divmod(self,level=rlevel): """Ticket #126""" for i in range(10): divmod(np.array([i])[0],10) - def check_hstack_invalid_dims(self,level=rlevel): + def test_hstack_invalid_dims(self,level=rlevel): """Ticket #128""" x = np.arange(9).reshape((3,3)) y = np.array([0,0,0]) self.failUnlessRaises(ValueError,np.hstack,(x,y)) - def check_squeeze_type(self,level=rlevel): + def test_squeeze_type(self,level=rlevel): """Ticket #133""" a = np.array([3]) b = np.array(3) assert(type(a.squeeze()) is np.ndarray) assert(type(b.squeeze()) is np.ndarray) - def check_add_identity(self,level=rlevel): + def test_add_identity(self,level=rlevel): """Ticket #143""" assert_equal(0,np.add.identity) - def check_binary_repr_0(self,level=rlevel): + def test_binary_repr_0(self,level=rlevel): """Ticket #151""" 
assert_equal('0',np.binary_repr(0)) - def check_rec_iterate(self,level=rlevel): + def test_rec_iterate(self,level=rlevel): """Ticket #160""" descr = np.dtype([('i',int),('f',float),('s','|S3')]) x = np.rec.array([(1,1.1,'1.0'), @@ -287,19 +285,19 @@ x[0].tolist() [i for i in x[0]] - def check_unicode_string_comparison(self,level=rlevel): + def test_unicode_string_comparison(self,level=rlevel): """Ticket #190""" a = np.array('hello',np.unicode_) b = np.array('world') a == b - def check_tostring_FORTRANORDER_discontiguous(self,level=rlevel): + def test_tostring_FORTRANORDER_discontiguous(self,level=rlevel): """Fix in r2836""" # Create discontiguous Fortran-ordered array x = np.array(np.random.rand(3,3),order='F')[:,:2] assert_array_almost_equal(x.ravel(),np.fromstring(x.tostring())) - def check_flat_assignment(self,level=rlevel): + def test_flat_assignment(self,level=rlevel): """Correct behaviour of ticket #194""" x = np.empty((3,1)) x.flat = np.arange(3) @@ -307,7 +305,7 @@ x.flat = np.arange(3,dtype=float) assert_array_almost_equal(x,[[0],[1],[2]]) - def check_broadcast_flat_assignment(self,level=rlevel): + def test_broadcast_flat_assignment(self,level=rlevel): """Ticket #194""" x = np.empty((3,1)) def bfa(): x[:] = np.arange(3) @@ -315,7 +313,7 @@ self.failUnlessRaises(ValueError, bfa) self.failUnlessRaises(ValueError, bfb) - def check_unpickle_dtype_with_object(self,level=rlevel): + def test_unpickle_dtype_with_object(self,level=rlevel): """Implemented in r2840""" dt = np.dtype([('x',int),('y',np.object_),('z','O')]) f = StringIO() @@ -325,7 +323,7 @@ f.close() assert_equal(dt,dt_) - def check_mem_array_creation_invalid_specification(self,level=rlevel): + def test_mem_array_creation_invalid_specification(self,level=rlevel): """Ticket #196""" dt = np.dtype([('x',int),('y',np.object_)]) # Wrong way @@ -333,7 +331,7 @@ # Correct way np.array([(1,'object')],dt) - def check_recarray_single_element(self,level=rlevel): + def test_recarray_single_element(self,level=rlevel): """Ticket #202""" a = np.array([1,2,3],dtype=np.int32) b = a.copy() @@ -341,24 +339,24 @@ assert_array_equal(a,b) assert_equal(a,r[0][0]) - def check_zero_sized_array_indexing(self,level=rlevel): + def test_zero_sized_array_indexing(self,level=rlevel): """Ticket #205""" tmp = np.array([]) def index_tmp(): tmp[np.array(10)] self.failUnlessRaises(IndexError, index_tmp) - def check_unique_zero_sized(self,level=rlevel): + def test_unique_zero_sized(self,level=rlevel): """Ticket #205""" assert_array_equal([], np.unique(np.array([]))) - def check_chararray_rstrip(self,level=rlevel): + def test_chararray_rstrip(self,level=rlevel): """Ticket #222""" x = np.chararray((1,),5) x[0] = 'a ' x = x.rstrip() assert_equal(x[0], 'a') - def check_object_array_shape(self,level=rlevel): + def test_object_array_shape(self,level=rlevel): """Ticket #239""" assert_equal(np.array([[1,2],3,4],dtype=object).shape, (3,)) assert_equal(np.array([[1,2],[3,4]],dtype=object).shape, (2,2)) @@ -367,29 +365,29 @@ assert_equal(np.array([[],[],[]],dtype=object).shape, (3,0)) assert_equal(np.array([[3,4],[5,6],None],dtype=object).shape, (3,)) - def check_mem_around(self,level=rlevel): + def test_mem_around(self,level=rlevel): """Ticket #243""" x = np.zeros((1,)) y = [0] decimal = 6 np.around(abs(x-y),decimal) <= 10.0**(-decimal) - def check_character_array_strip(self,level=rlevel): + def test_character_array_strip(self,level=rlevel): """Ticket #246""" x = np.char.array(("x","x ","x ")) for c in x: assert_equal(c,"x") - def check_lexsort(self,level=rlevel): + def 
test_lexsort(self,level=rlevel): """Lexsort memory error""" v = np.array([1,2,3,4,5,6,7,8,9,10]) assert_equal(np.lexsort(v),0) - def check_pickle_dtype(self,level=rlevel): + def test_pickle_dtype(self,level=rlevel): """Ticket #251""" import pickle pickle.dumps(np.float) - def check_masked_array_multiply(self,level=rlevel): + def test_masked_array_multiply(self,level=rlevel): """Ticket #254""" a = np.ma.zeros((4,1)) a[2,0] = np.ma.masked @@ -397,41 +395,41 @@ a*b b*a - def check_swap_real(self, level=rlevel): + def test_swap_real(self, level=rlevel): """Ticket #265""" assert_equal(np.arange(4,dtype='>c8').imag.max(),0.0) assert_equal(np.arange(4,dtype=' 1 and x['two'] > 2) - def check_method_args(self, level=rlevel): + def test_method_args(self, level=rlevel): # Make sure methods and functions have same default axis # keyword and arguments funcs1= ['argmax', 'argmin', 'sum', ('product', 'prod'), @@ -470,17 +468,17 @@ res2 = getattr(np, func)(arr1, arr2) assert abs(res1-res2).max() < 1e-8, func - def check_mem_lexsort_strings(self, level=rlevel): + def test_mem_lexsort_strings(self, level=rlevel): """Ticket #298""" lst = ['abc','cde','fgh'] np.lexsort((lst,)) - def check_fancy_index(self, level=rlevel): + def test_fancy_index(self, level=rlevel): """Ticket #302""" x = np.array([1,2])[np.array([0])] assert_equal(x.shape,(1,)) - def check_recarray_copy(self, level=rlevel): + def test_recarray_copy(self, level=rlevel): """Ticket #312""" dt = [('x',np.int16),('y',np.float64)] ra = np.array([(1,2.3)], dtype=dt) @@ -488,68 +486,68 @@ rb['x'] = 2. assert ra['x'] != rb['x'] - def check_rec_fromarray(self, level=rlevel): + def test_rec_fromarray(self, level=rlevel): """Ticket #322""" x1 = np.array([[1,2],[3,4],[5,6]]) x2 = np.array(['a','dd','xyz']) x3 = np.array([1.1,2,3]) np.rec.fromarrays([x1,x2,x3], formats="(2,)i4,a3,f8") - def check_object_array_assign(self, level=rlevel): + def test_object_array_assign(self, level=rlevel): x = np.empty((2,2),object) x.flat[2] = (1,2,3) assert_equal(x.flat[2],(1,2,3)) - def check_ndmin_float64(self, level=rlevel): + def test_ndmin_float64(self, level=rlevel): """Ticket #324""" x = np.array([1,2,3],dtype=np.float64) assert_equal(np.array(x,dtype=np.float32,ndmin=2).ndim,2) assert_equal(np.array(x,dtype=np.float64,ndmin=2).ndim,2) - def check_mem_vectorise(self, level=rlevel): + def test_mem_vectorise(self, level=rlevel): """Ticket #325""" vt = np.vectorize(lambda *args: args) vt(np.zeros((1,2,1)), np.zeros((2,1,1)), np.zeros((1,1,2))) vt(np.zeros((1,2,1)), np.zeros((2,1,1)), np.zeros((1,1,2)), np.zeros((2,2))) - def check_mem_axis_minimization(self, level=rlevel): + def test_mem_axis_minimization(self, level=rlevel): """Ticket #327""" data = np.arange(5) data = np.add.outer(data,data) - def check_mem_float_imag(self, level=rlevel): + def test_mem_float_imag(self, level=rlevel): """Ticket #330""" np.float64(1.0).imag - def check_dtype_tuple(self, level=rlevel): + def test_dtype_tuple(self, level=rlevel): """Ticket #334""" assert np.dtype('i4') == np.dtype(('i4',())) - def check_dtype_posttuple(self, level=rlevel): + def test_dtype_posttuple(self, level=rlevel): """Ticket #335""" np.dtype([('col1', '()i4')]) - def check_mgrid_single_element(self, level=rlevel): + def test_mgrid_single_element(self, level=rlevel): """Ticket #339""" assert_array_equal(np.mgrid[0:0:1j],[0]) assert_array_equal(np.mgrid[0:0],[]) - def check_numeric_carray_compare(self, level=rlevel): + def test_numeric_carray_compare(self, level=rlevel): """Ticket #341""" assert_equal(np.array([ 'X' ], 
'c'),'X') - def check_string_array_size(self, level=rlevel): + def test_string_array_size(self, level=rlevel): """Ticket #342""" self.failUnlessRaises(ValueError, np.array,[['X'],['X','X','X']],'|S1') - def check_dtype_repr(self, level=rlevel): + def test_dtype_repr(self, level=rlevel): """Ticket #344""" dt1=np.dtype(('uint32', 2)) dt2=np.dtype(('uint32', (2,))) assert_equal(dt1.__repr__(), dt2.__repr__()) - def check_reshape_order(self, level=rlevel): + def test_reshape_order(self, level=rlevel): """Make sure reshape order works.""" a = np.arange(6).reshape(2,3,order='F') assert_equal(a,[[0,2,4],[1,3,5]]) @@ -557,22 +555,22 @@ b = a[:,1] assert_equal(b.reshape(2,2,order='F'), [[2,6],[4,8]]) - def check_repeat_discont(self, level=rlevel): + def test_repeat_discont(self, level=rlevel): """Ticket #352""" a = np.arange(12).reshape(4,3)[:,2] assert_equal(a.repeat(3), [2,2,2,5,5,5,8,8,8,11,11,11]) - def check_array_index(self, level=rlevel): + def test_array_index(self, level=rlevel): """Make sure optimization is not called in this case.""" a = np.array([1,2,3]) a2 = np.array([[1,2,3]]) assert_equal(a[np.where(a==3)], a2[np.where(a2==3)]) - def check_object_argmax(self, level=rlevel): + def test_object_argmax(self, level=rlevel): a = np.array([1,2,3],dtype=object) assert a.argmax() == 2 - def check_recarray_fields(self, level=rlevel): + def test_recarray_fields(self, level=rlevel): """Ticket #372""" dt0 = np.dtype([('f0','i4'),('f1','i4')]) dt1 = np.dtype([('f0','i8'),('f1','i8')]) @@ -583,33 +581,33 @@ np.rec.fromarrays([(1,2),(3,4)])]: assert(a.dtype in [dt0,dt1]) - def check_random_shuffle(self, level=rlevel): + def test_random_shuffle(self, level=rlevel): """Ticket #374""" a = np.arange(5).reshape((5,1)) b = a.copy() np.random.shuffle(b) assert_equal(np.sort(b, axis=0),a) - def check_refcount_vectorize(self, level=rlevel): + def test_refcount_vectorize(self, level=rlevel): """Ticket #378""" def p(x,y): return 123 v = np.vectorize(p) assert_valid_refcount(v) - def check_poly1d_nan_roots(self, level=rlevel): + def test_poly1d_nan_roots(self, level=rlevel): """Ticket #396""" p = np.poly1d([np.nan,np.nan,1], r=0) self.failUnlessRaises(np.linalg.LinAlgError,getattr,p,"r") - def check_refcount_vdot(self, level=rlevel): + def test_refcount_vdot(self, level=rlevel): """Changeset #3443""" assert_valid_refcount(np.vdot) - def check_startswith(self, level=rlevel): + def test_startswith(self, level=rlevel): ca = np.char.array(['Hi','There']) assert_equal(ca.startswith('H'),[True,False]) - def check_noncommutative_reduce_accumulate(self, level=rlevel): + def test_noncommutative_reduce_accumulate(self, level=rlevel): """Ticket #413""" tosubtract = np.arange(5) todivide = np.array([2.0, 0.5, 0.25]) @@ -620,44 +618,44 @@ assert_array_equal(np.divide.accumulate(todivide), np.array([2., 4., 16.])) - def check_mem_polymul(self, level=rlevel): + def test_mem_polymul(self, level=rlevel): """Ticket #448""" np.polymul([],[1.]) - def check_convolve_empty(self, level=rlevel): + def test_convolve_empty(self, level=rlevel): """Convolve should raise an error for empty input array.""" self.failUnlessRaises(AssertionError,np.convolve,[],[1]) self.failUnlessRaises(AssertionError,np.convolve,[1],[]) - def check_multidim_byteswap(self, level=rlevel): + def test_multidim_byteswap(self, level=rlevel): """Ticket #449""" r=np.array([(1,(0,1,2))], dtype="i2,3i2") assert_array_equal(r.byteswap(), np.array([(256,(0,256,512))],r.dtype)) - def check_string_NULL(self, level=rlevel): + def test_string_NULL(self, level=rlevel): 
"""Changeset 3557""" assert_equal(np.array("a\x00\x0b\x0c\x00").item(), 'a\x00\x0b\x0c') - def check_mem_string_concat(self, level=rlevel): + def test_mem_string_concat(self, level=rlevel): """Ticket #469""" x = np.array([]) np.append(x,'asdasd\tasdasd') - def check_matrix_multiply_by_1d_vector(self, level=rlevel) : + def test_matrix_multiply_by_1d_vector(self, level=rlevel) : """Ticket #473""" def mul() : np.mat(np.eye(2))*np.ones(2) self.failUnlessRaises(ValueError,mul) - def check_junk_in_string_fields_of_recarray(self, level=rlevel): + def test_junk_in_string_fields_of_recarray(self, level=rlevel): """Ticket #483""" r = np.array([['abc']], dtype=[('var1', '|S20')]) assert str(r['var1'][0][0]) == 'abc' - def check_take_output(self, level=rlevel): + def test_take_output(self, level=rlevel): """Ensure that 'take' honours output parameter.""" x = np.arange(12).reshape((3,4)) a = np.take(x,[0,2],axis=1) @@ -665,7 +663,7 @@ np.take(x,[0,2],axis=1,out=b) assert_array_equal(a,b) - def check_array_str_64bit(self, level=rlevel): + def test_array_str_64bit(self, level=rlevel): """Ticket #501""" s = np.array([1, np.nan],dtype=np.float64) errstate = np.seterr(all='raise') @@ -674,7 +672,7 @@ finally: np.seterr(**errstate) - def check_frompyfunc_endian(self, level=rlevel): + def test_frompyfunc_endian(self, level=rlevel): """Ticket #503""" from math import radians uradians = np.frompyfunc(radians, 1, 1) @@ -683,66 +681,66 @@ assert_almost_equal(uradians(big_endian).astype(float), uradians(little_endian).astype(float)) - def check_mem_string_arr(self, level=rlevel): + def test_mem_string_arr(self, level=rlevel): """Ticket #514""" s = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" t = [] np.hstack((t, s )) - def check_arr_transpose(self, level=rlevel): + def test_arr_transpose(self, level=rlevel): """Ticket #516""" x = np.random.rand(*(2,)*16) y = x.transpose(range(16)) - def check_string_mergesort(self, level=rlevel): + def test_string_mergesort(self, level=rlevel): """Ticket #540""" x = np.array(['a']*32) assert_array_equal(x.argsort(kind='m'), np.arange(32)) - def check_argmax_byteorder(self, level=rlevel): + def test_argmax_byteorder(self, level=rlevel): """Ticket #546""" a = np.arange(3, dtype='>f') assert a[a.argmax()] == a.max() - def check_numeric_random(self, level=rlevel): + def test_numeric_random(self, level=rlevel): """Ticket #552""" from numpy.oldnumeric.random_array import randint randint(0,50,[2,3]) - def check_poly_div(self, level=rlevel): + def test_poly_div(self, level=rlevel): """Ticket #553""" u = np.poly1d([1,2,3]) v = np.poly1d([1,2,3,4,5]) q,r = np.polydiv(u,v) assert_equal(q*v + r, u) - def check_poly_eq(self, level=rlevel): + def test_poly_eq(self, level=rlevel): """Ticket #554""" x = np.poly1d([1,2,3]) y = np.poly1d([3,4]) assert x != y assert x == x - def check_rand_seed(self, level=rlevel): + def test_rand_seed(self, level=rlevel): """Ticket #555""" for l in np.arange(4): np.random.seed(l) - def check_mem_deallocation_leak(self, level=rlevel): + def test_mem_deallocation_leak(self, level=rlevel): """Ticket #562""" a = np.zeros(5,dtype=float) b = np.array(a,dtype=float) del a, b - def check_mem_insert(self, level=rlevel): + def test_mem_insert(self, level=rlevel): """Ticket #572""" np.lib.place(1,1,1) - def check_mem_on_invalid_dtype(self): + def test_mem_on_invalid_dtype(self): "Ticket #583" self.failUnlessRaises(ValueError, np.fromiter, [['12',''],['13','']], str) - def check_dot_negative_stride(self, level=rlevel): + def test_dot_negative_stride(self, level=rlevel): """Ticket 
#588""" x = np.array([[1,5,25,125.,625]]) y = np.array([[20.],[160.],[640.],[1280.],[1024.]]) @@ -750,14 +748,14 @@ y2 = y[::-1] assert_equal(np.dot(x,z),np.dot(x,y2)) - def check_object_casting(self, level=rlevel): + def test_object_casting(self, level=rlevel): def rs(): x = np.ones([484,286]) y = np.zeros([484,286]) x |= y self.failUnlessRaises(TypeError,rs) - def check_unicode_scalar(self, level=rlevel): + def test_unicode_scalar(self, level=rlevel): """Ticket #600""" import cPickle x = np.array(["DROND", "DROND1"], dtype="U6") @@ -765,7 +763,7 @@ new = cPickle.loads(cPickle.dumps(el)) assert_equal(new, el) - def check_arange_non_native_dtype(self, level=rlevel): + def test_arange_non_native_dtype(self, level=rlevel): """Ticket #616""" for T in ('>f4','0)]=1.0 self.failUnlessRaises(ValueError,ia,x,s) - def check_mem_scalar_indexing(self, level=rlevel): + def test_mem_scalar_indexing(self, level=rlevel): """Ticket #603""" x = np.array([0],dtype=float) index = np.array(0,dtype=np.int32) x[index] - def check_binary_repr_0_width(self, level=rlevel): + def test_binary_repr_0_width(self, level=rlevel): assert_equal(np.binary_repr(0,width=3),'000') - def check_fromstring(self, level=rlevel): + def test_fromstring(self, level=rlevel): assert_equal(np.fromstring("12:09:09", dtype=int, sep=":"), [12,9,9]) - def check_searchsorted_variable_length(self, level=rlevel): + def test_searchsorted_variable_length(self, level=rlevel): x = np.array(['a','aa','b']) y = np.array(['d','e']) assert_equal(x.searchsorted(y), [3,3]) - def check_string_argsort_with_zeros(self, level=rlevel): + def test_string_argsort_with_zeros(self, level=rlevel): """Check argsort for strings containing zeros.""" x = np.fromstring("\x00\x02\x00\x01", dtype="|S2") assert_array_equal(x.argsort(kind='m'), np.array([1,0])) assert_array_equal(x.argsort(kind='q'), np.array([1,0])) - def check_string_sort_with_zeros(self, level=rlevel): + def test_string_sort_with_zeros(self, level=rlevel): """Check sort for strings containing zeros.""" x = np.fromstring("\x00\x02\x00\x01", dtype="|S2") y = np.fromstring("\x00\x01\x00\x02", dtype="|S2") assert_array_equal(np.sort(x, kind="q"), y) - def check_hist_bins_as_list(self, level=rlevel): + def test_hist_bins_as_list(self, level=rlevel): """Ticket #632""" hist,edges = np.histogram([1,2,3,4],[1,2]) assert_array_equal(hist,[1,3]) assert_array_equal(edges,[1,2]) - def check_copy_detection_zero_dim(self, level=rlevel): + def test_copy_detection_zero_dim(self, level=rlevel): """Ticket #658""" np.indices((0,3,4)).T.reshape(-1,3) - def check_flat_byteorder(self, level=rlevel): + def test_flat_byteorder(self, level=rlevel): """Ticket #657""" x = np.arange(10) assert_array_equal(x.astype('>i4'),x.astype('i4').flat[:],x.astype('i4')): x = np.array([-1,0,1],dtype=dt) assert_equal(x.flat[0].dtype, x[0].dtype) - def check_copy_detection_corner_case(self, level=rlevel): + def test_copy_detection_corner_case(self, level=rlevel): """Ticket #658""" np.indices((0,3,4)).T.reshape(-1,3) - def check_object_array_refcounting(self, level=rlevel): + def test_object_array_refcounting(self, level=rlevel): """Ticket #633""" if not hasattr(sys, 'getrefcount'): return @@ -942,7 +940,7 @@ assert cnt(a) == cnt0_a + 5 + 2 assert cnt(b) == cnt0_b + 5 + 3 - def check_mem_custom_float_to_array(self, level=rlevel): + def test_mem_custom_float_to_array(self, level=rlevel): """Ticket 702""" class MyFloat: def __float__(self): @@ -951,7 +949,7 @@ tmp = np.atleast_1d([MyFloat()]) tmp2 = tmp.astype(float) - def 
check_object_array_refcount_self_assign(self, level=rlevel): + def test_object_array_refcount_self_assign(self, level=rlevel): """Ticket #711""" class VictimObject(object): deleted = False @@ -966,23 +964,23 @@ arr[:] = arr # trying to induce a segfault by doing it again... assert not arr[0].deleted - def check_mem_fromiter_invalid_dtype_string(self, level=rlevel): + def test_mem_fromiter_invalid_dtype_string(self, level=rlevel): x = [1,2,3] self.failUnlessRaises(ValueError, np.fromiter, [xi for xi in x], dtype='S') - def check_reduce_big_object_array(self, level=rlevel): + def test_reduce_big_object_array(self, level=rlevel): """Ticket #713""" oldsize = np.setbufsize(10*16) a = np.array([None]*161, object) assert not np.any(a) np.setbufsize(oldsize) - def check_mem_0d_array_index(self, level=rlevel): + def test_mem_0d_array_index(self, level=rlevel): """Ticket #714""" np.zeros(10)[np.array(0)] - def check_floats_from_string(self, level=rlevel): + def test_floats_from_string(self, level=rlevel): """Ticket #640, floats from string""" fsingle = np.single('1.234') fdouble = np.double('1.234') @@ -991,7 +989,7 @@ assert_almost_equal(fdouble, 1.234) assert_almost_equal(flongdouble, 1.234) - def check_complex_dtype_printing(self, level=rlevel): + def test_complex_dtype_printing(self, level=rlevel): dt = np.dtype([('top', [('tiles', ('>f4', (64, 64)), (1,)), ('rtile', '>f4', (64, 36))], (3,)), ('bottom', [('bleft', ('>f4', (8, 64)), (1,)), @@ -1002,7 +1000,7 @@ "('bottom', [('bleft', ('>f4', (8, 64)), (1,)), " "('bright', '>f4', (8, 36))])]") - def check_nonnative_endian_fill(self, level=rlevel): + def test_nonnative_endian_fill(self, level=rlevel): """ Non-native endian arrays were incorrectly filled with scalars before r5034. """ @@ -1014,11 +1012,11 @@ x.fill(1) assert_equal(x, np.array([1], dtype=dtype)) - def check_asfarray_none(self, level=rlevel): + def test_asfarray_none(self, level=rlevel): """Test for changeset r5065""" assert_array_equal(np.array([np.nan]), np.asfarray([None])) - def check_dot_alignment_sse2(self, level=rlevel): + def test_dot_alignment_sse2(self, level=rlevel): """Test for ticket #551, changeset r5140""" x = np.zeros((30,40)) y = pickle.loads(pickle.dumps(x)) @@ -1027,7 +1025,7 @@ # This shouldn't cause a segmentation fault: np.dot(z, y) - def check_astype_copy(self, level=rlevel): + def test_astype_copy(self, level=rlevel): """Ticket #788, changeset r5155""" # The test data file was generated by scipy.io.savemat. # The dtype is float64, but the isbuiltin attribute is 0. @@ -1038,7 +1036,7 @@ assert (xp.__array_interface__['data'][0] != xpd.__array_interface__['data'][0]) - def check_compress_small_type(self, level=rlevel): + def test_compress_small_type(self, level=rlevel): """Ticket #789, changeset 5217. """ # compress with out argument segfaulted if cannot cast safely @@ -1053,7 +1051,7 @@ except TypeError: pass - def check_attributes(self, level=rlevel): + def test_attributes(self, level=rlevel): """Ticket #791 """ import numpy as np @@ -1124,8 +1122,7 @@ assert dat.var(1).info == 'jubba' assert dat.view(TestArray).info == 'jubba' - - def check_recarray_tolist(self, level=rlevel): + def test_recarray_tolist(self, level=rlevel): """Ticket #793, changeset r5215 """ a = np.recarray(2, formats="i4,f8,f8", names="id,x,y") @@ -1133,7 +1130,7 @@ assert( a[0].tolist() == b[0]) assert( a[1].tolist() == b[1]) - def check_large_fancy_indexing(self, level=rlevel): + def test_large_fancy_indexing(self, level=rlevel): # Large enough to fail on 64-bit. 
nbits = np.dtype(np.intp).itemsize * 8 thesize = int((2**nbits)**(1.0/5.0)+1) @@ -1150,10 +1147,11 @@ self.failUnlessRaises(ValueError, dp) self.failUnlessRaises(ValueError, dp2) - def check_char_array_creation(self, level=rlevel): + def test_char_array_creation(self, level=rlevel): a = np.array('123', dtype='c') b = np.array(['1','2','3']) assert_equal(a,b) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/core/tests/test_scalarmath.py =================================================================== --- trunk/numpy/core/tests/test_scalarmath.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/core/tests/test_scalarmath.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -10,13 +10,13 @@ # This compares scalarmath against ufuncs. -class TestTypes(NumpyTestCase): - def check_types(self, level=1): +class TestTypes(TestCase): + def test_types(self, level=1): for atype in types: a = atype(1) assert a == 1, "error with %r: got %r" % (atype,a) - def check_type_add(self, level=1): + def test_type_add(self, level=1): # list of types for k, atype in enumerate(types): vala = atype(3) @@ -30,20 +30,21 @@ val.dtype.char == valo.dtype.char, \ "error with (%d,%d)" % (k,l) - def check_type_create(self, level=1): + def test_type_create(self, level=1): for k, atype in enumerate(types): a = array([1,2,3],atype) b = atype([1,2,3]) assert_equal(a,b) -class TestPower(NumpyTestCase): - def check_small_types(self): + +class TestPower(TestCase): + def test_small_types(self): for t in [np.int8, np.int16]: a = t(3) b = a ** 4 assert b == 81, "error with %r: got %r" % (t,b) - def check_large_types(self): + def test_large_types(self): for t in [np.int32, np.int64, np.float32, np.float64, np.longdouble]: a = t(51) b = a ** 4 @@ -53,7 +54,8 @@ else: assert_almost_equal(b, 6765201, err_msg=msg) -class TestConversion(NumpyTestCase): + +class TestConversion(TestCase): def test_int_from_long(self): l = [1e6, 1e12, 1e18, -1e6, -1e12, -1e18] li = [10**6, 10**12, 10**18, -10**6, -10**12, -10**18] @@ -64,6 +66,7 @@ a = np.array(l[:3], dtype=np.uint64) assert_equal(map(int,a), li[:3]) + #class TestRepr(NumpyTestCase): # def check_repr(self): # for t in types: @@ -72,8 +75,9 @@ # val2 = eval(val_repr) # assert_equal( val, val2 ) -class TestRepr(NumpyTestCase): - def check_float_repr(self): + +class TestRepr(TestCase): + def test_float_repr(self): from numpy import nan, inf for t in [np.float32, np.float64, np.longdouble]: if t is np.longdouble: # skip it for now. 
@@ -82,7 +86,8 @@ last_fraction_bit_idx = finfo.nexp + finfo.nmant last_exponent_bit_idx = finfo.nexp storage_bytes = np.dtype(t).itemsize*8 - for which in ['small denorm','small norm']: # could add some more types here + # could add some more types to the list below + for which in ['small denorm','small norm']: # Values from http://en.wikipedia.org/wiki/IEEE_754 constr = array([0x00]*storage_bytes,dtype=np.uint8) if which == 'small denorm': @@ -106,5 +111,6 @@ if not (val2 == 0 and val < 1e-100): assert_equal(val, val2) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/core/tests/test_ufunc.py =================================================================== --- trunk/numpy/core/tests/test_ufunc.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/core/tests/test_ufunc.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -1,14 +1,14 @@ import numpy as np from numpy.testing import * -class TestUfunc(NumpyTestCase): +class TestUfunc(TestCase): def test_reduceat_shifting_sum(self) : L = 6 x = np.arange(L) idx = np.array(zip(np.arange(L-2), np.arange(L-2)+2)).ravel() assert_array_equal(np.add.reduceat(x,idx)[::2], [1,3,5,7]) - def check_generic_loops(self) : + def test_generic_loops(self) : """Test generic loops. The loops to be tested are: @@ -147,7 +147,7 @@ # check PyUFunc_On_Om # fixme -- I don't know how to do this yet - def check_all_ufunc(self) : + def test_all_ufunc(self) : """Try to check presence and results of all ufuncs. The list of ufuncs comes from generate_umath.py and is as follows: @@ -232,5 +232,6 @@ """ pass + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/core/tests/test_umath.py =================================================================== --- trunk/numpy/core/tests/test_umath.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/core/tests/test_umath.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -6,16 +6,16 @@ import numpy as np restore_path() -class TestDivision(NumpyTestCase): - def check_division_int(self): +class TestDivision(TestCase): + def test_division_int(self): # int division should return the floor of the result, a la Python x = array([5, 10, 90, 100, -5, -10, -90, -100, -120]) assert_equal(x / 100, [0, 0, 0, 1, -1, -1, -1, -1, -2]) assert_equal(x // 100, [0, 0, 0, 1, -1, -1, -1, -1, -2]) assert_equal(x % 100, [5, 10, 90, 0, 95, 90, 10, 0, 80]) -class TestPower(NumpyTestCase): - def check_power_float(self): +class TestPower(TestCase): + def test_power_float(self): x = array([1., 2., 3.]) assert_equal(x**0, [1., 1., 1.]) assert_equal(x**1, x) @@ -26,7 +26,7 @@ assert_almost_equal(x**(-1), [1., 0.5, 1./3]) assert_almost_equal(x**(0.5), [1., ncu.sqrt(2), ncu.sqrt(3)]) - def check_power_complex(self): + def test_power_complex(self): x = array([1+2j, 2+3j, 3+4j]) assert_equal(x**0, [1., 1., 1.]) assert_equal(x**1, x) @@ -39,41 +39,42 @@ assert_almost_equal(x**14, [-76443+16124j, 23161315+58317492j, 5583548873 + 2465133864j]) -class TestLog1p(NumpyTestCase): - def check_log1p(self): + +class TestLog1p(TestCase): + def test_log1p(self): assert_almost_equal(ncu.log1p(0.2), ncu.log(1.2)) assert_almost_equal(ncu.log1p(1e-6), ncu.log(1+1e-6)) -class TestExpm1(NumpyTestCase): - def check_expm1(self): +class TestExpm1(TestCase): + def test_expm1(self): assert_almost_equal(ncu.expm1(0.2), ncu.exp(0.2)-1) assert_almost_equal(ncu.expm1(1e-6), ncu.exp(1e-6)-1) -class TestMaximum(NumpyTestCase): - def check_reduce_complex(self): +class TestMaximum(TestCase): + def 
test_reduce_complex(self): assert_equal(maximum.reduce([1,2j]),1) assert_equal(maximum.reduce([1+3j,2j]),1+3j) -class TestMinimum(NumpyTestCase): - def check_reduce_complex(self): +class TestMinimum(TestCase): + def test_reduce_complex(self): assert_equal(minimum.reduce([1,2j]),2j) -class TestFloatingPoint(NumpyTestCase): - def check_floating_point(self): +class TestFloatingPoint(TestCase): + def test_floating_point(self): assert_equal(ncu.FLOATING_POINT_SUPPORT, 1) -def TestDegrees(NumpyTestCase): - def check_degrees(self): +class TestDegrees(TestCase): + def test_degrees(self): assert_almost_equal(ncu.degrees(pi), 180.0) assert_almost_equal(ncu.degrees(-0.5*pi), -90.0) -def TestRadians(NumpyTestCase): - def check_radians(self): +class TestRadians(TestCase): + def test_radians(self): assert_almost_equal(ncu.radians(180.0), pi) - assert_almost_equal(ncu.degrees(-90.0), -0.5*pi) + assert_almost_equal(ncu.radians(-90.0), -0.5*pi) -class TestSpecialMethods(NumpyTestCase): - def check_wrap(self): +class TestSpecialMethods(TestCase): + def test_wrap(self): class with_wrap(object): def __array__(self): return zeros(1) @@ -92,7 +93,7 @@ assert_equal(args[1], a) self.failUnlessEqual(i, 0) - def check_old_wrap(self): + def test_old_wrap(self): class with_wrap(object): def __array__(self): return zeros(1) @@ -104,7 +105,7 @@ x = minimum(a, a) assert_equal(x.arr, zeros(1)) - def check_priority(self): + def test_priority(self): class A(object): def __array__(self): return zeros(1) @@ -142,7 +143,7 @@ self.failUnless(type(exp(b) is B)) self.failUnless(type(exp(c) is C)) - def check_failing_wrap(self): + def test_failing_wrap(self): class A(object): def __array__(self): return zeros(1) @@ -151,7 +152,7 @@ a = A() self.failUnlessRaises(RuntimeError, maximum, a, a) - def check_array_with_context(self): + def test_array_with_context(self): class A(object): def __array__(self, dtype=None, context=None): func, args, i = context @@ -174,19 +175,20 @@ assert_equal(maximum(a, B()), 0) assert_equal(maximum(a, C()), 0) -class TestChoose(NumpyTestCase): - def check_mixed(self): + +class TestChoose(TestCase): + def test_mixed(self): c = array([True,True]) a = array([True,True]) assert_equal(choose(c, (a, 1)), array([1,1])) -class TestComplexFunctions(NumpyTestCase): +class TestComplexFunctions(TestCase): funcs = [np.arcsin , np.arccos , np.arctan, np.arcsinh, np.arccosh, np.arctanh, np.sin , np.cos , np.tan , np.exp, np.log , np.sqrt , np.log10] - def check_it(self): + def test_it(self): for f in self.funcs: if f is np.arccosh : x = 1.5 @@ -197,7 +199,7 @@ assert_almost_equal(fz.real, fr, err_msg='real part %s'%f) assert_almost_equal(fz.imag, 0., err_msg='imag part %s'%f) - def check_precisions_consistent(self) : + def test_precisions_consistent(self) : z = 1 + 1j for f in self.funcs : fcf = f(np.csingle(z)) @@ -207,8 +209,8 @@ assert_almost_equal(fcl, fcd, decimal=15, err_msg='fch-fcl %s'%f) -class TestChoose(NumpyTestCase): - def check_attributes(self): +class TestAttributes(TestCase): + def test_attributes(self): add = ncu.add assert_equal(add.__name__, 'add') assert add.__doc__.startswith('y = add(x1,x2)\n\n') @@ -218,5 +220,6 @@ assert_equal(add.nout, 1) assert_equal(add.identity, 0) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/core/tests/test_unicode.py =================================================================== --- trunk/numpy/core/tests/test_unicode.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/core/tests/test_unicode.py 2008-06-17 
00:23:20 UTC (rev 5287) @@ -18,10 +18,10 @@ # Creation tests ############################################################ -class create_zeros(NumpyTestCase): +class create_zeros: """Check the creation of zero-valued arrays""" - def content_test(self, ua, ua_scalar, nbytes): + def content_check(self, ua, ua_scalar, nbytes): # Check the length of the unicode base type self.assert_(int(ua.dtype.str[2:]) == self.ulen) @@ -37,41 +37,43 @@ else: self.assert_(len(buffer(ua_scalar)) == 0) - def check_zeros0D(self): + def test_zeros0D(self): """Check creation of 0-dimensional objects""" ua = zeros((), dtype='U%s' % self.ulen) - self.content_test(ua, ua[()], 4*self.ulen) + self.content_check(ua, ua[()], 4*self.ulen) - def check_zerosSD(self): + def test_zerosSD(self): """Check creation of single-dimensional objects""" ua = zeros((2,), dtype='U%s' % self.ulen) - self.content_test(ua, ua[0], 4*self.ulen*2) - self.content_test(ua, ua[1], 4*self.ulen*2) + self.content_check(ua, ua[0], 4*self.ulen*2) + self.content_check(ua, ua[1], 4*self.ulen*2) - def check_zerosMD(self): + def test_zerosMD(self): """Check creation of multi-dimensional objects""" ua = zeros((2,3,4), dtype='U%s' % self.ulen) - self.content_test(ua, ua[0,0,0], 4*self.ulen*2*3*4) - self.content_test(ua, ua[-1,-1,-1], 4*self.ulen*2*3*4) + self.content_check(ua, ua[0,0,0], 4*self.ulen*2*3*4) + self.content_check(ua, ua[-1,-1,-1], 4*self.ulen*2*3*4) -class test_create_zeros_1(create_zeros): +class test_create_zeros_1(create_zeros, TestCase): """Check the creation of zero-valued arrays (size 1)""" ulen = 1 -class test_create_zeros_2(create_zeros): + +class test_create_zeros_2(create_zeros, TestCase): """Check the creation of zero-valued arrays (size 2)""" ulen = 2 -class test_create_zeros_1009(create_zeros): + +class test_create_zeros_1009(create_zeros, TestCase): """Check the creation of zero-valued arrays (size 1009)""" ulen = 1009 -class create_values(NumpyTestCase): +class create_values: """Check the creation of unicode arrays with values""" - def content_test(self, ua, ua_scalar, nbytes): + def content_check(self, ua, ua_scalar, nbytes): # Check the length of the unicode base type self.assert_(int(ua.dtype.str[2:]) == self.ulen) @@ -95,50 +97,55 @@ # regular 2-byte word self.assert_(len(buffer(ua_scalar)) == 2*self.ulen) - def check_values0D(self): + def test_values0D(self): """Check creation of 0-dimensional objects with values""" ua = array(self.ucs_value*self.ulen, dtype='U%s' % self.ulen) - self.content_test(ua, ua[()], 4*self.ulen) + self.content_check(ua, ua[()], 4*self.ulen) - def check_valuesSD(self): + def test_valuesSD(self): """Check creation of single-dimensional objects with values""" ua = array([self.ucs_value*self.ulen]*2, dtype='U%s' % self.ulen) - self.content_test(ua, ua[0], 4*self.ulen*2) - self.content_test(ua, ua[1], 4*self.ulen*2) + self.content_check(ua, ua[0], 4*self.ulen*2) + self.content_check(ua, ua[1], 4*self.ulen*2) - def check_valuesMD(self): + def test_valuesMD(self): """Check creation of multi-dimensional objects with values""" ua = array([[[self.ucs_value*self.ulen]*2]*3]*4, dtype='U%s' % self.ulen) - self.content_test(ua, ua[0,0,0], 4*self.ulen*2*3*4) - self.content_test(ua, ua[-1,-1,-1], 4*self.ulen*2*3*4) + self.content_check(ua, ua[0,0,0], 4*self.ulen*2*3*4) + self.content_check(ua, ua[-1,-1,-1], 4*self.ulen*2*3*4) -class test_create_values_1_ucs2(create_values): +class test_create_values_1_ucs2(create_values, TestCase): """Check the creation of valued arrays (size 1, UCS2 values)""" ulen = 1 ucs_value 
= ucs2_value -class test_create_values_1_ucs4(create_values): + +class test_create_values_1_ucs4(create_values, TestCase): """Check the creation of valued arrays (size 1, UCS4 values)""" ulen = 1 ucs_value = ucs4_value -class test_create_values_2_ucs2(create_values): + +class test_create_values_2_ucs2(create_values, TestCase): """Check the creation of valued arrays (size 2, UCS2 values)""" ulen = 2 ucs_value = ucs2_value -class test_create_values_2_ucs4(create_values): + +class test_create_values_2_ucs4(create_values, TestCase): """Check the creation of valued arrays (size 2, UCS4 values)""" ulen = 2 ucs_value = ucs4_value -class test_create_values_1009_ucs2(create_values): + +class test_create_values_1009_ucs2(create_values, TestCase): """Check the creation of valued arrays (size 1009, UCS2 values)""" ulen = 1009 ucs_value = ucs2_value -class test_create_values_1009_ucs4(create_values): + +class test_create_values_1009_ucs4(create_values, TestCase): """Check the creation of valued arrays (size 1009, UCS4 values)""" ulen = 1009 ucs_value = ucs4_value @@ -148,10 +155,10 @@ # Assignment tests ############################################################ -class assign_values(NumpyTestCase): +class assign_values: """Check the assignment of unicode arrays with values""" - def content_test(self, ua, ua_scalar, nbytes): + def content_check(self, ua, ua_scalar, nbytes): # Check the length of the unicode base type self.assert_(int(ua.dtype.str[2:]) == self.ulen) @@ -175,68 +182,74 @@ # regular 2-byte word self.assert_(len(buffer(ua_scalar)) == 2*self.ulen) - def check_values0D(self): + def test_values0D(self): """Check assignment of 0-dimensional objects with values""" ua = zeros((), dtype='U%s' % self.ulen) ua[()] = self.ucs_value*self.ulen - self.content_test(ua, ua[()], 4*self.ulen) + self.content_check(ua, ua[()], 4*self.ulen) - def check_valuesSD(self): + def test_valuesSD(self): """Check assignment of single-dimensional objects with values""" ua = zeros((2,), dtype='U%s' % self.ulen) ua[0] = self.ucs_value*self.ulen - self.content_test(ua, ua[0], 4*self.ulen*2) + self.content_check(ua, ua[0], 4*self.ulen*2) ua[1] = self.ucs_value*self.ulen - self.content_test(ua, ua[1], 4*self.ulen*2) + self.content_check(ua, ua[1], 4*self.ulen*2) - def check_valuesMD(self): + def test_valuesMD(self): """Check assignment of multi-dimensional objects with values""" ua = zeros((2,3,4), dtype='U%s' % self.ulen) ua[0,0,0] = self.ucs_value*self.ulen - self.content_test(ua, ua[0,0,0], 4*self.ulen*2*3*4) + self.content_check(ua, ua[0,0,0], 4*self.ulen*2*3*4) ua[-1,-1,-1] = self.ucs_value*self.ulen - self.content_test(ua, ua[-1,-1,-1], 4*self.ulen*2*3*4) + self.content_check(ua, ua[-1,-1,-1], 4*self.ulen*2*3*4) -class test_assign_values_1_ucs2(assign_values): +class test_assign_values_1_ucs2(assign_values, TestCase): """Check the assignment of valued arrays (size 1, UCS2 values)""" ulen = 1 ucs_value = ucs2_value -class test_assign_values_1_ucs4(assign_values): + +class test_assign_values_1_ucs4(assign_values, TestCase): """Check the assignment of valued arrays (size 1, UCS4 values)""" ulen = 1 ucs_value = ucs4_value + -class test_assign_values_2_ucs2(assign_values): +class test_assign_values_2_ucs2(assign_values, TestCase): """Check the assignment of valued arrays (size 2, UCS2 values)""" ulen = 2 ucs_value = ucs2_value + -class test_assign_values_2_ucs4(assign_values): +class test_assign_values_2_ucs4(assign_values, TestCase): """Check the assignment of valued arrays (size 2, UCS4 values)""" ulen = 2 ucs_value = 
ucs4_value + -class test_assign_values_1009_ucs2(assign_values): +class test_assign_values_1009_ucs2(assign_values, TestCase): """Check the assignment of valued arrays (size 1009, UCS2 values)""" ulen = 1009 ucs_value = ucs2_value + -class test_assign_values_1009_ucs4(assign_values): +class test_assign_values_1009_ucs4(assign_values, TestCase): """Check the assignment of valued arrays (size 1009, UCS4 values)""" ulen = 1009 ucs_value = ucs4_value + ############################################################ # Byteorder tests ############################################################ -class byteorder_values(NumpyTestCase): +class byteorder_values: """Check the byteorder of unicode arrays in round-trip conversions""" - def check_values0D(self): + def test_values0D(self): """Check byteorder of 0-dimensional objects""" ua = array(self.ucs_value*self.ulen, dtype='U%s' % self.ulen) ua2 = ua.newbyteorder() @@ -248,7 +261,7 @@ # Arrays must be equal after the round-trip assert_equal(ua, ua3) - def check_valuesSD(self): + def test_valuesSD(self): """Check byteorder of single-dimensional objects""" ua = array([self.ucs_value*self.ulen]*2, dtype='U%s' % self.ulen) ua2 = ua.newbyteorder() @@ -258,7 +271,7 @@ # Arrays must be equal after the round-trip assert_equal(ua, ua3) - def check_valuesMD(self): + def test_valuesMD(self): """Check byteorder of multi-dimensional objects""" ua = array([[[self.ucs_value*self.ulen]*2]*3]*4, dtype='U%s' % self.ulen) @@ -269,36 +282,43 @@ # Arrays must be equal after the round-trip assert_equal(ua, ua3) -class test_byteorder_1_ucs2(byteorder_values): + +class test_byteorder_1_ucs2(byteorder_values, TestCase): """Check the byteorder in unicode (size 1, UCS2 values)""" ulen = 1 ucs_value = ucs2_value + -class test_byteorder_1_ucs4(byteorder_values): +class test_byteorder_1_ucs4(byteorder_values, TestCase): """Check the byteorder in unicode (size 1, UCS4 values)""" ulen = 1 ucs_value = ucs4_value + -class test_byteorder_2_ucs2(byteorder_values): +class test_byteorder_2_ucs2(byteorder_values, TestCase): """Check the byteorder in unicode (size 2, UCS2 values)""" ulen = 2 ucs_value = ucs2_value + -class test_byteorder_2_ucs4(byteorder_values): +class test_byteorder_2_ucs4(byteorder_values, TestCase): """Check the byteorder in unicode (size 2, UCS4 values)""" ulen = 2 ucs_value = ucs4_value + -class test_byteorder_1009_ucs2(byteorder_values): +class test_byteorder_1009_ucs2(byteorder_values, TestCase): """Check the byteorder in unicode (size 1009, UCS2 values)""" ulen = 1009 ucs_value = ucs2_value + -class test_byteorder_1009_ucs4(byteorder_values): +class test_byteorder_1009_ucs4(byteorder_values, TestCase): """Check the byteorder in unicode (size 1009, UCS4 values)""" ulen = 1009 ucs_value = ucs4_value if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) + Modified: trunk/numpy/distutils/__init__.py =================================================================== --- trunk/numpy/distutils/__init__.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/distutils/__init__.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -15,6 +15,6 @@ _INSTALLED = False if _INSTALLED: - def test(level=1, verbosity=1): - from numpy.testing import NumpyTest - return NumpyTest().test(level, verbosity) + from numpy.testing.pkgtester import Tester + test = Tester().test + bench = Tester().bench Modified: trunk/numpy/distutils/tests/f2py_ext/tests/test_fib2.py =================================================================== --- 
trunk/numpy/distutils/tests/f2py_ext/tests/test_fib2.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/distutils/tests/f2py_ext/tests/test_fib2.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -4,10 +4,10 @@ from f2py_ext import fib2 del sys.path[0] -class TestFib2(NumpyTestCase): +class TestFib2(TestCase): - def check_fib(self): + def test_fib(self): assert_array_equal(fib2.fib(6),[0,1,1,2,3,5]) if __name__ == "__main__": - NumpyTest(fib2).run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/distutils/tests/f2py_f90_ext/tests/test_foo.py =================================================================== --- trunk/numpy/distutils/tests/f2py_f90_ext/tests/test_foo.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/distutils/tests/f2py_f90_ext/tests/test_foo.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -4,10 +4,10 @@ from f2py_f90_ext import foo del sys.path[0] -class TestFoo(NumpyTestCase): +class TestFoo(TestCase): - def check_foo_free(self): + def test_foo_free(self): assert_equal(foo.foo_free.bar13(),13) if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/distutils/tests/gen_ext/tests/test_fib3.py =================================================================== --- trunk/numpy/distutils/tests/gen_ext/tests/test_fib3.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/distutils/tests/gen_ext/tests/test_fib3.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -4,10 +4,10 @@ from gen_ext import fib3 del sys.path[0] -class TestFib3(NumpyTestCase): +class TestFib3(TestCase): - def check_fib(self): + def test_fib(self): assert_array_equal(fib3.fib(6),[0,1,1,2,3,5]) if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/distutils/tests/pyrex_ext/tests/test_primes.py =================================================================== --- trunk/numpy/distutils/tests/pyrex_ext/tests/test_primes.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/distutils/tests/pyrex_ext/tests/test_primes.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -5,9 +5,9 @@ from pyrex_ext.primes import primes restore_path() -class TestPrimes(NumpyTestCase): - def check_simple(self, level=1): +class TestPrimes(TestCase): + def test_simple(self, level=1): l = primes(10) assert_equal(l, [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]) if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/distutils/tests/swig_ext/tests/test_example.py =================================================================== --- trunk/numpy/distutils/tests/swig_ext/tests/test_example.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/distutils/tests/swig_ext/tests/test_example.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -4,15 +4,15 @@ from swig_ext import example restore_path() -class TestExample(NumpyTestCase): +class TestExample(TestCase): - def check_fact(self): + def test_fact(self): assert_equal(example.fact(10),3628800) - def check_cvar(self): + def test_cvar(self): assert_equal(example.cvar.My_variable,3.0) example.cvar.My_variable = 5 assert_equal(example.cvar.My_variable,5.0) if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/distutils/tests/swig_ext/tests/test_example2.py =================================================================== --- trunk/numpy/distutils/tests/swig_ext/tests/test_example2.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/distutils/tests/swig_ext/tests/test_example2.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -4,9 +4,9 @@ from swig_ext import 
example2 restore_path() -class TestExample2(NumpyTestCase): +class TestExample2(TestCase): - def check_zoo(self): + def test_zoo(self): z = example2.Zoo() z.shut_up('Tiger') z.shut_up('Lion') @@ -14,4 +14,4 @@ if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/distutils/tests/test_fcompiler_gnu.py =================================================================== --- trunk/numpy/distutils/tests/test_fcompiler_gnu.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/distutils/tests/test_fcompiler_gnu.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -21,7 +21,7 @@ ('GNU Fortran (GCC) 4.3.0 20070316 (experimental)', '4.3.0'), ] -class TestG77Versions(NumpyTestCase): +class TestG77Versions(TestCase): def test_g77_version(self): fc = numpy.distutils.fcompiler.new_fcompiler(compiler='gnu') for vs, version in g77_version_strings: @@ -34,7 +34,7 @@ v = fc.version_match(vs) assert v is None, (vs, v) -class TestGortranVersions(NumpyTestCase): +class TestGortranVersions(TestCase): def test_gfortran_version(self): fc = numpy.distutils.fcompiler.new_fcompiler(compiler='gnu95') for vs, version in gfortran_version_strings: @@ -49,4 +49,4 @@ if __name__ == '__main__': - NumpyTest.run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/distutils/tests/test_misc_util.py =================================================================== --- trunk/numpy/distutils/tests/test_misc_util.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/distutils/tests/test_misc_util.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -8,15 +8,15 @@ ajoin = lambda *paths: join(*((sep,)+paths)) -class TestAppendpath(NumpyTestCase): +class TestAppendpath(TestCase): - def check_1(self): + def test_1(self): assert_equal(appendpath('prefix','name'),join('prefix','name')) assert_equal(appendpath('/prefix','name'),ajoin('prefix','name')) assert_equal(appendpath('/prefix','/name'),ajoin('prefix','name')) assert_equal(appendpath('prefix','/name'),join('prefix','name')) - def check_2(self): + def test_2(self): assert_equal(appendpath('prefix/sub','name'), join('prefix','sub','name')) assert_equal(appendpath('prefix/sub','sup/name'), @@ -24,7 +24,7 @@ assert_equal(appendpath('/prefix/sub','/prefix/name'), ajoin('prefix','sub','name')) - def check_3(self): + def test_3(self): assert_equal(appendpath('/prefix/sub','/prefix/sup/name'), ajoin('prefix','sub','sup','name')) assert_equal(appendpath('/prefix/sub/sub2','/prefix/sup/sup2/name'), @@ -32,9 +32,9 @@ assert_equal(appendpath('/prefix/sub/sub2','/prefix/sub/sup/name'), ajoin('prefix','sub','sub2','sup','name')) -class TestMinrelpath(NumpyTestCase): +class TestMinrelpath(TestCase): - def check_1(self): + def test_1(self): import os n = lambda path: path.replace('/',os.path.sep) assert_equal(minrelpath(n('aa/bb')),n('aa/bb')) @@ -47,14 +47,15 @@ assert_equal(minrelpath(n('.././..')),n('../..')) assert_equal(minrelpath(n('aa/bb/.././../dd')),n('dd')) -class TestGpaths(NumpyTestCase): +class TestGpaths(TestCase): - def check_gpaths(self): + def test_gpaths(self): local_path = minrelpath(os.path.join(os.path.dirname(__file__),'..')) ls = gpaths('command/*.py', local_path) assert os.path.join(local_path,'command','build_src.py') in ls,`ls` f = gpaths('system_info.py', local_path) assert os.path.join(local_path,'system_info.py')==f[0],`f` + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/doc/DISTUTILS.txt =================================================================== --- 
trunk/numpy/doc/DISTUTILS.txt 2008-06-17 00:11:02 UTC (rev 5286)
+++ trunk/numpy/doc/DISTUTILS.txt 2008-06-17 00:23:20 UTC (rev 5287)
@@ -465,12 +465,10 @@
 Ideally, every Python code, extension module, or subpackage in Scipy
 package directory should have the corresponding ``test_.py`` file in
 ``tests/`` directory. This file should define classes
-derived from ``NumpyTestCase`` (or from ``unittest.TestCase``) class
-and have names starting with ``test``. The methods of these classes
-which names start with ``bench``, ``check``, or ``test``, are passed
-on to unittest machinery. In addition, the value of the first optional
-argument of these methods determine the level of the corresponding
-test. Default level is 1.
+derived from the ``numpy.testing.TestCase`` class (or from
+``unittest.TestCase``) and have names starting with ``test``. The methods
+of these classes whose names contain ``test`` or start with ``bench`` are
+automatically picked up by the test machinery.

 A minimal example of a ``test_yyy.py`` file that implements tests for
 a Scipy package module ``numpy.xxx.yyy`` containing a function
@@ -489,20 +487,16 @@
   # import modules that are located in the same directory as this file.
   restore_path()

-  class test_zzz(NumpyTestCase):
-      def check_simple(self, level=1):
+  class test_zzz(TestCase):
+      def test_simple(self, level=1):
           assert zzz()=='Hello from zzz'
   #...

   if __name__ == "__main__":
-      NumpyTest().run()
+      nose.run(argv=['', __file__])

-``NumpyTestCase`` is derived from ``unittest.TestCase`` and it
-basically only implements an additional method ``measure(self,
-code_str, times=1)``.
-
 Note that all classes that are inherited from ``TestCase`` class, are
-picked up by the test runner when using ``testoob``.
+automatically picked up by the test runner.

 ``numpy.testing`` module provides also the following convenience
 functions::

   assert_equal(actual,desired,err_msg='',verbose=1)
@@ -514,25 +508,15 @@
   assert_array_almost_equal(x,y,decimal=6,err_msg='')
   rand(*shape) # returns random array with a given shape

-``NumpyTest`` can be used for running ``tests/test_*.py`` scripts.
-For instance, to run all test scripts of the module ``xxx``, execute
-in Python:
+To run all test scripts of the module ``xxx``, execute in Python:

-  >>> NumpyTest('xxx').test(level=1,verbosity=1)
+  >>> import numpy
+  >>> numpy.xxx.test()

-or equivalently,
-
-  >>> import xxx
-  >>> NumpyTest(xxx).test(level=1,verbosity=1)
-
 To run only tests for ``xxx.yyy`` module, execute:

   >>> NumpyTest('xxx.yyy').test(level=1,verbosity=1)

-To take the level and verbosity parameters for tests from
-``sys.argv``, use ``NumpyTest.run()`` method (this is supported only
-when ``optparse`` is installed).
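For readers following this documentation change, the new convention amounts to: derive the class from ``numpy.testing.TestCase``, give the methods names containing ``test``, and let nose collect them. The sketch below restates the ``test_zzz`` example from the hunk above in a self-contained form, assuming nose is installed; the function ``zzz`` is defined inline purely for illustration and stands in for the package function under test:

    import nose
    from numpy.testing import TestCase, assert_equal

    def zzz():
        # placeholder for the package function under test
        return 'Hello from zzz'

    class test_zzz(TestCase):
        # any method whose name contains "test" is collected automatically
        def test_simple(self):
            assert_equal(zzz(), 'Hello from zzz')

    if __name__ == "__main__":
        # run only the tests defined in this file, the idiom used
        # throughout the converted test scripts in this changeset
        nose.run(argv=['', __file__])

Once a sub-package is installed, the same tests are reachable through its ``test()`` entry point (this changeset wires that up for ``numpy.distutils`` via ``Tester().test``), so ``>>> import numpy`` followed by ``>>> numpy.xxx.test()`` runs the whole ``xxx`` test suite, exactly as the revised text above describes.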
- Extra features in NumPy Distutils ''''''''''''''''''''''''''''''''' Modified: trunk/numpy/f2py/lib/parser/test_Fortran2003.py =================================================================== --- trunk/numpy/f2py/lib/parser/test_Fortran2003.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/f2py/lib/parser/test_Fortran2003.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -7,9 +7,9 @@ ############################### SECTION 2 #################################### ############################################################################### -class TestProgram(NumpyTestCase): # R201 +class TestProgram(TestCase): # R201 - def check_simple(self): + def test_simple(self): reader = get_reader('''\ subroutine foo end subroutine foo @@ -21,9 +21,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a), 'SUBROUTINE foo\nEND SUBROUTINE foo\nSUBROUTINE bar\nEND SUBROUTINE bar') -class TestSpecificationPart(NumpyTestCase): # R204 +class TestSpecificationPart(TestCase): # R204 - def check_simple(self): + def test_simple(self): from api import get_reader reader = get_reader('''\ integer a''') @@ -37,9 +37,9 @@ ############################### SECTION 3 #################################### ############################################################################### -class TestName(NumpyTestCase): # R304 +class TestName(TestCase): # R304 - def check_name(self): + def test_name(self): a = Name('a') assert isinstance(a,Name),`a` a = Name('a2') @@ -55,9 +55,9 @@ ############################### SECTION 4 #################################### ############################################################################### -class TestTypeParamValue(NumpyTestCase): # 402 +class TestTypeParamValue(TestCase): # 402 - def check_type_param_value(self): + def test_type_param_value(self): cls = Type_Param_Value a = cls('*') assert isinstance(a,cls),`a` @@ -72,9 +72,9 @@ assert isinstance(a,Level_2_Expr),`a` assert_equal(str(a),'1 + 2') -class TestIntrinsicTypeSpec(NumpyTestCase): # R403 +class TestIntrinsicTypeSpec(TestCase): # R403 - def check_intrinsic_type_spec(self): + def test_intrinsic_type_spec(self): cls = Intrinsic_Type_Spec a = cls('INTEGER') assert isinstance(a,cls),`a` @@ -109,9 +109,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'DOUBLE PRECISION') -class TestKindSelector(NumpyTestCase): # R404 +class TestKindSelector(TestCase): # R404 - def check_kind_selector(self): + def test_kind_selector(self): cls = Kind_Selector a = cls('(1)') assert isinstance(a,cls),`a` @@ -126,9 +126,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'*1') -class TestSignedIntLiteralConstant(NumpyTestCase): # R405 +class TestSignedIntLiteralConstant(TestCase): # R405 - def check_int_literal_constant(self): + def test_int_literal_constant(self): cls = Signed_Int_Literal_Constant a = cls('1') assert isinstance(a,cls),`a` @@ -152,9 +152,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'+1976354279568241_8') -class TestIntLiteralConstant(NumpyTestCase): # R406 +class TestIntLiteralConstant(TestCase): # R406 - def check_int_literal_constant(self): + def test_int_literal_constant(self): cls = Int_Literal_Constant a = cls('1') assert isinstance(a,cls),`a` @@ -178,9 +178,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'1976354279568241_8') -class TestBinaryConstant(NumpyTestCase): # R412 +class TestBinaryConstant(TestCase): # R412 - def check_boz_literal_constant(self): + def test_boz_literal_constant(self): cls = Boz_Literal_Constant bcls = Binary_Constant a = cls('B"01"') @@ -188,9 +188,9 @@ 
assert_equal(str(a),'B"01"') assert_equal(repr(a),"%s('B\"01\"')" % (bcls.__name__)) -class TestOctalConstant(NumpyTestCase): # R413 +class TestOctalConstant(TestCase): # R413 - def check_boz_literal_constant(self): + def test_boz_literal_constant(self): cls = Boz_Literal_Constant ocls = Octal_Constant a = cls('O"017"') @@ -198,9 +198,9 @@ assert_equal(str(a),'O"017"') assert_equal(repr(a),"%s('O\"017\"')" % (ocls.__name__)) -class TestHexConstant(NumpyTestCase): # R414 +class TestHexConstant(TestCase): # R414 - def check_boz_literal_constant(self): + def test_boz_literal_constant(self): cls = Boz_Literal_Constant zcls = Hex_Constant a = cls('Z"01A"') @@ -208,9 +208,9 @@ assert_equal(str(a),'Z"01A"') assert_equal(repr(a),"%s('Z\"01A\"')" % (zcls.__name__)) -class TestSignedRealLiteralConstant(NumpyTestCase): # R416 +class TestSignedRealLiteralConstant(TestCase): # R416 - def check_signed_real_literal_constant(self): + def test_signed_real_literal_constant(self): cls = Signed_Real_Literal_Constant a = cls('12.78') assert isinstance(a,cls),`a` @@ -265,9 +265,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'-10.9E-17_quad') -class TestRealLiteralConstant(NumpyTestCase): # R417 +class TestRealLiteralConstant(TestCase): # R417 - def check_real_literal_constant(self): + def test_real_literal_constant(self): cls = Real_Literal_Constant a = cls('12.78') assert isinstance(a,cls),`a` @@ -326,9 +326,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'0.0D+0') -class TestCharSelector(NumpyTestCase): # R424 +class TestCharSelector(TestCase): # R424 - def check_char_selector(self): + def test_char_selector(self): cls = Char_Selector a = cls('(len=2, kind=8)') assert isinstance(a,cls),`a` @@ -352,9 +352,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'(LEN = 2, KIND = 8)') -class TestComplexLiteralConstant(NumpyTestCase): # R421 +class TestComplexLiteralConstant(TestCase): # R421 - def check_complex_literal_constant(self): + def test_complex_literal_constant(self): cls = Complex_Literal_Constant a = cls('(1.0, -1.0)') assert isinstance(a,cls),`a` @@ -374,9 +374,9 @@ assert_equal(str(a),'(0., PI)') -class TestTypeName(NumpyTestCase): # C424 +class TestTypeName(TestCase): # C424 - def check_simple(self): + def test_simple(self): cls = Type_Name a = cls('a') assert isinstance(a,cls),`a` @@ -386,9 +386,9 @@ self.assertRaises(NoMatchError,cls,'integer') self.assertRaises(NoMatchError,cls,'doubleprecision') -class TestLengthSelector(NumpyTestCase): # R425 +class TestLengthSelector(TestCase): # R425 - def check_length_selector(self): + def test_length_selector(self): cls = Length_Selector a = cls('( len = *)') assert isinstance(a,cls),`a` @@ -399,9 +399,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'*2') -class TestCharLength(NumpyTestCase): # R426 +class TestCharLength(TestCase): # R426 - def check_char_length(self): + def test_char_length(self): cls = Char_Length a = cls('(1)') assert isinstance(a,cls),`a` @@ -420,9 +420,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'(:)') -class TestCharLiteralConstant(NumpyTestCase): # R427 +class TestCharLiteralConstant(TestCase): # R427 - def check_char_literal_constant(self): + def test_char_literal_constant(self): cls = Char_Literal_Constant a = cls('NIH_"DO"') assert isinstance(a,cls),`a` @@ -454,9 +454,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'"hey ha(ada)\t"') -class TestLogicalLiteralConstant(NumpyTestCase): # R428 +class TestLogicalLiteralConstant(TestCase): # R428 - def check_logical_literal_constant(self): + def 
test_logical_literal_constant(self): cls = Logical_Literal_Constant a = cls('.TRUE.') assert isinstance(a,cls),`a` @@ -475,9 +475,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'.TRUE._HA') -class TestDerivedTypeStmt(NumpyTestCase): # R430 +class TestDerivedTypeStmt(TestCase): # R430 - def check_simple(self): + def test_simple(self): cls = Derived_Type_Stmt a = cls('type a') assert isinstance(a, cls),`a` @@ -492,18 +492,18 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'TYPE, PRIVATE, ABSTRACT :: a(b, c)') -class TestTypeName(NumpyTestCase): # C423 +class TestTypeName(TestCase): # C423 - def check_simple(self): + def test_simple(self): cls = Type_Name a = cls('a') assert isinstance(a, cls),`a` assert_equal(str(a),'a') assert_equal(repr(a),"Type_Name('a')") -class TestTypeAttrSpec(NumpyTestCase): # R431 +class TestTypeAttrSpec(TestCase): # R431 - def check_simple(self): + def test_simple(self): cls = Type_Attr_Spec a = cls('abstract') assert isinstance(a, cls),`a` @@ -523,9 +523,9 @@ assert_equal(str(a),'PRIVATE') -class TestEndTypeStmt(NumpyTestCase): # R433 +class TestEndTypeStmt(TestCase): # R433 - def check_simple(self): + def test_simple(self): cls = End_Type_Stmt a = cls('end type') assert isinstance(a, cls),`a` @@ -536,18 +536,18 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'END TYPE a') -class TestSequenceStmt(NumpyTestCase): # R434 +class TestSequenceStmt(TestCase): # R434 - def check_simple(self): + def test_simple(self): cls = Sequence_Stmt a = cls('sequence') assert isinstance(a, cls),`a` assert_equal(str(a),'SEQUENCE') assert_equal(repr(a),"Sequence_Stmt('SEQUENCE')") -class TestTypeParamDefStmt(NumpyTestCase): # R435 +class TestTypeParamDefStmt(TestCase): # R435 - def check_simple(self): + def test_simple(self): cls = Type_Param_Def_Stmt a = cls('integer ,kind :: a') assert isinstance(a, cls),`a` @@ -558,9 +558,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'INTEGER*2, LEN :: a = 3, b = 2 + c') -class TestTypeParamDecl(NumpyTestCase): # R436 +class TestTypeParamDecl(TestCase): # R436 - def check_simple(self): + def test_simple(self): cls = Type_Param_Decl a = cls('a=2') assert isinstance(a, cls),`a` @@ -571,9 +571,9 @@ assert isinstance(a, Name),`a` assert_equal(str(a),'a') -class TestTypeParamAttrSpec(NumpyTestCase): # R437 +class TestTypeParamAttrSpec(TestCase): # R437 - def check_simple(self): + def test_simple(self): cls = Type_Param_Attr_Spec a = cls('kind') assert isinstance(a, cls),`a` @@ -584,9 +584,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'LEN') -class TestComponentAttrSpec(NumpyTestCase): # R441 +class TestComponentAttrSpec(TestCase): # R441 - def check_simple(self): + def test_simple(self): cls = Component_Attr_Spec a = cls('pointer') assert isinstance(a, cls),`a` @@ -605,9 +605,9 @@ assert isinstance(a, Access_Spec),`a` assert_equal(str(a),'PRIVATE') -class TestComponentDecl(NumpyTestCase): # R442 +class TestComponentDecl(TestCase): # R442 - def check_simple(self): + def test_simple(self): cls = Component_Decl a = cls('a(1)') assert isinstance(a, cls),`a` @@ -626,9 +626,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'a(1) => NULL') -class TestFinalBinding(NumpyTestCase): # R454 +class TestFinalBinding(TestCase): # R454 - def check_simple(self): + def test_simple(self): cls = Final_Binding a = cls('final a, b') assert isinstance(a,cls),`a` @@ -639,9 +639,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'FINAL :: a') -class TestDerivedTypeSpec(NumpyTestCase): # R455 +class TestDerivedTypeSpec(TestCase): # R455 - 
def check_simple(self): + def test_simple(self): cls = Derived_Type_Spec a = cls('a(b)') assert isinstance(a,cls),`a` @@ -660,9 +660,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'a()') -class TestTypeParamSpec(NumpyTestCase): # R456 +class TestTypeParamSpec(TestCase): # R456 - def check_type_param_spec(self): + def test_type_param_spec(self): cls = Type_Param_Spec a = cls('a=1') assert isinstance(a,cls),`a` @@ -677,9 +677,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'k = :') -class TestTypeParamSpecList(NumpyTestCase): # R456-list +class TestTypeParamSpecList(TestCase): # R456-list - def check_type_param_spec_list(self): + def test_type_param_spec_list(self): cls = Type_Param_Spec_List a = cls('a,b') @@ -694,9 +694,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'k = a, c, g = 1') -class TestStructureConstructor2(NumpyTestCase): # R457.b +class TestStructureConstructor2(TestCase): # R457.b - def check_simple(self): + def test_simple(self): cls = Structure_Constructor_2 a = cls('k=a') assert isinstance(a,cls),`a` @@ -707,9 +707,9 @@ assert isinstance(a,Name),`a` assert_equal(str(a),'a') -class TestStructureConstructor(NumpyTestCase): # R457 +class TestStructureConstructor(TestCase): # R457 - def check_structure_constructor(self): + def test_structure_constructor(self): cls = Structure_Constructor a = cls('t()') assert isinstance(a,cls),`a` @@ -729,9 +729,9 @@ assert isinstance(a,Name),`a` assert_equal(str(a),'a') -class TestComponentSpec(NumpyTestCase): # R458 +class TestComponentSpec(TestCase): # R458 - def check_simple(self): + def test_simple(self): cls = Component_Spec a = cls('k=a') assert isinstance(a,cls),`a` @@ -750,9 +750,9 @@ assert isinstance(a, Component_Spec),`a` assert_equal(str(a),'s = a % b') -class TestComponentSpecList(NumpyTestCase): # R458-list +class TestComponentSpecList(TestCase): # R458-list - def check_simple(self): + def test_simple(self): cls = Component_Spec_List a = cls('k=a, b') assert isinstance(a,cls),`a` @@ -763,9 +763,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'k = a, c') -class TestArrayConstructor(NumpyTestCase): # R465 +class TestArrayConstructor(TestCase): # R465 - def check_simple(self): + def test_simple(self): cls = Array_Constructor a = cls('(/a/)') assert isinstance(a,cls),`a` @@ -785,9 +785,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'[INTEGER :: a, b]') -class TestAcSpec(NumpyTestCase): # R466 +class TestAcSpec(TestCase): # R466 - def check_ac_spec(self): + def test_ac_spec(self): cls = Ac_Spec a = cls('integer ::') assert isinstance(a,cls),`a` @@ -806,9 +806,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'INTEGER :: a, (a, b, n = 1, 5)') -class TestAcValueList(NumpyTestCase): # R469-list +class TestAcValueList(TestCase): # R469-list - def check_ac_value_list(self): + def test_ac_value_list(self): cls = Ac_Value_List a = cls('a, b') assert isinstance(a,cls),`a` @@ -819,18 +819,18 @@ assert isinstance(a,Name),`a` assert_equal(str(a),'a') -class TestAcImpliedDo(NumpyTestCase): # R470 +class TestAcImpliedDo(TestCase): # R470 - def check_ac_implied_do(self): + def test_ac_implied_do(self): cls = Ac_Implied_Do a = cls('( a, b, n = 1, 5 )') assert isinstance(a,cls),`a` assert_equal(str(a),'(a, b, n = 1, 5)') assert_equal(repr(a),"Ac_Implied_Do(Ac_Value_List(',', (Name('a'), Name('b'))), Ac_Implied_Do_Control(Name('n'), [Int_Literal_Constant('1', None), Int_Literal_Constant('5', None)]))") -class TestAcImpliedDoControl(NumpyTestCase): # R471 +class TestAcImpliedDoControl(TestCase): # R471 - def 
check_ac_implied_do_control(self): + def test_ac_implied_do_control(self): cls = Ac_Implied_Do_Control a = cls('n = 3, 5') assert isinstance(a,cls),`a` @@ -845,9 +845,9 @@ ############################### SECTION 5 #################################### ############################################################################### -class TestTypeDeclarationStmt(NumpyTestCase): # R501 +class TestTypeDeclarationStmt(TestCase): # R501 - def check_simple(self): + def test_simple(self): cls = Type_Declaration_Stmt a = cls('integer a') assert isinstance(a, cls),`a` @@ -869,9 +869,9 @@ a = cls('DOUBLE PRECISION ALPHA, BETA') assert isinstance(a, cls),`a` -class TestDeclarationTypeSpec(NumpyTestCase): # R502 +class TestDeclarationTypeSpec(TestCase): # R502 - def check_simple(self): + def test_simple(self): cls = Declaration_Type_Spec a = cls('Integer*2') assert isinstance(a, Intrinsic_Type_Spec),`a` @@ -882,9 +882,9 @@ assert_equal(str(a), 'TYPE(foo)') assert_equal(repr(a), "Declaration_Type_Spec('TYPE', Type_Name('foo'))") -class TestAttrSpec(NumpyTestCase): # R503 +class TestAttrSpec(TestCase): # R503 - def check_simple(self): + def test_simple(self): cls = Attr_Spec a = cls('allocatable') assert isinstance(a, cls),`a` @@ -894,27 +894,27 @@ assert isinstance(a, Dimension_Attr_Spec),`a` assert_equal(str(a),'DIMENSION(a)') -class TestDimensionAttrSpec(NumpyTestCase): # R503.d +class TestDimensionAttrSpec(TestCase): # R503.d - def check_simple(self): + def test_simple(self): cls = Dimension_Attr_Spec a = cls('dimension(a)') assert isinstance(a, cls),`a` assert_equal(str(a),'DIMENSION(a)') assert_equal(repr(a),"Dimension_Attr_Spec('DIMENSION', Explicit_Shape_Spec(None, Name('a')))") -class TestIntentAttrSpec(NumpyTestCase): # R503.f +class TestIntentAttrSpec(TestCase): # R503.f - def check_simple(self): + def test_simple(self): cls = Intent_Attr_Spec a = cls('intent(in)') assert isinstance(a, cls),`a` assert_equal(str(a),'INTENT(IN)') assert_equal(repr(a),"Intent_Attr_Spec('INTENT', Intent_Spec('IN'))") -class TestEntityDecl(NumpyTestCase): # 504 +class TestEntityDecl(TestCase): # 504 - def check_simple(self): + def test_simple(self): cls = Entity_Decl a = cls('a(1)') assert isinstance(a, cls),`a` @@ -929,9 +929,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'a(1)*(3) = 2') -class TestAccessSpec(NumpyTestCase): # R508 +class TestAccessSpec(TestCase): # R508 - def check_simple(self): + def test_simple(self): cls = Access_Spec a = cls('private') assert isinstance(a, cls),`a` @@ -942,9 +942,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'PUBLIC') -class TestLanguageBindingSpec(NumpyTestCase): # R509 +class TestLanguageBindingSpec(TestCase): # R509 - def check_simple(self): + def test_simple(self): cls = Language_Binding_Spec a = cls('bind(c)') assert isinstance(a, cls),`a` @@ -955,9 +955,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'BIND(C, NAME = "hey")') -class TestExplicitShapeSpec(NumpyTestCase): # R511 +class TestExplicitShapeSpec(TestCase): # R511 - def check_simple(self): + def test_simple(self): cls = Explicit_Shape_Spec a = cls('a:b') assert isinstance(a, cls),`a` @@ -968,9 +968,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'a') -class TestUpperBound(NumpyTestCase): # R513 +class TestUpperBound(TestCase): # R513 - def check_simple(self): + def test_simple(self): cls = Upper_Bound a = cls('a') assert isinstance(a, Name),`a` @@ -978,9 +978,9 @@ self.assertRaises(NoMatchError,cls,'*') -class TestAssumedShapeSpec(NumpyTestCase): # R514 +class 
TestAssumedShapeSpec(TestCase): # R514 - def check_simple(self): + def test_simple(self): cls = Assumed_Shape_Spec a = cls(':') assert isinstance(a, cls),`a` @@ -991,9 +991,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'a :') -class TestDeferredShapeSpec(NumpyTestCase): # R515 +class TestDeferredShapeSpec(TestCase): # R515 - def check_simple(self): + def test_simple(self): cls = Deferred_Shape_Spec a = cls(':') assert isinstance(a, cls),`a` @@ -1001,9 +1001,9 @@ assert_equal(repr(a),'Deferred_Shape_Spec(None, None)') -class TestAssumedSizeSpec(NumpyTestCase): # R516 +class TestAssumedSizeSpec(TestCase): # R516 - def check_simple(self): + def test_simple(self): cls = Assumed_Size_Spec a = cls('*') assert isinstance(a, cls),`a` @@ -1022,9 +1022,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'a : b, 1 : *') -class TestAccessStmt(NumpyTestCase): # R518 +class TestAccessStmt(TestCase): # R518 - def check_simple(self): + def test_simple(self): cls = Access_Stmt a = cls('private') assert isinstance(a, cls),`a` @@ -1039,9 +1039,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'PUBLIC :: a') -class TestParameterStmt(NumpyTestCase): # R538 +class TestParameterStmt(TestCase): # R538 - def check_simple(self): + def test_simple(self): cls = Parameter_Stmt a = cls('parameter(a=1)') assert isinstance(a, cls),`a` @@ -1056,18 +1056,18 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'PARAMETER(ONE = 1.0D+0, ZERO = 0.0D+0)') -class TestNamedConstantDef(NumpyTestCase): # R539 +class TestNamedConstantDef(TestCase): # R539 - def check_simple(self): + def test_simple(self): cls = Named_Constant_Def a = cls('a=1') assert isinstance(a, cls),`a` assert_equal(str(a),'a = 1') assert_equal(repr(a),"Named_Constant_Def(Name('a'), Int_Literal_Constant('1', None))") -class TestPointerDecl(NumpyTestCase): # R541 +class TestPointerDecl(TestCase): # R541 - def check_simple(self): + def test_simple(self): cls = Pointer_Decl a = cls('a(:)') assert isinstance(a, cls),`a` @@ -1078,9 +1078,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'a(:, :)') -class TestImplicitStmt(NumpyTestCase): # R549 +class TestImplicitStmt(TestCase): # R549 - def check_simple(self): + def test_simple(self): cls = Implicit_Stmt a = cls('implicitnone') assert isinstance(a, cls),`a` @@ -1091,9 +1091,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'IMPLICIT REAL(A - D), DOUBLE PRECISION(R - T, X), TYPE(a)(Y - Z)') -class TestImplicitSpec(NumpyTestCase): # R550 +class TestImplicitSpec(TestCase): # R550 - def check_simple(self): + def test_simple(self): cls = Implicit_Spec a = cls('integer (a-z)') assert isinstance(a, cls),`a` @@ -1104,9 +1104,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'DOUBLE COMPLEX(R, D - G)') -class TestLetterSpec(NumpyTestCase): # R551 +class TestLetterSpec(TestCase): # R551 - def check_simple(self): + def test_simple(self): cls = Letter_Spec a = cls('a-z') assert isinstance(a, cls),`a` @@ -1117,9 +1117,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'D') -class TestEquivalenceStmt(NumpyTestCase): # R554 +class TestEquivalenceStmt(TestCase): # R554 - def check_simple(self): + def test_simple(self): cls = Equivalence_Stmt a = cls('equivalence (a, b ,z)') assert isinstance(a, cls),`a` @@ -1130,9 +1130,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'EQUIVALENCE(a, b, z), (b, l)') -class TestCommonStmt(NumpyTestCase): # R557 +class TestCommonStmt(TestCase): # R557 - def check_simple(self): + def test_simple(self): cls = Common_Stmt a = cls('common a') assert isinstance(a, 
cls),`a` @@ -1151,9 +1151,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'COMMON /name/ a, b(4, 5) // c /ljuks/ g(2)') -class TestCommonBlockObject(NumpyTestCase): # R558 +class TestCommonBlockObject(TestCase): # R558 - def check_simple(self): + def test_simple(self): cls = Common_Block_Object a = cls('a(2)') assert isinstance(a, cls),`a` @@ -1169,9 +1169,9 @@ ############################### SECTION 6 #################################### ############################################################################### -class TestSubstring(NumpyTestCase): # R609 +class TestSubstring(TestCase): # R609 - def check_simple(self): + def test_simple(self): cls = Substring a = cls('a(:)') assert isinstance(a, cls),`a` @@ -1184,9 +1184,9 @@ assert_equal(repr(a),"Substring(Name('a'), Substring_Range(Int_Literal_Constant('1', None), Int_Literal_Constant('2', None)))") -class TestSubstringRange(NumpyTestCase): # R611 +class TestSubstringRange(TestCase): # R611 - def check_simple(self): + def test_simple(self): cls = Substring_Range a = cls(':') assert isinstance(a, cls),`a` @@ -1215,9 +1215,9 @@ assert_equal(str(a),': b') -class TestDataRef(NumpyTestCase): # R612 +class TestDataRef(TestCase): # R612 - def check_data_ref(self): + def test_data_ref(self): cls = Data_Ref a = cls('a%b') assert isinstance(a,cls),`a` @@ -1228,17 +1228,17 @@ assert isinstance(a,Name),`a` assert_equal(str(a),'a') -class TestPartRef(NumpyTestCase): # R613 +class TestPartRef(TestCase): # R613 - def check_part_ref(self): + def test_part_ref(self): cls = Part_Ref a = cls('a') assert isinstance(a, Name),`a` assert_equal(str(a),'a') -class TestTypeParamInquiry(NumpyTestCase): # R615 +class TestTypeParamInquiry(TestCase): # R615 - def check_simple(self): + def test_simple(self): cls = Type_Param_Inquiry a = cls('a % b') assert isinstance(a,cls),`a` @@ -1246,9 +1246,9 @@ assert_equal(repr(a),"Type_Param_Inquiry(Name('a'), '%', Name('b'))") -class TestArraySection(NumpyTestCase): # R617 +class TestArraySection(TestCase): # R617 - def check_array_section(self): + def test_array_section(self): cls = Array_Section a = cls('a(:)') assert isinstance(a,cls),`a` @@ -1260,9 +1260,9 @@ assert_equal(str(a),'a(2 :)') -class TestSectionSubscript(NumpyTestCase): # R619 +class TestSectionSubscript(TestCase): # R619 - def check_simple(self): + def test_simple(self): cls = Section_Subscript a = cls('1:2') @@ -1273,9 +1273,9 @@ assert isinstance(a, Name),`a` assert_equal(str(a),'zzz') -class TestSectionSubscriptList(NumpyTestCase): # R619-list +class TestSectionSubscriptList(TestCase): # R619-list - def check_simple(self): + def test_simple(self): cls = Section_Subscript_List a = cls('a,2') assert isinstance(a,cls),`a` @@ -1290,9 +1290,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),': : 1, 3') -class TestSubscriptTriplet(NumpyTestCase): # R620 +class TestSubscriptTriplet(TestCase): # R620 - def check_simple(self): + def test_simple(self): cls = Subscript_Triplet a = cls('a:b') assert isinstance(a,cls),`a` @@ -1319,18 +1319,18 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'a + 1 :') -class TestAllocOpt(NumpyTestCase): # R624 +class TestAllocOpt(TestCase): # R624 - def check_simple(self): + def test_simple(self): cls = Alloc_Opt a = cls('stat=a') assert isinstance(a, cls),`a` assert_equal(str(a),'STAT = a') assert_equal(repr(a),"Alloc_Opt('STAT', Name('a'))") -class TestNullifyStmt(NumpyTestCase): # R633 +class TestNullifyStmt(TestCase): # R633 - def check_simple(self): + def test_simple(self): cls = Nullify_Stmt a = cls('nullify 
(a)') assert isinstance(a, cls),`a` @@ -1345,9 +1345,9 @@ ############################### SECTION 7 #################################### ############################################################################### -class TestPrimary(NumpyTestCase): # R701 +class TestPrimary(TestCase): # R701 - def check_simple(self): + def test_simple(self): cls = Primary a = cls('a') assert isinstance(a,Name),`a` @@ -1401,9 +1401,9 @@ assert isinstance(a,Real_Literal_Constant),`a` assert_equal(str(a),'0.0E-1') -class TestParenthesis(NumpyTestCase): # R701.h +class TestParenthesis(TestCase): # R701.h - def check_simple(self): + def test_simple(self): cls = Parenthesis a = cls('(a)') assert isinstance(a,cls),`a` @@ -1422,9 +1422,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'(a + (a + c))') -class TestLevel1Expr(NumpyTestCase): # R702 +class TestLevel1Expr(TestCase): # R702 - def check_simple(self): + def test_simple(self): cls = Level_1_Expr a = cls('.hey. a') assert isinstance(a,cls),`a` @@ -1433,9 +1433,9 @@ self.assertRaises(NoMatchError,cls,'.not. a') -class TestMultOperand(NumpyTestCase): # R704 +class TestMultOperand(TestCase): # R704 - def check_simple(self): + def test_simple(self): cls = Mult_Operand a = cls('a**b') assert isinstance(a,cls),`a` @@ -1454,9 +1454,9 @@ assert isinstance(a,Real_Literal_Constant),`a` assert_equal(str(a),'0.0E-1') -class TestAddOperand(NumpyTestCase): # R705 +class TestAddOperand(TestCase): # R705 - def check_simple(self): + def test_simple(self): cls = Add_Operand a = cls('a*b') assert isinstance(a,cls),`a` @@ -1475,9 +1475,9 @@ assert isinstance(a,Real_Literal_Constant),`a` assert_equal(str(a),'0.0E-1') -class TestLevel2Expr(NumpyTestCase): # R706 +class TestLevel2Expr(TestCase): # R706 - def check_simple(self): + def test_simple(self): cls = Level_2_Expr a = cls('a+b') assert isinstance(a,cls),`a` @@ -1509,9 +1509,9 @@ assert_equal(str(a),'0.0E-1') -class TestLevel2UnaryExpr(NumpyTestCase): +class TestLevel2UnaryExpr(TestCase): - def check_simple(self): + def test_simple(self): cls = Level_2_Unary_Expr a = cls('+a') assert isinstance(a,cls),`a` @@ -1531,9 +1531,9 @@ assert_equal(str(a),'0.0E-1') -class TestLevel3Expr(NumpyTestCase): # R710 +class TestLevel3Expr(TestCase): # R710 - def check_simple(self): + def test_simple(self): cls = Level_3_Expr a = cls('a//b') assert isinstance(a,cls),`a` @@ -1544,9 +1544,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'"a" // "b"') -class TestLevel4Expr(NumpyTestCase): # R712 +class TestLevel4Expr(TestCase): # R712 - def check_simple(self): + def test_simple(self): cls = Level_4_Expr a = cls('a.eq.b') assert isinstance(a,cls),`a` @@ -1593,18 +1593,18 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'a > b') -class TestAndOperand(NumpyTestCase): # R714 +class TestAndOperand(TestCase): # R714 - def check_simple(self): + def test_simple(self): cls = And_Operand a = cls('.not.a') assert isinstance(a,cls),`a` assert_equal(str(a),'.NOT. 
a') assert_equal(repr(a),"And_Operand('.NOT.', Name('a'))") -class TestOrOperand(NumpyTestCase): # R715 +class TestOrOperand(TestCase): # R715 - def check_simple(self): + def test_simple(self): cls = Or_Operand a = cls('a.and.b') assert isinstance(a,cls),`a` @@ -1612,9 +1612,9 @@ assert_equal(repr(a),"Or_Operand(Name('a'), '.AND.', Name('b'))") -class TestEquivOperand(NumpyTestCase): # R716 +class TestEquivOperand(TestCase): # R716 - def check_simple(self): + def test_simple(self): cls = Equiv_Operand a = cls('a.or.b') assert isinstance(a,cls),`a` @@ -1622,9 +1622,9 @@ assert_equal(repr(a),"Equiv_Operand(Name('a'), '.OR.', Name('b'))") -class TestLevel5Expr(NumpyTestCase): # R717 +class TestLevel5Expr(TestCase): # R717 - def check_simple(self): + def test_simple(self): cls = Level_5_Expr a = cls('a.eqv.b') assert isinstance(a,cls),`a` @@ -1639,9 +1639,9 @@ assert isinstance(a,Level_4_Expr),`a` assert_equal(str(a),'a .EQ. b') -class TestExpr(NumpyTestCase): # R722 +class TestExpr(TestCase): # R722 - def check_simple(self): + def test_simple(self): cls = Expr a = cls('a .op. b') assert isinstance(a,cls),`a` @@ -1661,9 +1661,9 @@ self.assertRaises(NoMatchError,Scalar_Int_Expr,'a,b') -class TestAssignmentStmt(NumpyTestCase): # R734 +class TestAssignmentStmt(TestCase): # R734 - def check_simple(self): + def test_simple(self): cls = Assignment_Stmt a = cls('a = b') assert isinstance(a, cls),`a` @@ -1678,27 +1678,27 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'a % c = b + c') -class TestProcComponentRef(NumpyTestCase): # R741 +class TestProcComponentRef(TestCase): # R741 - def check_proc_component_ref(self): + def test_proc_component_ref(self): cls = Proc_Component_Ref a = cls('a % b') assert isinstance(a,cls),`a` assert_equal(str(a),'a % b') assert_equal(repr(a),"Proc_Component_Ref(Name('a'), '%', Name('b'))") -class TestWhereStmt(NumpyTestCase): # R743 +class TestWhereStmt(TestCase): # R743 - def check_simple(self): + def test_simple(self): cls = Where_Stmt a = cls('where (a) c=2') assert isinstance(a,cls),`a` assert_equal(str(a),'WHERE (a) c = 2') assert_equal(repr(a),"Where_Stmt(Name('a'), Assignment_Stmt(Name('c'), '=', Int_Literal_Constant('2', None)))") -class TestWhereConstructStmt(NumpyTestCase): # R745 +class TestWhereConstructStmt(TestCase): # R745 - def check_simple(self): + def test_simple(self): cls = Where_Construct_Stmt a = cls('where (a)') assert isinstance(a,cls),`a` @@ -1710,9 +1710,9 @@ ############################### SECTION 8 #################################### ############################################################################### -class TestContinueStmt(NumpyTestCase): # R848 +class TestContinueStmt(TestCase): # R848 - def check_simple(self): + def test_simple(self): cls = Continue_Stmt a = cls('continue') assert isinstance(a, cls),`a` @@ -1723,9 +1723,9 @@ ############################### SECTION 9 #################################### ############################################################################### -class TestIoUnit(NumpyTestCase): # R901 +class TestIoUnit(TestCase): # R901 - def check_simple(self): + def test_simple(self): cls = Io_Unit a = cls('*') assert isinstance(a, cls),`a` @@ -1735,18 +1735,18 @@ assert isinstance(a, Name),`a` assert_equal(str(a),'a') -class TestWriteStmt(NumpyTestCase): # R911 +class TestWriteStmt(TestCase): # R911 - def check_simple(self): + def test_simple(self): cls = Write_Stmt a = cls('write (123)"hey"') assert isinstance(a, cls),`a` assert_equal(str(a),'WRITE(UNIT = 123) "hey"') 
assert_equal(repr(a),'Write_Stmt(Io_Control_Spec_List(\',\', (Io_Control_Spec(\'UNIT\', Int_Literal_Constant(\'123\', None)),)), Char_Literal_Constant(\'"hey"\', None))') -class TestPrintStmt(NumpyTestCase): # R912 +class TestPrintStmt(TestCase): # R912 - def check_simple(self): + def test_simple(self): cls = Print_Stmt a = cls('print 123') assert isinstance(a, cls),`a` @@ -1757,18 +1757,18 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'PRINT *, "a=", a') -class TestIoControlSpec(NumpyTestCase): # R913 +class TestIoControlSpec(TestCase): # R913 - def check_simple(self): + def test_simple(self): cls = Io_Control_Spec a = cls('end=123') assert isinstance(a, cls),`a` assert_equal(str(a),'END = 123') assert_equal(repr(a),"Io_Control_Spec('END', Label('123'))") -class TestIoControlSpecList(NumpyTestCase): # R913-list +class TestIoControlSpecList(TestCase): # R913-list - def check_simple(self): + def test_simple(self): cls = Io_Control_Spec_List a = cls('end=123') assert isinstance(a, cls),`a` @@ -1793,9 +1793,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'UNIT = 123, NML = a') -class TestFormat(NumpyTestCase): # R914 +class TestFormat(TestCase): # R914 - def check_simple(self): + def test_simple(self): cls = Format a = cls('*') assert isinstance(a, cls),`a` @@ -1810,17 +1810,17 @@ assert isinstance(a, Label),`a` assert_equal(str(a),'123') -class TestWaitStmt(NumpyTestCase): # R921 +class TestWaitStmt(TestCase): # R921 - def check_simple(self): + def test_simple(self): cls = Wait_Stmt a = cls('wait (123)') assert isinstance(a, cls),`a` assert_equal(str(a),'WAIT(UNIT = 123)') -class TestWaitSpec(NumpyTestCase): # R922 +class TestWaitSpec(TestCase): # R922 - def check_simple(self): + def test_simple(self): cls = Wait_Spec a = cls('123') assert isinstance(a, cls),`a` @@ -1840,9 +1840,9 @@ ############################### SECTION 11 #################################### ############################################################################### -class TestUseStmt(NumpyTestCase): # R1109 +class TestUseStmt(TestCase): # R1109 - def check_simple(self): + def test_simple(self): cls = Use_Stmt a = cls('use a') assert isinstance(a, cls),`a` @@ -1861,9 +1861,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'USE, INTRINSIC :: a, OPERATOR(.HEY.) 
=> OPERATOR(.HOO.), c => g') -class TestModuleNature(NumpyTestCase): # R1110 +class TestModuleNature(TestCase): # R1110 - def check_simple(self): + def test_simple(self): cls = Module_Nature a = cls('intrinsic') assert isinstance(a, cls),`a` @@ -1878,9 +1878,9 @@ ############################### SECTION 12 #################################### ############################################################################### -class TestFunctionReference(NumpyTestCase): # R1217 +class TestFunctionReference(TestCase): # R1217 - def check_simple(self): + def test_simple(self): cls = Function_Reference a = cls('f()') assert isinstance(a,cls),`a` @@ -1892,18 +1892,18 @@ assert_equal(str(a),'f(2, k = 1, a)') -class TestProcedureDesignator(NumpyTestCase): # R1219 +class TestProcedureDesignator(TestCase): # R1219 - def check_procedure_designator(self): + def test_procedure_designator(self): cls = Procedure_Designator a = cls('a%b') assert isinstance(a,cls),`a` assert_equal(str(a),'a % b') assert_equal(repr(a),"Procedure_Designator(Name('a'), '%', Name('b'))") -class TestActualArgSpec(NumpyTestCase): # R1220 +class TestActualArgSpec(TestCase): # R1220 - def check_simple(self): + def test_simple(self): cls = Actual_Arg_Spec a = cls('k=a') assert isinstance(a,cls),`a` @@ -1914,9 +1914,9 @@ assert isinstance(a,Name),`a` assert_equal(str(a),'a') -class TestActualArgSpecList(NumpyTestCase): +class TestActualArgSpecList(TestCase): - def check_simple(self): + def test_simple(self): cls = Actual_Arg_Spec_List a = cls('a,b') assert isinstance(a,cls),`a` @@ -1935,18 +1935,18 @@ assert isinstance(a,Name),`a` assert_equal(str(a),'a') -class TestAltReturnSpec(NumpyTestCase): # R1222 +class TestAltReturnSpec(TestCase): # R1222 - def check_alt_return_spec(self): + def test_alt_return_spec(self): cls = Alt_Return_Spec a = cls('* 123') assert isinstance(a,cls),`a` assert_equal(str(a),'*123') assert_equal(repr(a),"Alt_Return_Spec(Label('123'))") -class TestPrefix(NumpyTestCase): # R1227 +class TestPrefix(TestCase): # R1227 - def check_simple(self): + def test_simple(self): cls = Prefix a = cls('pure recursive') assert isinstance(a, cls),`a` @@ -1957,9 +1957,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'INTEGER*2 PURE') -class TestPrefixSpec(NumpyTestCase): # R1228 +class TestPrefixSpec(TestCase): # R1228 - def check_simple(self): + def test_simple(self): cls = Prefix_Spec a = cls('pure') assert isinstance(a, cls),`a` @@ -1978,9 +1978,9 @@ assert isinstance(a, Intrinsic_Type_Spec),`a` assert_equal(str(a),'INTEGER*2') -class TestSubroutineSubprogram(NumpyTestCase): # R1231 +class TestSubroutineSubprogram(TestCase): # R1231 - def check_simple(self): + def test_simple(self): from api import get_reader reader = get_reader('''\ subroutine foo @@ -2000,9 +2000,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'SUBROUTINE foo\n INTEGER :: a\nEND SUBROUTINE foo') -class TestSubroutineStmt(NumpyTestCase): # R1232 +class TestSubroutineStmt(TestCase): # R1232 - def check_simple(self): + def test_simple(self): cls = Subroutine_Stmt a = cls('subroutine foo') assert isinstance(a, cls),`a` @@ -2021,9 +2021,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'SUBROUTINE foo BIND(C)') -class TestEndSubroutineStmt(NumpyTestCase): # R1234 +class TestEndSubroutineStmt(TestCase): # R1234 - def check_simple(self): + def test_simple(self): cls = End_Subroutine_Stmt a = cls('end subroutine foo') assert isinstance(a, cls),`a` @@ -2038,18 +2038,18 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'END SUBROUTINE') -class 
TestReturnStmt(NumpyTestCase): # R1236 +class TestReturnStmt(TestCase): # R1236 - def check_simple(self): + def test_simple(self): cls = Return_Stmt a = cls('return') assert isinstance(a, cls),`a` assert_equal(str(a), 'RETURN') assert_equal(repr(a), 'Return_Stmt(None)') -class TestContains(NumpyTestCase): # R1237 +class TestContains(TestCase): # R1237 - def check_simple(self): + def test_simple(self): cls = Contains_Stmt a = cls('Contains') assert isinstance(a, cls),`a` @@ -2098,4 +2098,4 @@ print '-----' if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/f2py/lib/parser/test_parser.py =================================================================== --- trunk/numpy/f2py/lib/parser/test_parser.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/f2py/lib/parser/test_parser.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -34,25 +34,25 @@ return r raise ValueError, 'parsing %r with %s pattern failed' % (line, cls.__name__) -class TestStatements(NumpyTestCase): +class TestStatements(TestCase): - def check_assignment(self): + def test_assignment(self): assert_equal(parse(Assignment,'a=b'), 'a = b') assert_equal(parse(PointerAssignment,'a=>b'), 'a => b') assert_equal(parse(Assignment,'a (2)=b(n,m)'), 'a(2) = b(n,m)') assert_equal(parse(Assignment,'a % 2(2,4)=b(a(i))'), 'a%2(2,4) = b(a(i))') - def check_assign(self): + def test_assign(self): assert_equal(parse(Assign,'assign 10 to a'),'ASSIGN 10 TO a') - def check_call(self): + def test_call(self): assert_equal(parse(Call,'call a'),'CALL a') assert_equal(parse(Call,'call a()'),'CALL a') assert_equal(parse(Call,'call a(1)'),'CALL a(1)') assert_equal(parse(Call,'call a(1,2)'),'CALL a(1, 2)') assert_equal(parse(Call,'call a % 2 ( n , a+1 )'),'CALL a % 2(n, a+1)') - def check_goto(self): + def test_goto(self): assert_equal(parse(Goto,'go to 19'),'GO TO 19') assert_equal(parse(Goto,'goto 19'),'GO TO 19') assert_equal(parse(ComputedGoto,'goto (1, 2 ,3) a+b(2)'), @@ -63,29 +63,29 @@ assert_equal(parse(AssignedGoto,'goto a ( 1 )'),'GO TO a (1)') assert_equal(parse(AssignedGoto,'goto a ( 1 ,2)'),'GO TO a (1, 2)') - def check_continue(self): + def test_continue(self): assert_equal(parse(Continue,'continue'),'CONTINUE') - def check_return(self): + def test_return(self): assert_equal(parse(Return,'return'),'RETURN') assert_equal(parse(Return,'return a'),'RETURN a') assert_equal(parse(Return,'return a+1'),'RETURN a+1') assert_equal(parse(Return,'return a(c, a)'),'RETURN a(c, a)') - def check_stop(self): + def test_stop(self): assert_equal(parse(Stop,'stop'),'STOP') assert_equal(parse(Stop,'stop 1'),'STOP 1') assert_equal(parse(Stop,'stop "a"'),'STOP "a"') assert_equal(parse(Stop,'stop "a b"'),'STOP "a b"') - def check_print(self): + def test_print(self): assert_equal(parse(Print, 'print*'),'PRINT *') assert_equal(parse(Print, 'print "a b( c )"'),'PRINT "a b( c )"') assert_equal(parse(Print, 'print 12, a'),'PRINT 12, a') assert_equal(parse(Print, 'print 12, a , b'),'PRINT 12, a, b') assert_equal(parse(Print, 'print 12, a(c,1) , b'),'PRINT 12, a(c,1), b') - def check_read(self): + def test_read(self): assert_equal(parse(Read, 'read ( 10 )'),'READ (10)') assert_equal(parse(Read, 'read ( 10 ) a '),'READ (10) a') assert_equal(parse(Read, 'read ( 10 ) a , b'),'READ (10) a, b') @@ -98,44 +98,44 @@ assert_equal(parse(Read, 'read * , a , b'),'READ *, a, b') assert_equal(parse(Read, 'read ( unit =10 )'),'READ (UNIT = 10)') - def check_write(self): + def test_write(self): assert_equal(parse(Write, 'write ( 10 )'),'WRITE 
(10)') assert_equal(parse(Write, 'write ( 10 , a )'),'WRITE (10, a)') assert_equal(parse(Write, 'write ( 10 ) b'),'WRITE (10) b') assert_equal(parse(Write, 'write ( 10 ) a(1) , b+2'),'WRITE (10) a(1), b+2') assert_equal(parse(Write, 'write ( unit=10 )'),'WRITE (UNIT = 10)') - def check_flush(self): + def test_flush(self): assert_equal(parse(Flush, 'flush 10'),'FLUSH (10)') assert_equal(parse(Flush, 'flush (10)'),'FLUSH (10)') assert_equal(parse(Flush, 'flush (UNIT = 10)'),'FLUSH (UNIT = 10)') assert_equal(parse(Flush, 'flush (10, err= 23)'),'FLUSH (10, ERR = 23)') - def check_wait(self): + def test_wait(self): assert_equal(parse(Wait, 'wait(10)'),'WAIT (10)') assert_equal(parse(Wait, 'wait(10,err=129)'),'WAIT (10, ERR = 129)') - def check_contains(self): + def test_contains(self): assert_equal(parse(Contains, 'contains'),'CONTAINS') - def check_allocate(self): + def test_allocate(self): assert_equal(parse(Allocate, 'allocate (a)'), 'ALLOCATE (a)') assert_equal(parse(Allocate, \ 'allocate (a, stat=b)'), 'ALLOCATE (a, STAT = b)') assert_equal(parse(Allocate, 'allocate (a,b(:1))'), 'ALLOCATE (a, b(:1))') assert_equal(parse(Allocate, \ 'allocate (real(8)::a)'), 'ALLOCATE (REAL(KIND=8) :: a)') - def check_deallocate(self): + def test_deallocate(self): assert_equal(parse(Deallocate, 'deallocate (a)'), 'DEALLOCATE (a)') assert_equal(parse(Deallocate, 'deallocate (a, stat=b)'), 'DEALLOCATE (a, STAT = b)') - def check_moduleprocedure(self): + def test_moduleprocedure(self): assert_equal(parse(ModuleProcedure,\ 'ModuleProcedure a'), 'MODULE PROCEDURE a') assert_equal(parse(ModuleProcedure,\ 'module procedure a , b'), 'MODULE PROCEDURE a, b') - def check_access(self): + def test_access(self): assert_equal(parse(Public,'Public'),'PUBLIC') assert_equal(parse(Public,'public a'),'PUBLIC a') assert_equal(parse(Public,'public :: a'),'PUBLIC a') @@ -144,45 +144,45 @@ assert_equal(parse(Private,'private'),'PRIVATE') assert_equal(parse(Private,'private :: a'),'PRIVATE a') - def check_close(self): + def test_close(self): assert_equal(parse(Close,'close (12)'),'CLOSE (12)') assert_equal(parse(Close,'close (12, err=99)'),'CLOSE (12, ERR = 99)') assert_equal(parse(Close,'close (12, status = a(1,2))'),'CLOSE (12, STATUS = a(1,2))') - def check_cycle(self): + def test_cycle(self): assert_equal(parse(Cycle,'cycle'),'CYCLE') assert_equal(parse(Cycle,'cycle ab'),'CYCLE ab') - def check_rewind(self): + def test_rewind(self): assert_equal(parse(Rewind,'rewind 1'),'REWIND (1)') assert_equal(parse(Rewind,'rewind (1)'),'REWIND (1)') assert_equal(parse(Rewind,'rewind (1, err = 123)'),'REWIND (1, ERR = 123)') - def check_backspace(self): + def test_backspace(self): assert_equal(parse(Backspace,'backspace 1'),'BACKSPACE (1)') assert_equal(parse(Backspace,'backspace (1)'),'BACKSPACE (1)') assert_equal(parse(Backspace,'backspace (1, err = 123)'),'BACKSPACE (1, ERR = 123)') - def check_endfile(self): + def test_endfile(self): assert_equal(parse(Endfile,'endfile 1'),'ENDFILE (1)') assert_equal(parse(Endfile,'endfile (1)'),'ENDFILE (1)') assert_equal(parse(Endfile,'endfile (1, err = 123)'),'ENDFILE (1, ERR = 123)') - def check_open(self): + def test_open(self): assert_equal(parse(Open,'open (1)'),'OPEN (1)') assert_equal(parse(Open,'open (1, err = 123)'),'OPEN (1, ERR = 123)') - def check_format(self): + def test_format(self): assert_equal(parse(Format,'1: format ()'),'1: FORMAT ()') assert_equal(parse(Format,'199 format (1)'),'199: FORMAT (1)') assert_equal(parse(Format,'2 format (1 , SS)'),'2: FORMAT (1, ss)') - def 
check_save(self): + def test_save(self): assert_equal(parse(Save,'save'), 'SAVE') assert_equal(parse(Save,'save :: a'), 'SAVE a') assert_equal(parse(Save,'save a,b'), 'SAVE a, b') - def check_data(self): + def test_data(self): assert_equal(parse(Data,'data a /b/'), 'DATA a / b /') assert_equal(parse(Data,'data a , c /b/'), 'DATA a, c / b /') assert_equal(parse(Data,'data a /b ,c/'), 'DATA a / b, c /') @@ -190,11 +190,11 @@ assert_equal(parse(Data,'data a(1,2) /b/'), 'DATA a(1,2) / b /') assert_equal(parse(Data,'data a /b, c(1)/'), 'DATA a / b, c(1) /') - def check_nullify(self): + def test_nullify(self): assert_equal(parse(Nullify,'nullify(a)'),'NULLIFY (a)') assert_equal(parse(Nullify,'nullify(a ,b)'),'NULLIFY (a, b)') - def check_use(self): + def test_use(self): assert_equal(parse(Use, 'use a'), 'USE a') assert_equal(parse(Use, 'use :: a'), 'USE a') assert_equal(parse(Use, 'use, intrinsic:: a'), 'USE INTRINSIC :: a') @@ -205,79 +205,79 @@ 'use :: a , only: operator(+) , b'),\ 'USE a, ONLY: operator(+), b') - def check_exit(self): + def test_exit(self): assert_equal(parse(Exit,'exit'),'EXIT') assert_equal(parse(Exit,'exit ab'),'EXIT ab') - def check_parameter(self): + def test_parameter(self): assert_equal(parse(Parameter,'parameter (a = b(1,2))'), 'PARAMETER (a = b(1,2))') assert_equal(parse(Parameter,'parameter (a = b(1,2) , b=1)'), 'PARAMETER (a = b(1,2), b=1)') - def check_equivalence(self): + def test_equivalence(self): assert_equal(parse(Equivalence,'equivalence (a , b)'),'EQUIVALENCE (a, b)') assert_equal(parse(Equivalence,'equivalence (a , b) , ( c, d(1) , g )'), 'EQUIVALENCE (a, b), (c, d(1), g)') - def check_dimension(self): + def test_dimension(self): assert_equal(parse(Dimension,'dimension a(b)'),'DIMENSION a(b)') assert_equal(parse(Dimension,'dimension::a(b)'),'DIMENSION a(b)') assert_equal(parse(Dimension,'dimension a(b) , c(d)'),'DIMENSION a(b), c(d)') assert_equal(parse(Dimension,'dimension a(b,c)'),'DIMENSION a(b,c)') - def check_target(self): + def test_target(self): assert_equal(parse(Target,'target a(b)'),'TARGET a(b)') assert_equal(parse(Target,'target::a(b)'),'TARGET a(b)') assert_equal(parse(Target,'target a(b) , c(d)'),'TARGET a(b), c(d)') assert_equal(parse(Target,'target a(b,c)'),'TARGET a(b,c)') - def check_pointer(self): + def test_pointer(self): assert_equal(parse(Pointer,'pointer a=b'),'POINTER a=b') assert_equal(parse(Pointer,'pointer :: a=b'),'POINTER a=b') assert_equal(parse(Pointer,'pointer a=b, c=d(1,2)'),'POINTER a=b, c=d(1,2)') - def check_protected(self): + def test_protected(self): assert_equal(parse(Protected,'protected a'),'PROTECTED a') assert_equal(parse(Protected,'protected::a'),'PROTECTED a') assert_equal(parse(Protected,'protected a , b'),'PROTECTED a, b') - def check_volatile(self): + def test_volatile(self): assert_equal(parse(Volatile,'volatile a'),'VOLATILE a') assert_equal(parse(Volatile,'volatile::a'),'VOLATILE a') assert_equal(parse(Volatile,'volatile a , b'),'VOLATILE a, b') - def check_value(self): + def test_value(self): assert_equal(parse(Value,'value a'),'VALUE a') assert_equal(parse(Value,'value::a'),'VALUE a') assert_equal(parse(Value,'value a , b'),'VALUE a, b') - def check_arithmeticif(self): + def test_arithmeticif(self): assert_equal(parse(ArithmeticIf,'if (a) 1,2,3'),'IF (a) 1, 2, 3') assert_equal(parse(ArithmeticIf,'if (a(1)) 1,2,3'),'IF (a(1)) 1, 2, 3') assert_equal(parse(ArithmeticIf,'if (a(1,2)) 1,2,3'),'IF (a(1,2)) 1, 2, 3') - def check_intrinsic(self): + def test_intrinsic(self): assert_equal(parse(Intrinsic,'intrinsic 
a'),'INTRINSIC a') assert_equal(parse(Intrinsic,'intrinsic::a'),'INTRINSIC a') assert_equal(parse(Intrinsic,'intrinsic a , b'),'INTRINSIC a, b') - def check_inquire(self): + def test_inquire(self): assert_equal(parse(Inquire, 'inquire (1)'),'INQUIRE (1)') assert_equal(parse(Inquire, 'inquire (1, err=123)'),'INQUIRE (1, ERR = 123)') assert_equal(parse(Inquire, 'inquire (iolength=a) b'),'INQUIRE (IOLENGTH = a) b') assert_equal(parse(Inquire, 'inquire (iolength=a) b ,c(1,2)'), 'INQUIRE (IOLENGTH = a) b, c(1,2)') - def check_sequence(self): + def test_sequence(self): assert_equal(parse(Sequence, 'sequence'),'SEQUENCE') - def check_external(self): + def test_external(self): assert_equal(parse(External,'external a'),'EXTERNAL a') assert_equal(parse(External,'external::a'),'EXTERNAL a') assert_equal(parse(External,'external a , b'),'EXTERNAL a, b') - def check_common(self): + def test_common(self): assert_equal(parse(Common, 'common a'),'COMMON a') assert_equal(parse(Common, 'common a , b'),'COMMON a, b') assert_equal(parse(Common, 'common a , b(1,2)'),'COMMON a, b(1,2)') @@ -289,18 +289,18 @@ assert_equal(parse(Common, 'common / name/ a, /foo/ c(1) ,d'), 'COMMON / name / a / foo / c(1), d') - def check_optional(self): + def test_optional(self): assert_equal(parse(Optional,'optional a'),'OPTIONAL a') assert_equal(parse(Optional,'optional::a'),'OPTIONAL a') assert_equal(parse(Optional,'optional a , b'),'OPTIONAL a, b') - def check_intent(self): + def test_intent(self): assert_equal(parse(Intent,'intent (in) a'),'INTENT (IN) a') assert_equal(parse(Intent,'intent(in)::a'),'INTENT (IN) a') assert_equal(parse(Intent,'intent(in) a , b'),'INTENT (IN) a, b') assert_equal(parse(Intent,'intent (in, out) a'),'INTENT (IN, OUT) a') - def check_entry(self): + def test_entry(self): assert_equal(parse(Entry,'entry a'), 'ENTRY a') assert_equal(parse(Entry,'entry a()'), 'ENTRY a') assert_equal(parse(Entry,'entry a(b)'), 'ENTRY a (b)') @@ -315,13 +315,13 @@ assert_equal(parse(Entry,'entry a(b,*) result (g)'), 'ENTRY a (b, *) RESULT (g)') - def check_import(self): + def test_import(self): assert_equal(parse(Import,'import'),'IMPORT') assert_equal(parse(Import,'import a'),'IMPORT a') assert_equal(parse(Import,'import::a'),'IMPORT a') assert_equal(parse(Import,'import a , b'),'IMPORT a, b') - def check_forall(self): + def test_forall(self): assert_equal(parse(ForallStmt,'forall (i = 1:n(k,:) : 2) a(i) = i*i*b(i)'), 'FORALL (i = 1 : n(k,:) : 2) a(i) = i*i*b(i)') assert_equal(parse(ForallStmt,'forall (i=1:n,j=2:3) a(i) = b(i,i)'), @@ -329,7 +329,7 @@ assert_equal(parse(ForallStmt,'forall (i=1:n,j=2:3, 1+a(1,2)) a(i) = b(i,i)'), 'FORALL (i = 1 : n, j = 2 : 3, 1+a(1,2)) a(i) = b(i,i)') - def check_specificbinding(self): + def test_specificbinding(self): assert_equal(parse(SpecificBinding,'procedure a'),'PROCEDURE a') assert_equal(parse(SpecificBinding,'procedure :: a'),'PROCEDURE a') assert_equal(parse(SpecificBinding,'procedure , NOPASS :: a'),'PROCEDURE , NOPASS :: a') @@ -343,29 +343,29 @@ assert_equal(parse(SpecificBinding,'procedure(n),pass :: a =>c'), 'PROCEDURE (n) , PASS :: a => c') - def check_genericbinding(self): + def test_genericbinding(self): assert_equal(parse(GenericBinding,'generic :: a=>b'),'GENERIC :: a => b') assert_equal(parse(GenericBinding,'generic, public :: a=>b'),'GENERIC, PUBLIC :: a => b') assert_equal(parse(GenericBinding,'generic, public :: a(1,2)=>b ,c'), 'GENERIC, PUBLIC :: a(1,2) => b, c') - def check_finalbinding(self): + def test_finalbinding(self): assert_equal(parse(FinalBinding,'final 
a'),'FINAL a') assert_equal(parse(FinalBinding,'final::a'),'FINAL a') assert_equal(parse(FinalBinding,'final a , b'),'FINAL a, b') - def check_allocatable(self): + def test_allocatable(self): assert_equal(parse(Allocatable,'allocatable a'),'ALLOCATABLE a') assert_equal(parse(Allocatable,'allocatable :: a'),'ALLOCATABLE a') assert_equal(parse(Allocatable,'allocatable a (1,2)'),'ALLOCATABLE a (1,2)') assert_equal(parse(Allocatable,'allocatable a (1,2) ,b'),'ALLOCATABLE a (1,2), b') - def check_asynchronous(self): + def test_asynchronous(self): assert_equal(parse(Asynchronous,'asynchronous a'),'ASYNCHRONOUS a') assert_equal(parse(Asynchronous,'asynchronous::a'),'ASYNCHRONOUS a') assert_equal(parse(Asynchronous,'asynchronous a , b'),'ASYNCHRONOUS a, b') - def check_bind(self): + def test_bind(self): assert_equal(parse(Bind,'bind(c) a'),'BIND (C) a') assert_equal(parse(Bind,'bind(c) :: a'),'BIND (C) a') assert_equal(parse(Bind,'bind(c) a ,b'),'BIND (C) a, b') @@ -373,13 +373,13 @@ assert_equal(parse(Bind,'bind(c) /a/ ,b'),'BIND (C) / a /, b') assert_equal(parse(Bind,'bind(c,name="hey") a'),'BIND (C, NAME = "hey") a') - def check_else(self): + def test_else(self): assert_equal(parse(Else,'else'),'ELSE') assert_equal(parse(ElseIf,'else if (a) then'),'ELSE IF (a) THEN') assert_equal(parse(ElseIf,'else if (a.eq.b(1,2)) then'), 'ELSE IF (a.eq.b(1,2)) THEN') - def check_case(self): + def test_case(self): assert_equal(parse(Case,'case (1)'),'CASE ( 1 )') assert_equal(parse(Case,'case (1:)'),'CASE ( 1 : )') assert_equal(parse(Case,'case (:1)'),'CASE ( : 1 )') @@ -391,56 +391,56 @@ assert_equal(parse(Case,'case (a(1,:):)'),'CASE ( a(1,:) : )') assert_equal(parse(Case,'case default'),'CASE DEFAULT') - def check_where(self): + def test_where(self): assert_equal(parse(WhereStmt,'where (1) a=1'),'WHERE ( 1 ) a = 1') assert_equal(parse(WhereStmt,'where (a(1,2)) a=1'),'WHERE ( a(1,2) ) a = 1') - def check_elsewhere(self): + def test_elsewhere(self): assert_equal(parse(ElseWhere,'else where'),'ELSE WHERE') assert_equal(parse(ElseWhere,'elsewhere (1)'),'ELSE WHERE ( 1 )') assert_equal(parse(ElseWhere,'elsewhere(a(1,2))'),'ELSE WHERE ( a(1,2) )') - def check_enumerator(self): + def test_enumerator(self): assert_equal(parse(Enumerator,'enumerator a'), 'ENUMERATOR a') assert_equal(parse(Enumerator,'enumerator:: a'), 'ENUMERATOR a') assert_equal(parse(Enumerator,'enumerator a,b'), 'ENUMERATOR a, b') assert_equal(parse(Enumerator,'enumerator a=1'), 'ENUMERATOR a=1') assert_equal(parse(Enumerator,'enumerator a=1 , b=c(1,2)'), 'ENUMERATOR a=1, b=c(1,2)') - def check_fortranname(self): + def test_fortranname(self): assert_equal(parse(FortranName,'fortranname a'),'FORTRANNAME a') - def check_threadsafe(self): + def test_threadsafe(self): assert_equal(parse(Threadsafe,'threadsafe'),'THREADSAFE') - def check_depend(self): + def test_depend(self): assert_equal(parse(Depend,'depend( a) b'), 'DEPEND ( a ) b') assert_equal(parse(Depend,'depend( a) ::b'), 'DEPEND ( a ) b') assert_equal(parse(Depend,'depend( a,c) b,e'), 'DEPEND ( a, c ) b, e') - def check_check(self): + def test_check(self): assert_equal(parse(Check,'check(1) a'), 'CHECK ( 1 ) a') assert_equal(parse(Check,'check(1) :: a'), 'CHECK ( 1 ) a') assert_equal(parse(Check,'check(b(1,2)) a'), 'CHECK ( b(1,2) ) a') assert_equal(parse(Check,'check(a>1) :: a'), 'CHECK ( a>1 ) a') - def check_callstatement(self): + def test_callstatement(self): assert_equal(parse(CallStatement,'callstatement (*func)()',isstrict=1), 'CALLSTATEMENT (*func)()') 
assert_equal(parse(CallStatement,'callstatement i=1;(*func)()',isstrict=1), 'CALLSTATEMENT i=1;(*func)()') - def check_callprotoargument(self): + def test_callprotoargument(self): assert_equal(parse(CallProtoArgument,'callprotoargument int(*), double'), 'CALLPROTOARGUMENT int(*), double') - def check_pause(self): + def test_pause(self): assert_equal(parse(Pause,'pause'),'PAUSE') assert_equal(parse(Pause,'pause 1'),'PAUSE 1') assert_equal(parse(Pause,'pause "hey"'),'PAUSE "hey"') assert_equal(parse(Pause,'pause "hey pa"'),'PAUSE "hey pa"') - def check_integer(self): + def test_integer(self): assert_equal(parse(Integer,'integer'),'INTEGER') assert_equal(parse(Integer,'integer*4'),'INTEGER*4') assert_equal(parse(Integer,'integer*4 a'),'INTEGER*4 a') @@ -460,7 +460,7 @@ assert_equal(parse(Integer,'integer(kind=2+2)'),'INTEGER(KIND=2+2)') assert_equal(parse(Integer,'integer(kind=f(4,5))'),'INTEGER(KIND=f(4,5))') - def check_character(self): + def test_character(self): assert_equal(parse(Character,'character'),'CHARACTER') assert_equal(parse(Character,'character*2'),'CHARACTER(LEN=2)') assert_equal(parse(Character,'character**'),'CHARACTER(LEN=*)') @@ -482,7 +482,7 @@ assert_equal(parse(Character,'character(len=3,kind=fA(1,2))'), 'CHARACTER(LEN=3, KIND=fa(1,2))') - def check_implicit(self): + def test_implicit(self): assert_equal(parse(Implicit,'implicit none'),'IMPLICIT NONE') assert_equal(parse(Implicit,'implicit'),'IMPLICIT NONE') assert_equal(parse(Implicit,'implicit integer (i-m)'), @@ -492,5 +492,6 @@ assert_equal(parse(Implicit,'implicit integer (i-m), real (z)'), 'IMPLICIT INTEGER ( i-m ), REAL ( z )') + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/f2py/lib/tests/test_derived_scalar.py =================================================================== --- trunk/numpy/f2py/lib/tests/test_derived_scalar.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/f2py/lib/tests/test_derived_scalar.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -42,9 +42,9 @@ from numpy import * -class TestM(NumpyTestCase): +class TestM(TestCase): - def check_foo_simple(self, level=1): + def test_foo_simple(self, level=1): a = m.myt(2) assert_equal(a.flag,2) assert isinstance(a,m.myt),`a` @@ -59,7 +59,7 @@ #s = m.foo((5,)) - def check_foo2_simple(self, level=1): + def test_foo2_simple(self, level=1): a = m.myt(2) assert_equal(a.flag,2) assert isinstance(a,m.myt),`a` @@ -71,4 +71,4 @@ if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/f2py/lib/tests/test_module_module.py =================================================================== --- trunk/numpy/f2py/lib/tests/test_module_module.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/f2py/lib/tests/test_module_module.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -51,11 +51,11 @@ from numpy import * -class TestM(NumpyTestCase): +class TestM(TestCase): - def check_foo_simple(self, level=1): + def test_foo_simple(self, level=1): foo = m.foo foo() if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/f2py/lib/tests/test_module_scalar.py =================================================================== --- trunk/numpy/f2py/lib/tests/test_module_scalar.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/f2py/lib/tests/test_module_scalar.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -40,19 +40,19 @@ from numpy import * -class TestM(NumpyTestCase): +class TestM(TestCase): - def check_foo_simple(self, level=1): + def 
test_foo_simple(self, level=1): foo = m.foo r = foo(2) assert isinstance(r,int32),`type(r)` assert_equal(r,3) - def check_foo2_simple(self, level=1): + def test_foo2_simple(self, level=1): foo2 = m.foo2 r = foo2(2) assert isinstance(r,int32),`type(r)` assert_equal(r,4) if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/f2py/lib/tests/test_scalar_function_in.py =================================================================== --- trunk/numpy/f2py/lib/tests/test_scalar_function_in.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/f2py/lib/tests/test_scalar_function_in.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -107,9 +107,9 @@ from numpy import * -class TestM(NumpyTestCase): +class TestM(TestCase): - def check_foo_integer1(self, level=1): + def test_foo_integer1(self, level=1): i = int8(2) e = int8(3) func = m.fooint1 @@ -144,7 +144,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_integer2(self, level=1): + def test_foo_integer2(self, level=1): i = int16(2) e = int16(3) func = m.fooint2 @@ -179,7 +179,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_integer4(self, level=1): + def test_foo_integer4(self, level=1): i = int32(2) e = int32(3) func = m.fooint4 @@ -214,7 +214,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_integer8(self, level=1): + def test_foo_integer8(self, level=1): i = int64(2) e = int64(3) func = m.fooint8 @@ -249,7 +249,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_real4(self, level=1): + def test_foo_real4(self, level=1): i = float32(2) e = float32(3) func = m.foofloat4 @@ -283,7 +283,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_real8(self, level=1): + def test_foo_real8(self, level=1): i = float64(2) e = float64(3) func = m.foofloat8 @@ -317,7 +317,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_complex8(self, level=1): + def test_foo_complex8(self, level=1): i = complex64(2) e = complex64(3) func = m.foocomplex8 @@ -358,7 +358,7 @@ self.assertRaises(TypeError,lambda :func([2,1,3])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_complex16(self, level=1): + def test_foo_complex16(self, level=1): i = complex128(2) e = complex128(3) func = m.foocomplex16 @@ -399,7 +399,7 @@ self.assertRaises(TypeError,lambda :func([2,1,3])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_bool1(self, level=1): + def test_foo_bool1(self, level=1): i = bool8(True) e = bool8(False) func = m.foobool1 @@ -419,7 +419,7 @@ assert isinstance(r,bool8),`type(r)` assert_equal(r,not e) - def check_foo_bool2(self, level=1): + def test_foo_bool2(self, level=1): i = bool8(True) e = bool8(False) func = m.foobool2 @@ -439,7 +439,7 @@ assert isinstance(r,bool8),`type(r)` assert_equal(r,not e) - def check_foo_bool4(self, level=1): + def test_foo_bool4(self, level=1): i = bool8(True) e = bool8(False) func = m.foobool4 @@ -459,7 +459,7 @@ assert isinstance(r,bool8),`type(r)` assert_equal(r,not e) - def check_foo_bool8(self, level=1): + def test_foo_bool8(self, level=1): i = bool8(True) e = bool8(False) func = m.foobool8 @@ -479,7 +479,7 @@ assert isinstance(r,bool8),`type(r)` assert_equal(r,not e) - def check_foo_string1(self, 
level=1): + def test_foo_string1(self, level=1): i = string0('a') e = string0('1') func = m.foostring1 @@ -497,7 +497,7 @@ assert isinstance(r,string0),`type(r)` assert_equal(r,e) - def check_foo_string5(self, level=1): + def test_foo_string5(self, level=1): i = string0('abcde') e = string0('12cde') func = m.foostring5 @@ -528,5 +528,6 @@ r = func('') assert_equal(r,'') + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/f2py/lib/tests/test_scalar_in_out.py =================================================================== --- trunk/numpy/f2py/lib/tests/test_scalar_in_out.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/f2py/lib/tests/test_scalar_in_out.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -104,9 +104,9 @@ from numpy import * -class TestM(NumpyTestCase): +class TestM(TestCase): - def check_foo_integer1(self, level=1): + def test_foo_integer1(self, level=1): i = int8(2) e = int8(3) func = m.fooint1 @@ -141,7 +141,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_integer2(self, level=1): + def test_foo_integer2(self, level=1): i = int16(2) e = int16(3) func = m.fooint2 @@ -176,7 +176,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_integer4(self, level=1): + def test_foo_integer4(self, level=1): i = int32(2) e = int32(3) func = m.fooint4 @@ -211,7 +211,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_integer8(self, level=1): + def test_foo_integer8(self, level=1): i = int64(2) e = int64(3) func = m.fooint8 @@ -246,7 +246,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_real4(self, level=1): + def test_foo_real4(self, level=1): i = float32(2) e = float32(3) func = m.foofloat4 @@ -280,7 +280,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_real8(self, level=1): + def test_foo_real8(self, level=1): i = float64(2) e = float64(3) func = m.foofloat8 @@ -314,7 +314,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_complex8(self, level=1): + def test_foo_complex8(self, level=1): i = complex64(2) e = complex64(3) func = m.foocomplex8 @@ -355,7 +355,7 @@ self.assertRaises(TypeError,lambda :func([2,1,3])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_complex16(self, level=1): + def test_foo_complex16(self, level=1): i = complex128(2) e = complex128(3) func = m.foocomplex16 @@ -396,7 +396,7 @@ self.assertRaises(TypeError,lambda :func([2,1,3])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_bool1(self, level=1): + def test_foo_bool1(self, level=1): i = bool8(True) e = bool8(False) func = m.foobool1 @@ -416,7 +416,7 @@ assert isinstance(r,bool8),`type(r)` assert_equal(r,not e) - def check_foo_bool2(self, level=1): + def test_foo_bool2(self, level=1): i = bool8(True) e = bool8(False) func = m.foobool2 @@ -436,7 +436,7 @@ assert isinstance(r,bool8),`type(r)` assert_equal(r,not e) - def check_foo_bool4(self, level=1): + def test_foo_bool4(self, level=1): i = bool8(True) e = bool8(False) func = m.foobool4 @@ -456,7 +456,7 @@ assert isinstance(r,bool8),`type(r)` assert_equal(r,not e) - def check_foo_bool8(self, level=1): + def test_foo_bool8(self, level=1): i = bool8(True) e = bool8(False) func = m.foobool8 @@ -476,7 +476,7 @@ 
assert isinstance(r,bool8),`type(r)` assert_equal(r,not e) - def check_foo_string1(self, level=1): + def test_foo_string1(self, level=1): i = string0('a') e = string0('1') func = m.foostring1 @@ -494,7 +494,7 @@ assert isinstance(r,string0),`type(r)` assert_equal(r,e) - def check_foo_string5(self, level=1): + def test_foo_string5(self, level=1): i = string0('abcde') e = string0('12cde') func = m.foostring5 @@ -516,7 +516,7 @@ assert isinstance(r,string0),`type(r)` assert_equal(r,'12] ') - def check_foo_string0(self, level=1): + def test_foo_string0(self, level=1): i = string0('abcde') e = string0('12cde') func = m.foostringstar @@ -525,5 +525,6 @@ r = func('') assert_equal(r,'') + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/f2py/tests/array_from_pyobj/tests/test_array_from_pyobj.py =================================================================== --- trunk/numpy/f2py/tests/array_from_pyobj/tests/test_array_from_pyobj.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/f2py/tests/array_from_pyobj/tests/test_array_from_pyobj.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -8,7 +8,7 @@ set_package_path() from array_from_pyobj import wrap -del sys.path[0] +restore_path() def flags_info(arr): flags = wrap.array_attrs(arr)[6] @@ -240,7 +240,7 @@ ################################################## class test_intent(unittest.TestCase): - def check_in_out(self): + def test_in_out(self): assert_equal(str(intent.in_.out),'intent(in,out)') assert intent.in_.c.is_intent('c') assert not intent.in_.c.is_intent_exact('c') @@ -251,11 +251,11 @@ class _test_shared_memory: num2seq = [1,2] num23seq = [[1,2,3],[4,5,6]] - def check_in_from_2seq(self): + def test_in_from_2seq(self): a = self.array([2],intent.in_,self.num2seq) assert not a.has_shared_memory() - def check_in_from_2casttype(self): + def test_in_from_2casttype(self): for t in self.type.cast_types(): obj = array(self.num2seq,dtype=t.dtype) a = self.array([len(self.num2seq)],intent.in_,obj) @@ -264,7 +264,7 @@ else: assert not a.has_shared_memory(),`t.dtype` - def check_inout_2seq(self): + def test_inout_2seq(self): obj = array(self.num2seq,dtype=self.type.dtype) a = self.array([len(self.num2seq)],intent.inout,obj) assert a.has_shared_memory() @@ -277,7 +277,7 @@ else: raise SystemError,'intent(inout) should have failed on sequence' - def check_f_inout_23seq(self): + def test_f_inout_23seq(self): obj = array(self.num23seq,dtype=self.type.dtype,fortran=1) shape = (len(self.num23seq),len(self.num23seq[0])) a = self.array(shape,intent.in_.inout,obj) @@ -293,31 +293,31 @@ else: raise SystemError,'intent(inout) should have failed on improper array' - def check_c_inout_23seq(self): + def test_c_inout_23seq(self): obj = array(self.num23seq,dtype=self.type.dtype) shape = (len(self.num23seq),len(self.num23seq[0])) a = self.array(shape,intent.in_.c.inout,obj) assert a.has_shared_memory() - def check_in_copy_from_2casttype(self): + def test_in_copy_from_2casttype(self): for t in self.type.cast_types(): obj = array(self.num2seq,dtype=t.dtype) a = self.array([len(self.num2seq)],intent.in_.copy,obj) assert not a.has_shared_memory(),`t.dtype` - def check_c_in_from_23seq(self): + def test_c_in_from_23seq(self): a = self.array([len(self.num23seq),len(self.num23seq[0])], intent.in_,self.num23seq) assert not a.has_shared_memory() - def check_in_from_23casttype(self): + def test_in_from_23casttype(self): for t in self.type.cast_types(): obj = array(self.num23seq,dtype=t.dtype) a = 
self.array([len(self.num23seq),len(self.num23seq[0])], intent.in_,obj) assert not a.has_shared_memory(),`t.dtype` - def check_f_in_from_23casttype(self): + def test_f_in_from_23casttype(self): for t in self.type.cast_types(): obj = array(self.num23seq,dtype=t.dtype,fortran=1) a = self.array([len(self.num23seq),len(self.num23seq[0])], @@ -327,7 +327,7 @@ else: assert not a.has_shared_memory(),`t.dtype` - def check_c_in_from_23casttype(self): + def test_c_in_from_23casttype(self): for t in self.type.cast_types(): obj = array(self.num23seq,dtype=t.dtype) a = self.array([len(self.num23seq),len(self.num23seq[0])], @@ -337,21 +337,21 @@ else: assert not a.has_shared_memory(),`t.dtype` - def check_f_copy_in_from_23casttype(self): + def test_f_copy_in_from_23casttype(self): for t in self.type.cast_types(): obj = array(self.num23seq,dtype=t.dtype,fortran=1) a = self.array([len(self.num23seq),len(self.num23seq[0])], intent.in_.copy,obj) assert not a.has_shared_memory(),`t.dtype` - def check_c_copy_in_from_23casttype(self): + def test_c_copy_in_from_23casttype(self): for t in self.type.cast_types(): obj = array(self.num23seq,dtype=t.dtype) a = self.array([len(self.num23seq),len(self.num23seq[0])], intent.in_.c.copy,obj) assert not a.has_shared_memory(),`t.dtype` - def check_in_cache_from_2casttype(self): + def test_in_cache_from_2casttype(self): for t in self.type.all_types(): if t.elsize != self.type.elsize: continue @@ -377,7 +377,7 @@ raise else: raise SystemError,'intent(cache) should have failed on multisegmented array' - def check_in_cache_from_2casttype_failure(self): + def test_in_cache_from_2casttype_failure(self): for t in self.type.all_types(): if t.elsize >= self.type.elsize: continue @@ -391,7 +391,7 @@ else: raise SystemError,'intent(cache) should have failed on smaller array' - def check_cache_hidden(self): + def test_cache_hidden(self): shape = (2,) a = self.array(shape,intent.cache.hide,None) assert a.arr.shape==shape @@ -409,7 +409,7 @@ else: raise SystemError,'intent(cache) should have failed on undefined dimensions' - def check_hidden(self): + def test_hidden(self): shape = (2,) a = self.array(shape,intent.hide,None) assert a.arr.shape==shape @@ -436,7 +436,7 @@ else: raise SystemError,'intent(hide) should have failed on undefined dimensions' - def check_optional_none(self): + def test_optional_none(self): shape = (2,) a = self.array(shape,intent.optional,None) assert a.arr.shape==shape @@ -454,14 +454,14 @@ assert a.arr_equal(a.arr,zeros(shape,dtype=self.type.dtype)) assert not a.arr.flags['FORTRAN'] and a.arr.flags['CONTIGUOUS'] - def check_optional_from_2seq(self): + def test_optional_from_2seq(self): obj = self.num2seq shape = (len(obj),) a = self.array(shape,intent.optional,obj) assert a.arr.shape==shape assert not a.has_shared_memory() - def check_optional_from_23seq(self): + def test_optional_from_23seq(self): obj = self.num23seq shape = (len(obj),len(obj[0])) a = self.array(shape,intent.optional,obj) @@ -472,7 +472,7 @@ assert a.arr.shape==shape assert not a.has_shared_memory() - def check_inplace(self): + def test_inplace(self): obj = array(self.num23seq,dtype=self.type.dtype) assert not obj.flags['FORTRAN'] and obj.flags['CONTIGUOUS'] shape = obj.shape @@ -484,7 +484,7 @@ assert obj.flags['FORTRAN'] # obj attributes are changed inplace! 
assert not obj.flags['CONTIGUOUS'] - def check_inplace_from_casttype(self): + def test_inplace_from_casttype(self): for t in self.type.cast_types(): if t is self.type: continue @@ -502,6 +502,7 @@ assert not obj.flags['CONTIGUOUS'] assert obj.dtype.type is self.type.dtype # obj type is changed inplace! + for t in Type._type_names: exec '''\ class test_%s_gen(unittest.TestCase, @@ -512,4 +513,4 @@ ''' % (t,t,t) if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/fft/__init__.py =================================================================== --- trunk/numpy/fft/__init__.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/fft/__init__.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -4,6 +4,6 @@ from fftpack import * from helper import * -def test(level=1, verbosity=1): - from numpy.testing import NumpyTest - return NumpyTest().test(level, verbosity) +from numpy.testing.pkgtester import Tester +test = Tester().test +bench = Tester().bench Modified: trunk/numpy/fft/tests/test_fftpack.py =================================================================== --- trunk/numpy/fft/tests/test_fftpack.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/fft/tests/test_fftpack.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -10,15 +10,17 @@ phase = np.arange(L).reshape(-1,1) * phase return np.sum(x*np.exp(phase),axis=1) -class TestFFTShift(NumpyTestCase): - def check_fft_n(self): +class TestFFTShift(TestCase): + def test_fft_n(self): self.failUnlessRaises(ValueError,np.fft.fft,[1,2,3],0) -class TestFFT1D(NumpyTestCase): - def check_basic(self): + +class TestFFT1D(TestCase): + def test_basic(self): rand = np.random.random x = rand(30) + 1j*rand(30) assert_array_almost_equal(fft1(x), np.fft.fft(x)) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/fft/tests/test_helper.py =================================================================== --- trunk/numpy/fft/tests/test_helper.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/fft/tests/test_helper.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -7,16 +7,15 @@ from numpy.testing import * set_package_path() from numpy.fft import fftshift,ifftshift,fftfreq -del sys.path[0] +restore_path() from numpy import pi def random(size): return rand(*size) -class TestFFTShift(NumpyTestCase): - - def check_definition(self): +class TestFFTShift(TestCase): + def test_definition(self): x = [0,1,2,3,4,-4,-3,-2,-1] y = [-4,-3,-2,-1,0,1,2,3,4] assert_array_almost_equal(fftshift(x),y) @@ -26,14 +25,14 @@ assert_array_almost_equal(fftshift(x),y) assert_array_almost_equal(ifftshift(y),x) - def check_inverse(self): + def test_inverse(self): for n in [1,4,9,100,211]: x = random((n,)) assert_array_almost_equal(ifftshift(fftshift(x)),x) -class TestFFTFreq(NumpyTestCase): - def check_definition(self): +class TestFFTFreq(TestCase): + def test_definition(self): x = [0,1,2,3,4,-4,-3,-2,-1] assert_array_almost_equal(9*fftfreq(9),x) assert_array_almost_equal(9*pi*fftfreq(9,pi),x) @@ -41,5 +40,6 @@ assert_array_almost_equal(10*fftfreq(10),x) assert_array_almost_equal(10*pi*fftfreq(10,pi),x) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/lib/__init__.py =================================================================== --- trunk/numpy/lib/__init__.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/lib/__init__.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -34,6 +34,7 @@ __all__ += io.__all__ __all__ += financial.__all__ -def test(level=1, 
verbosity=1): - from numpy.testing import NumpyTest - return NumpyTest().test(level, verbosity) +from numpy.testing.pkgtester import Tester +test = Tester().test +bench = Tester().bench + Modified: trunk/numpy/lib/tests/test__datasource.py =================================================================== --- trunk/numpy/lib/tests/test__datasource.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/lib/tests/test__datasource.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -67,7 +67,7 @@ def invalid_httpfile(): return http_fakefile -class TestDataSourceOpen(NumpyTestCase): +class TestDataSourceOpen(TestCase): def setUp(self): self.tmpdir = mkdtemp() self.ds = datasource.DataSource(self.tmpdir) @@ -127,7 +127,7 @@ self.assertEqual(magic_line, result) -class TestDataSourceExists(NumpyTestCase): +class TestDataSourceExists(TestCase): def setUp(self): self.tmpdir = mkdtemp() self.ds = datasource.DataSource(self.tmpdir) @@ -157,7 +157,7 @@ self.assertEqual(self.ds.exists(tmpfile), False) -class TestDataSourceAbspath(NumpyTestCase): +class TestDataSourceAbspath(TestCase): def setUp(self): self.tmpdir = os.path.abspath(mkdtemp()) self.ds = datasource.DataSource(self.tmpdir) @@ -222,7 +222,7 @@ os.sep = orig_os_sep -class TestRepositoryAbspath(NumpyTestCase): +class TestRepositoryAbspath(TestCase): def setUp(self): self.tmpdir = os.path.abspath(mkdtemp()) self.repos = datasource.Repository(valid_baseurl(), self.tmpdir) @@ -255,7 +255,7 @@ os.sep = orig_os_sep -class TestRepositoryExists(NumpyTestCase): +class TestRepositoryExists(TestCase): def setUp(self): self.tmpdir = mkdtemp() self.repos = datasource.Repository(valid_baseurl(), self.tmpdir) @@ -288,7 +288,7 @@ assert self.repos.exists(tmpfile) -class TestOpenFunc(NumpyTestCase): +class TestOpenFunc(TestCase): def setUp(self): self.tmpdir = mkdtemp() @@ -304,4 +304,4 @@ if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/lib/tests/test_arraysetops.py =================================================================== --- trunk/numpy/lib/tests/test_arraysetops.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/lib/tests/test_arraysetops.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -6,15 +6,14 @@ set_package_path() import numpy from numpy.lib.arraysetops import * -from numpy.lib.arraysetops import ediff1d restore_path() ################################################## -class TestAso(NumpyTestCase): +class TestAso(TestCase): ## # 03.11.2005, c - def check_unique1d( self ): + def test_unique1d( self ): a = numpy.array( [5, 7, 1, 2, 1, 5, 7] ) @@ -26,7 +25,7 @@ ## # 03.11.2005, c - def check_intersect1d( self ): + def test_intersect1d( self ): a = numpy.array( [5, 7, 1, 2] ) b = numpy.array( [2, 4, 3, 1, 5] ) @@ -39,7 +38,7 @@ ## # 03.11.2005, c - def check_intersect1d_nu( self ): + def test_intersect1d_nu( self ): a = numpy.array( [5, 5, 7, 1, 2] ) b = numpy.array( [2, 1, 4, 3, 3, 1, 5] ) @@ -52,7 +51,7 @@ ## # 03.11.2005, c - def check_setxor1d( self ): + def test_setxor1d( self ): a = numpy.array( [5, 7, 1, 2] ) b = numpy.array( [2, 4, 3, 1, 5] ) @@ -77,7 +76,7 @@ assert_array_equal([], setxor1d([],[])) - def check_ediff1d(self): + def test_ediff1d(self): zero_elem = numpy.array([]) one_elem = numpy.array([1]) two_elem = numpy.array([1,2]) @@ -91,7 +90,7 @@ ## # 03.11.2005, c - def check_setmember1d( self ): + def test_setmember1d( self ): a = numpy.array( [5, 7, 1, 2] ) b = numpy.array( [2, 4, 3, 1, 5] ) @@ -114,7 +113,7 @@ ## # 03.11.2005, c - def check_union1d( self ): + def test_union1d( self ): a 
= numpy.array( [5, 4, 7, 1, 2] ) b = numpy.array( [2, 4, 3, 3, 2, 1, 5] ) @@ -128,7 +127,7 @@ ## # 03.11.2005, c # 09.01.2006 - def check_setdiff1d( self ): + def test_setdiff1d( self ): a = numpy.array( [6, 5, 4, 7, 1, 2] ) b = numpy.array( [2, 4, 3, 3, 2, 1, 5] ) @@ -145,14 +144,14 @@ assert_array_equal([], setdiff1d([],[])) - def check_setdiff1d_char_array(self): + def test_setdiff1d_char_array(self): a = numpy.array(['a','b','c']) b = numpy.array(['a','b','s']) assert_array_equal(setdiff1d(a,b),numpy.array(['c'])) ## # 03.11.2005, c - def check_manyways( self ): + def test_manyways( self ): nItem = 100 a = numpy.fix( nItem / 10 * numpy.random.random( nItem ) ) @@ -171,5 +170,6 @@ c2 = setdiff1d( aux2, aux1 ) assert_array_equal( c1, c2 ) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/lib/tests/test_financial.py =================================================================== --- trunk/numpy/lib/tests/test_financial.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/lib/tests/test_financial.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -32,8 +32,9 @@ from numpy.testing import * import numpy as np -class TestDocs(NumpyTestCase): - def check_doctests(self): return self.rundocs() +def test(): + import doctest + doctest.testmod() if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/lib/tests/test_format.py =================================================================== --- trunk/numpy/lib/tests/test_format.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/lib/tests/test_format.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -506,3 +506,7 @@ for magic in bad_version_magic + malformed_magic: f = StringIO(magic) yield raises(ValueError)(format.read_array), f + + +if __name__ == "__main__": + nose.run(argv=['', __file__]) Modified: trunk/numpy/lib/tests/test_function_base.py =================================================================== --- trunk/numpy/lib/tests/test_function_base.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/lib/tests/test_function_base.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -5,11 +5,10 @@ import numpy.lib;reload(numpy.lib) from numpy.lib import * from numpy.core import * +restore_path() -del sys.path[0] - -class TestAny(NumpyTestCase): - def check_basic(self): +class TestAny(TestCase): + def test_basic(self): y1 = [0,0,1,0] y2 = [0,0,0,0] y3 = [1,0,1,0] @@ -17,14 +16,14 @@ assert(any(y3)) assert(not any(y2)) - def check_nd(self): + def test_nd(self): y1 = [[0,0,0],[0,1,0],[1,1,0]] assert(any(y1)) assert_array_equal(sometrue(y1,axis=0),[1,1,0]) assert_array_equal(sometrue(y1,axis=1),[0,1,1]) -class TestAll(NumpyTestCase): - def check_basic(self): +class TestAll(TestCase): + def test_basic(self): y1 = [0,1,1,0] y2 = [0,0,0,0] y3 = [1,1,1,1] @@ -33,14 +32,14 @@ assert(not all(y2)) assert(all(~array(y2))) - def check_nd(self): + def test_nd(self): y1 = [[0,0,1],[0,1,1],[1,1,1]] assert(not all(y1)) assert_array_equal(alltrue(y1,axis=0),[0,0,1]) assert_array_equal(alltrue(y1,axis=1),[0,0,1]) -class TestAverage(NumpyTestCase): - def check_basic(self): +class TestAverage(TestCase): + def test_basic(self): y1 = array([1,2,3]) assert(average(y1,axis=0) == 2.) 
y2 = array([1.,2.,3.]) @@ -61,7 +60,7 @@ y6 = matrix(rand(5,5)) assert_array_equal(y6.mean(0), average(y6,0)) - def check_weights(self): + def test_weights(self): y = arange(10) w = arange(10) assert_almost_equal(average(y, weights=w), (arange(10)**2).sum()*1./arange(10).sum()) @@ -89,7 +88,7 @@ assert_equal(average(y1, weights=w2), 5.) - def check_returned(self): + def test_returned(self): y = array([[1,2,3],[4,5,6]]) # No weights @@ -116,14 +115,14 @@ assert_array_equal(scl, array([1.,6.])) -class TestSelect(NumpyTestCase): +class TestSelect(TestCase): def _select(self,cond,values,default=0): output = [] for m in range(len(cond)): output += [V[m] for V,C in zip(values,cond) if C[m]] or [default] return output - def check_basic(self): + def test_basic(self): choices = [array([1,2,3]), array([4,5,6]), array([7,8,9])] @@ -136,8 +135,8 @@ assert_equal(len(choices),3) assert_equal(len(conditions),3) -class TestLogspace(NumpyTestCase): - def check_basic(self): +class TestLogspace(TestCase): + def test_basic(self): y = logspace(0,6) assert(len(y)==50) y = logspace(0,6,num=100) @@ -147,8 +146,8 @@ y = logspace(0,6,num=7) assert_array_equal(y,[1,10,100,1e3,1e4,1e5,1e6]) -class TestLinspace(NumpyTestCase): - def check_basic(self): +class TestLinspace(TestCase): + def test_basic(self): y = linspace(0,10) assert(len(y)==50) y = linspace(2,10,num=100) @@ -159,28 +158,28 @@ assert_almost_equal(st,8/49.0) assert_array_almost_equal(y,mgrid[2:10:50j],13) - def check_corner(self): + def test_corner(self): y = list(linspace(0,1,1)) assert y == [0.0], y y = list(linspace(0,1,2.5)) assert y == [0.0, 1.0] - def check_type(self): + def test_type(self): t1 = linspace(0,1,0).dtype t2 = linspace(0,1,1).dtype t3 = linspace(0,1,2).dtype assert_equal(t1, t2) assert_equal(t2, t3) -class TestInsert(NumpyTestCase): - def check_basic(self): +class TestInsert(TestCase): + def test_basic(self): a = [1,2,3] assert_equal(insert(a,0,1), [1,1,2,3]) assert_equal(insert(a,3,1), [1,2,3,1]) assert_equal(insert(a,[1,1,1],[1,2,3]), [1,1,2,3,2,3]) -class TestAmax(NumpyTestCase): - def check_basic(self): +class TestAmax(TestCase): + def test_basic(self): a = [3,4,5,10,-3,-5,6.0] assert_equal(amax(a),10.0) b = [[3,6.0, 9.0], @@ -189,8 +188,8 @@ assert_equal(amax(b,axis=0),[8.0,10.0,9.0]) assert_equal(amax(b,axis=1),[9.0,10.0,8.0]) -class TestAmin(NumpyTestCase): - def check_basic(self): +class TestAmin(TestCase): + def test_basic(self): a = [3,4,5,10,-3,-5,6.0] assert_equal(amin(a),-5.0) b = [[3,6.0, 9.0], @@ -199,8 +198,8 @@ assert_equal(amin(b,axis=0),[3.0,3.0,2.0]) assert_equal(amin(b,axis=1),[3.0,4.0,2.0]) -class TestPtp(NumpyTestCase): - def check_basic(self): +class TestPtp(TestCase): + def test_basic(self): a = [3,4,5,10,-3,-5,6.0] assert_equal(ptp(a,axis=0),15.0) b = [[3,6.0, 9.0], @@ -209,8 +208,8 @@ assert_equal(ptp(b,axis=0),[5.0,7.0,7.0]) assert_equal(ptp(b,axis=-1),[6.0,6.0,6.0]) -class TestCumsum(NumpyTestCase): - def check_basic(self): +class TestCumsum(TestCase): + def test_basic(self): ba = [1,2,10,11,6,5,4] ba2 = [[1,2,3,4],[5,6,7,9],[10,3,4,5]] for ctype in [int8,uint8,int16,uint16,int32,uint32, @@ -225,8 +224,8 @@ [5,11,18,27], [10,13,17,22]],ctype)) -class TestProd(NumpyTestCase): - def check_basic(self): +class TestProd(TestCase): + def test_basic(self): ba = [1,2,10,11,6,5,4] ba2 = [[1,2,3,4],[5,6,7,9],[10,3,4,5]] for ctype in [int16,uint16,int32,uint32, @@ -243,8 +242,8 @@ array([50,36,84,180],ctype)) assert_array_equal(prod(a2,axis=-1),array([24, 1890, 600],ctype)) -class TestCumprod(NumpyTestCase): - def 
check_basic(self): +class TestCumprod(TestCase): + def test_basic(self): ba = [1,2,10,11,6,5,4] ba2 = [[1,2,3,4],[5,6,7,9],[10,3,4,5]] for ctype in [int16,uint16,int32,uint32, @@ -268,8 +267,8 @@ [ 5, 30, 210, 1890], [10, 30, 120, 600]],ctype)) -class TestDiff(NumpyTestCase): - def check_basic(self): +class TestDiff(TestCase): + def test_basic(self): x = [1,4,6,7,12] out = array([3,2,1,5]) out2 = array([-1,-1,4]) @@ -278,7 +277,7 @@ assert_array_equal(diff(x,n=2),out2) assert_array_equal(diff(x,n=3),out3) - def check_nd(self): + def test_nd(self): x = 20*rand(10,20,30) out1 = x[:,:,1:] - x[:,:,:-1] out2 = out1[:,:,1:] - out1[:,:,:-1] @@ -289,8 +288,8 @@ assert_array_equal(diff(x,axis=0),out3) assert_array_equal(diff(x,n=2,axis=0),out4) -class TestAngle(NumpyTestCase): - def check_basic(self): +class TestAngle(TestCase): + def test_basic(self): x = [1+3j,sqrt(2)/2.0+1j*sqrt(2)/2,1,1j,-1,-1j,1-3j,-1+3j] y = angle(x) yo = [arctan(3.0/1.0),arctan(1.0),0,pi/2,pi,-pi/2.0, @@ -300,33 +299,33 @@ assert_array_almost_equal(y,yo,11) assert_array_almost_equal(z,zo,11) -class TestTrimZeros(NumpyTestCase): +class TestTrimZeros(TestCase): """ only testing for integer splits. """ - def check_basic(self): + def test_basic(self): a= array([0,0,1,2,3,4,0]) res = trim_zeros(a) assert_array_equal(res,array([1,2,3,4])) - def check_leading_skip(self): + def test_leading_skip(self): a= array([0,0,1,0,2,3,4,0]) res = trim_zeros(a) assert_array_equal(res,array([1,0,2,3,4])) - def check_trailing_skip(self): + def test_trailing_skip(self): a= array([0,0,1,0,2,3,0,4,0]) res = trim_zeros(a) assert_array_equal(res,array([1,0,2,3,0,4])) -class TestExtins(NumpyTestCase): - def check_basic(self): +class TestExtins(TestCase): + def test_basic(self): a = array([1,3,2,1,2,3,3]) b = extract(a>1,a) assert_array_equal(b,[3,2,2,3,3]) - def check_place(self): + def test_place(self): a = array([1,4,3,2,5,8,7]) place(a,[0,1,0,1,0,1,0],[2,4,6]) assert_array_equal(a,[1,2,3,4,5,6,7]) - def check_both(self): + def test_both(self): a = rand(10) mask = a > 0.5 ac = a.copy() @@ -335,8 +334,8 @@ place(a,mask,c) assert_array_equal(a,ac) -class TestVectorize(NumpyTestCase): - def check_simple(self): +class TestVectorize(TestCase): + def test_simple(self): def addsubtract(a,b): if a > b: return a - b @@ -345,7 +344,7 @@ f = vectorize(addsubtract) r = f([0,3,6,9],[1,3,5,7]) assert_array_equal(r,[1,6,1,2]) - def check_scalar(self): + def test_scalar(self): def addsubtract(a,b): if a > b: return a - b @@ -354,59 +353,59 @@ f = vectorize(addsubtract) r = f([0,3,6,9],5) assert_array_equal(r,[5,8,1,4]) - def check_large(self): + def test_large(self): x = linspace(-3,2,10000) f = vectorize(lambda x: x) y = f(x) assert_array_equal(y, x) -class TestDigitize(NumpyTestCase): - def check_forward(self): +class TestDigitize(TestCase): + def test_forward(self): x = arange(-6,5) bins = arange(-5,5) assert_array_equal(digitize(x,bins),arange(11)) - def check_reverse(self): + def test_reverse(self): x = arange(5,-6,-1) bins = arange(5,-5,-1) assert_array_equal(digitize(x,bins),arange(11)) - def check_random(self): + def test_random(self): x = rand(10) bin = linspace(x.min(), x.max(), 10) assert all(digitize(x,bin) != 0) -class TestUnwrap(NumpyTestCase): - def check_simple(self): +class TestUnwrap(TestCase): + def test_simple(self): #check that unwrap removes jumps greather that 2*pi assert_array_equal(unwrap([1,1+2*pi]),[1,1]) #check that unwrap maintans continuity assert(all(diff(unwrap(rand(10)*100)) 1e10) and assert_all(isfinite(vals[2])) - def 
check_integer(self): + def test_integer(self): vals = nan_to_num(1) assert_all(vals == 1) - def check_complex_good(self): + def test_complex_good(self): vals = nan_to_num(1+1j) assert_all(vals == 1+1j) - def check_complex_bad(self): + def test_complex_bad(self): v = 1+1j olderr = seterr(divide='ignore', invalid='ignore') v += array(0+1.j)/0. @@ -245,7 +245,7 @@ vals = nan_to_num(v) # !! This is actually (unexpectedly) zero assert_all(isfinite(vals)) - def check_complex_bad2(self): + def test_complex_bad2(self): v = 1+1j olderr = seterr(divide='ignore', invalid='ignore') v += array(-1+1.j)/0. @@ -259,8 +259,8 @@ #assert_all(vals.real < -1e10) and assert_all(isfinite(vals)) -class TestRealIfClose(NumpyTestCase): - def check_basic(self): +class TestRealIfClose(TestCase): + def test_basic(self): a = rand(10) b = real_if_close(a+1e-15j) assert_all(isrealobj(b)) @@ -270,11 +270,12 @@ b = real_if_close(a+1e-7j,tol=1e-6) assert_all(isrealobj(b)) -class TestArrayConversion(NumpyTestCase): - def check_asfarray(self): +class TestArrayConversion(TestCase): + def test_asfarray(self): a = asfarray(array([1,2,3])) assert_equal(a.__class__,ndarray) assert issubdtype(a.dtype,float) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/lib/tests/test_ufunclike.py =================================================================== --- trunk/numpy/lib/tests/test_ufunclike.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/lib/tests/test_ufunclike.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -59,8 +59,10 @@ from numpy.testing import * -class TestDocs(NumpyTestCase): - def check_doctests(self): return self.rundocs() +class TestDocs(TestCase): + def test_doctests(self): + return rundocs() + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/linalg/__init__.py =================================================================== --- trunk/numpy/linalg/__init__.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/linalg/__init__.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -3,6 +3,6 @@ from linalg import * -def test(level=1, verbosity=1): - from numpy.testing import NumpyTest - return NumpyTest().test(level, verbosity) +from numpy.testing.pkgtester import Tester +test = Tester().test +bench = Tester().test Modified: trunk/numpy/linalg/tests/test_linalg.py =================================================================== --- trunk/numpy/linalg/tests/test_linalg.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/linalg/tests/test_linalg.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -23,28 +23,28 @@ decimal = 12 old_assert_almost_equal(a, b, decimal=decimal, **kw) -class LinalgTestCase(NumpyTestCase): - def check_single(self): +class LinalgTestCase: + def test_single(self): a = array([[1.,2.], [3.,4.]], dtype=single) b = array([2., 1.], dtype=single) self.do(a, b) - def check_double(self): + def test_double(self): a = array([[1.,2.], [3.,4.]], dtype=double) b = array([2., 1.], dtype=double) self.do(a, b) - def check_csingle(self): + def test_csingle(self): a = array([[1.+2j,2+3j], [3+4j,4+5j]], dtype=csingle) b = array([2.+1j, 1.+2j], dtype=csingle) self.do(a, b) - def check_cdouble(self): + def test_cdouble(self): a = array([[1.+2j,2+3j], [3+4j,4+5j]], dtype=cdouble) b = array([2.+1j, 1.+2j], dtype=cdouble) self.do(a, b) - def check_empty(self): + def test_empty(self): a = atleast_2d(array([], dtype = double)) b = atleast_2d(array([], dtype = double)) try: @@ -53,79 +53,79 @@ except linalg.LinAlgError, e: pass - def 
check_nonarray(self): + def test_nonarray(self): a = [[1,2], [3,4]] b = [2, 1] self.do(a,b) - def check_matrix_b_only(self): + def test_matrix_b_only(self): """Check that matrix type is preserved.""" a = array([[1.,2.], [3.,4.]]) b = matrix([2., 1.]).T self.do(a, b) - def check_matrix_a_and_b(self): + def test_matrix_a_and_b(self): """Check that matrix type is preserved.""" a = matrix([[1.,2.], [3.,4.]]) b = matrix([2., 1.]).T self.do(a, b) -class TestSolve(LinalgTestCase): +class TestSolve(LinalgTestCase, TestCase): def do(self, a, b): x = linalg.solve(a, b) assert_almost_equal(b, dot(a, x)) assert imply(isinstance(b, matrix), isinstance(x, matrix)) -class TestInv(LinalgTestCase): +class TestInv(LinalgTestCase, TestCase): def do(self, a, b): a_inv = linalg.inv(a) assert_almost_equal(dot(a, a_inv), identity(asarray(a).shape[0])) assert imply(isinstance(a, matrix), isinstance(a_inv, matrix)) -class TestEigvals(LinalgTestCase): +class TestEigvals(LinalgTestCase, TestCase): def do(self, a, b): ev = linalg.eigvals(a) evalues, evectors = linalg.eig(a) assert_almost_equal(ev, evalues) -class TestEig(LinalgTestCase): +class TestEig(LinalgTestCase, TestCase): def do(self, a, b): evalues, evectors = linalg.eig(a) assert_almost_equal(dot(a, evectors), multiply(evectors, evalues)) assert imply(isinstance(a, matrix), isinstance(evectors, matrix)) -class TestSVD(LinalgTestCase): +class TestSVD(LinalgTestCase, TestCase): def do(self, a, b): u, s, vt = linalg.svd(a, 0) assert_almost_equal(a, dot(multiply(u, s), vt)) assert imply(isinstance(a, matrix), isinstance(u, matrix)) assert imply(isinstance(a, matrix), isinstance(vt, matrix)) -class TestCondSVD(LinalgTestCase): +class TestCondSVD(LinalgTestCase, TestCase): def do(self, a, b): c = asarray(a) # a might be a matrix s = linalg.svd(c, compute_uv=False) old_assert_almost_equal(s[0]/s[-1], linalg.cond(a), decimal=5) -class TestCond2(LinalgTestCase): +class TestCond2(LinalgTestCase, TestCase): def do(self, a, b): c = asarray(a) # a might be a matrix s = linalg.svd(c, compute_uv=False) old_assert_almost_equal(s[0]/s[-1], linalg.cond(a,2), decimal=5) -class TestCondInf(NumpyTestCase): +class TestCondInf(TestCase): def test(self): A = array([[1.,0,0],[0,-2.,0],[0,0,3.]]) assert_almost_equal(linalg.cond(A,inf),3.) 
-class TestPinv(LinalgTestCase):
+class TestPinv(LinalgTestCase, TestCase):
     def do(self, a, b):
         a_ginv = linalg.pinv(a)
         assert_almost_equal(dot(a, a_ginv), identity(asarray(a).shape[0]))
         assert imply(isinstance(a, matrix), isinstance(a_ginv, matrix))
 
-class TestDet(LinalgTestCase):
+class TestDet(LinalgTestCase, TestCase):
     def do(self, a, b):
         d = linalg.det(a)
         if asarray(a).dtype.type in (single, double):
@@ -135,7 +135,7 @@
         ev = linalg.eigvals(ad)
         assert_almost_equal(d, multiply.reduce(ev))
 
-class TestLstsq(LinalgTestCase):
+class TestLstsq(LinalgTestCase, TestCase):
     def do(self, a, b):
         u, s, vt = linalg.svd(a, 0)
         x, residuals, rank, sv = linalg.lstsq(a, b)
@@ -145,7 +145,7 @@
         assert imply(isinstance(b, matrix), isinstance(x, matrix))
         assert imply(isinstance(b, matrix), isinstance(residuals, matrix))
 
-class TestMatrixPower(ParametricTestCase):
+class TestMatrixPower(TestCase):
     R90 = array([[0,1],[-1,0]])
     Arb22 = array([[4,-7],[-2,10]])
     noninv = array([[1,0],[0,0]])
@@ -158,6 +158,7 @@
     def test_large_power(self):
         assert_equal(matrix_power(self.R90,2L**100+2**10+2**5+1),self.R90)
+
     def test_large_power_trailing_zero(self):
         assert_equal(matrix_power(self.R90,2L**100+2**10+2**5),identity(2))
 
@@ -197,10 +198,11 @@
         self.assertRaises(numpy.linalg.linalg.LinAlgError, lambda: matrix_power(self.noninv,-1))
 
-class TestBoolPower(NumpyTestCase):
-    def check_square(self):
+class TestBoolPower(TestCase):
+    def test_square(self):
         A = array([[True,False],[True,True]])
         assert_equal(matrix_power(A,2),A)
 
-if __name__ == '__main__':
-    NumpyTest().run()
+
+if __name__ == "__main__":
+    nose.run(argv=['', __file__])

Modified: trunk/numpy/linalg/tests/test_regression.py
===================================================================
--- trunk/numpy/linalg/tests/test_regression.py 2008-06-17 00:11:02 UTC (rev 5286)
+++ trunk/numpy/linalg/tests/test_regression.py 2008-06-17 00:23:20 UTC (rev 5287)
@@ -9,7 +9,7 @@
 rlevel = 1
 
-class TestRegression(NumpyTestCase):
+class TestRegression(TestCase):
     def test_eig_build(self, level = rlevel):
         """Ticket #652"""
         rva = array([1.03221168e+02 +0.j,
@@ -54,5 +54,6 @@
         assert_array_almost_equal(b, np.zeros((2, 2)))
 
+
 if __name__ == '__main__':
-    NumpyTest().run()
+    nose.run(argv=['', __file__])

Modified: trunk/numpy/ma/__init__.py
===================================================================
--- trunk/numpy/ma/__init__.py 2008-06-17 00:11:02 UTC (rev 5286)
+++ trunk/numpy/ma/__init__.py 2008-06-17 00:23:20 UTC (rev 5287)
@@ -20,3 +20,7 @@
 __all__ = ['core', 'extras']
 __all__ += core.__all__
 __all__ += extras.__all__
+
+from numpy.testing.pkgtester import Tester
+test = Tester().test
+bench = Tester().bench

Modified: trunk/numpy/ma/tests/test_core.py
===================================================================
--- trunk/numpy/ma/tests/test_core.py 2008-06-17 00:11:02 UTC (rev 5286)
+++ trunk/numpy/ma/tests/test_core.py 2008-06-17 00:23:20 UTC (rev 5287)
@@ -12,34 +12,17 @@
 import numpy as np
 import numpy.core.fromnumeric as fromnumeric
 from numpy import ndarray
+from numpy.ma.testutils import *
 
-
-from numpy.testing import NumpyTest, NumpyTestCase
-from numpy.testing import set_local_path, restore_path
-from numpy.testing.utils import build_err_msg
-
-import numpy.ma.testutils
-from numpy.ma.testutils import NumpyTestCase, \
-    assert_equal, assert_array_equal, fail_if_equal, assert_not_equal, \
-    assert_almost_equal, assert_mask_equal, assert_equal_records
-
-import numpy.ma.core as coremodule
+import numpy.ma.core
 from numpy.ma.core import *
 
 pi = np.pi
-set_local_path() -from test_old_ma import * -restore_path() - #.............................................................................. -class TestMaskedArray(NumpyTestCase): +class TestMaskedArray(TestCase): "Base test class for MaskedArrays." - def __init__(self, *args, **kwds): - NumpyTestCase.__init__(self, *args, **kwds) - self.setUp() - def setUp (self): "Base data definition." x = np.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) @@ -443,7 +426,7 @@ #------------------------------------------------------------------------------ -class TestMaskedArrayArithmetic(NumpyTestCase): +class TestMaskedArrayArithmetic(TestCase): "Base test class for MaskedArrays." def setUp (self): @@ -616,7 +599,7 @@ for funcname in ('min', 'max'): # Initialize npfunc = getattr(np, funcname) - mafunc = getattr(coremodule, funcname) + mafunc = getattr(numpy.ma.core, funcname) # Use the np version nout = np.empty((4,), dtype=int) result = npfunc(xm,axis=0,out=nout) @@ -730,7 +713,7 @@ #------------------------------------------------------------------------------ -class TestMaskedArrayAttributes(NumpyTestCase): +class TestMaskedArrayAttributes(TestCase): def test_keepmask(self): @@ -828,7 +811,7 @@ #------------------------------------------------------------------------------ -class TestFillingValues(NumpyTestCase): +class TestFillingValues(TestCase): # def test_check_on_scalar(self): "Test _check_fill_value" @@ -922,7 +905,7 @@ assert_equal(series._fill_value, data._fill_value) # mtype = [('f',float_),('s','|S3')] - x = array([(1,'a'),(2,'b'),(np.pi,'pi')], dtype=mtype) + x = array([(1,'a'),(2,'b'),(pi,'pi')], dtype=mtype) x.fill_value=999 assert_equal(x.fill_value.item(),[999.,'999']) assert_equal(x['f'].fill_value, 999) @@ -938,9 +921,10 @@ assert_equal(np.asarray(x.fill_value).dtype, float_) assert_equal(x.fill_value, 999.) + #------------------------------------------------------------------------------ -class TestUfuncs(NumpyTestCase): +class TestUfuncs(TestCase): "Test class for the application of ufuncs on MaskedArrays." def setUp(self): "Base data definition." @@ -972,7 +956,7 @@ uf = getattr(umath, f) except AttributeError: uf = getattr(fromnumeric, f) - mf = getattr(coremodule, f) + mf = getattr(numpy.ma.core, f) args = self.d[:uf.nin] ur = uf(*args) mr = mf(*args) @@ -1002,7 +986,7 @@ #------------------------------------------------------------------------------ -class TestMaskedArrayInPlaceArithmetics(NumpyTestCase): +class TestMaskedArrayInPlaceArithmetics(TestCase): "Test MaskedArray Arithmetics" def setUp(self): @@ -1134,7 +1118,7 @@ #------------------------------------------------------------------------------ -class TestMaskedArrayMethods(NumpyTestCase): +class TestMaskedArrayMethods(TestCase): "Test class for miscellaneous MaskedArrays methods." def setUp(self): "Base data definition." @@ -1630,7 +1614,7 @@ #------------------------------------------------------------------------------ -class TestMaskArrayMathMethod(NumpyTestCase): +class TestMaskArrayMathMethod(TestCase): def setUp(self): "Base data definition." @@ -1781,7 +1765,7 @@ #------------------------------------------------------------------------------ -class TestMaskedArrayMathMethodsComplex(NumpyTestCase): +class TestMaskedArrayMathMethodsComplex(TestCase): "Test class for miscellaneous MaskedArrays methods." def setUp(self): "Base data definition." 
@@ -1834,7 +1818,7 @@ #------------------------------------------------------------------------------ -class TestMaskedArrayFunctions(NumpyTestCase): +class TestMaskedArrayFunctions(TestCase): "Test class for miscellaneous functions." # def setUp(self): @@ -2090,7 +2074,7 @@ #------------------------------------------------------------------------------ -class TestMaskedFields(NumpyTestCase): +class TestMaskedFields(TestCase): # def setUp(self): ilist = [1,2,3,4,5] @@ -2125,13 +2109,13 @@ "Check setting an element of a record)" base = self.data['base'] (base_a, base_b, base_c) = (base['a'], base['b'], base['c']) - base[0] = (np.pi, np.pi, 'pi') + base[0] = (pi, pi, 'pi') assert_equal(base_a.dtype, int) assert_equal(base_a.data, [3,2,3,4,5]) assert_equal(base_b.dtype, float) - assert_equal(base_b.data, [np.pi, 2.2, 3.3, 4.4, 5.5]) + assert_equal(base_b.data, [pi, 2.2, 3.3, 4.4, 5.5]) assert_equal(base_c.dtype, '|S8') assert_equal(base_c.data, ['pi','two','three','four','five']) @@ -2139,13 +2123,13 @@ def test_set_record_slice(self): base = self.data['base'] (base_a, base_b, base_c) = (base['a'], base['b'], base['c']) - base[:3] = (np.pi, np.pi, 'pi') + base[:3] = (pi, pi, 'pi') assert_equal(base_a.dtype, int) assert_equal(base_a.data, [3,3,3,4,5]) assert_equal(base_b.dtype, float) - assert_equal(base_b.data, [np.pi, np.pi, np.pi, 4.4, 5.5]) + assert_equal(base_b.data, [pi, pi, pi, 4.4, 5.5]) assert_equal(base_c.dtype, '|S8') assert_equal(base_c.data, ['pi','pi','pi','four','five']) @@ -2163,4 +2147,4 @@ ############################################################################### #------------------------------------------------------------------------------ if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/ma/tests/test_extras.py =================================================================== --- trunk/numpy/ma/tests/test_extras.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/ma/tests/test_extras.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -11,21 +11,15 @@ __revision__ = "$Revision: 3473 $" __date__ = '$Date: 2007-10-29 17:18:13 +0200 (Mon, 29 Oct 2007) $' -import numpy as np -from numpy.testing import NumpyTest, NumpyTestCase -from numpy.testing.utils import build_err_msg - -import numpy.ma.testutils +import numpy +from numpy.testing import * from numpy.ma.testutils import * - -import numpy.ma.core from numpy.ma.core import * -import numpy.ma.extras from numpy.ma.extras import * -class TestAverage(NumpyTestCase): +class TestAverage(TestCase): "Several tests of average. Why so many ? Good point..." - def check_testAverage1(self): + def test_testAverage1(self): "Test of average." ott = array([0.,1.,2.,3.], mask=[1,0,0,0]) assert_equal(2.0, average(ott,axis=0)) @@ -44,7 +38,7 @@ result, wts = average(ott, axis=0, returned=1) assert_equal(wts, [1., 0.]) - def check_testAverage2(self): + def test_testAverage2(self): "More tests of average." w1 = [0,1,1,1,1,0] w2 = [[0,1,1,1,1,0],[1,0,0,0,0,1]] @@ -77,7 +71,7 @@ assert_equal(average(z, axis=1), [2.5, 5.0]) assert_equal(average(z,axis=0, weights=w2), [0.,1., 99., 99., 4.0, 10.0]) - def check_testAverage3(self): + def test_testAverage3(self): "Yet more tests of average!" a = arange(6) b = arange(6) * 3 @@ -101,9 +95,9 @@ a2dma = average(a2dm, axis=1) assert_equal(a2dma, [1.5, 4.0]) -class TestConcatenator(NumpyTestCase): +class TestConcatenator(TestCase): "Tests for mr_, the equivalent of r_ for masked arrays." - def check_1d(self): + def test_1d(self): "Tests mr_ on 1D arrays." 
assert_array_equal(mr_[1,2,3,4,5,6],array([1,2,3,4,5,6])) b = ones(5) @@ -114,7 +108,7 @@ assert_array_equal(c,[1,1,1,1,1,0,0,1,1,1,1,1]) assert_array_equal(c.mask, mr_[m,0,0,m]) - def check_2d(self): + def test_2d(self): "Tests mr_ on 2D arrays." a_1 = rand(5,5) a_2 = rand(5,5) @@ -133,9 +127,9 @@ assert_array_equal(d[5:,:],b_2) assert_array_equal(d.mask, np.r_[m_1,m_2]) -class TestNotMasked(NumpyTestCase): +class TestNotMasked(TestCase): "Tests notmasked_edges and notmasked_contiguous." - def check_edges(self): + def test_edges(self): "Tests unmasked_edges" a = masked_array(np.arange(24).reshape(3,8), mask=[[0,0,0,0,1,1,1,0], @@ -152,7 +146,7 @@ assert_equal(tmp[0], (array([0,2,]), array([0,0]))) assert_equal(tmp[1], (array([0,2,]), array([7,7]))) - def check_contiguous(self): + def test_contiguous(self): "Tests notmasked_contiguous" a = masked_array(np.arange(24).reshape(3,8), mask=[[0,0,0,0,1,1,1,1], @@ -175,9 +169,9 @@ assert_equal(tmp[2][-1], slice(7,7,None)) assert_equal(tmp[2][-2], slice(0,5,None)) -class Test2DFunctions(NumpyTestCase): +class Test2DFunctions(TestCase): "Tests 2D functions" - def check_compress2d(self): + def test_compress2d(self): "Tests compress2d" x = array(np.arange(9).reshape(3,3), mask=[[1,0,0],[0,0,0],[0,0,0]]) assert_equal(compress_rowcols(x), [[4,5],[7,8]] ) @@ -196,7 +190,7 @@ assert_equal(compress_rowcols(x,0).size, 0 ) assert_equal(compress_rowcols(x,1).size, 0 ) # - def check_mask_rowcols(self): + def test_mask_rowcols(self): "Tests mask_rowcols." x = array(np.arange(9).reshape(3,3), mask=[[1,0,0],[0,0,0],[0,0,0]]) assert_equal(mask_rowcols(x).mask, [[1,1,1],[1,0,0],[1,0,0]] ) @@ -322,19 +316,16 @@ assert_equal(dx._data, np.r_[0,difx_d,0]) assert_equal(dx._mask, np.r_[1,0,0,0,0,1]) -class TestApplyAlongAxis(NumpyTestCase): +class TestApplyAlongAxis(TestCase): "Tests 2D functions" - def check_3d(self): + def test_3d(self): a = arange(12.).reshape(2,2,3) def myfunc(b): return b[1] xa = apply_along_axis(myfunc,2,a) assert_equal(xa,[[1,4],[7,10]]) -class TestMedian(NumpyTestCase): - def __init__(self, *args, **kwds): - NumpyTestCase.__init__(self, *args, **kwds) - # +class TestMedian(TestCase): def test_2d(self): "Tests median w/ 2D" (n,p) = (101,30) @@ -361,7 +352,7 @@ assert_equal(median(x,0), [[12,10],[8,9],[16,17]]) -class TestPolynomial(NumpyTestCase): +class TestPolynomial(TestCase): # def test_polyfit(self): "Tests polyfit" @@ -394,4 +385,4 @@ ############################################################################### #------------------------------------------------------------------------------ if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/ma/tests/test_mrecords.py =================================================================== --- trunk/numpy/ma/tests/test_mrecords.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/ma/tests/test_mrecords.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -27,10 +27,10 @@ fromarrays, fromtextfile, fromrecords, addfield #.............................................................................. -class TestMRecords(NumpyTestCase): +class TestMRecords(TestCase): "Base test class for MaskedArrays." 
def __init__(self, *args, **kwds): - NumpyTestCase.__init__(self, *args, **kwds) + TestCase.__init__(self, *args, **kwds) self.setup() def setup(self): @@ -318,10 +318,10 @@ [(1,1.1,None),(2,2.2,'two'),(None,None,'three')]) ################################################################################ -class TestMRecordsImport(NumpyTestCase): +class TestMRecordsImport(TestCase): "Base test class for MaskedArrays." def __init__(self, *args, **kwds): - NumpyTestCase.__init__(self, *args, **kwds) + TestCase.__init__(self, *args, **kwds) self.setup() def setup(self): @@ -429,4 +429,4 @@ ############################################################################### #------------------------------------------------------------------------------ if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/ma/tests/test_old_ma.py =================================================================== --- trunk/numpy/ma/tests/test_old_ma.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/ma/tests/test_old_ma.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -3,7 +3,8 @@ from numpy.ma import * from numpy.core.numerictypes import float32 from numpy.ma.core import umath -from numpy.testing import NumpyTestCase, NumpyTest +from numpy.testing import * + pi = numpy.pi def eq(v,w, msg=''): result = allclose(v,w) @@ -14,11 +15,7 @@ %s"""% (msg, str(v), str(w)) return result -class TestMa(NumpyTestCase): - def __init__(self, *args, **kwds): - NumpyTestCase.__init__(self, *args, **kwds) - self.setUp() - +class TestMa(TestCase): def setUp (self): x=numpy.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) y=numpy.array([5.,0.,3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) @@ -34,7 +31,7 @@ xm.set_fill_value(1.e+20) self.d = (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) - def check_testBasic1d(self): + def test_testBasic1d(self): "Test of basic array creation and properties in 1 dimension." (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d self.failIf(isMaskedArray(x)) @@ -48,7 +45,7 @@ self.failUnless(eq(filled(xm, 1.e20), xf)) self.failUnless(eq(x, xm)) - def check_testBasic2d(self): + def test_testBasic2d(self): "Test of basic array creation and properties in 2 dimensions." for s in [(4,3), (6,2)]: (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d @@ -69,7 +66,7 @@ self.failUnless(eq(x, xm)) self.setUp() - def check_testArithmetic (self): + def test_testArithmetic (self): "Test of basic arithmetic." (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d a2d = array([[1,2],[0,4]]) @@ -111,13 +108,13 @@ numpy.seterr(**olderr) - def check_testMixedArithmetic(self): + def test_testMixedArithmetic(self): na = numpy.array([1]) ma = array([1]) self.failUnless(isinstance(na + ma, MaskedArray)) self.failUnless(isinstance(ma + na, MaskedArray)) - def check_testUfuncs1 (self): + def test_testUfuncs1 (self): "Test various functions such as sin, cos." (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d self.failUnless (eq(numpy.cos(x), cos(xm))) @@ -149,7 +146,7 @@ self.failUnless (eq(numpy.concatenate((x,y)), concatenate((xm,y)))) self.failUnless (eq(numpy.concatenate((x,y,x)), concatenate((x,ym,x)))) - def check_xtestCount (self): + def test_xtestCount (self): "Test count" ott = array([0.,1.,2.,3.], mask=[1,0,0,0]) self.failUnless( isinstance(count(ott), types.IntType)) @@ -163,15 +160,19 @@ assert getmask(count(ott,0)) is nomask self.failUnless (eq([1,2],count(ott,0))) - def check_testMinMax (self): + def test_testMinMax (self): "Test minimum and maximum." 
(x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d xr = numpy.ravel(x) #max doesn't work if shaped xmr = ravel(xm) - self.failUnless (eq(max(xr), maximum(xmr))) #true because of careful selection of data - self.failUnless (eq(min(xr), minimum(xmr))) #true because of careful selection of data - def check_testAddSumProd (self): + #true because of careful selection of data + self.failUnless(eq(max(xr), maximum(xmr))) + + #true because of careful selection of data + self.failUnless(eq(min(xr), minimum(xmr))) + + def test_testAddSumProd (self): "Test add, sum, product." (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d self.failUnless (eq(numpy.add.reduce(x), add.reduce(x))) @@ -183,15 +184,17 @@ self.failUnless (eq(numpy.sum(x,0), sum(x,0))) self.failUnless (eq(numpy.product(x,axis=0), product(x,axis=0))) self.failUnless (eq(numpy.product(x,0), product(x,0))) - self.failUnless (eq(numpy.product(filled(xm,1),axis=0), product(xm,axis=0))) + self.failUnless (eq(numpy.product(filled(xm,1),axis=0), + product(xm,axis=0))) if len(s) > 1: - self.failUnless (eq(numpy.concatenate((x,y),1), concatenate((xm,ym),1))) + self.failUnless (eq(numpy.concatenate((x,y),1), + concatenate((xm,ym),1))) self.failUnless (eq(numpy.add.reduce(x,1), add.reduce(x,1))) self.failUnless (eq(numpy.sum(x,1), sum(x,1))) self.failUnless (eq(numpy.product(x,1), product(x,1))) - def check_testCI(self): + def test_testCI(self): "Test of conversions and indexing" x1 = numpy.array([1,2,4,3]) x2 = array(x1, mask = [1,0,0,0]) @@ -240,7 +243,7 @@ self.assertEqual(s1, s2) assert x1[1:1].shape == (0,) - def check_testCopySize(self): + def test_testCopySize(self): "Tests of some subtle points of copying and sizing." n = [0,0,1,0,0] m = make_mask(n) @@ -279,7 +282,7 @@ y6 = repeat(x4, 2, axis=0) self.failUnless( eq(y5, y6)) - def check_testPut(self): + def test_testPut(self): "Test of put" d = arange(5) n = [0,0,0,1,1] @@ -299,14 +302,14 @@ self.failUnless( x[3] is masked) self.failUnless( x[4] is masked) - def check_testMaPut(self): + def test_testMaPut(self): (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d m = [1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1] i = numpy.nonzero(m)[0] put(ym, i, zm) assert all(take(ym, i, axis=0) == zm) - def check_testOddFeatures(self): + def test_testOddFeatures(self): "Test of other odd features" x = arange(20); x=x.reshape(4,5) x.flat[5] = 12 @@ -358,7 +361,8 @@ assert z[1] is not masked assert z[2] is masked assert eq(masked_where(greater(x, 2), x), masked_greater(x,2)) - assert eq(masked_where(greater_equal(x, 2), x), masked_greater_equal(x,2)) + assert eq(masked_where(greater_equal(x, 2), x), + masked_greater_equal(x,2)) assert eq(masked_where(less(x, 2), x), masked_less(x,2)) assert eq(masked_where(less_equal(x, 2), x), masked_less_equal(x,2)) assert eq(masked_where(not_equal(x, 2), x), masked_not_equal(x,2)) @@ -366,10 +370,14 @@ assert eq(masked_where(not_equal(x,2), x), masked_not_equal(x,2)) assert eq(masked_inside(range(5), 1, 3), [0, 199, 199, 199, 4]) assert eq(masked_outside(range(5), 1, 3),[199,1,2,3,199]) - assert eq(masked_inside(array(range(5), mask=[1,0,0,0,0]), 1, 3).mask, [1,1,1,1,0]) - assert eq(masked_outside(array(range(5), mask=[0,1,0,0,0]), 1, 3).mask, [1,1,0,0,1]) - assert eq(masked_equal(array(range(5), mask=[1,0,0,0,0]), 2).mask, [1,0,1,0,0]) - assert eq(masked_not_equal(array([2,2,1,2,1], mask=[1,0,0,0,0]), 2).mask, [1,0,1,0,1]) + assert eq(masked_inside(array(range(5), mask=[1,0,0,0,0]), 1, 3).mask, + [1,1,1,1,0]) + assert eq(masked_outside(array(range(5), mask=[0,1,0,0,0]), 1, 
3).mask, + [1,1,0,0,1]) + assert eq(masked_equal(array(range(5), mask=[1,0,0,0,0]), 2).mask, + [1,0,1,0,0]) + assert eq(masked_not_equal(array([2,2,1,2,1], mask=[1,0,0,0,0]), 2).mask, + [1,0,1,0,1]) assert eq(masked_where([1,1,0,0,0], [1,2,3,4,5]), [99,99,3,4,5]) atest = ones((10,10,10), dtype=float32) btest = zeros(atest.shape, MaskType) @@ -396,7 +404,7 @@ z = where(c, 1, masked) assert eq(z, [99, 1, 1, 99, 99, 99]) - def check_testMinMax(self): + def test_testMinMax(self): "Test of minumum, maximum." assert eq(minimum([1,2,3],[4,0,9]), [1,0,3]) assert eq(maximum([1,2,3],[4,0,9]), [4,2,9]) @@ -409,7 +417,7 @@ assert minimum(x) == 0 assert maximum(x) == 4 - def check_testTakeTransposeInnerOuter(self): + def test_testTakeTransposeInnerOuter(self): "Test of take, transpose, inner, outer products" x = arange(24) y = numpy.arange(24) @@ -429,7 +437,7 @@ assert t[1] == 2 assert t[2] == 3 - def check_testInplace(self): + def test_testInplace(self): """Test of inplace operations and rich comparisons""" y = arange(10) @@ -479,7 +487,7 @@ x += 1. assert eq(x, y+1.) - def check_testPickle(self): + def test_testPickle(self): "Test of pickling" import pickle x = arange(12) @@ -489,7 +497,7 @@ y = pickle.loads(s) assert eq(x,y) - def check_testMasked(self): + def test_testMasked(self): "Test of masked element" xx=arange(6) xx[1] = masked @@ -502,7 +510,7 @@ #self.failUnlessRaises(Exception, lambda x,y: x+y, masked, xx) #self.failUnlessRaises(Exception, lambda x,y: x+y, xx, masked) - def check_testAverage1(self): + def test_testAverage1(self): "Test of average." ott = array([0.,1.,2.,3.], mask=[1,0,0,0]) self.failUnless(eq(2.0, average(ott,axis=0))) @@ -521,7 +529,7 @@ result, wts = average(ott, axis=0, returned=1) self.failUnless(eq(wts, [1., 0.])) - def check_testAverage2(self): + def test_testAverage2(self): "More tests of average." 
w1 = [0,1,1,1,1,0] w2 = [[0,1,1,1,1,0],[1,0,0,0,0,1]] @@ -529,12 +537,16 @@ self.failUnless(allclose(average(x, axis=0), 2.5)) self.failUnless(allclose(average(x, axis=0, weights=w1), 2.5)) y=array([arange(6), 2.0*arange(6)]) - self.failUnless(allclose(average(y, None), numpy.add.reduce(numpy.arange(6))*3./12.)) + self.failUnless(allclose(average(y, None), + numpy.add.reduce(numpy.arange(6))*3./12.)) self.failUnless(allclose(average(y, axis=0), numpy.arange(6) * 3./2.)) - self.failUnless(allclose(average(y, axis=1), [average(x,axis=0), average(x,axis=0) * 2.0])) + self.failUnless(allclose(average(y, axis=1), + [average(x,axis=0), average(x,axis=0) * 2.0])) self.failUnless(allclose(average(y, None, weights=w2), 20./6.)) - self.failUnless(allclose(average(y, axis=0, weights=w2), [0.,1.,2.,3.,4.,10.])) - self.failUnless(allclose(average(y, axis=1), [average(x,axis=0), average(x,axis=0) * 2.0])) + self.failUnless(allclose(average(y, axis=0, weights=w2), + [0.,1.,2.,3.,4.,10.])) + self.failUnless(allclose(average(y, axis=1), + [average(x,axis=0), average(x,axis=0) * 2.0])) m1 = zeros(6) m2 = [0,0,1,1,0,0] m3 = [[0,0,1,1,0,0],[0,1,1,1,1,0]] @@ -549,7 +561,8 @@ self.failUnless(allclose(average(z, None), 20./6.)) self.failUnless(allclose(average(z, axis=0), [0.,1.,99.,99.,4.0, 7.5])) self.failUnless(allclose(average(z, axis=1), [2.5, 5.0])) - self.failUnless(allclose( average(z,axis=0, weights=w2), [0.,1., 99., 99., 4.0, 10.0])) + self.failUnless(allclose( average(z,axis=0, weights=w2), + [0.,1., 99., 99., 4.0, 10.0])) a = arange(6) b = arange(6) * 3 @@ -573,7 +586,7 @@ a2dma = average(a2dm, axis=1) self.failUnless(eq(a2dma, [1.5, 4.0])) - def check_testToPython(self): + def test_testToPython(self): self.assertEqual(1, int(array(1))) self.assertEqual(1.0, float(array(1))) self.assertEqual(1, int(array([[[1]]]))) @@ -582,7 +595,7 @@ self.failUnlessRaises(ValueError, bool, array([0,1])) self.failUnlessRaises(ValueError, bool, array([0,0],mask=[0,1])) - def check_testScalarArithmetic(self): + def test_testScalarArithmetic(self): xm = array(0, mask=1) self.failUnless((1/array(0)).mask) self.failUnless((1 + xm).mask) @@ -595,7 +608,7 @@ self.failUnless(x.filled() == x.data) self.failUnlessEqual(str(xm), str(masked_print_option)) - def check_testArrayMethods(self): + def test_testArrayMethods(self): a = array([1,3,2]) b = array([1,3,2], mask=[1,0,1]) self.failUnless(eq(a.any(), a.data.any())) @@ -612,29 +625,29 @@ self.failUnless(eq(a.take([1,2]), a.data.take([1,2]))) self.failUnless(eq(m.transpose(), m.data.transpose())) - def check_testArrayAttributes(self): + def test_testArrayAttributes(self): a = array([1,3,2]) b = array([1,3,2], mask=[1,0,1]) self.failUnlessEqual(a.ndim, 1) - def check_testAPI(self): + def test_testAPI(self): self.failIf([m for m in dir(numpy.ndarray) if m not in dir(MaskedArray) and not m.startswith('_')]) - def check_testSingleElementSubscript(self): + def test_testSingleElementSubscript(self): a = array([1,3,2]) b = array([1,3,2], mask=[1,0,1]) self.failUnlessEqual(a[0].shape, ()) self.failUnlessEqual(b[0].shape, ()) self.failUnlessEqual(b[1].shape, ()) -class TestUfuncs(NumpyTestCase): +class TestUfuncs(TestCase): def setUp(self): self.d = (array([1.0, 0, -1, pi/2]*2, mask=[0,1]+[0]*6), array([1.0, 0, -1, pi/2]*2, mask=[1,0]+[0]*6),) - def check_testUfuncRegression(self): + def test_testUfuncRegression(self): for f in ['sqrt', 'log', 'log10', 'exp', 'conjugate', 'sin', 'cos', 'tan', 'arcsin', 'arccos', 'arctan', @@ -661,8 +674,11 @@ mf = getattr(numpy.ma, f) args = 
self.d[:uf.nin] olderr = numpy.geterr() - if f in ['sqrt', 'arctanh', 'arcsin', 'arccos', 'arccosh', 'arctanh', 'log', - 'log10','divide','true_divide', 'floor_divide', 'remainder', 'fmod']: + f_invalid_ignore = ['sqrt', 'arctanh', 'arcsin', 'arccos', + 'arccosh', 'arctanh', 'log', 'log10','divide', + 'true_divide', 'floor_divide', 'remainder', + 'fmod'] + if f in f_invalid_ignore: numpy.seterr(invalid='ignore') if f in ['arctanh', 'log', 'log10']: numpy.seterr(divide='ignore') @@ -695,7 +711,7 @@ self.failUnless(eq(nonzero(x), [0])) -class TestArrayMethods(NumpyTestCase): +class TestArrayMethods(TestCase): def setUp(self): x = numpy.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, @@ -799,56 +815,55 @@ return m1 is nomask return (m1 == m2).all() -def timingTest(): - for f in [testf, testinplace]: - for n in [1000,10000,50000]: - t = testta(n, f) - t1 = testtb(n, f) - t2 = testtc(n, f) - print f.test_name - print """\ -n = %7d -numpy time (ms) %6.1f -MA maskless ratio %6.1f -MA masked ratio %6.1f -""" % (n, t*1000.0, t1/t, t2/t) +#def timingTest(): +# for f in [testf, testinplace]: +# for n in [1000,10000,50000]: +# t = testta(n, f) +# t1 = testtb(n, f) +# t2 = testtc(n, f) +# print f.test_name +# print """\ +#n = %7d +#numpy time (ms) %6.1f +#MA maskless ratio %6.1f +#MA masked ratio %6.1f +#""" % (n, t*1000.0, t1/t, t2/t) -def testta(n, f): - x=numpy.arange(n) + 1.0 - tn0 = time.time() - z = f(x) - return time.time() - tn0 +#def testta(n, f): +# x=numpy.arange(n) + 1.0 +# tn0 = time.time() +# z = f(x) +# return time.time() - tn0 -def testtb(n, f): - x=arange(n) + 1.0 - tn0 = time.time() - z = f(x) - return time.time() - tn0 +#def testtb(n, f): +# x=arange(n) + 1.0 +# tn0 = time.time() +# z = f(x) +# return time.time() - tn0 -def testtc(n, f): - x=arange(n) + 1.0 - x[0] = masked - tn0 = time.time() - z = f(x) - return time.time() - tn0 +#def testtc(n, f): +# x=arange(n) + 1.0 +# x[0] = masked +# tn0 = time.time() +# z = f(x) +# return time.time() - tn0 -def testf(x): - for i in range(25): - y = x **2 + 2.0 * x - 1.0 - w = x **2 + 1.0 - z = (y / w) ** 2 - return z -testf.test_name = 'Simple arithmetic' +#def testf(x): +# for i in range(25): +# y = x **2 + 2.0 * x - 1.0 +# w = x **2 + 1.0 +# z = (y / w) ** 2 +# return z +#testf.test_name = 'Simple arithmetic' -def testinplace(x): - for i in range(25): - y = x**2 - y += 2.0*x - y -= 1.0 - y /= x - return y -testinplace.test_name = 'Inplace operations' +#def testinplace(x): +# for i in range(25): +# y = x**2 +# y += 2.0*x +# y -= 1.0 +# y /= x +# return y +#testinplace.test_name = 'Inplace operations' if __name__ == "__main__": - NumpyTest('numpy.ma').run() - #timingTest() + nose.run(argv=['', __file__]) Modified: trunk/numpy/ma/tests/test_subclassing.py =================================================================== --- trunk/numpy/ma/tests/test_subclassing.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/ma/tests/test_subclassing.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -13,15 +13,10 @@ import numpy as N import numpy.core.numeric as numeric -from numpy.testing import NumpyTest, NumpyTestCase - -import numpy.ma.testutils +from numpy.testing import * from numpy.ma.testutils import * - -import numpy.ma.core as coremodule from numpy.ma.core import * - class SubArray(N.ndarray): """Defines a generic N.ndarray subclass, that stores some metadata in the dictionary `info`.""" @@ -36,6 +31,7 @@ result = N.ndarray.__add__(self, other) result.info.update({'added':result.info.pop('added',0)+1}) return result + subarray = SubArray class 
MSubArray(SubArray,MaskedArray): @@ -53,6 +49,7 @@ _view._sharedmask = False return _view _series = property(fget=_get_series) + msubarray = MSubArray class MMatrix(MaskedArray, N.matrix,): @@ -69,14 +66,13 @@ _view._sharedmask = False return _view _series = property(fget=_get_series) + mmatrix = MMatrix - - -class TestSubclassing(NumpyTestCase): +class TestSubclassing(TestCase): """Test suite for masked subclasses of ndarray.""" - def check_data_subclassing(self): + def test_data_subclassing(self): "Tests whether the subclass is kept." x = N.arange(5) m = [0,0,1,0,0] @@ -86,7 +82,7 @@ assert_equal(xmsub._data, xsub) assert isinstance(xmsub._data, SubArray) - def check_maskedarray_subclassing(self): + def test_maskedarray_subclassing(self): "Tests subclassing MaskedArray" x = N.arange(5) mx = mmatrix(x,mask=[0,1,0,0,0]) @@ -101,7 +97,7 @@ assert isinstance(hypot(mx,mx), mmatrix) assert isinstance(hypot(mx,x), mmatrix) - def check_attributepropagation(self): + def test_attributepropagation(self): x = array(arange(5), mask=[0]+[1]*4) my = masked_array(subarray(x)) ym = msubarray(x) @@ -128,7 +124,7 @@ assert hasattr(mxsub, 'info') assert_equal(mxsub.info, xsub.info) - def check_subclasspreservation(self): + def test_subclasspreservation(self): "Checks that masked_array(...,subok=True) preserves the class." x = N.arange(5) m = [0,0,1,0,0] @@ -158,8 +154,8 @@ ################################################################################ if __name__ == '__main__': - NumpyTest().run() - # + nose.run(argv=['', __file__]) + if 0: x = array(arange(5), mask=[0]+[1]*4) my = masked_array(subarray(x)) Modified: trunk/numpy/ma/testutils.py =================================================================== --- trunk/numpy/ma/testutils.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/ma/testutils.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -15,9 +15,9 @@ import numpy as np from numpy import ndarray, float_ import numpy.core.umath as umath -from numpy.testing import NumpyTest, NumpyTestCase +from numpy.testing import * +from numpy.testing.utils import build_err_msg, rand import numpy.testing.utils as utils -from numpy.testing.utils import build_err_msg, rand import core from core import mask_or, getmask, getmaskarray, masked_array, nomask, masked @@ -166,9 +166,9 @@ raise ValueError(msg) # OK, now run the basic tests on filled versions return utils.assert_array_compare(comparison, - x.filled(fill_value), y.filled(fill_value), - err_msg=err_msg, - verbose=verbose, header=header) + x.filled(fill_value), y.filled(fill_value), + err_msg=err_msg, + verbose=verbose, header=header) def assert_array_equal(x, y, err_msg='', verbose=True): Modified: trunk/numpy/numarray/__init__.py =================================================================== --- trunk/numpy/numarray/__init__.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/numarray/__init__.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -24,3 +24,7 @@ del functions del ufuncs del compat + +from numpy.testing.pkgtester import Tester +test = Tester().test +bench = Tester().bench Modified: trunk/numpy/oldnumeric/__init__.py =================================================================== --- trunk/numpy/oldnumeric/__init__.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/oldnumeric/__init__.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -39,3 +39,7 @@ del precision del ufuncs del misc + +from numpy.testing.pkgtester import Tester +test = Tester().test +bench = Tester().bench Modified: trunk/numpy/oldnumeric/tests/test_oldnumeric.py 
=================================================================== --- trunk/numpy/oldnumeric/tests/test_oldnumeric.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/oldnumeric/tests/test_oldnumeric.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -6,7 +6,7 @@ int16, int32, int64, uint, uint8, uint16, uint32, uint64 class test_oldtypes(NumPyTestCase): - def check_oldtypes(self, level=1): + def test_oldtypes(self, level=1): a1 = array([0,1,0], Float) a2 = array([0,1,0], float) assert_array_equal(a1, a2) @@ -83,4 +83,4 @@ if __name__ == "__main__": - NumPyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/random/__init__.py =================================================================== --- trunk/numpy/random/__init__.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/random/__init__.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -13,6 +13,6 @@ """ return RandomState() -def test(level=1, verbosity=1): - from numpy.testing import NumpyTest - return NumpyTest().test(level, verbosity) +from numpy.testing.pkgtester import Tester +test = Tester().test +bench = Tester().bench Modified: trunk/numpy/random/tests/test_random.py =================================================================== --- trunk/numpy/random/tests/test_random.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/random/tests/test_random.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -2,7 +2,7 @@ from numpy import random import numpy as np -class TestMultinomial(NumpyTestCase): +class TestMultinomial(TestCase): def test_basic(self): random.multinomial(100, [0.2, 0.8]) @@ -16,7 +16,7 @@ assert np.all(x < -1) -class TestSetState(NumpyTestCase): +class TestSetState(TestCase): def setUp(self): self.seed = 1234567890 self.prng = random.RandomState(self.seed) @@ -62,4 +62,4 @@ if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: trunk/numpy/testing/__init__.py =================================================================== --- trunk/numpy/testing/__init__.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/testing/__init__.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -1,5 +1,23 @@ +"""Common test support for all numpy test scripts. -from info import __doc__ +This single module should provide all the common functionality for numpy tests +in a single location, so that test scripts can just import it and work right +away. +""" + +#import unittest +from unittest import TestCase + +import decorators as dec +from utils import * + +try: + import nose + from nose.tools import raises +except ImportError: + pass + from numpytest import * -from utils import * -from parametric import ParametricTestCase + +from pkgtester import Tester +test = Tester().test Added: trunk/numpy/testing/decorators.py =================================================================== --- trunk/numpy/testing/decorators.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/testing/decorators.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -0,0 +1,92 @@ +"""Decorators for labeling test objects + +Decorators that merely return a modified version of the original +function object are straightforward. Decorators that return a new +function object need to use +nose.tools.make_decorator(original_function)(decorator) in returning +the decorator, in order to preserve metadata such as function name, +setup and teardown functions and so on - see nose.tools for more +information. + +""" + +try: + import nose +except ImportError: + pass + +def slow(t): + """Labels a test as 'slow'. 
+ + The exact definition of a slow test is obviously both subjective and + hardware-dependent, but in general any individual test that requires more + than a second or two should be labeled as slow (the whole suite consits of + thousands of tests, so even a second is significant).""" + + t.slow = True + return t + +def setastest(tf=True): + ''' Signals to nose that this function is or is not a test + + Parameters + ---------- + tf : bool + If True specifies this is a test, not a test otherwise + + e.g + >>> @setastest(False) + >>> def func_with_test_in_name(arg1, arg2): pass + ... + >>> + + This decorator cannot use the nose namespace, because it can be + called from a non-test module. See also istest and nottest in + nose.tools + + ''' + def set_test(t): + t.__test__ = tf + return t + return set_test + +def skipif(skip_condition, msg=None): + ''' Make function raise SkipTest exception if skip_condition is true + + Parameters + --------- + skip_condition : bool + Flag to determine whether to skip test (True) or not (False) + msg : string + Message to give on raising a SkipTest exception + + Returns + ------- + decorator : function + Decorator, which, when applied to a function, causes SkipTest + to be raised when the skip_condition was True, and the function + to be called normally otherwise. + + Notes + ----- + You will see from the code that we had to further decorate the + decorator with the nose.tools.make_decorator function in order to + transmit function name, and various other metadata. + ''' + if msg is None: + msg = 'Test skipped due to test condition' + def skip_decorator(f): + def skipper(*args, **kwargs): + if skip_condition: + raise nose.SkipTest, msg + else: + return f(*args, **kwargs) + return nose.tools.make_decorator(f)(skipper) + return skip_decorator + +def skipknownfailure(f): + ''' Decorator to raise SkipTest for test known to fail + ''' + def skipper(*args, **kwargs): + raise nose.SkipTest, 'This test is known to fail' + return nose.tools.make_decorator(f)(skipper) Deleted: trunk/numpy/testing/info.py =================================================================== --- trunk/numpy/testing/info.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/testing/info.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -1,30 +0,0 @@ -""" -Numpy testing tools -=================== - -Numpy-style unit-testing ------------------------- - - NumpyTest -- Numpy tests site manager - NumpyTestCase -- unittest.TestCase with measure method - IgnoreException -- raise when checking disabled feature, it'll be ignored - set_package_path -- prepend package build directory to path - set_local_path -- prepend local directory (to tests files) to path - restore_path -- restore path after set_package_path - -Utility functions ------------------ - - jiffies -- return 1/100ths of a second that the current process has used - memusage -- virtual memory size in bytes of the running python [linux] - rand -- array of random numbers from given shape - assert_equal -- assert equality - assert_almost_equal -- assert equality with decimal tolerance - assert_approx_equal -- assert equality with significant digits tolerance - assert_array_equal -- assert arrays equality - assert_array_almost_equal -- assert arrays equality with decimal tolerance - assert_array_less -- assert arrays less-ordering - -""" - -global_symbols = ['ScipyTest','NumpyTest'] Added: trunk/numpy/testing/nosetester.py =================================================================== --- trunk/numpy/testing/nosetester.py 2008-06-17 00:11:02 UTC 
(rev 5286) +++ trunk/numpy/testing/nosetester.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -0,0 +1,140 @@ +''' Nose test running + +Implements test and bench functions for modules. + +''' +import os +import sys +import re + +import nose + +class NoseTester(object): + """ Nose test runner. + + Usage: NoseTester().test() + + is package path or module Default for package is None. A + value of None finds calling module path. + + Typical call is from module __init__, and corresponds to this: + + >>> test = NoseTester().test + + In practice, because nose may not be importable, the __init__ + files actually have: + + >>> from scipy.testing.pkgtester import Tester + >>> test = Tester().test + + The pkgtester module checks for the presence of nose on the path, + returning this class if nose is present, and a null class + otherwise. + """ + + def __init__(self, package=None): + ''' Test class init + + Parameters + ---------- + package : string or module + If string, gives full path to package + If None, extract calling module path + Default is None + ''' + if package is None: + f = sys._getframe(1) + package = f.f_locals.get('__file__', None) + assert package is not None + package = os.path.dirname(package) + elif isinstance(package, type(os)): + package = os.path.dirname(package.__file__) + self.package_path = package + + def _add_doc(testtype): + ''' Decorator to add docstring to functions using test labels + + Parameters + ---------- + testtype : string + Type of test for function docstring + ''' + def docit(func): + test_header = \ + '''Parameters + ---------- + label : {'fast', 'full', '', attribute identifer} + Identifies %(testtype)s to run. This can be a string to pass to + the nosetests executable with the'-A' option, or one of + several special values. + Special values are: + 'fast' - the default - which corresponds to + nosetests -A option of + 'not slow'. 
+ 'full' - fast (as above) and slow %(testtype)s as in + no -A option to nosetests - same as '' + None or '' - run all %(testtype)ss + attribute_identifier - string passed directly to + nosetests as '-A' + verbose : integer + verbosity value for test outputs, 1-10 + extra_argv : list + List with any extra args to pass to nosetests''' \ + % {'testtype': testtype} + func.__doc__ = func.__doc__ % { + 'test_header': test_header} + return func + return docit + + @_add_doc('(testtype)') + def _test_argv(self, label, verbose, extra_argv): + ''' Generate argv for nosetest command + + %(test_header)s + ''' + argv = [__file__, self.package_path, '-s'] + if label and label != 'full': + if not isinstance(label, basestring): + raise TypeError, 'Selection label should be a string' + if label == 'fast': + label = 'not slow' + argv += ['-A', label] + argv += ['--verbosity', str(verbose)] + if extra_argv: + argv += extra_argv + return argv + + @_add_doc('test') + def test(self, label='fast', verbose=1, extra_argv=None, doctests=False, + coverage=False): + ''' Run tests for module using nose + + %(test_header)s + doctests : boolean + If True, run doctests in module, default False + ''' + argv = self._test_argv(label, verbose, extra_argv) + if doctests: + argv+=['--with-doctest','--doctest-tests'] + + if coverage: + argv+=['--cover-package=numpy','--with-coverage', + '--cover-tests','--cover-inclusive','--cover-erase'] + + # bypass these samples under distutils + argv += ['--exclude','f2py_ext'] + argv += ['--exclude','f2py_f90_ext'] + argv += ['--exclude','gen_ext'] + argv += ['--exclude','pyrex_ext'] + argv += ['--exclude','swig_ext'] + + nose.run(argv=argv) + + @_add_doc('benchmark') + def bench(self, label='fast', verbose=1, extra_argv=None): + ''' Run benchmarks for module using nose + + %(test_header)s''' + argv = self._test_argv(label, verbose, extra_argv) + argv += ['--match', r'(?:^|[\\b_\\.%s-])[Bb]ench' % os.sep] + nose.run(argv=argv) Added: trunk/numpy/testing/nulltester.py =================================================================== --- trunk/numpy/testing/nulltester.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/testing/nulltester.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -0,0 +1,15 @@ +''' Null tester to signal nose tests disabled + +Merely returns error reporting lack of nose package or version number +below requirements. + +See pkgtester, nosetester modules + +''' + +class NullTester(object): + def test(self, labels=None, *args, **kwargs): + raise ImportError, \ + 'Need nose >=0.10 for tests - see %s' % \ + 'http://somethingaboutorange.com/mrl/projects/nose' + bench = test Modified: trunk/numpy/testing/numpytest.py =================================================================== --- trunk/numpy/testing/numpytest.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/testing/numpytest.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -10,10 +10,7 @@ import warnings __all__ = ['set_package_path', 'set_local_path', 'restore_path', - 'IgnoreException', 'NumpyTestCase', 'NumpyTest', - 'ScipyTestCase', 'ScipyTest', # for backward compatibility - 'importall', - ] + 'IgnoreException', 'importall',] DEBUG=0 from numpy.testing.utils import jiffies @@ -113,96 +110,7 @@ self.stream.flush() -class NumpyTestCase (unittest.TestCase): - def measure(self,code_str,times=1): - """ Return elapsed time for executing code_str in the - namespace of the caller for given times. 
- """ - frame = get_frame(1) - locs,globs = frame.f_locals,frame.f_globals - code = compile(code_str, - 'NumpyTestCase runner for '+self.__class__.__name__, - 'exec') - i = 0 - elapsed = jiffies() - while i>sys.stderr,yellow_text('Warning: %s' % (message)) - sys.stderr.flush() - def info(self, message): - print>>sys.stdout, message - sys.stdout.flush() - - def rundocs(self, filename=None): - """ Run doc string tests found in filename. - """ - import doctest - if filename is None: - f = get_frame(1) - filename = f.f_globals['__file__'] - name = os.path.splitext(os.path.basename(filename))[0] - path = [os.path.dirname(filename)] - file, pathname, description = imp.find_module(name, path) - try: - m = imp.load_module(name, file, pathname, description) - finally: - file.close() - if sys.version[:3]<'2.4': - doctest.testmod(m, verbose=False) - else: - tests = doctest.DocTestFinder().find(m) - runner = doctest.DocTestRunner(verbose=False) - for test in tests: - runner.run(test) - return - -class ScipyTestCase(NumpyTestCase): - def __init__(self, package=None): - warnings.warn("ScipyTestCase is now called NumpyTestCase; please update your code", - DeprecationWarning, stacklevel=2) - NumpyTestCase.__init__(self, package) - - def _get_all_method_names(cls): names = dir(cls) if sys.version[:3]<='2.1': @@ -214,461 +122,7 @@ # for debug build--check for memory leaks during the test. -class _NumPyTextTestResult(unittest._TextTestResult): - def startTest(self, test): - unittest._TextTestResult.startTest(self, test) - if self.showAll: - N = len(sys.getobjects(0)) - self._totnumobj = N - self._totrefcnt = sys.gettotalrefcount() - return - def stopTest(self, test): - if self.showAll: - N = len(sys.getobjects(0)) - self.stream.write("objects: %d ===> %d; " % (self._totnumobj, N)) - self.stream.write("refcnts: %d ===> %d\n" % (self._totrefcnt, - sys.gettotalrefcount())) - return - -class NumPyTextTestRunner(unittest.TextTestRunner): - def _makeResult(self): - return _NumPyTextTestResult(self.stream, self.descriptions, self.verbosity) - - -class NumpyTest: - """ Numpy tests site manager. - - Usage: NumpyTest().test(level=1,verbosity=1) - - is package name or its module object. - - Package is supposed to contain a directory tests/ with test_*.py - files where * refers to the names of submodules. See .rename() - method to redefine name mapping between test_*.py files and names of - submodules. Pattern test_*.py can be overwritten by redefining - .get_testfile() method. - - test_*.py files are supposed to define a classes, derived from - NumpyTestCase or unittest.TestCase, with methods having names - starting with test or bench or check. The names of TestCase classes - must have a prefix test. This can be overwritten by redefining - .check_testcase_name() method. - - And that is it! No need to implement test or test_suite functions - in each .py file. - - Old-style test_suite(level=1) hooks are also supported. - """ - _check_testcase_name = re.compile(r'test.*|Test.*').match - def check_testcase_name(self, name): - """ Return True if name matches TestCase class. - """ - return not not self._check_testcase_name(name) - - testfile_patterns = ['test_%(modulename)s.py'] - def get_testfile(self, module, verbosity = 0): - """ Return path to module test file. 
- """ - mstr = self._module_str - short_module_name = self._get_short_module_name(module) - d = os.path.split(module.__file__)[0] - test_dir = os.path.join(d,'tests') - local_test_dir = os.path.join(os.getcwd(),'tests') - if os.path.basename(os.path.dirname(local_test_dir)) \ - == os.path.basename(os.path.dirname(test_dir)): - test_dir = local_test_dir - for pat in self.testfile_patterns: - fn = os.path.join(test_dir, pat % {'modulename':short_module_name}) - if os.path.isfile(fn): - return fn - if verbosity>1: - self.warn('No test file found in %s for module %s' \ - % (test_dir, mstr(module))) - return - - def __init__(self, package=None): - if package is None: - from numpy.distutils.misc_util import get_frame - f = get_frame(1) - package = f.f_locals.get('__name__',f.f_globals.get('__name__',None)) - assert package is not None - self.package = package - self._rename_map = {} - - def rename(self, **kws): - """Apply renaming submodule test file test_.py to - test_.py. - - Usage: self.rename(name='newname') before calling the - self.test() method. - - If 'newname' is None, then no tests will be executed for a given - module. - """ - for k,v in kws.items(): - self._rename_map[k] = v - return - - def _module_str(self, module): - filename = module.__file__[-30:] - if filename!=module.__file__: - filename = '...'+filename - return '' % (module.__name__, filename) - - def _get_method_names(self,clsobj,level): - names = [] - for mthname in _get_all_method_names(clsobj): - if mthname[:5] not in ['bench','check'] \ - and mthname[:4] not in ['test']: - continue - mth = getattr(clsobj, mthname) - if type(mth) is not types.MethodType: - continue - d = mth.im_func.func_defaults - if d is not None: - mthlevel = d[0] - else: - mthlevel = 1 - if level>=mthlevel: - if mthname not in names: - names.append(mthname) - for base in clsobj.__bases__: - for n in self._get_method_names(base,level): - if n not in names: - names.append(n) - return names - - def _get_short_module_name(self, module): - d,f = os.path.split(module.__file__) - short_module_name = os.path.splitext(os.path.basename(f))[0] - if short_module_name=='__init__': - short_module_name = module.__name__.split('.')[-1] - short_module_name = self._rename_map.get(short_module_name,short_module_name) - return short_module_name - - def _get_module_tests(self, module, level, verbosity): - mstr = self._module_str - - short_module_name = self._get_short_module_name(module) - if short_module_name is None: - return [] - - test_file = self.get_testfile(module, verbosity) - - if test_file is None: - return [] - - if not os.path.isfile(test_file): - if short_module_name[:5]=='info_' \ - and short_module_name[5:]==module.__name__.split('.')[-2]: - return [] - if short_module_name in ['__cvs_version__','__svn_version__']: - return [] - if short_module_name[-8:]=='_version' \ - and short_module_name[:-8]==module.__name__.split('.')[-2]: - return [] - if verbosity>1: - self.warn(test_file) - self.warn(' !! 
No test file %r found for %s' \ - % (os.path.basename(test_file), mstr(module))) - return [] - - if test_file in self.test_files: - return [] - - parent_module_name = '.'.join(module.__name__.split('.')[:-1]) - test_module_name,ext = os.path.splitext(os.path.basename(test_file)) - test_dir_module = parent_module_name+'.tests' - test_module_name = test_dir_module+'.'+test_module_name - - if test_dir_module not in sys.modules: - sys.modules[test_dir_module] = imp.new_module(test_dir_module) - - old_sys_path = sys.path[:] - try: - f = open(test_file,'r') - test_module = imp.load_module(test_module_name, f, - test_file, ('.py', 'r', 1)) - f.close() - except: - sys.path[:] = old_sys_path - self.warn('FAILURE importing tests for %s' % (mstr(module))) - output_exception(sys.stderr) - return [] - sys.path[:] = old_sys_path - - self.test_files.append(test_file) - - return self._get_suite_list(test_module, level, module.__name__) - - def _get_suite_list(self, test_module, level, module_name='__main__', - verbosity=1): - suite_list = [] - if hasattr(test_module, 'test_suite'): - suite_list.extend(test_module.test_suite(level)._tests) - for name in dir(test_module): - obj = getattr(test_module, name) - if type(obj) is not type(unittest.TestCase) \ - or not issubclass(obj, unittest.TestCase) \ - or not self.check_testcase_name(obj.__name__): - continue - for mthname in self._get_method_names(obj,level): - suite = obj(mthname) - if getattr(suite,'isrunnable',lambda mthname:1)(mthname): - suite_list.append(suite) - matched_suite_list = [suite for suite in suite_list \ - if self.testcase_match(suite.id()\ - .replace('__main__.',''))] - if verbosity>=0: - self.info(' Found %s/%s tests for %s' \ - % (len(matched_suite_list), len(suite_list), module_name)) - return matched_suite_list - - def _test_suite_from_modules(self, this_package, level, verbosity): - package_name = this_package.__name__ - modules = [] - for name, module in sys.modules.items(): - if not name.startswith(package_name) or module is None: - continue - if not hasattr(module,'__file__'): - continue - if os.path.basename(os.path.dirname(module.__file__))=='tests': - continue - modules.append((name, module)) - - modules.sort() - modules = [m[1] for m in modules] - - self.test_files = [] - suites = [] - for module in modules: - suites.extend(self._get_module_tests(module, abs(level), verbosity)) - - suites.extend(self._get_suite_list(sys.modules[package_name], - abs(level), verbosity=verbosity)) - return unittest.TestSuite(suites) - - def _test_suite_from_all_tests(self, this_package, level, verbosity): - importall(this_package) - package_name = this_package.__name__ - - # Find all tests/ directories under the package - test_dirs_names = {} - for name, module in sys.modules.items(): - if not name.startswith(package_name) or module is None: - continue - if not hasattr(module, '__file__'): - continue - d = os.path.dirname(module.__file__) - if os.path.basename(d)=='tests': - continue - d = os.path.join(d, 'tests') - if not os.path.isdir(d): - continue - if d in test_dirs_names: - continue - test_dir_module = '.'.join(name.split('.')[:-1]+['tests']) - test_dirs_names[d] = test_dir_module - - test_dirs = test_dirs_names.keys() - test_dirs.sort() - - # For each file in each tests/ directory with a test case in it, - # import the file, and add the test cases to our list - suite_list = [] - testcase_match = re.compile(r'\s*class\s+\w+\s*\(.*TestCase').match - for test_dir in test_dirs: - test_dir_module = test_dirs_names[test_dir] - - if 
test_dir_module not in sys.modules: - sys.modules[test_dir_module] = imp.new_module(test_dir_module) - - for fn in os.listdir(test_dir): - base, ext = os.path.splitext(fn) - if ext != '.py': - continue - f = os.path.join(test_dir, fn) - - # check that file contains TestCase class definitions: - fid = open(f, 'r') - skip = True - for line in fid: - if testcase_match(line): - skip = False - break - fid.close() - if skip: - continue - - # import the test file - n = test_dir_module + '.' + base - # in case test files import local modules - sys.path.insert(0, test_dir) - fo = None - try: - try: - fo = open(f) - test_module = imp.load_module(n, fo, f, - ('.py', 'U', 1)) - except Exception, msg: - print 'Failed importing %s: %s' % (f,msg) - continue - finally: - if fo: - fo.close() - del sys.path[0] - - suites = self._get_suite_list(test_module, level, - module_name=n, - verbosity=verbosity) - suite_list.extend(suites) - - all_tests = unittest.TestSuite(suite_list) - return all_tests - - def test(self, level=1, verbosity=1, all=True, sys_argv=[], - testcase_pattern='.*'): - """Run Numpy module test suite with level and verbosity. - - level: - None --- do nothing, return None - < 0 --- scan for tests of level=abs(level), - don't run them, return TestSuite-list - > 0 --- scan for tests of level, run them, - return TestRunner - > 10 --- run all tests (same as specifying all=True). - (backward compatibility). - - verbosity: - >= 0 --- show information messages - > 1 --- show warnings on missing tests - - all: - True --- run all test files (like self.testall()) - False (default) --- only run test files associated with a module - - sys_argv --- replacement of sys.argv[1:] during running - tests. - - testcase_pattern --- run only tests that match given pattern. - - It is assumed (when all=False) that package tests suite follows - the following convention: for each package module, there exists - file /tests/test_.py that defines - TestCase classes (with names having prefix 'test_') with methods - (with names having prefixes 'check_' or 'bench_'); each of these - methods are called when running unit tests. - """ - if level is None: # Do nothing. - return - - if isinstance(self.package, str): - exec 'import %s as this_package' % (self.package) - else: - this_package = self.package - - self.testcase_match = re.compile(testcase_pattern).match - - if all: - all_tests = self._test_suite_from_all_tests(this_package, - level, verbosity) - else: - all_tests = self._test_suite_from_modules(this_package, - level, verbosity) - - if level < 0: - return all_tests - - runner = unittest.TextTestRunner(verbosity=verbosity) - old_sys_argv = sys.argv[1:] - sys.argv[1:] = sys_argv - # Use the builtin displayhook. If the tests are being run - # under IPython (for instance), any doctest test suites will - # fail otherwise. - old_displayhook = sys.displayhook - sys.displayhook = sys.__displayhook__ - try: - r = runner.run(all_tests) - finally: - sys.displayhook = old_displayhook - sys.argv[1:] = old_sys_argv - return r - - def testall(self, level=1,verbosity=1): - """ Run Numpy module test suite with level and verbosity. - - level: - None --- do nothing, return None - < 0 --- scan for tests of level=abs(level), - don't run them, return TestSuite-list - > 0 --- scan for tests of level, run them, - return TestRunner - - verbosity: - >= 0 --- show information messages - > 1 --- show warnings on missing tests - - Different from .test(..) 
method, this method looks for - TestCase classes from all files in /tests/ - directory and no assumptions are made for naming the - TestCase classes or their methods. - """ - return self.test(level=level, verbosity=verbosity, all=True) - - def run(self): - """ Run Numpy module test suite with level and verbosity - taken from sys.argv. Requires optparse module. - """ - try: - from optparse import OptionParser - except ImportError: - self.warn('Failed to import optparse module, ignoring.') - return self.test() - usage = r'usage: %prog [-v ] [-l ]'\ - r' [-s ""]'\ - r' [-t ""]' - parser = OptionParser(usage) - parser.add_option("-v", "--verbosity", - action="store", - dest="verbosity", - default=1, - type='int') - parser.add_option("-l", "--level", - action="store", - dest="level", - default=1, - type='int') - parser.add_option("-s", "--sys-argv", - action="store", - dest="sys_argv", - default='', - type='string') - parser.add_option("-t", "--testcase-pattern", - action="store", - dest="testcase_pattern", - default=r'.*', - type='string') - (options, args) = parser.parse_args() - return self.test(options.level,options.verbosity, - sys_argv=shlex.split(options.sys_argv or ''), - testcase_pattern=options.testcase_pattern) - - def warn(self, message): - from numpy.distutils.misc_util import yellow_text - print>>sys.stderr,yellow_text('Warning: %s' % (message)) - sys.stderr.flush() - def info(self, message): - print>>sys.stdout, message - sys.stdout.flush() - -class ScipyTest(NumpyTest): - def __init__(self, package=None): - warnings.warn("ScipyTest is now called NumpyTest; please update your code", - DeprecationWarning, stacklevel=2) - NumpyTest.__init__(self, package) - - def importall(package): """ Try recursively to import all subpackages under package. Deleted: trunk/numpy/testing/parametric.py =================================================================== --- trunk/numpy/testing/parametric.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/testing/parametric.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -1,300 +0,0 @@ -"""Support for parametric tests in unittest. - -:Author: Fernando Perez - -Purpose -======= - -Briefly, the main class in this module allows you to easily and cleanly -(without the gross name-mangling hacks that are normally needed) to write -unittest TestCase classes that have parametrized tests. That is, tests which -consist of multiple sub-tests that scan for example a parameter range, but -where you want each sub-test to: - -* count as a separate test in the statistics. - -* be run even if others in the group error out or fail. - - -The class offers a simple name-based convention to create such tests (see -simple example at the end), in one of two ways: - -* Each sub-test in a group can be run fully independently, with the - setUp/tearDown methods being called each time. - -* The whole group can be run with setUp/tearDown being called only once for the - group. This lets you conveniently reuse state that may be very expensive to - compute for multiple tests. Be careful not to corrupt it!!! - - -Caveats -======= - -This code relies on implementation details of the unittest module (some key -methods are heavily modified versions of those, after copying them in). So it -may well break either if you make sophisticated use of the unittest APIs, or if -unittest itself changes in the future. I have only tested this with Python -2.5. 
- -""" -__docformat__ = "restructuredtext en" - -import unittest - -class ParametricTestCase(unittest.TestCase): - """TestCase subclass with support for parametric tests. - - Subclasses of this class can implement test methods that return a list of - tests and arguments to call those with, to do parametric testing (often - also called 'data driven' testing.""" - - #: Prefix for tests with independent state. These methods will be run with - #: a separate setUp/tearDown call for each test in the group. - _indepParTestPrefix = 'testip' - - #: Prefix for tests with shared state. These methods will be run with - #: a single setUp/tearDown call for the whole group. This is useful when - #: writing a group of tests for which the setup is expensive and one wants - #: to actually share that state. Use with care (especially be careful not - #: to mutate the state you are using, which will alter later tests). - _shareParTestPrefix = 'testsp' - - def exec_test(self,test,args,result): - """Execute a single test. Returns a success boolean""" - - ok = False - try: - test(*args) - ok = True - except self.failureException: - result.addFailure(self, self._exc_info()) - except KeyboardInterrupt: - raise - except: - result.addError(self, self._exc_info()) - - return ok - - def set_testMethodDoc(self,doc): - self._testMethodDoc = doc - self._TestCase__testMethodDoc = doc - - def get_testMethodDoc(self): - return self._testMethodDoc - - testMethodDoc = property(fset=set_testMethodDoc, fget=get_testMethodDoc) - - def get_testMethodName(self): - try: - return getattr(self,"_testMethodName") - except: - return getattr(self,"_TestCase__testMethodName") - - testMethodName = property(fget=get_testMethodName) - - def run_test(self, testInfo,result): - """Run one test with arguments""" - - test,args = testInfo[0],testInfo[1:] - - # Reset the doc attribute to be the docstring of this particular test, - # so that in error messages it prints the actual test's docstring and - # not that of the test factory. - self.testMethodDoc = test.__doc__ - result.startTest(self) - try: - try: - self.setUp() - except KeyboardInterrupt: - raise - except: - result.addError(self, self._exc_info()) - return - - ok = self.exec_test(test,args,result) - - try: - self.tearDown() - except KeyboardInterrupt: - raise - except: - result.addError(self, self._exc_info()) - ok = False - if ok: result.addSuccess(self) - finally: - result.stopTest(self) - - def run_tests(self, tests,result): - """Run many tests with a common setUp/tearDown. - - The entire set of tests is run with a single setUp/tearDown call.""" - - try: - self.setUp() - except KeyboardInterrupt: - raise - except: - result.testsRun += 1 - result.addError(self, self._exc_info()) - return - - saved_doc = self.testMethodDoc - - try: - # Run all the tests specified - for testInfo in tests: - test,args = testInfo[0],testInfo[1:] - - # Set the doc argument for this test. Note that even if we do - # this, the fail/error tracebacks still print the docstring for - # the parent factory, because they only generate the message at - # the end of the run, AFTER we've restored it. There is no way - # to tell the unittest system (without overriding a lot of - # stuff) to extract this information right away, the logic is - # hardcoded to pull it later, since unittest assumes it doesn't - # change. - self.testMethodDoc = test.__doc__ - result.startTest(self) - ok = self.exec_test(test,args,result) - if ok: result.addSuccess(self) - - finally: - # Restore docstring info and run tearDown once only. 
- self.testMethodDoc = saved_doc - try: - self.tearDown() - except KeyboardInterrupt: - raise - except: - result.addError(self, self._exc_info()) - - def run(self, result=None): - """Test runner.""" - - #print - #print '*** run for method:',self._testMethodName # dbg - #print '*** doc:',self._testMethodDoc # dbg - - if result is None: result = self.defaultTestResult() - - # Independent tests: each gets its own setup/teardown - if self.testMethodName.startswith(self._indepParTestPrefix): - for t in getattr(self,self.testMethodName)(): - self.run_test(t,result) - # Shared-state test: single setup/teardown for all - elif self.testMethodName.startswith(self._shareParTestPrefix): - tests = getattr(self,self.testMethodName,'runTest')() - self.run_tests(tests,result) - # Normal unittest Test methods - else: - unittest.TestCase.run(self,result) - -############################################################################# -# Quick and dirty interactive example/test -if __name__ == '__main__': - - class ExampleTestCase(ParametricTestCase): - - #------------------------------------------------------------------- - # An instrumented setUp method so we can see when it gets called and - # how many times per instance - counter = 0 - - def setUp(self): - self.counter += 1 - print 'setUp count: %2s for: %s' % (self.counter, - self.testMethodDoc) - - #------------------------------------------------------------------- - # A standard test method, just like in the unittest docs. - def test_foo(self): - """Normal test for feature foo.""" - pass - - #------------------------------------------------------------------- - # Testing methods that need parameters. These can NOT be named test*, - # since they would be picked up by unittest and called without - # arguments. Instead, call them anything else (I use tst*) and then - # load them via the factories below. - def tstX(self,i): - "Test feature X with parameters." - print 'tstX, i=',i - if i==1 or i==3: - # Test fails - self.fail('i is bad, bad: %s' % i) - - def tstY(self,i): - "Test feature Y with parameters." - print 'tstY, i=',i - if i==1: - # Force an error - 1/0 - - def tstXX(self,i,j): - "Test feature XX with parameters." - print 'tstXX, i=',i,'j=',j - if i==1: - # Test fails - self.fail('i is bad, bad: %s' % i) - - def tstYY(self,i): - "Test feature YY with parameters." - print 'tstYY, i=',i - if i==2: - # Force an error - 1/0 - - def tstZZ(self): - """Test feature ZZ without parameters, needs multiple runs. - - This could be a random test that you want to run multiple times.""" - pass - - #------------------------------------------------------------------- - # Parametric test factories that create the test groups to call the - # above tst* methods with their required arguments. - def testip(self): - """Independent parametric test factory. - - A separate setUp() call is made for each test returned by this - method. - - You must return an iterable (list or generator is fine) containing - tuples with the actual method to be called as the first argument, - and the arguments for that call later.""" - return [(self.tstX,i) for i in range(5)] - - def testip2(self): - """Another independent parametric test factory""" - return [(self.tstY,i) for i in range(5)] - - def testip3(self): - """Test factory combining different subtests. 
- - This one shows how to assemble calls to different tests.""" - return [(self.tstX,3),(self.tstX,9),(self.tstXX,4,10), - (self.tstZZ,),(self.tstZZ,)] - - def testsp(self): - """Shared parametric test factory - - A single setUp() call is made for all the tests returned by this - method. - """ - return [(self.tstXX,i,i+1) for i in range(5)] - - def testsp2(self): - """Another shared parametric test factory""" - return [(self.tstYY,i) for i in range(5)] - - def testsp3(self): - """Another shared parametric test factory. - - This one simply calls the same test multiple times, without any - arguments. Note that you must still return tuples, even if there - are no arguments.""" - return [(self.tstZZ,) for i in range(10)] - - - # This test class runs normally under unittest's default runner - unittest.main() Added: trunk/numpy/testing/pkgtester.py =================================================================== --- trunk/numpy/testing/pkgtester.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/testing/pkgtester.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -0,0 +1,27 @@ +''' Define test function for scipy package + +Module tests for presence of useful version of nose. If present +returns NoseTester, otherwise returns a placeholder test routine +reporting lack of nose and inability to run tests. Typical use is in +module __init__: + +from scipy.testing.pkgtester import Tester +test = Tester().test + +See nosetester module for test implementation + +''' +fine_nose = True +try: + import nose +except ImportError: + fine_nose = False +else: + nose_version = nose.__versioninfo__ + if nose_version[0] < 1 and nose_version[1] < 10: + fine_nose = False + +if fine_nose: + from numpy.testing.nosetester import NoseTester as Tester +else: + from numpy.testing.nulltester import NullTester as Tester Modified: trunk/numpy/testing/tests/test_utils.py =================================================================== --- trunk/numpy/testing/tests/test_utils.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/testing/tests/test_utils.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -1,9 +1,7 @@ import numpy as N -from numpy.testing.utils import * - +from numpy.testing import * import unittest - class _GenericTest(object): def _test_equal(self, a, b): self._assert_func(a, b) @@ -163,5 +161,6 @@ else: raise AssertionError("should have raised an AssertionError") + if __name__ == '__main__': - unittest.main() + nose.run(argv=['', __file__]) Modified: trunk/numpy/testing/utils.py =================================================================== --- trunk/numpy/testing/utils.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/testing/utils.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -10,8 +10,8 @@ __all__ = ['assert_equal', 'assert_almost_equal','assert_approx_equal', 'assert_array_equal', 'assert_array_less', 'assert_string_equal', - 'assert_array_almost_equal', 'jiffies', 'memusage', 'rand', - 'runstring', 'raises'] + 'assert_array_almost_equal', 'build_err_msg', 'jiffies', 'memusage', + 'rand', 'rundocs', 'runstring'] def rand(*args): """Returns an array of random numbers with the given shape. @@ -295,32 +295,25 @@ assert actual==desired, msg -def raises(*exceptions): - """ Assert that a test function raises one of the specified exceptions to - pass. +def rundocs(filename=None): + """ Run doc string tests found in filename. """ - # FIXME: when we transition to nose, just use its implementation. It's - # better. 
- def deco(function): - def f2(*args, **kwds): - try: - function(*args, **kwds) - except exceptions: - pass - except: - # Anything else. - raise - else: - raise AssertionError('%s() did not raise one of (%s)' % - (function.__name__, ', '.join([e.__name__ for e in exceptions]))) - try: - f2.__name__ = function.__name__ - except TypeError: - # Python 2.3 does not permit this. - pass - f2.__dict__ = function.__dict__ - f2.__doc__ = function.__doc__ - f2.__module__ = function.__module__ - return f2 - - return deco + import doctest, imp + if filename is None: + f = sys._getframe(1) + filename = f.f_globals['__file__'] + name = os.path.splitext(os.path.basename(filename))[0] + path = [os.path.dirname(filename)] + file, pathname, description = imp.find_module(name, path) + try: + m = imp.load_module(name, file, pathname, description) + finally: + file.close() + if sys.version[:3]<'2.4': + doctest.testmod(m, verbose=False) + else: + tests = doctest.DocTestFinder().find(m) + runner = doctest.DocTestRunner(verbose=False) + for test in tests: + runner.run(test) + return Modified: trunk/numpy/tests/test_ctypeslib.py =================================================================== --- trunk/numpy/tests/test_ctypeslib.py 2008-06-17 00:11:02 UTC (rev 5286) +++ trunk/numpy/tests/test_ctypeslib.py 2008-06-17 00:23:20 UTC (rev 5287) @@ -2,8 +2,8 @@ from numpy.ctypeslib import ndpointer, load_library from numpy.testing import * -class TestLoadLibrary(NumpyTestCase): - def check_basic(self): +class TestLoadLibrary(TestCase): + def test_basic(self): try: cdll = load_library('multiarray', np.core.multiarray.__file__) @@ -12,7 +12,7 @@ " (import error was: %s)" % str(e) print msg - def check_basic2(self): + def test_basic2(self): """Regression for #801: load_library with a full library name (including extension) does not work.""" try: @@ -28,8 +28,8 @@ " (import error was: %s)" % str(e) print msg -class TestNdpointer(NumpyTestCase): - def check_dtype(self): +class TestNdpointer(TestCase): + def test_dtype(self): dt = np.intc p = ndpointer(dtype=dt) self.assert_(p.from_param(np.array([1], dt))) @@ -56,7 +56,7 @@ else: self.assert_(p.from_param(np.zeros((10,), dt2))) - def check_ndim(self): + def test_ndim(self): p = ndpointer(ndim=0) self.assert_(p.from_param(np.array(1))) self.assertRaises(TypeError, p.from_param, np.array([1])) @@ -66,14 +66,14 @@ p = ndpointer(ndim=2) self.assert_(p.from_param(np.array([[1]]))) - def check_shape(self): + def test_shape(self): p = ndpointer(shape=(1,2)) self.assert_(p.from_param(np.array([[1,2]]))) self.assertRaises(TypeError, p.from_param, np.array([[1],[2]])) p = ndpointer(shape=()) self.assert_(p.from_param(np.array(1))) - def check_flags(self): + def test_flags(self): x = np.array([[1,2,3]], order='F') p = ndpointer(flags='FORTRAN') self.assert_(p.from_param(x)) @@ -83,5 +83,6 @@ self.assert_(p.from_param(x)) self.assertRaises(TypeError, p.from_param, np.array([[1,2,3]])) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) From numpy-svn at scipy.org Mon Jun 16 21:11:44 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 16 Jun 2008 20:11:44 -0500 (CDT) Subject: [Numpy-svn] r5288 - trunk/numpy Message-ID: <20080617011144.1B8E639C67A@scipy.org> Author: rkern Date: 2008-06-16 20:11:43 -0500 (Mon, 16 Jun 2008) New Revision: 5288 Modified: trunk/numpy/_import_tools.py Log: When using PackageLoader, do not add subpackage names to __all__. 
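For orientation, a rough sketch of the behaviour this commit changes (assuming the NumPy of this era, where ``numpy.pkgload`` is the public front-end to ``PackageLoader``; the ``random`` subpackage is used purely for illustration)::

    import numpy as np

    # pkgload is the PackageLoader front-end exposed by numpy.__init__
    np.pkgload('random')
    # r5288: when a subpackage's info module requests postpone_import, the
    # loader now simply skips it -- the name is no longer appended to the
    # calling module's __all__ via parent_export_names (see the removed
    # lines in the diff below).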
Modified: trunk/numpy/_import_tools.py =================================================================== --- trunk/numpy/_import_tools.py 2008-06-17 00:23:20 UTC (rev 5287) +++ trunk/numpy/_import_tools.py 2008-06-17 01:11:43 UTC (rev 5288) @@ -152,7 +152,7 @@ Parameters ---------- - *packges : arg-tuple + *packages : arg-tuple the names (one or more strings) of all the modules one wishes to load into the top-level namespace. verbose= : integer @@ -183,9 +183,6 @@ postpone_import = getattr(info_module,'postpone_import',False) if (postpone and not global_symbols) \ or (postpone_import and postpone is not None): - self.log('__all__.append(%r)' % (package_name)) - if '.' not in package_name: - self.parent_export_names.append(package_name) continue old_object = frame.f_locals.get(package_name,None) From numpy-svn at scipy.org Mon Jun 16 22:17:39 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 16 Jun 2008 21:17:39 -0500 (CDT) Subject: [Numpy-svn] r5289 - in trunk: . numpy/testing Message-ID: <20080617021739.1BAD639C63D@scipy.org> Author: alan.mcintyre Date: 2008-06-16 21:17:34 -0500 (Mon, 16 Jun 2008) New Revision: 5289 Modified: trunk/README.txt trunk/numpy/testing/__init__.py trunk/numpy/testing/decorators.py trunk/numpy/testing/nosetester.py trunk/numpy/testing/pkgtester.py trunk/numpy/testing/utils.py Log: Update README.txt to indicate nose version dependency, and port SciPy r4424 to NumPy (prevent import of nose until actual execution of tests). Restored "raises" function to numpy/testing/utils.py until it can be replaced with the function of the same name from nose.tools after the lazy import. Modified: trunk/README.txt =================================================================== --- trunk/README.txt 2008-06-17 01:11:43 UTC (rev 5288) +++ trunk/README.txt 2008-06-17 02:17:34 UTC (rev 5289) @@ -13,8 +13,9 @@ python -c 'import numpy; numpy.test()' -Please note that you must have the 'nose' test framework installed in order to -run the tests. More information about nose is available here: +Please note that you must have version 0.10 or later of the 'nose' test +framework installed in order to run the tests. More information about nose is +available here: http://somethingaboutorange.com/mrl/projects/nose/ Modified: trunk/numpy/testing/__init__.py =================================================================== --- trunk/numpy/testing/__init__.py 2008-06-17 01:11:43 UTC (rev 5288) +++ trunk/numpy/testing/__init__.py 2008-06-17 02:17:34 UTC (rev 5289) @@ -10,14 +10,6 @@ import decorators as dec from utils import * - -try: - import nose - from nose.tools import raises -except ImportError: - pass - from numpytest import * - from pkgtester import Tester test = Tester().test Modified: trunk/numpy/testing/decorators.py =================================================================== --- trunk/numpy/testing/decorators.py 2008-06-17 01:11:43 UTC (rev 5288) +++ trunk/numpy/testing/decorators.py 2008-06-17 02:17:34 UTC (rev 5289) @@ -10,11 +10,6 @@ """ -try: - import nose -except ImportError: - pass - def slow(t): """Labels a test as 'slow'. @@ -76,6 +71,9 @@ if msg is None: msg = 'Test skipped due to test condition' def skip_decorator(f): + # Local import to avoid a hard nose dependency and only incur the + # import time overhead at actual test-time. 
+ import nose def skipper(*args, **kwargs): if skip_condition: raise nose.SkipTest, msg @@ -87,6 +85,9 @@ def skipknownfailure(f): ''' Decorator to raise SkipTest for test known to fail ''' + # Local import to avoid a hard nose dependency and only incur the + # import time overhead at actual test-time. + import nose def skipper(*args, **kwargs): raise nose.SkipTest, 'This test is known to fail' return nose.tools.make_decorator(f)(skipper) Modified: trunk/numpy/testing/nosetester.py =================================================================== --- trunk/numpy/testing/nosetester.py 2008-06-17 01:11:43 UTC (rev 5288) +++ trunk/numpy/testing/nosetester.py 2008-06-17 02:17:34 UTC (rev 5289) @@ -7,8 +7,26 @@ import sys import re -import nose +def import_nose(): + """ Import nose only when needed. + """ + fine_nose = True + try: + import nose + from nose.tools import raises + except ImportError: + fine_nose = False + else: + nose_version = nose.__versioninfo__ + if nose_version[0] < 1 and nose_version[1] < 10: + fine_nose = False + if not fine_nose: + raise ImportError('Need nose >=0.10 for tests - see ' + 'http://somethingaboutorange.com/mrl/projects/nose') + + return nose + class NoseTester(object): """ Nose test runner. @@ -113,6 +131,7 @@ doctests : boolean If True, run doctests in module, default False ''' + nose = import_nose() argv = self._test_argv(label, verbose, extra_argv) if doctests: argv+=['--with-doctest','--doctest-tests'] @@ -135,6 +154,7 @@ ''' Run benchmarks for module using nose %(test_header)s''' + nose = import_nose() argv = self._test_argv(label, verbose, extra_argv) argv += ['--match', r'(?:^|[\\b_\\.%s-])[Bb]ench' % os.sep] nose.run(argv=argv) Modified: trunk/numpy/testing/pkgtester.py =================================================================== --- trunk/numpy/testing/pkgtester.py 2008-06-17 01:11:43 UTC (rev 5288) +++ trunk/numpy/testing/pkgtester.py 2008-06-17 02:17:34 UTC (rev 5289) @@ -11,17 +11,4 @@ See nosetester module for test implementation ''' -fine_nose = True -try: - import nose -except ImportError: - fine_nose = False -else: - nose_version = nose.__versioninfo__ - if nose_version[0] < 1 and nose_version[1] < 10: - fine_nose = False - -if fine_nose: - from numpy.testing.nosetester import NoseTester as Tester -else: - from numpy.testing.nulltester import NullTester as Tester +from numpy.testing.nosetester import NoseTester as Tester Modified: trunk/numpy/testing/utils.py =================================================================== --- trunk/numpy/testing/utils.py 2008-06-17 01:11:43 UTC (rev 5288) +++ trunk/numpy/testing/utils.py 2008-06-17 02:17:34 UTC (rev 5289) @@ -11,7 +11,7 @@ __all__ = ['assert_equal', 'assert_almost_equal','assert_approx_equal', 'assert_array_equal', 'assert_array_less', 'assert_string_equal', 'assert_array_almost_equal', 'build_err_msg', 'jiffies', 'memusage', - 'rand', 'rundocs', 'runstring'] + 'raises', 'rand', 'rundocs', 'runstring'] def rand(*args): """Returns an array of random numbers with the given shape. @@ -317,3 +317,34 @@ for test in tests: runner.run(test) return + + +def raises(*exceptions): + """ Assert that a test function raises one of the specified exceptions to + pass. + """ + # FIXME: when we transition to nose, just use its implementation. It's + # better. + def deco(function): + def f2(*args, **kwds): + try: + function(*args, **kwds) + except exceptions: + pass + except: + # Anything else. 
+ raise + else: + raise AssertionError('%s() did not raise one of (%s)' % + (function.__name__, ', '.join([e.__name__ for e in exceptions]))) + try: + f2.__name__ = function.__name__ + except TypeError: + # Python 2.3 does not permit this. + pass + f2.__dict__ = function.__dict__ + f2.__doc__ = function.__doc__ + f2.__module__ = function.__module__ + return f2 + + return deco From numpy-svn at scipy.org Tue Jun 17 09:06:22 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 17 Jun 2008 08:06:22 -0500 (CDT) Subject: [Numpy-svn] r5290 - trunk/numpy/doc Message-ID: <20080617130622.3074839C8B8@scipy.org> Author: stefan Date: 2008-06-17 08:06:08 -0500 (Tue, 17 Jun 2008) New Revision: 5290 Modified: trunk/numpy/doc/HOWTO_DOCUMENT.txt Log: Update documentation standard. Modified: trunk/numpy/doc/HOWTO_DOCUMENT.txt =================================================================== --- trunk/numpy/doc/HOWTO_DOCUMENT.txt 2008-06-17 02:17:34 UTC (rev 5289) +++ trunk/numpy/doc/HOWTO_DOCUMENT.txt 2008-06-17 13:06:08 UTC (rev 5290) @@ -339,7 +339,29 @@ `_ is available. +Documenting classes +------------------- +Class docstring +``````````````` +Use the same sections as outlined above (all except ``Returns`` are +applicable). The constructor (``__init__``) should also be documented +here. + +An ``Attributes`` section may be used to describe class variables:: + + Attributes + ---------- + x : float + The X coordinate. + y : float + The Y coordinate. + +Method docstrings +````````````````` +Document these as you would any other function. Do not include +``self`` in the list of parameters. + Common reST concepts -------------------- For paragraphs, indentation is significant and indicates indentation in the From numpy-svn at scipy.org Tue Jun 17 16:08:29 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 17 Jun 2008 15:08:29 -0500 (CDT) Subject: [Numpy-svn] r5291 - in trunk/numpy/lib: . tests Message-ID: <20080617200829.8627839C226@scipy.org> Author: oliphant Date: 2008-06-17 15:08:28 -0500 (Tue, 17 Jun 2008) New Revision: 5291 Modified: trunk/numpy/lib/function_base.py trunk/numpy/lib/tests/test_function_base.py Log: Fix piecewise to handle 0-d inputs. 
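Concretely, the fixed behaviour can be exercised as follows (a minimal sketch mirroring the regression test added below; only ``import numpy as np`` is assumed)::

    import numpy as np

    x = np.array(3)                      # 0-d input
    y = np.piecewise(x, x > 3, [4, 0])   # condition is a 0-d boolean array
    # The condition is False, so the second ("otherwise") value applies,
    # and the result keeps the 0-d shape of the input.
    assert y.ndim == 0 and y == 0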
Modified: trunk/numpy/lib/function_base.py =================================================================== --- trunk/numpy/lib/function_base.py 2008-06-17 13:06:08 UTC (rev 5290) +++ trunk/numpy/lib/function_base.py 2008-06-17 20:08:28 UTC (rev 5291) @@ -574,13 +574,32 @@ n += 1 if (n != n2): raise ValueError, "function list and condition list must be the same" + zerod = False + # This is a hack to work around problems with NumPy's + # handling of 0-d arrays and boolean indexing with + # numpy.bool_ scalars + if x.ndim == 0: + x = x[None] + zerod = True + newcondlist = [] + for k in range(n): + if condlist[k].ndim == 0: + condition = condlist[k][None] + else: + condition = condlist[k] + newcondlist.append(condition) + condlist = newcondlist y = empty(x.shape, x.dtype) for k in range(n): item = funclist[k] if not callable(item): y[condlist[k]] = item else: - y[condlist[k]] = item(x[condlist[k]], *args, **kw) + vals = x[condlist[k]] + if vals.size > 0: + y[condlist[k]] = item(vals, *args, **kw) + if zerod: + y = y.squeeze() return y def select(condlist, choicelist, default=0): Modified: trunk/numpy/lib/tests/test_function_base.py =================================================================== --- trunk/numpy/lib/tests/test_function_base.py 2008-06-17 13:06:08 UTC (rev 5290) +++ trunk/numpy/lib/tests/test_function_base.py 2008-06-17 20:08:28 UTC (rev 5291) @@ -616,5 +616,12 @@ for i in range(len(desired)): assert_array_equal(res[i],desired[i]) +class TestPiecewise(TestCase): + def test_0d(self): + x = array(3) + y = piecewise(x, x>3, [4, 0]) + assert y.ndim == 0 + assert y == 0 + if __name__ == "__main__": nose.run(argv=['', __file__]) From numpy-svn at scipy.org Tue Jun 17 18:54:08 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 17 Jun 2008 17:54:08 -0500 (CDT) Subject: [Numpy-svn] r5292 - in trunk/numpy/ma: . tests Message-ID: <20080617225408.79A44C7C00B@scipy.org> Author: pierregm Date: 2008-06-17 17:54:05 -0500 (Tue, 17 Jun 2008) New Revision: 5292 Modified: trunk/numpy/ma/core.py trunk/numpy/ma/mrecords.py trunk/numpy/ma/tests/test_core.py Log: fixed dictionary update for compatibility with Python 2.3 Modified: trunk/numpy/ma/core.py =================================================================== --- trunk/numpy/ma/core.py 2008-06-17 20:08:28 UTC (rev 5291) +++ trunk/numpy/ma/core.py 2008-06-17 22:54:05 UTC (rev 5292) @@ -110,13 +110,14 @@ 'V' : '???', } max_filler = ntypes._minvals -max_filler.update([(k, -np.inf) for k in [np.float32, np.float64]]) +max_filler.update(dict([(k, -np.inf) for k in [np.float32, np.float64]])) min_filler = ntypes._maxvals -min_filler.update([(k, +np.inf) for k in [np.float32, np.float64]]) +min_filler.update(dict([(k, +np.inf) for k in [np.float32, np.float64]])) if 'float128' in ntypes.typeDict: - max_filler.update([(np.float128, -np.inf)]) - min_filler.update([(np.float128, +np.inf)]) + max_filler[np.float128] = -np.inf + min_filler[np.float128] = +np.inf + def default_fill_value(obj): """Calculate the default fill value for the argument object. 
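The portability issue behind this commit: ``dict.update`` only accepts an iterable of key/value pairs (or keyword arguments) from Python 2.4 onwards, so the patch spells those updates through an explicit ``dict(...)`` or a literal mapping instead. A minimal sketch, independent of NumPy::

    d = {}
    d.update({'a': 1})             # fine on Python 2.3 and later
    d.update(dict([('b', 2)]))     # portable spelling used in the patch
    # The forms below need Python 2.4+ and are what 2.3 rejects:
    #   d.update([('b', 2)])
    #   d.update(b=2)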
Modified: trunk/numpy/ma/mrecords.py =================================================================== --- trunk/numpy/ma/mrecords.py 2008-06-17 20:08:28 UTC (rev 5291) +++ trunk/numpy/ma/mrecords.py 2008-06-17 22:54:05 UTC (rev 5292) @@ -170,7 +170,7 @@ _locdict = self.__dict__ if _locdict['_baseclass'] == ndarray: _locdict['_baseclass'] = recarray - _locdict.update(_mask=_mask, _fieldmask=_mask) + _locdict.update({'_mask':_mask, '_fieldmask':_mask}) return def _getdata(self): Modified: trunk/numpy/ma/tests/test_core.py =================================================================== --- trunk/numpy/ma/tests/test_core.py 2008-06-17 20:08:28 UTC (rev 5291) +++ trunk/numpy/ma/tests/test_core.py 2008-06-17 22:54:05 UTC (rev 5292) @@ -878,7 +878,7 @@ # We had a tailored comment to make sure special attributes are properly # dealt with a = array(['3', '4', '5']) - a._basedict.update(comment="updated!") + a._basedict.update({'comment':"updated!"}) # b = array(a, dtype=int) assert_equal(b._data, [3,4,5]) From numpy-svn at scipy.org Wed Jun 18 11:22:38 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 18 Jun 2008 10:22:38 -0500 (CDT) Subject: [Numpy-svn] r5293 - trunk/numpy Message-ID: <20080618152238.CB97839C2EB@scipy.org> Author: stefan Date: 2008-06-18 10:22:24 -0500 (Wed, 18 Jun 2008) New Revision: 5293 Modified: trunk/numpy/__init__.py Log: Add `ma` to __all__. Modified: trunk/numpy/__init__.py =================================================================== --- trunk/numpy/__init__.py 2008-06-17 22:54:05 UTC (rev 5292) +++ trunk/numpy/__init__.py 2008-06-18 15:22:24 UTC (rev 5293) @@ -119,5 +119,5 @@ 'show_config']) __all__.extend(core.__all__) __all__.extend(lib.__all__) - __all__.extend(['linalg', 'fft', 'random', 'ctypeslib']) + __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma']) From numpy-svn at scipy.org Wed Jun 18 11:32:05 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 18 Jun 2008 10:32:05 -0500 (CDT) Subject: [Numpy-svn] r5294 - trunk/numpy/doc Message-ID: <20080618153205.6848739C2EB@scipy.org> Author: stefan Date: 2008-06-18 10:31:50 -0500 (Wed, 18 Jun 2008) New Revision: 5294 Modified: trunk/numpy/doc/HOWTO_DOCUMENT.txt Log: Add `Methods` section to documentation standard. Modified: trunk/numpy/doc/HOWTO_DOCUMENT.txt =================================================================== --- trunk/numpy/doc/HOWTO_DOCUMENT.txt 2008-06-18 15:22:24 UTC (rev 5293) +++ trunk/numpy/doc/HOWTO_DOCUMENT.txt 2008-06-18 15:31:50 UTC (rev 5294) @@ -357,6 +357,34 @@ y : float The Y coordinate. +In general, it is not necessary to list class methods. Those that are +not part of the public API have names that start with an underscore. +In some cases, however, a class may have a great many methods, of +which only a few are relevant (e.g., subclasses of ndarray). Then, it +becomes useful to have an additional ``Methods`` section:: + + class Photo(ndarray): + """ + Array with associated photographic information. + + ... + + Attributes + ---------- + exposure : float + Exposure in seconds. + + Methods + ------- + colorspace(c='rgb') + Represent the photo in the given colorspace. + gamma(n=1.0) + Change the photo's gamma exposure. + + """ + +Note that `self` is *not* listed as the first parameter of methods. + Method docstrings ````````````````` Document these as you would any other function. 
Do not include From numpy-svn at scipy.org Wed Jun 18 14:31:43 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 18 Jun 2008 13:31:43 -0500 (CDT) Subject: [Numpy-svn] r5295 - branches/1.1.x/numpy/ma Message-ID: <20080618183143.80BC239C666@scipy.org> Author: pierregm Date: 2008-06-18 13:31:38 -0500 (Wed, 18 Jun 2008) New Revision: 5295 Modified: branches/1.1.x/numpy/ma/core.py branches/1.1.x/numpy/ma/mrecords.py Log: * fixed dictionaries update for backwards compatibility with Python 2.3 * add support for out methods in sum/cumsum/prod/cumprod Modified: branches/1.1.x/numpy/ma/core.py =================================================================== --- branches/1.1.x/numpy/ma/core.py 2008-06-18 15:31:50 UTC (rev 5294) +++ branches/1.1.x/numpy/ma/core.py 2008-06-18 18:31:38 UTC (rev 5295) @@ -61,24 +61,23 @@ import cPickle import operator -import numpy -from numpy.core import bool_, complex_, float_, int_, object_, str_ +import numpy as np +from numpy import ndarray, dtype, typecodes, amax, amin, iscomplexobj,\ + bool_, complex_, float_, int_, object_, str_ +from numpy import array as narray + import numpy.core.umath as umath -import numpy.core.fromnumeric as fromnumeric -import numpy.core.numeric as numeric import numpy.core.numerictypes as ntypes -from numpy import bool_, dtype, typecodes, amax, amin, ndarray, iscomplexobj from numpy import expand_dims as n_expand_dims -from numpy import array as narray import warnings -MaskType = bool_ +MaskType = np.bool_ nomask = MaskType(0) -divide_tolerance = 1.e-35 -numpy.seterr(all='ignore') +divide_tolerance = np.finfo(float).tiny +np.seterr(all='ignore') def doc_note(note): return "\nNotes\n-----\n%s" % note @@ -90,7 +89,7 @@ "Class for MA related errors." def __init__ (self, args=None): "Creates an exception." - Exception.__init__(self,args) + Exception.__init__(self, args) self.args = args def __str__(self): "Calculates the string representation." @@ -111,20 +110,21 @@ 'V' : '???', } max_filler = ntypes._minvals -max_filler.update([(k,-numpy.inf) for k in [numpy.float32, numpy.float64]]) +max_filler.update(dict([(k, -np.inf) for k in [np.float32, np.float64]])) min_filler = ntypes._maxvals -min_filler.update([(k,numpy.inf) for k in [numpy.float32, numpy.float64]]) +min_filler.update(dict([(k, +np.inf) for k in [np.float32, np.float64]])) if 'float128' in ntypes.typeDict: - max_filler.update([(numpy.float128,-numpy.inf)]) - min_filler.update([(numpy.float128, numpy.inf)]) + max_filler[np.float128] = -np.inf + min_filler[np.float128] = +np.inf + def default_fill_value(obj): """Calculate the default fill value for the argument object. """ if hasattr(obj,'dtype'): defval = default_filler[obj.dtype.kind] - elif isinstance(obj, numeric.dtype): + elif isinstance(obj, np.dtype): defval = default_filler[obj.kind] elif isinstance(obj, float): defval = default_filler['f'] @@ -155,7 +155,7 @@ return min_filler[ntypes.typeDict['int_']] elif isinstance(obj, long): return min_filler[ntypes.typeDict['uint']] - elif isinstance(obj, numeric.dtype): + elif isinstance(obj, np.dtype): return min_filler[obj] else: raise TypeError, 'Unsuitable type for calculating minimum.' @@ -177,28 +177,28 @@ return max_filler[ntypes.typeDict['int_']] elif isinstance(obj, long): return max_filler[ntypes.typeDict['uint']] - elif isinstance(obj, numeric.dtype): + elif isinstance(obj, np.dtype): return max_filler[obj] else: raise TypeError, 'Unsuitable type for calculating minimum.' 
def _check_fill_value(fill_value, dtype): - descr = numpy.dtype(dtype).descr + descr = np.dtype(dtype).descr if fill_value is None: if len(descr) > 1: - fill_value = [default_fill_value(numeric.dtype(d[1])) + fill_value = [default_fill_value(np.dtype(d[1])) for d in descr] else: fill_value = default_fill_value(dtype) else: - fill_value = narray(fill_value).tolist() - fval = numpy.resize(fill_value, len(descr)) + fill_value = np.array(fill_value).tolist() + fval = np.resize(fill_value, len(descr)) if len(descr) > 1: - fill_value = [numpy.asarray(f).astype(d[1]).item() + fill_value = [np.asarray(f).astype(d[1]).item() for (f,d) in zip(fval, descr)] else: - fill_value = narray(fval, copy=False, dtype=dtype).item() + fill_value = np.array(fval, copy=False, dtype=dtype).item() return fill_value @@ -263,9 +263,9 @@ # Should we check for contiguity ? and a.flags['CONTIGUOUS']: return a elif isinstance(a, dict): - return narray(a, 'O') + return np.array(a, 'O') else: - return narray(a) + return np.array(a) #####-------------------------------------------------------------------------- def get_masked_subclass(*arrays): @@ -302,7 +302,7 @@ return a subclass of ndarray if approriate (True). """ - data = getattr(a, '_data', numpy.array(a, subok=subok)) + data = getattr(a, '_data', np.array(a, subok=subok)) if not subok: return data.view(ndarray) return data @@ -331,7 +331,7 @@ """ a = masked_array(a, copy=copy, subok=True) #invalid = (numpy.isnan(a._data) | numpy.isinf(a._data)) - invalid = numpy.logical_not(numpy.isfinite(a._data)) + invalid = np.logical_not(np.isfinite(a._data)) if not invalid.any(): return a a._mask |= invalid @@ -439,16 +439,16 @@ "Execute the call behavior." # m = getmask(a) - d1 = get_data(a) + d1 = getdata(a) # if self.domain is not None: - dm = narray(self.domain(d1), copy=False) - m = numpy.logical_or(m, dm) + dm = np.array(self.domain(d1), copy=False) + m = np.logical_or(m, dm) # The following two lines control the domain filling methods. d1 = d1.copy() # We could use smart indexing : d1[dm] = self.fill ... - # ... but numpy.putmask looks more efficient, despite the copy. - numpy.putmask(d1, dm, self.fill) + # ... but np.putmask looks more efficient, despite the copy. + np.putmask(d1, dm, self.fill) # Take care of the masked singletong first ... if not m.ndim and m: return masked @@ -500,14 +500,14 @@ "Execute the call behavior." m = mask_or(getmask(a), getmask(b)) (d1, d2) = (get_data(a), get_data(b)) - result = self.f(d1, d2, *args, **kwargs).view(get_masked_subclass(a,b)) + result = self.f(d1, d2, *args, **kwargs).view(get_masked_subclass(a, b)) if result.size > 1: if m is not nomask: result._mask = make_mask_none(result.shape) result._mask.flat = m - if isinstance(a,MaskedArray): + if isinstance(a, MaskedArray): result._update_from(a) - if isinstance(b,MaskedArray): + if isinstance(b, MaskedArray): result._update_from(b) elif m: return masked @@ -554,7 +554,7 @@ m = umath.logical_or.outer(ma, mb) if (not m.ndim) and m: return masked - rcls = get_masked_subclass(a,b) + rcls = get_masked_subclass(a, b) # We could fill the arguments first, butis it useful ? 
# d = self.f.outer(filled(a, self.fillx), filled(b, self.filly)).view(rcls) d = self.f.outer(getdata(a), getdata(b)).view(rcls) @@ -614,16 +614,16 @@ if t.any(None): mb = mask_or(mb, t) # The following line controls the domain filling - d2 = numpy.where(t,self.filly,d2) + d2 = np.where(t,self.filly,d2) m = mask_or(ma, mb) if (not m.ndim) and m: return masked - result = self.f(d1, d2).view(get_masked_subclass(a,b)) + result = self.f(d1, d2).view(get_masked_subclass(a, b)) if result.ndim > 0: result._mask = m - if isinstance(a,MaskedArray): + if isinstance(a, MaskedArray): result._update_from(a) - if isinstance(b,MaskedArray): + if isinstance(b, MaskedArray): result._update_from(b) return result @@ -647,7 +647,7 @@ negative = _MaskedUnaryOperation(umath.negative) floor = _MaskedUnaryOperation(umath.floor) ceil = _MaskedUnaryOperation(umath.ceil) -around = _MaskedUnaryOperation(fromnumeric.round_) +around = _MaskedUnaryOperation(np.round_) logical_not = _MaskedUnaryOperation(umath.logical_not) # Domained unary ufuncs ....................................................... sqrt = _MaskedUnaryOperation(umath.sqrt, 0.0, @@ -716,15 +716,15 @@ return getattr(a, '_mask', nomask) getmask = get_mask -def getmaskarray(a): - """Return the mask of a, if any, or a boolean array of the shape +def getmaskarray(arr): + """Return the mask of arr, if any, or a boolean array of the shape of a, full of False. """ - m = getmask(a) - if m is nomask: - m = make_mask_none(fromnumeric.shape(a)) - return m + mask = getmask(arr) + if mask is nomask: + mask = make_mask_none(np.shape(arr)) + return mask def is_mask(m): """Return True if m is a legal mask. @@ -785,7 +785,7 @@ A tuple indicating the shape of the final mask. """ - result = numeric.zeros(s, dtype=MaskType) + result = np.zeros(s, dtype=MaskType) return result def mask_or (m1, m2, copy=False, shrink=True): @@ -801,9 +801,9 @@ First mask. m2 : array_like Second mask - copy : bool + copy : {False, True}, optional Whether to return a copy. - shrink : bool + shrink : {True, False}, optional Whether to shrink m to nomask if all its values are False. """ @@ -834,7 +834,7 @@ """ cond = make_mask(condition) - a = narray(a, copy=copy, subok=True) + a = np.array(a, copy=copy, subok=True) if hasattr(a, '_mask'): cond = mask_or(cond, a._mask) cls = type(a) @@ -910,27 +910,34 @@ return masked_where(condition, x, copy=copy) # -def masked_object(x, value, copy=True): +def masked_object(x, value, copy=True, shrink=True): """Mask the array x where the data are exactly equal to value. This function is suitable only for object arrays: for floating point, please use ``masked_values`` instead. - Notes - ----- - The mask is set to `nomask` if posible. + Parameters + ---------- + x : array-like + Array to mask + value : var + Comparison value + copy : {True, False}, optional + Whether to return a copy of x. 
+ shrink : {True, False}, optional + Whether to collapse a mask full of False to nomask """ if isMaskedArray(x): condition = umath.equal(x._data, value) mask = x._mask else: - condition = umath.equal(fromnumeric.asarray(x), value) + condition = umath.equal(np.asarray(x), value) mask = nomask - mask = mask_or(mask, make_mask(condition, shrink=True)) + mask = mask_or(mask, make_mask(condition, shrink=shrink)) return masked_array(x, mask=mask, copy=copy, fill_value=value) -def masked_values(x, value, rtol=1.e-5, atol=1.e-8, copy=True): +def masked_values(x, value, rtol=1.e-5, atol=1.e-8, copy=True, shrink=True): """Mask the array x where the data are approximately equal in value, i.e. @@ -945,23 +952,25 @@ Array to fill. value : float Masking value. - rtol : float + rtol : {float}, optional Tolerance parameter. - atol : float + atol : {float}, optional Tolerance parameter (1e-8). - copy : bool + copy : {True, False}, optional Whether to return a copy of x. + shrink : {True, False}, optional + Whether to collapse a mask full of False to nomask """ abs = umath.absolute xnew = filled(x, value) - if issubclass(xnew.dtype.type, numeric.floating): + if issubclass(xnew.dtype.type, np.floating): condition = umath.less_equal(abs(xnew-value), atol+rtol*abs(value)) mask = getattr(x, '_mask', nomask) else: condition = umath.equal(xnew, value) mask = nomask - mask = mask_or(mask, make_mask(condition, shrink=True)) + mask = mask_or(mask, make_mask(condition, shrink=shrink)) return masked_array(xnew, mask=mask, copy=copy, fill_value=value) def masked_invalid(a, copy=True): @@ -969,8 +978,8 @@ preexisting mask is conserved. """ - a = narray(a, copy=copy, subok=True) - condition = ~(numpy.isfinite(a)) + a = np.array(a, copy=copy, subok=True) + condition = ~(np.isfinite(a)) if hasattr(a, '_mask'): condition = mask_or(condition, a._mask) cls = type(a) @@ -1054,7 +1063,7 @@ def getdoc(self): "Return the doc of the function (from the doc of the method)." methdoc = getattr(ndarray, self._name, None) - methdoc = getattr(numpy, self._name, methdoc) + methdoc = getattr(np, self._name, methdoc) if methdoc is not None: return methdoc.__doc__ # @@ -1084,7 +1093,7 @@ "Define an interator." def __init__(self, ma): self.ma = ma - self.ma_iter = numpy.asarray(ma).flat + self.ma_iter = np.asarray(ma).flat if ma._mask is nomask: self.maskiter = None @@ -1106,7 +1115,7 @@ return d -class MaskedArray(numeric.ndarray): +class MaskedArray(ndarray): """Arrays with possibly masked values. Masked values of True exclude the corresponding element from any computation. @@ -1151,11 +1160,11 @@ __array_priority__ = 15 _defaultmask = nomask _defaulthardmask = False - _baseclass = numeric.ndarray + _baseclass = ndarray def __new__(cls, data=None, mask=nomask, dtype=None, copy=False, subok=True, ndmin=0, fill_value=None, - keep_mask=True, hard_mask=False, flag=None,shrink=True, + keep_mask=True, hard_mask=False, flag=None, shrink=True, **options): """Create a new masked array from scratch. @@ -1168,7 +1177,7 @@ DeprecationWarning) shrink = flag # Process data............ 
- _data = narray(data, dtype=dtype, copy=copy, subok=True, ndmin=ndmin) + _data = np.array(data, dtype=dtype, copy=copy, subok=True, ndmin=ndmin) _baseclass = getattr(data, '_baseclass', type(_data)) _basedict = getattr(data, '_basedict', getattr(data, '__dict__', {})) if not isinstance(data, MaskedArray) or not subok: @@ -1192,13 +1201,13 @@ else: _data._sharedmask = True else: - mask = narray(mask, dtype=MaskType, copy=copy) + mask = np.array(mask, dtype=MaskType, copy=copy) if mask.shape != _data.shape: (nd, nm) = (_data.size, mask.size) if nm == 1: - mask = numeric.resize(mask, _data.shape) + mask = np.resize(mask, _data.shape) elif nm == nd: - mask = fromnumeric.reshape(mask, _data.shape) + mask = np.reshape(mask, _data.shape) else: msg = "Mask and data not compatible: data size is %i, "+\ "mask size is %i." @@ -1212,7 +1221,7 @@ _data._mask = mask _data._sharedmask = not copy else: - _data._mask = umath.logical_or(mask, _data._mask) + _data._mask = np.logical_or(mask, _data._mask) _data._sharedmask = False # Update fill_value....... @@ -1226,15 +1235,15 @@ def _update_from(self, obj): """Copies some attributes of obj to self. """ - if obj is not None and isinstance(obj,ndarray): + if obj is not None and isinstance(obj, ndarray): _baseclass = type(obj) else: _baseclass = ndarray - _basedict = getattr(obj,'_basedict',getattr(obj,'__dict__',{})) + _basedict = getattr(obj, '_basedict', getattr(obj, '__dict__',{})) _dict = dict(_fill_value=getattr(obj, '_fill_value', None), _hardmask=getattr(obj, '_hardmask', False), _sharedmask=getattr(obj, '_sharedmask', False), - _baseclass=getattr(obj,'_baseclass',_baseclass), + _baseclass=getattr(obj,'_baseclass', _baseclass), _basedict=_basedict,) self.__dict__.update(_dict) self.__dict__.update(_basedict) @@ -1280,7 +1289,7 @@ # Domain not recognized, use fill_value instead fill_value = self.fill_value result = result.copy() - numpy.putmask(result, d, fill_value) + np.putmask(result, d, fill_value) # Update the mask if m is nomask: if d is not nomask: @@ -1361,7 +1370,7 @@ if value is masked: m = self._mask if m is nomask: - m = numpy.zeros(self.shape, dtype=MaskType) + m = np.zeros(self.shape, dtype=MaskType) m[indx] = True self._mask = m self._sharedmask = False @@ -1371,9 +1380,9 @@ valmask = getmask(value) if self._mask is nomask: # Set the data, then the mask - ndarray.__setitem__(self._data,indx,dval) + ndarray.__setitem__(self._data, indx, dval) if valmask is not nomask: - self._mask = numpy.zeros(self.shape, dtype=MaskType) + self._mask = np.zeros(self.shape, dtype=MaskType) self._mask[indx] = valmask elif not self._hardmask: # Unshare the mask if necessary to avoid propagation @@ -1391,7 +1400,7 @@ dindx[~mindx] = dval elif mindx is nomask: dindx = dval - ndarray.__setitem__(self._data,indx,dindx) + ndarray.__setitem__(self._data, indx, dindx) self._mask[indx] = mindx #............................................ def __getslice__(self, i, j): @@ -1417,7 +1426,7 @@ """ if mask is not nomask: - mask = narray(mask, copy=copy, dtype=MaskType) + mask = np.array(mask, copy=copy, dtype=MaskType) # We could try to check whether shrinking is needed.. # ... but we would waste some precious time # if self._shrinkmask and not mask.any(): @@ -1437,7 +1446,7 @@ self.unshare_mask() self._mask.flat = mask if self._mask.shape: - self._mask = numeric.reshape(self._mask, self.shape) + self._mask = np.reshape(self._mask, self.shape) _set_mask = __setmask__ #.... def _get_mask(self): @@ -1526,7 +1535,7 @@ If value is None, use a default based on the data type. 
""" - self._fill_value = _check_fill_value(value,self.dtype) + self._fill_value = _check_fill_value(value, self.dtype) fill_value = property(fget=get_fill_value, fset=set_fill_value, doc="Filling value.") @@ -1559,21 +1568,21 @@ fill_value = self.fill_value # if self is masked_singleton: - result = numeric.asanyarray(fill_value) + result = np.asanyarray(fill_value) else: result = self._data.copy() try: - numpy.putmask(result, m, fill_value) + np.putmask(result, m, fill_value) except (TypeError, AttributeError): fill_value = narray(fill_value, dtype=object) d = result.astype(object) - result = fromnumeric.choose(m, (d, fill_value)) + result = np.choose(m, (d, fill_value)) except IndexError: #ok, if scalar if self._data.shape: raise elif m: - result = narray(fill_value, dtype=self.dtype) + result = np.array(fill_value, dtype=self.dtype) else: result = self._data return result @@ -1584,7 +1593,7 @@ """ data = ndarray.ravel(self._data) if self._mask is not nomask: - data = data.compress(numpy.logical_not(ndarray.ravel(self._mask))) + data = data.compress(np.logical_not(ndarray.ravel(self._mask))) return data @@ -1605,7 +1614,7 @@ # Get the basic components (_data, _mask) = (self._data, self._mask) # Force the condition to a regular ndarray (forget the missing values...) - condition = narray(condition, copy=False, subok=False) + condition = np.array(condition, copy=False, subok=False) # _new = _data.compress(condition, axis=axis, out=out).view(type(self)) _new._update_from(self) @@ -1634,7 +1643,7 @@ # convert to object array to make filled work #CHECK: the two lines below seem more robust than the self._data.astype # res = numeric.empty(self._data.shape, object_) -# numeric.putmask(res,~m,self._data) +# numeric.putmask(res, ~m, self._data) res = self._data.astype("|O8") res[m] = f else: @@ -1739,8 +1748,8 @@ new_mask = mask_or(other_mask, dom_mask) # The following 3 lines control the domain filling if dom_mask.any(): - other_data = other_data.copy() - numpy.putmask(other_data, dom_mask, 1) + (_, fval) = ufunc_fills[np.divide] + other_data = np.where(dom_mask, fval, other_data) ndarray.__idiv__(self._data, other_data) self._mask = mask_or(self._mask, new_mask) return self @@ -1751,28 +1760,28 @@ other_data = getdata(other) other_mask = getmask(other) ndarray.__ipow__(_data, other_data) - invalid = numpy.logical_not(numpy.isfinite(_data)) - new_mask = mask_or(other_mask,invalid) + invalid = np.logical_not(np.isfinite(_data)) + new_mask = mask_or(other_mask, invalid) self._mask = mask_or(self._mask, new_mask) # The following line is potentially problematic, as we change _data... - numpy.putmask(self._data,invalid,self.fill_value) + np.putmask(self._data,invalid,self.fill_value) return self #............................................ def __float__(self): "Convert to float." if self.size > 1: - raise TypeError,\ - "Only length-1 arrays can be converted to Python scalars" + raise TypeError("Only length-1 arrays can be converted "\ + "to Python scalars") elif self._mask: warnings.warn("Warning: converting a masked element to nan.") - return numpy.nan + return np.nan return float(self.item()) def __int__(self): "Convert to int." if self.size > 1: - raise TypeError,\ - "Only length-1 arrays can be converted to Python scalars" + raise TypeError("Only length-1 arrays can be converted "\ + "to Python scalars") elif self._mask: raise MAError, 'Cannot convert masked element to a Python int.' 
return int(self.item()) @@ -1822,9 +1831,9 @@ n = s[axis] t = list(s) del t[axis] - return numeric.ones(t) * n - n1 = numpy.size(m, axis) - n2 = m.astype(int_).sum(axis) + return np.ones(t) * n + n1 = np.size(m, axis) + n2 = m.astype(int).sum(axis) if axis is None: return (n1-n2) else: @@ -1925,84 +1934,95 @@ return (self.ctypes.data, self._mask.ctypes.data) #............................................ def all(self, axis=None, out=None): - """Return True if all entries along the given axis are True, - False otherwise. Masked values are considered as True during - computation. + """a.all(axis=None, out=None) + + Check if all of the elements of `a` are true. - Parameter - ---------- - axis : int, optional - Axis along which the operation is performed. If None, - the operation is performed on a flatten array - out : {MaskedArray}, optional - Alternate optional output. If not None, out should be - a valid MaskedArray of the same shape as the output of - self._data.all(axis). + Performs a logical_and over the given axis and returns the result. + Masked values are considered as True during computation. + For convenience, the output array is masked where ALL the values along the + current axis are masked: if the output would have been a scalar and that + all the values are masked, then the output is `masked`. - Returns A masked array, where the mask is True if all data along - ------- - the axis are masked. + Parameters + ---------- + axis : {None, integer} + Axis to perform the operation over. + If None, perform over flattened array. + out : {None, array}, optional + Array into which the result can be placed. Its type is preserved + and it must be of the right shape to hold the output. - Notes - ----- - An exception is raised if ``out`` is not None and not of the - same type as self. + See Also + -------- + all : equivalent function + + Example + ------- + >>> array([1,2,3]).all() + True + >>> a = array([1,2,3], mask=True) + >>> (a.all() is masked) + True """ + mask = self._mask.all(axis) if out is None: d = self.filled(True).all(axis=axis).view(type(self)) - if d.ndim > 0: - d.__setmask__(self._mask.all(axis)) + if d.ndim: + d.__setmask__(mask) + elif mask: + return masked return d elif type(out) is not type(self): raise TypeError("The external array should have " \ "a type %s (got %s instead)" %\ (type(self), type(out))) self.filled(True).all(axis=axis, out=out) - if out.ndim: - out.__setmask__(self._mask.all(axis)) + if isinstance(out, MaskedArray): + if out.ndim or mask: + out.__setmask__(mask) return out def any(self, axis=None, out=None): - """Returns True if at least one entry along the given axis is - True. + """a.any(axis=None, out=None) - Returns False if all entries are False. - Masked values are considered as True during computation. + Check if any of the elements of `a` are true. - Parameter - ---------- - axis : int, optional - Axis along which the operation is performed. - If None, the operation is performed on a flatten array - out : {MaskedArray}, optional - Alternate optional output. If not None, out should be - a valid MaskedArray of the same shape as the output of - self._data.all(axis). + Performs a logical_or over the given axis and returns the result. + Masked values are considered as False during computation. - Returns A masked array, where the mask is True if all data along - ------- - the axis are masked. + Parameters + ---------- + axis : {None, integer} + Axis to perform the operation over. + If None, perform over flattened array and return a scalar. 
+ out : {None, array}, optional + Array into which the result can be placed. Its type is preserved + and it must be of the right shape to hold the output. - Notes - ----- - An exception is raised if ``out`` is not None and not of the - same type as self. + See Also + -------- + any : equivalent function """ + mask = self._mask.all(axis) if out is None: d = self.filled(False).any(axis=axis).view(type(self)) - if d.ndim > 0: - d.__setmask__(self._mask.all(axis)) + if d.ndim: + d.__setmask__(mask) + elif mask: + d = masked return d elif type(out) is not type(self): raise TypeError("The external array should have a type %s "\ "(got %s instead)" %\ (type(self), type(out))) self.filled(False).any(axis=axis, out=out) - if out.ndim: - out.__setmask__(self._mask.all(axis)) + if isinstance(out, MaskedArray): + if out.ndim or mask: + out.__setmask__(mask) return out @@ -2023,7 +2043,8 @@ """ return narray(self.filled(0), copy=False).nonzero() - #............................................ + + def trace(self, offset=0, axis1=0, axis2=1, dtype=None, out=None): """a.trace(offset=0, axis1=0, axis2=1, dtype=None, out=None) @@ -2031,7 +2052,7 @@ indicated `axis1` and `axis2`. """ - # TODO: What are we doing with `out`? + #!!!: implement out + test! m = self._mask if m is nomask: result = super(MaskedArray, self).trace(offset=offset, axis1=axis1, @@ -2039,117 +2060,270 @@ return result.astype(dtype) else: D = self.diagonal(offset=offset, axis1=axis1, axis2=axis2) - return D.astype(dtype).filled(0).sum(axis=None) - #............................................ - def sum(self, axis=None, dtype=None): - """Sum the array over the given axis. + return D.astype(dtype).filled(0).sum(axis=None, out=out) - Masked elements are set to 0 internally. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : dtype, optional - Datatype for the intermediary computation. If not given, - the current dtype is used instead. + def sum(self, axis=None, dtype=None, out=None): + """a.sum(axis=None, dtype=None, out=None) + Return the sum of the array elements over the given axis. + Masked elements are set to 0 internally. + + Parameters + ---------- + axis : {None, -1, int}, optional + Axis along which the sum is computed. The default + (`axis` = None) is to compute over the flattened array. + dtype : {None, dtype}, optional + Determines the type of the returned array and of the accumulator + where the elements are summed. If dtype has the value None and + the type of a is an integer type of precision less than the default + platform integer, then the default platform integer precision is + used. Otherwise, the dtype is the same as that of a. + out : {None, ndarray}, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. 
+ + """ - if self._mask is nomask: - mask = nomask - else: - mask = self._mask.all(axis) - if (not mask.ndim) and mask: - return masked - result = self.filled(0).sum(axis, dtype=dtype).view(type(self)) - if result.ndim > 0: - result.__setmask__(mask) - return result + _mask = ndarray.__getattribute__(self, '_mask') + newmask = _mask.all(axis=axis) + # No explicit output + if out is None: + result = self.filled(0).sum(axis, dtype=dtype).view(type(self)) + if result.ndim: + result.__setmask__(newmask) + elif newmask: + result = masked + return result + # Explicit output + result = self.filled(0).sum(axis, dtype=dtype, out=out) + if isinstance(out, MaskedArray): + outmask = getattr(out, '_mask', nomask) + if (outmask is nomask): + outmask = out._mask = make_mask_none(out.shape) + outmask.flat = newmask + return out - def cumsum(self, axis=None, dtype=None): - """Return the cumulative sum of the elements of the array - along the given axis. - Masked values are set to 0 internally. + def cumsum(self, axis=None, dtype=None, out=None): + """a.cumsum(axis=None, dtype=None, out=None) - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. If not - given, the current dtype is used instead. + Return the cumulative sum of the elements along the given axis. + The cumulative sum is calculated over the flattened array by + default, otherwise over the specified axis. + + Masked values are set to 0 internally during the computation. + However, their position is saved, and the result will be masked at + the same locations. + + Parameters + ---------- + axis : {None, -1, int}, optional + Axis along which the sum is computed. The default + (`axis` = None) is to compute over the flattened array. + dtype : {None, dtype}, optional + Determines the type of the returned array and of the accumulator + where the elements are summed. If dtype has the value None and + the type of a is an integer type of precision less than the default + platform integer, then the default platform integer precision is + used. Otherwise, the dtype is the same as that of a. + out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. + WARNING : The mask is lost if out is not a valid MaskedArray ! + + Returns + ------- + cumsum : ndarray. + A new array holding the result is returned unless ``out`` is + specified, in which case a reference to ``out`` is returned. + + Example + ------- + >>> print array(arange(10),mask=[0,0,0,1,1,1,0,0,0,0]).cumsum() + [0 1 3 -- -- -- 9 16 24 33] + + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + """ - result = self.filled(0).cumsum(axis=axis, dtype=dtype).view(type(self)) - result.__setmask__(self.mask) + result = self.filled(0).cumsum(axis=axis, dtype=dtype, out=out) + if out is not None: + if isinstance(out, MaskedArray): + out.__setmask__(self.mask) + return out + result = result.view(type(self)) + result.__setmask__(self._mask) return result - def prod(self, axis=None, dtype=None): - """Return the product of the elements of the array along the - given axis. - Masked elements are set to 1 internally. 
+ def prod(self, axis=None, dtype=None, out=None): + """a.prod(axis=None, dtype=None, out=None) - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. If not - given, the current dtype is used instead. + Return the product of the array elements over the given axis. + Masked elements are set to 1 internally for computation. + Parameters + ---------- + axis : {None, -1, int}, optional + Axis over which the product is taken. If None is used, then the + product is over all the array elements. + dtype : {None, dtype}, optional + Determines the type of the returned array and of the accumulator + where the elements are multiplied. If dtype has the value None and + the type of a is an integer type of precision less than the default + platform integer, then the default platform integer precision is + used. Otherwise, the dtype is the same as that of a. + out : {None, array}, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + + Returns + ------- + product_along_axis : {array, scalar}, see dtype parameter above. + Returns an array whose shape is the same as a with the specified + axis removed. Returns a 0d array when a is 1d or axis=None. + Returns a reference to the specified output array if specified. + + See Also + -------- + prod : equivalent function + + Examples + -------- + >>> prod([1.,2.]) + 2.0 + >>> prod([1.,2.], dtype=int32) + 2 + >>> prod([[1.,2.],[3.,4.]]) + 24.0 + >>> prod([[1.,2.],[3.,4.]], axis=1) + array([ 2., 12.]) + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + """ - if self._mask is nomask: - mask = nomask - else: - mask = self._mask.all(axis) - if (not mask.ndim) and mask: - return masked - result = self.filled(1).prod(axis=axis, dtype=dtype).view(type(self)) - if result.ndim: - result.__setmask__(mask) - return result + _mask = ndarray.__getattribute__(self, '_mask') + newmask = _mask.all(axis=axis) + # No explicit output + if out is None: + result = self.filled(1).prod(axis, dtype=dtype).view(type(self)) + if result.ndim: + result.__setmask__(newmask) + elif newmask: + result = masked + return result + # Explicit output + result = self.filled(1).prod(axis, dtype=dtype, out=out) + if isinstance(out,MaskedArray): + outmask = getattr(out, '_mask', nomask) + if (outmask is nomask): + outmask = out._mask = make_mask_none(out.shape) + outmask.flat = newmask + return out product = prod - def cumprod(self, axis=None, dtype=None): - """Return the cumulative product of the elements of the array - along the given axis. + def cumprod(self, axis=None, dtype=None, out=None): + """ + a.cumprod(axis=None, dtype=None, out=None) - Masked values are set to 1 internally. + Return the cumulative product of the elements along the given axis. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. If not - given, the current dtype is used instead. + The cumulative product is taken over the flattened array by + default, otherwise over the specified axis. 
- """ - result = self.filled(1).cumprod(axis=axis, dtype=dtype).view(type(self)) - result.__setmask__(self.mask) + Masked values are set to 1 internally during the computation. + However, their position is saved, and the result will be masked at + the same locations. + + Parameters + ---------- + axis : {None, -1, int}, optional + Axis along which the product is computed. The default + (`axis` = None) is to compute over the flattened array. + dtype : {None, dtype}, optional + Determines the type of the returned array and of the accumulator + where the elements are multiplied. If dtype has the value None and + the type of a is an integer type of precision less than the default + platform integer, then the default platform integer precision is + used. Otherwise, the dtype is the same as that of a. + out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. + WARNING : The mask is lost if out is not a valid MaskedArray ! + + Returns + ------- + cumprod : ndarray. + A new array holding the result is returned unless out is + specified, in which case a reference to out is returned. + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + """ + result = self.filled(1).cumprod(axis=axis, dtype=dtype, out=out) + if out is not None: + if isinstance(out, MaskedArray): + out.__setmask__(self._mask) + return out + result = result.view(type(self)) + result.__setmask__(self._mask) return result + def mean(self, axis=None, dtype=None, out=None): - """Average the array over the given axis. Equivalent to + """a.mean(axis=None, dtype=None, out=None) -> mean - a.sum(axis, dtype) / a.size(axis). + Returns the average of the array elements. The average is taken over the + flattened array by default, otherwise over the specified axis. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. If not - given, the current dtype is used instead. + Parameters + ---------- + axis : integer + Axis along which the means are computed. The default is + to compute the mean of the flattened array. + dtype : type + Type to use in computing the means. For arrays of + integer type the default is float32, for arrays of float types it + is the same as the array type. + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + Returns + ------- + mean : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. + + See Also + -------- + var : variance + std : standard deviation + + Notes + ----- + The mean is the sum of the elements along the axis divided by the + number of elements. 
+ + """ if self._mask is nomask: result = super(MaskedArray, self).mean(axis=axis, dtype=dtype) @@ -2158,7 +2332,13 @@ cnt = self.count(axis=axis) result = dsum*1./cnt if out is not None: - out.flat = result.ravel() + out.flat = result + if isinstance(out, MaskedArray): + outmask = getattr(out, '_mask', nomask) + if (outmask is nomask): + outmask = out._mask = make_mask_none(out.shape) + outmask.flat = getattr(result, '_mask', nomask) + return out return result def anom(self, axis=None, dtype=None): @@ -2181,87 +2361,149 @@ else: return (self - expand_dims(m,axis)) - def var(self, axis=None, dtype=None, ddof=0): - """Return the variance, a measure of the spread of a distribution. + def var(self, axis=None, dtype=None, out=None, ddof=0): + """a.var(axis=None, dtype=None, out=None, ddof=0) -> variance - The variance is the average of the squared deviations from the - mean, i.e. var = mean(abs(x - x.mean())**2). + Returns the variance of the array elements, a measure of the spread of a + distribution. The variance is computed for the flattened array by default, + otherwise over the specified axis. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. If not - given, the current dtype is used instead. + Parameters + ---------- + axis : integer + Axis along which the variance is computed. The default is to + compute the variance of the flattened array. + dtype : data-type + Type to use in computing the variance. For arrays of integer type + the default is float32, for arrays of float types it is the same as + the array type. + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + ddof : {0, integer}, + Means Delta Degrees of Freedom. The divisor used in calculation is + N - ddof. - Notes - ----- - The value returned is by default a biased estimate of the - true variance, since the mean is computed by dividing by N-ddof. - For the (more standard) unbiased estimate, use ddof=1 or. - Note that for complex numbers the absolute value is taken before - squaring, so that the result is always real and nonnegative. + Returns + ------- + variance : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. + See Also + -------- + std : standard deviation + mean: average + + Notes + ----- + The variance is the average of the squared deviations from the mean, + i.e. var = mean(abs(x - x.mean())**2). The mean is computed by + dividing by N-ddof, where N is the number of elements. The argument + ddof defaults to zero; for an unbiased estimate supply ddof=1. Note + that for complex numbers the absolute value is taken before squaring, + so that the result is always real and nonnegative. + """ + # Easy case: nomask, business as usual if self._mask is nomask: - # TODO: Do we keep super, or var _data and take a view ? - return super(MaskedArray, self).var(axis=axis, dtype=dtype, - ddof=ddof) + return self._data.var(axis=axis, dtype=dtype, out=out, ddof=ddof) + # Some data are masked, yay! 
+ cnt = self.count(axis=axis)-ddof + danom = self.anom(axis=axis, dtype=dtype) + if iscomplexobj(self): + danom = umath.absolute(danom)**2 else: - cnt = self.count(axis=axis)-ddof - danom = self.anom(axis=axis, dtype=dtype) - if iscomplexobj(self): - danom = umath.absolute(danom)**2 - else: - danom *= danom - dvar = narray(danom.sum(axis) / cnt).view(type(self)) - if axis is not None: - dvar._mask = mask_or(self._mask.all(axis), (cnt==1)) + danom *= danom + dvar = divide(danom.sum(axis), cnt).view(type(self)) + # Apply the mask if it's not a scalar + if dvar.ndim: + dvar._mask = mask_or(self._mask.all(axis), (cnt<=ddof)) dvar._update_from(self) - return dvar + elif getattr(dvar,'_mask', False): + # Make sure that masked is returned when the scalar is masked. + dvar = masked + if out is not None: + if isinstance(out, MaskedArray): + out.__setmask__(True) + else: + out.flat = np.nan + return out + # In case with have an explicit output + if out is not None: + # Set the data + out.flat = dvar + # Set the mask if needed + if isinstance(out, MaskedArray): + out.__setmask__(dvar.mask) + return out + return dvar - def std(self, axis=None, dtype=None, ddof=0): - """Return the standard deviation, a measure of the spread of a - distribution. + def std(self, axis=None, dtype=None, out=None, ddof=0): + """a.std(axis=None, dtype=None, out=None, ddof=0) - The standard deviation is the square root of the average of - the squared deviations from the mean, i.e. + Returns the standard deviation of the array elements, a measure of the + spread of a distribution. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. - std = sqrt(mean(abs(x - x.mean())**2)). + Parameters + ---------- + axis : integer + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : type + Type to use in computing the standard deviation. For arrays of + integer type the default is float32, for arrays of float types it + is the same as the array type. + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + ddof : {0, integer} + Means Delta Degrees of Freedom. The divisor used in calculations + is N-ddof. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. - If not given, the current dtype is used instead. + Returns + ------- + standard deviation : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. - Notes - ----- - The value returned is by default a biased estimate of the - true standard deviation, since the mean is computed by dividing - by N-ddof. For the more standard unbiased estimate, use ddof=1. - Note that for complex numbers the absolute value is taken before - squaring, so that the result is always real and nonnegative. - """ - dvar = self.var(axis,dtype,ddof=ddof) - if axis is not None or dvar is not masked: + See Also + -------- + var : variance + mean : average + + Notes + ----- + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e. var = sqrt(mean(abs(x - x.mean())**2)). 
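As a small sketch of the ddof semantics spelled out above (illustrative values only):

    import numpy.ma as ma

    x = ma.array([1.0, 2.0, 3.0, 4.0], mask=[0, 0, 0, 1])
    x.var()          # biased estimate: divides by the N = 3 unmasked values
    x.var(ddof=1)    # unbiased estimate: divides by N - ddof = 2
    x.std(ddof=1)    # square root of the ddof=1 variance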
The + computed standard deviation is computed by dividing by the number of + elements, N-ddof. The option ddof defaults to zero, that is, a biased + estimate. Note that for complex numbers std takes the absolute value before + squaring, so that the result is always real and nonnegative. + + """ + dvar = self.var(axis=axis,dtype=dtype,out=out, ddof=ddof) + if dvar is not masked: dvar = sqrt(dvar) + if out is not None: + out **= 0.5 + return out return dvar #............................................ def round(self, decimals=0, out=None): - result = self._data.round(decimals).view(type(self)) + result = self._data.round(decimals=decimals, out=out).view(type(self)) result._mask = self._mask result._update_from(self) + # No explicit output: we're done if out is None: return result - out[:] = result - return + if isinstance(out, MaskedArray): + out.__setmask__(self._mask) + return out round.__doc__ = ndarray.round.__doc__ #............................................ @@ -2314,49 +2556,71 @@ fill_value = default_fill_value(self) d = self.filled(fill_value).view(ndarray) return d.argsort(axis=axis, kind=kind, order=order) - #........................ - def argmin(self, axis=None, fill_value=None): - """Return an ndarray of indices for the minimum values of a - along the specified axis. - Masked values are treated as if they had the value fill_value. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - fill_value : {var}, optional - Value used to fill in the masked values. If None, the - output of minimum_fill_value(self._data) is used. + def argmin(self, axis=None, fill_value=None, out=None): + """a.argmin(axis=None, out=None) + Return array of indices to the minimum values along the given axis. + + Parameters + ---------- + axis : {None, integer} + If None, the index is into the flattened array, otherwise along + the specified axis + fill_value : {var}, optional + Value used to fill in the masked values. If None, the output of + minimum_fill_value(self._data) is used instead. + out : {None, array}, optional + Array into which the result can be placed. Its type is preserved + and it must be of the right shape to hold the output. + """ if fill_value is None: fill_value = minimum_fill_value(self) d = self.filled(fill_value).view(ndarray) - return d.argmin(axis) - #........................ - def argmax(self, axis=None, fill_value=None): - """Returns the array of indices for the maximum values of `a` - along the specified axis. + return d.argmin(axis, out=out) - Masked values are treated as if they had the value fill_value. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - fill_value : {var}, optional - Value used to fill in the masked values. If None, the - output of maximum_fill_value(self._data) is used. + def argmax(self, axis=None, fill_value=None, out=None): + """a.argmax(axis=None, out=None) + Returns array of indices of the maximum values along the given axis. + Masked values are treated as if they had the value fill_value. + + Parameters + ---------- + axis : {None, integer} + If None, the index is into the flattened array, otherwise along + the specified axis + fill_value : {var}, optional + Value used to fill in the masked values. If None, the output of + maximum_fill_value(self._data) is used instead. + out : {None, array}, optional + Array into which the result can be placed. 
Its type is preserved + and it must be of the right shape to hold the output. + + Returns + ------- + index_array : {integer_array} + + Examples + -------- + >>> a = arange(6).reshape(2,3) + >>> a.argmax() + 5 + >>> a.argmax(0) + array([1, 1, 1]) + >>> a.argmax(1) + array([2, 2]) + """ if fill_value is None: fill_value = maximum_fill_value(self._data) d = self.filled(fill_value).view(ndarray) - return d.argmax(axis) + return d.argmax(axis, out=out) + def sort(self, axis=-1, kind='quicksort', order=None, endwith=True, fill_value=None): """Sort along the given axis. @@ -2418,53 +2682,66 @@ filler = maximum_fill_value(self) else: filler = fill_value - idx = numpy.indices(self.shape) + idx = np.indices(self.shape) idx[axis] = self.filled(filler).argsort(axis=axis,kind=kind,order=order) idx_l = idx.tolist() tmp_mask = self._mask[idx_l].flat tmp_data = self._data[idx_l].flat - self.flat = tmp_data + self._data.flat = tmp_data self._mask.flat = tmp_mask return #............................................ - def min(self, axis=None, fill_value=None): - """Return the minimum of a along the given axis. + def min(self, axis=None, out=None, fill_value=None): + """a.min(axis=None, out=None, fill_value=None) - Masked values are filled with fill_value. + Return the minimum along a given axis. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - fill_value : {var}, optional - Value used to fill in the masked values. - If None, use the the output of minimum_fill_value(). + Parameters + ---------- + axis : {None, int}, optional + Axis along which to operate. By default, ``axis`` is None and the + flattened input is used. + out : array_like, optional + Alternative output array in which to place the result. Must + be of the same shape and buffer length as the expected output. + fill_value : {var}, optional + Value used to fill in the masked values. + If None, use the output of minimum_fill_value(). - """ - mask = self._mask - # Check all/nothing case ...... - if mask is nomask: - return super(MaskedArray, self).min(axis=axis) - elif (not mask.ndim) and mask: - return masked - # Get the mask ................ - if axis is None: - mask = umath.logical_and.reduce(mask.flat) - else: - mask = umath.logical_and.reduce(mask, axis=axis) - # Skip if all masked .......... - if not mask.ndim and mask: - return masked - # Get the fill value ........... + Returns + ------- + amin : array_like + New array holding the result. + If ``out`` was specified, ``out`` is returned. + + """ + _mask = ndarray.__getattribute__(self, '_mask') + newmask = _mask.all(axis=axis) if fill_value is None: fill_value = minimum_fill_value(self) - # Get the data ................ 
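A short sketch of how fill_value interacts with argmin/argmax as described above (values are illustrative):

    import numpy.ma as ma

    x = ma.array([3, 1, 2], mask=[0, 1, 0])
    x.argmin()               # 2: the masked entry is first filled with minimum_fill_value (a huge number)
    x.argmax()               # 0: the masked entry is first filled with maximum_fill_value (a tiny number)
    x.argmin(fill_value=0)   # 1: an explicit fill_value lets masked slots compete again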
- result = self.filled(fill_value).min(axis=axis).view(type(self)) - if result.ndim > 0: - result._mask = mask - return result + # No explicit output + if out is None: + result = self.filled(fill_value).min(axis=axis, out=out).view(type(self)) + if result.ndim: + # Set the mask + result.__setmask__(newmask) + # Get rid of Infs + if newmask.ndim: + np.putmask(result, newmask, result.fill_value) + elif newmask: + result = masked + return result + # Explicit output + result = self.filled(fill_value).min(axis=axis, out=out) + if isinstance(out, MaskedArray): + outmask = getattr(out, '_mask', nomask) + if (outmask is nomask): + outmask = out._mask = make_mask_none(out.shape) + outmask.flat = newmask + else: + np.putmask(out, newmask, np.nan) + return out def mini(self, axis=None): if axis is None: @@ -2473,59 +2750,92 @@ return minimum.reduce(self, axis) #........................ - def max(self, axis=None, fill_value=None): - """Return the maximum/a along the given axis. + def max(self, axis=None, out=None, fill_value=None): + """a.max(axis=None, out=None, fill_value=None) - Masked values are filled with fill_value. + Return the maximum along a given axis. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - fill_value : {var}, optional - Value used to fill in the masked values. - If None, use the the output of maximum_fill_value(). + Parameters + ---------- + axis : {None, int}, optional + Axis along which to operate. By default, ``axis`` is None and the + flattened input is used. + out : array_like, optional + Alternative output array in which to place the result. Must + be of the same shape and buffer length as the expected output. + fill_value : {var}, optional + Value used to fill in the masked values. + If None, use the output of maximum_fill_value(). + + Returns + ------- + amax : array_like + New array holding the result. + If ``out`` was specified, ``out`` is returned. + """ - mask = self._mask - # Check all/nothing case ...... - if mask is nomask: - return super(MaskedArray, self).max(axis=axis) - elif (not mask.ndim) and mask: - return masked - # Check the mask .............. - if axis is None: - mask = umath.logical_and.reduce(mask.flat) - else: - mask = umath.logical_and.reduce(mask, axis=axis) - # Skip if all masked .......... - if not mask.ndim and mask: - return masked - # Get the fill value .......... + _mask = ndarray.__getattribute__(self, '_mask') + newmask = _mask.all(axis=axis) if fill_value is None: fill_value = maximum_fill_value(self) - # Get the data ................ - result = self.filled(fill_value).max(axis=axis).view(type(self)) - if result.ndim > 0: - result._mask = mask - return result - #........................ - def ptp(self, axis=None, fill_value=None): - """Return the visible data range (max-min) along the given axis. 
+ # No explicit output + if out is None: + result = self.filled(fill_value).max(axis=axis, out=out).view(type(self)) + if result.ndim: + # Set the mask + result.__setmask__(newmask) + # Get rid of Infs + if newmask.ndim: + np.putmask(result, newmask, result.fill_value) + elif newmask: + result = masked + return result + # Explicit output + result = self.filled(fill_value).max(axis=axis, out=out) + if isinstance(out, MaskedArray): + outmask = getattr(out, '_mask', nomask) + if (outmask is nomask): + outmask = out._mask = make_mask_none(out.shape) + outmask.flat = newmask + else: + np.putmask(out, newmask, np.nan) + return out - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - fill_value : {var}, optional - Value used to fill in the masked values. If None, the - maximum uses the maximum default, the minimum uses the - minimum default. + def ptp(self, axis=None, out=None, fill_value=None): + """a.ptp(axis=None, out=None) + Return (maximum - minimum) along the the given dimension + (i.e. peak-to-peak value). + + Parameters + ---------- + axis : {None, int}, optional + Axis along which to find the peaks. If None (default) the + flattened array is used. + out : array_like + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. + fill_value : {var}, optional + Value used to fill in the masked values. + + Returns + ------- + ptp : ndarray. + A new array holding the result, unless ``out`` was + specified, in which case a reference to ``out`` is returned. + + """ - return self.max(axis, fill_value) - self.min(axis, fill_value) + if out is None: + result = self.max(axis=axis, fill_value=fill_value) + result -= self.min(axis=axis, fill_value=fill_value) + return result + out.flat = self.max(axis=axis, out=out, fill_value=fill_value) + out -= self.min(axis=axis, fill_value=fill_value) + return out + # Array methods --------------------------------------- copy = _arraymethod('copy') diagonal = _arraymethod('diagonal') @@ -2658,7 +2968,7 @@ isMA = isMaskedArray #backward compatibility # We define the masked singleton as a float for higher precedence... # Note that it can be tricky sometimes w/ type comparison -masked_singleton = MaskedArray(0, dtype=float_, mask=True) +masked_singleton = MaskedArray(0, dtype=np.float_, mask=True) masked = masked_singleton masked_array = MaskedArray @@ -2675,7 +2985,7 @@ for convenience. And backwards compatibility... """ - #TODO: we should try to put 'order' somwehere + #!!!: we should try to put 'order' somwehere return MaskedArray(data, mask=mask, dtype=dtype, copy=copy, subok=subok, keep_mask=keep_mask, hard_mask=hard_mask, fill_value=fill_value, ndmin=ndmin, shrink=shrink) @@ -2764,35 +3074,32 @@ self.fill_value_func = maximum_fill_value #.......................................................... -def min(array, axis=None, out=None): - """Return the minima along the given axis. +def min(obj, axis=None, out=None, fill_value=None): + try: + return obj.min(axis=axis, fill_value=fill_value, out=out) + except (AttributeError, TypeError): + # If obj doesn't have a max method, + # ...or if the method doesn't accept a fill_value argument + return asanyarray(obj).min(axis=axis, fill_value=fill_value, out=out) +min.__doc__ = MaskedArray.min.__doc__ - If `axis` is None, applies to the flattened array. 
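A sketch of the min/max/ptp behaviour above when an entire slice is masked (illustrative data):

    import numpy.ma as ma

    x = ma.array([[1, 2], [3, 4]], mask=[[0, 0], [1, 1]])
    x.min(axis=1)   # [1, --]: a fully masked row yields a masked result
    x.max(axis=1)   # [2, --]
    x.ptp(axis=1)   # [1, --]: max - min along the axis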
+def max(obj, axis=None, out=None, fill_value=None): + try: + return obj.max(axis=axis, fill_value=fill_value, out=out) + except (AttributeError, TypeError): + # If obj doesn't have a max method, + # ...or if the method doesn't accept a fill_value argument + return asanyarray(obj).max(axis=axis, fill_value=fill_value, out=out) +max.__doc__ = MaskedArray.max.__doc__ - """ - if out is not None: - raise TypeError("Output arrays Unsupported for masked arrays") - if axis is None: - return minimum(array) - else: - return minimum.reduce(array, axis) -min.__doc__ = MaskedArray.min.__doc__ -#............................ -def max(obj, axis=None, out=None): - if out is not None: - raise TypeError("Output arrays Unsupported for masked arrays") - if axis is None: - return maximum(obj) - else: - return maximum.reduce(obj, axis) -max.__doc__ = MaskedArray.max.__doc__ -#............................. -def ptp(obj, axis=None): +def ptp(obj, axis=None, out=None, fill_value=None): """a.ptp(axis=None) = a.max(axis)-a.min(axis)""" try: - return obj.max(axis)-obj.min(axis) - except AttributeError: - return max(obj, axis=axis) - min(obj, axis=axis) + return obj.ptp(axis, out=out, fill_value=fill_value) + except (AttributeError, TypeError): + # If obj doesn't have a max method, + # ...or if the method doesn't accept a fill_value argument + return asanyarray(obj).ptp(axis=axis, fill_value=fill_value, out=out) ptp.__doc__ = MaskedArray.ptp.__doc__ @@ -2816,7 +3123,7 @@ try: return getattr(MaskedArray, self._methodname).__doc__ except: - return getattr(numpy, self._methodname).__doc__ + return getattr(np, self._methodname).__doc__ def __call__(self, a, *args, **params): if isinstance(a, MaskedArray): return getattr(a, self._methodname).__call__(*args, **params) @@ -2830,7 +3137,7 @@ try: return method(*args, **params) except SystemError: - return getattr(numpy,self._methodname).__call__(a, *args, **params) + return getattr(np,self._methodname).__call__(a, *args, **params) all = _frommethod('all') anomalies = anom = _frommethod('anom') @@ -2877,13 +3184,13 @@ # Get the result and view it as a (subclass of) MaskedArray result = umath.power(fa,fb).view(basetype) # Find where we're in trouble w/ NaNs and Infs - invalid = numpy.logical_not(numpy.isfinite(result.view(ndarray))) + invalid = np.logical_not(np.isfinite(result.view(ndarray))) # Retrieve some extra attributes if needed if isinstance(result,MaskedArray): result._update_from(a) # Add the initial mask if m is not nomask: - if numpy.isscalar(result): + if np.isscalar(result): return masked result._mask = m # Fix the invalid parts @@ -2904,7 +3211,7 @@ # if m.all(): # fa.flat = 1 # else: -# numpy.putmask(fa,m,1) +# np.putmask(fa,m,1) # return masked_array(umath.power(fa, fb), m) #.............................................................................. @@ -2952,7 +3259,7 @@ else: filler = fill_value # return - indx = numpy.indices(a.shape).tolist() + indx = np.indices(a.shape).tolist() indx[axis] = filled(a,filler).argsort(axis=axis,kind=kind,order=order) return a[indx] sort.__doc__ = MaskedArray.sort.__doc__ @@ -2961,13 +3268,13 @@ def compressed(x): """Return a 1-D array of all the non-masked data.""" if getmask(x) is nomask: - return numpy.asanyarray(x) + return np.asanyarray(x) else: return x.compressed() def concatenate(arrays, axis=0): "Concatenate the arrays along the given axis." 
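A sketch of the module-level dispatch pattern above: the function defers to the object's own method and falls back to asanyarray for plain sequences (illustrative values):

    import numpy.ma as ma

    x = ma.array([5, 1, 4], mask=[0, 0, 1])
    ma.max(x)            # 5: dispatches to MaskedArray.max
    ma.max([5, 1, 4])    # 5: a plain list is wrapped with asanyarray first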
- d = numpy.concatenate([getdata(a) for a in arrays], axis) + d = np.concatenate([getdata(a) for a in arrays], axis) rcls = get_masked_subclass(*arrays) data = d.view(rcls) # Check whether one of the arrays has a non-empty mask... @@ -2977,7 +3284,7 @@ else: return data # OK, so we have to concatenate the masks - dm = numpy.concatenate([getmaskarray(a) for a in arrays], axis) + dm = np.concatenate([getmaskarray(a) for a in arrays], axis) # If we decide to keep a '_shrinkmask' option, we want to check that ... # ... all of them are True, and then check for dm.any() # shrink = numpy.logical_or.reduce([getattr(a,'_shrinkmask',True) for a in arrays]) @@ -3059,21 +3366,21 @@ if getmask(a) is nomask: if valmask is not nomask: a._sharedmask = True - a.mask = numpy.zeros(a.shape, dtype=bool_) - numpy.putmask(a._mask, mask, valmask) + a.mask = np.zeros(a.shape, dtype=bool_) + np.putmask(a._mask, mask, valmask) elif a._hardmask: if valmask is not nomask: m = a._mask.copy() - numpy.putmask(m, mask, valmask) + np.putmask(m, mask, valmask) a.mask |= m else: if valmask is nomask: valmask = getmaskarray(values) - numpy.putmask(a._mask, mask, valmask) - numpy.putmask(a._data, mask, valdata) + np.putmask(a._mask, mask, valmask) + np.putmask(a._data, mask, valdata) return -def transpose(a,axes=None): +def transpose(a, axes=None): """Return a view of the array with dimensions permuted according to axes, as a masked array. @@ -3107,8 +3414,8 @@ # We can't use _frommethods here, as N.resize is notoriously whiny. m = getmask(x) if m is not nomask: - m = numpy.resize(m, new_shape) - result = numpy.resize(x, new_shape).view(get_masked_subclass(x)) + m = np.resize(m, new_shape) + result = np.resize(x, new_shape).view(get_masked_subclass(x)) if result.ndim: result._mask = m return result @@ -3117,18 +3424,18 @@ #................................................ def rank(obj): "maskedarray version of the numpy function." - return fromnumeric.rank(getdata(obj)) -rank.__doc__ = numpy.rank.__doc__ + return np.rank(getdata(obj)) +rank.__doc__ = np.rank.__doc__ # def shape(obj): "maskedarray version of the numpy function." - return fromnumeric.shape(getdata(obj)) -shape.__doc__ = numpy.shape.__doc__ + return np.shape(getdata(obj)) +shape.__doc__ = np.shape.__doc__ # def size(obj, axis=None): "maskedarray version of the numpy function." - return fromnumeric.size(getdata(obj), axis) -size.__doc__ = numpy.size.__doc__ + return np.size(getdata(obj), axis) +size.__doc__ = np.size.__doc__ #................................................ #####-------------------------------------------------------------------------- @@ -3158,55 +3465,97 @@ elif x is None or y is None: raise ValueError, "Either both or neither x and y should be given." # Get the condition ............... - fc = filled(condition, 0).astype(bool_) - notfc = numpy.logical_not(fc) + fc = filled(condition, 0).astype(MaskType) + notfc = np.logical_not(fc) # Get the data ...................................... 
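For illustration, concatenation joins the masks along with the data (made-up values):

    import numpy.ma as ma

    a = ma.array([1, 2], mask=[0, 1])
    b = ma.array([3, 4], mask=[1, 0])
    ma.concatenate([a, b])   # data [1, 2, 3, 4], mask [False, True, True, False]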
xv = getdata(x) yv = getdata(y) if x is masked: ndtype = yv.dtype - xm = numpy.ones(fc.shape, dtype=MaskType) elif y is masked: ndtype = xv.dtype - ym = numpy.ones(fc.shape, dtype=MaskType) else: - ndtype = numpy.max([xv.dtype, yv.dtype]) - xm = getmask(x) - d = numpy.empty(fc.shape, dtype=ndtype).view(MaskedArray) - numpy.putmask(d._data, fc, xv.astype(ndtype)) - numpy.putmask(d._data, notfc, yv.astype(ndtype)) - d._mask = numpy.zeros(fc.shape, dtype=MaskType) - numpy.putmask(d._mask, fc, getmask(x)) - numpy.putmask(d._mask, notfc, getmask(y)) - d._mask |= getmaskarray(condition) - if not d._mask.any(): + ndtype = np.max([xv.dtype, yv.dtype]) + # Construct an empty array and fill it + d = np.empty(fc.shape, dtype=ndtype).view(MaskedArray) + _data = d._data + np.putmask(_data, fc, xv.astype(ndtype)) + np.putmask(_data, notfc, yv.astype(ndtype)) + # Create an empty mask and fill it + _mask = d._mask = np.zeros(fc.shape, dtype=MaskType) + np.putmask(_mask, fc, getmask(x)) + np.putmask(_mask, notfc, getmask(y)) + _mask |= getmaskarray(condition) + if not _mask.any(): d._mask = nomask return d -def choose (indices, t, out=None, mode='raise'): - "Return array shaped like indices with elements chosen from t" - #TODO: implement options `out` and `mode`, if possible. +def choose (indices, choices, out=None, mode='raise'): + """ + choose(a, choices, out=None, mode='raise') + + Use an index array to construct a new array from a set of choices. + + Given an array of integers and a set of n choice arrays, this method + will create a new array that merges each of the choice arrays. Where a + value in `a` is i, the new array will have the value that choices[i] + contains in the same place. + + Parameters + ---------- + a : int array + This array must contain integers in [0, n-1], where n is the number + of choices. + choices : sequence of arrays + Choice arrays. The index array and all of the choices should be + broadcastable to the same shape. + out : array, optional + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + 'raise' : raise an error + 'wrap' : wrap around + 'clip' : clip to the range + + Returns + ------- + merged_array : array + + See Also + -------- + choose : equivalent function + + """ def fmask (x): "Returns the filled array, or True if masked." if x is masked: - return 1 + return True return filled(x) def nmask (x): "Returns the mask, True if ``masked``, False if ``nomask``." if x is masked: - return 1 - m = getmask(x) - if m is nomask: - return 0 - return m + return True + return getmask(x) + # Get the indices...... c = filled(indices, 0) - masks = [nmask(x) for x in t] - a = [fmask(x) for x in t] - d = numpy.choose(c, a) - m = numpy.choose(c, masks) - m = make_mask(mask_or(m, getmask(indices)), copy=0, shrink=True) - return masked_array(d, mask=m) + # Get the masks........ + masks = [nmask(x) for x in choices] + data = [fmask(x) for x in choices] + # Construct the mask + outputmask = np.choose(c, masks, mode=mode) + outputmask = make_mask(mask_or(outputmask, getmask(indices)), + copy=0, shrink=True) + # Get the choices...... + d = np.choose(c, data, mode=mode, out=out).view(MaskedArray) + if out is not None: + if isinstance(out, MaskedArray): + out.__setmask__(outputmask) + return out + d.__setmask__(outputmask) + return d + def round_(a, decimals=0, out=None): """Return a copy of a, rounded to 'decimals' places. 
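A sketch of where/choose on masked inputs as rewritten above (illustrative values; the mask of the result is taken from whichever input is selected at each position):

    import numpy.ma as ma

    x = ma.array([1, 2, 3], mask=[0, 1, 0])
    y = ma.array([7, 8, 9], mask=[0, 0, 1])
    ma.where([True, False, True], x, y)   # [1, 8, 3]: x at positions 0 and 2, y at 1
    ma.choose([0, 1, 0], (x, y))          # [1, 8, 3]: the same selection driven by an index array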
@@ -3231,17 +3580,13 @@ """ if out is None: - return numpy.round_(a, decimals, out) + return np.round_(a, decimals, out) else: - numpy.round_(getdata(a), decimals, out) + np.round_(getdata(a), decimals, out) if hasattr(out, '_mask'): out._mask = getmask(a) return out -def arange(stop, start=None, step=1, dtype=None): - "maskedarray version of the numpy function." - return numpy.arange(stop, start, step, dtype).view(MaskedArray) -arange.__doc__ = numpy.arange.__doc__ def inner(a, b): "maskedarray version of the numpy function." @@ -3251,8 +3596,8 @@ fa.shape = (1,) if len(fb.shape) == 0: fb.shape = (1,) - return numpy.inner(fa, fb).view(MaskedArray) -inner.__doc__ = numpy.inner.__doc__ + return np.inner(fa, fb).view(MaskedArray) +inner.__doc__ = np.inner.__doc__ inner.__doc__ += doc_note("Masked values are replaced by 0.") innerproduct = inner @@ -3260,16 +3605,16 @@ "maskedarray version of the numpy function." fa = filled(a, 0).ravel() fb = filled(b, 0).ravel() - d = numeric.outer(fa, fb) + d = np.outer(fa, fb) ma = getmask(a) mb = getmask(b) if ma is nomask and mb is nomask: return masked_array(d) ma = getmaskarray(a) mb = getmaskarray(b) - m = make_mask(1-numeric.outer(1-ma, 1-mb), copy=0) + m = make_mask(1-np.outer(1-ma, 1-mb), copy=0) return masked_array(d, mask=m) -outer.__doc__ = numpy.outer.__doc__ +outer.__doc__ = np.outer.__doc__ outer.__doc__ += doc_note("Masked values are replaced by 0.") outerproduct = outer @@ -3310,7 +3655,7 @@ x = filled(array(d1, copy=0, mask=m), fill_value).astype(float) y = filled(array(d2, copy=0, mask=m), 1).astype(float) d = umath.less_equal(umath.absolute(x-y), atol + rtol * umath.absolute(y)) - return fromnumeric.alltrue(fromnumeric.ravel(d)) + return np.alltrue(np.ravel(d)) #.............................................................................. def asarray(a, dtype=None): @@ -3336,26 +3681,6 @@ return masked_array(a, dtype=dtype, copy=False, keep_mask=True, subok=True) -def empty(new_shape, dtype=float): - "maskedarray version of the numpy function." - return numpy.empty(new_shape, dtype).view(MaskedArray) -empty.__doc__ = numpy.empty.__doc__ - -def empty_like(a): - "maskedarray version of the numpy function." - return numpy.empty_like(a).view(MaskedArray) -empty_like.__doc__ = numpy.empty_like.__doc__ - -def ones(new_shape, dtype=float): - "maskedarray version of the numpy function." - return numpy.ones(new_shape, dtype).view(MaskedArray) -ones.__doc__ = numpy.ones.__doc__ - -def zeros(new_shape, dtype=float): - "maskedarray version of the numpy function." - return numpy.zeros(new_shape, dtype).view(MaskedArray) -zeros.__doc__ = numpy.zeros.__doc__ - #####-------------------------------------------------------------------------- #---- --- Pickling --- #####-------------------------------------------------------------------------- @@ -3405,7 +3730,7 @@ """ __doc__ = None def __init__(self, funcname): - self._func = getattr(numpy, funcname) + self._func = getattr(np, funcname) self.__doc__ = self.getdoc() def getdoc(self): "Return the doc of the function (from the doc of the method)." 
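The "masked values are replaced by 0" note for inner/outer can be sketched as follows (illustrative data):

    import numpy.ma as ma

    a = ma.array([1, 2], mask=[0, 1])
    b = ma.array([3, 4])
    ma.outer(a, b)   # data [[3, 4], [0, 0]]; the row for the masked element is fully masked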
@@ -3413,10 +3738,15 @@ def __call__(self, a, *args, **params): return self._func.__call__(a, *args, **params).view(MaskedArray) +arange = _convert2ma('arange') +clip = np.clip +empty = _convert2ma('empty') +empty_like = _convert2ma('empty_like') frombuffer = _convert2ma('frombuffer') fromfunction = _convert2ma('fromfunction') identity = _convert2ma('identity') -indices = numpy.indices -clip = numpy.clip +indices = np.indices +ones = _convert2ma('ones') +zeros = _convert2ma('zeros') ############################################################################### Modified: branches/1.1.x/numpy/ma/mrecords.py =================================================================== --- branches/1.1.x/numpy/ma/mrecords.py 2008-06-18 15:31:50 UTC (rev 5294) +++ branches/1.1.x/numpy/ma/mrecords.py 2008-06-18 18:31:38 UTC (rev 5295) @@ -51,9 +51,6 @@ formats = '' for obj in data: obj = np.asarray(obj) -# if not isinstance(obj, ndarray): -## if not isinstance(obj, ndarray): -# raise ValueError, "item in the array list must be an ndarray." formats += _typestr[obj.dtype.type] if issubclass(obj.dtype.type, ntypes.flexible): formats += `obj.itemsize` @@ -75,7 +72,7 @@ elif isinstance(names, str): new_names = names.split(',') else: - raise NameError, "illegal input names %s" % `names` + raise NameError("illegal input names %s" % `names`) nnames = len(new_names) if nnames < ndescr: new_names += default_names[nnames:] @@ -88,7 +85,7 @@ ndescr.append(t) else: ndescr.append((n,t[1])) - return numeric.dtype(ndescr) + return np.dtype(ndescr) def _get_fieldmask(self): @@ -124,7 +121,6 @@ self = recarray.__new__(cls, shape, dtype=dtype, buf=buf, offset=offset, strides=strides, formats=formats, byteorder=byteorder, aligned=aligned,) -# self = self.view(cls) # mdtype = [(k,'|b1') for (k,_) in self.dtype.descr] if mask is nomask or not np.size(mask): @@ -411,9 +407,9 @@ return ndarray.view(self, obj) except TypeError: pass - dtype = np.dtype(obj) - if dtype.fields is None: - return self.__array__().view(dtype) + dtype_ = np.dtype(obj) + if dtype_.fields is None: + return self.__array__().view(dtype_) return ndarray.view(self, obj) #...................................................... def filled(self, fill_value=None): @@ -451,14 +447,14 @@ def soften_mask(self): "Forces the mask to soft" self._hardmask = False - #...................................................... + def copy(self): """Returns a copy of the masked record.""" _localdict = self.__dict__ copied = self._data.copy().view(type(self)) copied._fieldmask = self._fieldmask.copy() return copied - #...................................................... + def tolist(self, fill_value=None): """Copy the data portion of the array to a hierarchical python list and returns that list. @@ -654,21 +650,21 @@ # Start the conversion loop ....... 
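The _convert2ma wrappers above simply expose masked-array views of the corresponding numpy constructors, e.g. (illustrative):

    import numpy.ma as ma

    ma.arange(4)                  # MaskedArray([0, 1, 2, 3]), nothing masked
    ma.zeros((2, 3))              # a 2x3 MaskedArray of zeros
    ma.empty_like(ma.arange(4))   # uninitialised MaskedArray with matching shape and dtype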
for f in arr: try: - val = int(f) + int(f) except ValueError: try: - val = float(f) + float(f) except ValueError: try: val = complex(f) except ValueError: vartypes.append(arr.dtype) else: - vartypes.append(complex) + vartypes.append(np.dtype(complex)) else: - vartypes.append(float) + vartypes.append(np.dtype(float)) else: - vartypes.append(int) + vartypes.append(np.dtype(int)) return vartypes def openfile(fname): From numpy-svn at scipy.org Wed Jun 18 18:53:45 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 18 Jun 2008 17:53:45 -0500 (CDT) Subject: [Numpy-svn] r5296 - trunk Message-ID: <20080618225345.46D4339CA5C@scipy.org> Author: rkern Date: 2008-06-18 17:53:44 -0500 (Wed, 18 Jun 2008) New Revision: 5296 Modified: trunk/setup.py Log: PyPI metadata fixes. Modified: trunk/setup.py =================================================================== --- trunk/setup.py 2008-06-18 18:31:38 UTC (rev 5295) +++ trunk/setup.py 2008-06-18 22:53:44 UTC (rev 5296) @@ -20,7 +20,7 @@ import sys CLASSIFIERS = """\ -Development Status :: 4 - Beta +Development Status :: 5 - Production/Stable Intended Audience :: Science/Research Intended Audience :: Developers License :: OSI Approved @@ -79,7 +79,7 @@ maintainer_email = "numpy-discussion at lists.sourceforge.net", description = DOCLINES[0], long_description = "\n".join(DOCLINES[2:]), - url = "http://numeric.scipy.org", + url = "http://numpy.scipy.org", download_url = "http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=175103", license = 'BSD', classifiers=filter(None, CLASSIFIERS.split('\n')), From numpy-svn at scipy.org Wed Jun 18 18:55:30 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 18 Jun 2008 17:55:30 -0500 (CDT) Subject: [Numpy-svn] r5297 - branches/1.1.x Message-ID: <20080618225530.EA5E939CA92@scipy.org> Author: rkern Date: 2008-06-18 17:55:30 -0500 (Wed, 18 Jun 2008) New Revision: 5297 Modified: branches/1.1.x/setup.py Log: PyPI metadata fixes. Modified: branches/1.1.x/setup.py =================================================================== --- branches/1.1.x/setup.py 2008-06-18 22:53:44 UTC (rev 5296) +++ branches/1.1.x/setup.py 2008-06-18 22:55:30 UTC (rev 5297) @@ -20,7 +20,7 @@ import sys CLASSIFIERS = """\ -Development Status :: 4 - Beta +Development Status :: 5 - Production/Stable Intended Audience :: Science/Research Intended Audience :: Developers License :: OSI Approved @@ -79,7 +79,7 @@ maintainer_email = "numpy-discussion at lists.sourceforge.net", description = DOCLINES[0], long_description = "\n".join(DOCLINES[2:]), - url = "http://numeric.scipy.org", + url = "http://numpy.scipy.org", download_url = "http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=175103", license = 'BSD', classifiers=filter(None, CLASSIFIERS.split('\n')), From numpy-svn at scipy.org Thu Jun 19 02:16:53 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 19 Jun 2008 01:16:53 -0500 (CDT) Subject: [Numpy-svn] r5298 - trunk/numpy/doc/cython Message-ID: <20080619061653.14B3C39C071@scipy.org> Author: fperez Date: 2008-06-19 01:16:48 -0500 (Thu, 19 Jun 2008) New Revision: 5298 Added: trunk/numpy/doc/cython/c_numpy.pxd trunk/numpy/doc/cython/c_python.pxd Removed: trunk/numpy/doc/cython/Python.pxi trunk/numpy/doc/cython/numpy.pxi Modified: trunk/numpy/doc/cython/Makefile trunk/numpy/doc/cython/numpyx.pyx Log: Updated Cython code to use .pxd files with cimport instead of .pxi/include. Using cimport/pxd is the currently recommended approach by the Cython team. 
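A minimal sketch of the cimport-based style this change switches to, closely following the numpyx.pyx code below (the module name example.pyx is hypothetical; it assumes c_python.pxd and c_numpy.pxd are on Cython's include path):

    # example.pyx -- hypothetical module name
    cimport c_numpy as cnp      # C-level NumPy declarations via cimport, not 'include'
    import numpy as np          # the regular Python-level NumPy API

    cnp.import_array()          # initialise the NumPy C API before touching any array

    def ndim_of(cnp.ndarray arr):
        # 'nd' is the struct member exposed by c_numpy.pxd
        return arr.nd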
Modified: trunk/numpy/doc/cython/Makefile =================================================================== --- trunk/numpy/doc/cython/Makefile 2008-06-18 22:55:30 UTC (rev 5297) +++ trunk/numpy/doc/cython/Makefile 2008-06-19 06:16:48 UTC (rev 5298) @@ -24,7 +24,7 @@ numpyx.pyx.html: numpyx.pyx cython -a numpyx.pyx - @echo "Annotated HTML of the C code generated in numpy.pyx.html" + @echo "Annotated HTML of the C code generated in numpyx.html" # Phony targets for cleanup and similar uses Deleted: trunk/numpy/doc/cython/Python.pxi =================================================================== --- trunk/numpy/doc/cython/Python.pxi 2008-06-18 22:55:30 UTC (rev 5297) +++ trunk/numpy/doc/cython/Python.pxi 2008-06-19 06:16:48 UTC (rev 5298) @@ -1,62 +0,0 @@ -# :Author: Robert Kern -# :Copyright: 2004, Enthought, Inc. -# :License: BSD Style - - -cdef extern from "Python.h": - # Not part of the Python API, but we might as well define it here. - # Note that the exact type doesn't actually matter for Pyrex. - ctypedef int size_t - - # Some type declarations we need - ctypedef int Py_intptr_t - - - # String API - char* PyString_AsString(object string) - char* PyString_AS_STRING(object string) - object PyString_FromString(char* c_string) - object PyString_FromStringAndSize(char* c_string, int length) - object PyString_InternFromString(char *v) - - # Float API - object PyFloat_FromDouble(double v) - double PyFloat_AsDouble(object ob) - long PyInt_AsLong(object ob) - - - # Memory API - void* PyMem_Malloc(size_t n) - void* PyMem_Realloc(void* buf, size_t n) - void PyMem_Free(void* buf) - - void Py_DECREF(object obj) - void Py_XDECREF(object obj) - void Py_INCREF(object obj) - void Py_XINCREF(object obj) - - # CObject API - ctypedef void (*destructor1)(void* cobj) - ctypedef void (*destructor2)(void* cobj, void* desc) - int PyCObject_Check(object p) - object PyCObject_FromVoidPtr(void* cobj, destructor1 destr) - object PyCObject_FromVoidPtrAndDesc(void* cobj, void* desc, - destructor2 destr) - void* PyCObject_AsVoidPtr(object self) - void* PyCObject_GetDesc(object self) - int PyCObject_SetVoidPtr(object self, void* cobj) - - # TypeCheck API - int PyFloat_Check(object obj) - int PyInt_Check(object obj) - - # Error API - int PyErr_Occurred() - void PyErr_Clear() - int PyErr_CheckSignals() - -cdef extern from "string.h": - void *memcpy(void *s1, void *s2, int n) - -cdef extern from "math.h": - double fabs(double x) Copied: trunk/numpy/doc/cython/c_numpy.pxd (from rev 5297, trunk/numpy/doc/cython/numpy.pxi) Copied: trunk/numpy/doc/cython/c_python.pxd (from rev 5297, trunk/numpy/doc/cython/Python.pxi) Deleted: trunk/numpy/doc/cython/numpy.pxi =================================================================== --- trunk/numpy/doc/cython/numpy.pxi 2008-06-18 22:55:30 UTC (rev 5297) +++ trunk/numpy/doc/cython/numpy.pxi 2008-06-19 06:16:48 UTC (rev 5298) @@ -1,133 +0,0 @@ -# :Author: Travis Oliphant - -cdef extern from "numpy/arrayobject.h": - - cdef enum NPY_TYPES: - NPY_BOOL - NPY_BYTE - NPY_UBYTE - NPY_SHORT - NPY_USHORT - NPY_INT - NPY_UINT - NPY_LONG - NPY_ULONG - NPY_LONGLONG - NPY_ULONGLONG - NPY_FLOAT - NPY_DOUBLE - NPY_LONGDOUBLE - NPY_CFLOAT - NPY_CDOUBLE - NPY_CLONGDOUBLE - NPY_OBJECT - NPY_STRING - NPY_UNICODE - NPY_VOID - NPY_NTYPES - NPY_NOTYPE - - cdef enum requirements: - NPY_CONTIGUOUS - NPY_FORTRAN - NPY_OWNDATA - NPY_FORCECAST - NPY_ENSURECOPY - NPY_ENSUREARRAY - NPY_ELEMENTSTRIDES - NPY_ALIGNED - NPY_NOTSWAPPED - NPY_WRITEABLE - NPY_UPDATEIFCOPY - NPY_ARR_HAS_DESCR - - NPY_BEHAVED - 
NPY_BEHAVED_NS - NPY_CARRAY - NPY_CARRAY_RO - NPY_FARRAY - NPY_FARRAY_RO - NPY_DEFAULT - - NPY_IN_ARRAY - NPY_OUT_ARRAY - NPY_INOUT_ARRAY - NPY_IN_FARRAY - NPY_OUT_FARRAY - NPY_INOUT_FARRAY - - NPY_UPDATE_ALL - - cdef enum defines: - NPY_MAXDIMS - - ctypedef struct npy_cdouble: - double real - double imag - - ctypedef struct npy_cfloat: - double real - double imag - - ctypedef int npy_intp - - ctypedef extern class numpy.dtype [object PyArray_Descr]: - cdef int type_num, elsize, alignment - cdef char type, kind, byteorder, hasobject - cdef object fields, typeobj - - ctypedef extern class numpy.ndarray [object PyArrayObject]: - cdef char *data - cdef int nd - cdef npy_intp *dimensions - cdef npy_intp *strides - cdef object base - cdef dtype descr - cdef int flags - - ctypedef extern class numpy.flatiter [object PyArrayIterObject]: - cdef int nd_m1 - cdef npy_intp index, size - cdef ndarray ao - cdef char *dataptr - - ctypedef extern class numpy.broadcast [object PyArrayMultiIterObject]: - cdef int numiter - cdef npy_intp size, index - cdef int nd - cdef npy_intp *dimensions - cdef void **iters - - object PyArray_ZEROS(int ndims, npy_intp* dims, NPY_TYPES type_num, int fortran) - object PyArray_EMPTY(int ndims, npy_intp* dims, NPY_TYPES type_num, int fortran) - dtype PyArray_DescrFromTypeNum(NPY_TYPES type_num) - object PyArray_SimpleNew(int ndims, npy_intp* dims, NPY_TYPES type_num) - int PyArray_Check(object obj) - object PyArray_ContiguousFromAny(object obj, NPY_TYPES type, - int mindim, int maxdim) - object PyArray_ContiguousFromObject(object obj, NPY_TYPES type, - int mindim, int maxdim) - npy_intp PyArray_SIZE(ndarray arr) - npy_intp PyArray_NBYTES(ndarray arr) - void *PyArray_DATA(ndarray arr) - object PyArray_FromAny(object obj, dtype newtype, int mindim, int maxdim, - int requirements, object context) - object PyArray_FROMANY(object obj, NPY_TYPES type_num, int min, - int max, int requirements) - object PyArray_NewFromDescr(object subtype, dtype newtype, int nd, - npy_intp* dims, npy_intp* strides, void* data, - int flags, object parent) - - object PyArray_FROM_OTF(object obj, NPY_TYPES type, int flags) - object PyArray_EnsureArray(object) - - object PyArray_MultiIterNew(int n, ...) - - char *PyArray_MultiIter_DATA(broadcast multi, int i) - void PyArray_MultiIter_NEXTi(broadcast multi, int i) - void PyArray_MultiIter_NEXT(broadcast multi) - - object PyArray_IterNew(object arr) - void PyArray_ITER_NEXT(flatiter it) - - void import_array() Modified: trunk/numpy/doc/cython/numpyx.pyx =================================================================== --- trunk/numpy/doc/cython/numpyx.pyx 2008-06-18 22:55:30 UTC (rev 5297) +++ trunk/numpy/doc/cython/numpyx.pyx 2008-06-19 06:16:48 UTC (rev 5298) @@ -2,21 +2,21 @@ """Cython access to Numpy arrays - simple example. """ -# Includes from the python headers -include "Python.pxi" -# Include the Numpy C API for use via Cython extension code -include "numpy.pxi" +# Import the pieces of the Python C API we need to use (from c_python.pxd): +cimport c_python as py +# Import the NumPy C API (from c_numpy.pxd) +cimport c_numpy as cnp + ################################################ # Initialize numpy - this MUST be done before any other code is executed. 
-import_array() +cnp.import_array() -# Import the Numpy module for access to its usual Python API +# Import the NumPy module for access to its usual Python API import numpy as np - # A 'def' function is visible in the Python-imported module -def print_array_info(ndarray arr): +def print_array_info(cnp.ndarray arr): """Simple information printer about an array. Code meant to illustrate Cython/NumPy integration only.""" @@ -24,19 +24,19 @@ cdef int i print '-='*10 - # Note: the double cast here (void * first, then Py_intptr_t) is needed in - # Cython but not in Pyrex, since the casting behavior of cython is slightly - # different (and generally safer) than that of Pyrex. In this case, we - # just want the memory address of the actual Array object, so we cast it to - # void before doing the Py_intptr_t cast: + # Note: the double cast here (void * first, then py.Py_intptr_t) is needed + # in Cython but not in Pyrex, since the casting behavior of cython is + # slightly different (and generally safer) than that of Pyrex. In this + # case, we just want the memory address of the actual Array object, so we + # cast it to void before doing the py.Py_intptr_t cast: print 'Printing array info for ndarray at 0x%0lx'% \ - (arr,) + (arr,) print 'number of dimensions:',arr.nd - print 'address of strides: 0x%0lx'%(arr.strides,) + print 'address of strides: 0x%0lx'%(arr.strides,) print 'strides:' for i from 0<=iarr.strides[i] + print ' stride %d:'%i,arr.strides[i] print 'memory dump:' print_elements( arr.data, arr.strides, arr.dimensions, arr.nd, sizeof(double), arr.dtype ) @@ -46,12 +46,12 @@ # A 'cdef' function is NOT visible to the python side, but it is accessible to # the rest of this Cython module cdef print_elements(char *data, - Py_intptr_t* strides, - Py_intptr_t* dimensions, + py.Py_intptr_t* strides, + py.Py_intptr_t* dimensions, int nd, int elsize, object dtype): - cdef Py_intptr_t i,j + cdef py.Py_intptr_t i,j cdef void* elptr if dtype not in [np.dtype(np.object_), @@ -78,7 +78,7 @@ print_elements(data, strides+1, dimensions+1, nd-1, elsize, dtype) data = data + strides[0] -def test_methods(ndarray arr): +def test_methods(cnp.ndarray arr): """Test a few attribute accesses for an array. This illustrates how the pyrex-visible object is in practice a strange From numpy-svn at scipy.org Thu Jun 19 08:09:54 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 19 Jun 2008 07:09:54 -0500 (CDT) Subject: [Numpy-svn] r5299 - trunk/numpy/doc Message-ID: <20080619120954.7F95339C405@scipy.org> Author: stefan Date: 2008-06-19 07:09:38 -0500 (Thu, 19 Jun 2008) New Revision: 5299 Modified: trunk/numpy/doc/HOWTO_DOCUMENT.txt Log: Use a colon instead of a semi-colon to separate index levels. Modified: trunk/numpy/doc/HOWTO_DOCUMENT.txt =================================================================== --- trunk/numpy/doc/HOWTO_DOCUMENT.txt 2008-06-19 06:16:48 UTC (rev 5298) +++ trunk/numpy/doc/HOWTO_DOCUMENT.txt 2008-06-19 12:09:38 UTC (rev 5299) @@ -329,11 +329,11 @@ :refguide: ufunc, trigonometry To index a function as a sub-category of a class, separate index - entries by a semi-colon, e.g. + entries by a colon, e.g. 
:: - :refguide: ufunc, numpy;reshape, other + :refguide: ufunc, numpy:reshape, other A `list of available categories `_ is From numpy-svn at scipy.org Thu Jun 19 20:18:37 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 19 Jun 2008 19:18:37 -0500 (CDT) Subject: [Numpy-svn] r5300 - trunk/numpy/ma Message-ID: <20080620001837.ED18439CADA@scipy.org> Author: pierregm Date: 2008-06-19 19:18:32 -0500 (Thu, 19 Jun 2008) New Revision: 5300 Modified: trunk/numpy/ma/core.py trunk/numpy/ma/mrecords.py Log: * put maximum/minimum_fill_value back in __all__ Modified: trunk/numpy/ma/core.py =================================================================== --- trunk/numpy/ma/core.py 2008-06-19 12:09:38 UTC (rev 5299) +++ trunk/numpy/ma/core.py 2008-06-20 00:18:32 UTC (rev 5300) @@ -44,7 +44,8 @@ 'masked_less','masked_less_equal', 'masked_not_equal', 'masked_object','masked_outside', 'masked_print_option', 'masked_singleton','masked_values', 'masked_where', 'max', 'maximum', - 'mean', 'min', 'minimum', 'multiply', + 'maximum_fill_value', 'mean', 'min', 'minimum', 'minimum_fill_value', + 'multiply', 'negative', 'nomask', 'nonzero', 'not_equal', 'ones', 'outer', 'outerproduct', 'power', 'product', 'ptp', 'put', 'putmask', @@ -110,12 +111,12 @@ 'V' : '???', } max_filler = ntypes._minvals -max_filler.update(dict([(k, -np.inf) for k in [np.float32, np.float64]])) +max_filler.update([(k, -np.inf) for k in [np.float32, np.float64]]) min_filler = ntypes._maxvals -min_filler.update(dict([(k, +np.inf) for k in [np.float32, np.float64]])) +min_filler.update([(k, +np.inf) for k in [np.float32, np.float64]]) if 'float128' in ntypes.typeDict: - max_filler[np.float128] = -np.inf - min_filler[np.float128] = +np.inf + max_filler.update([(np.float128, -np.inf)]) + min_filler.update([(np.float128, +np.inf)]) def default_fill_value(obj): Modified: trunk/numpy/ma/mrecords.py =================================================================== --- trunk/numpy/ma/mrecords.py 2008-06-19 12:09:38 UTC (rev 5299) +++ trunk/numpy/ma/mrecords.py 2008-06-20 00:18:32 UTC (rev 5300) @@ -170,7 +170,7 @@ _locdict = self.__dict__ if _locdict['_baseclass'] == ndarray: _locdict['_baseclass'] = recarray - _locdict.update({'_mask':_mask, '_fieldmask':_mask}) + _locdict.update(_mask=_mask, _fieldmask=_mask) return def _getdata(self): From numpy-svn at scipy.org Fri Jun 20 00:17:56 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 19 Jun 2008 23:17:56 -0500 (CDT) Subject: [Numpy-svn] r5301 - trunk/numpy/doc/cython Message-ID: <20080620041756.C86ED39CB14@scipy.org> Author: fperez Date: 2008-06-19 23:17:54 -0500 (Thu, 19 Jun 2008) New Revision: 5301 Modified: trunk/numpy/doc/cython/c_numpy.pxd trunk/numpy/doc/cython/numpyx.pyx Log: Move the import_array() call directly into c_numpy.pxd. This makes the user-visible API for Cython usage simpler and closer to the Python one. Modified: trunk/numpy/doc/cython/c_numpy.pxd =================================================================== --- trunk/numpy/doc/cython/c_numpy.pxd 2008-06-20 00:18:32 UTC (rev 5300) +++ trunk/numpy/doc/cython/c_numpy.pxd 2008-06-20 04:17:54 UTC (rev 5301) @@ -1,5 +1,8 @@ # :Author: Travis Oliphant +# API declaration section. This basically exposes the NumPy C API to +# Pyrex/Cython programs. 
+ cdef extern from "numpy/arrayobject.h": cdef enum NPY_TYPES: @@ -131,3 +134,11 @@ void PyArray_ITER_NEXT(flatiter it) void import_array() + +######################################################################## +# Other code (mostly initialization) + +# NumPy must be initialized before any user code is called in the extension +# module. By doing so here, we ensure the users don't have to explicitly +# remember this themselves, and provide a cleaner Cython API. +import_array() Modified: trunk/numpy/doc/cython/numpyx.pyx =================================================================== --- trunk/numpy/doc/cython/numpyx.pyx 2008-06-20 00:18:32 UTC (rev 5300) +++ trunk/numpy/doc/cython/numpyx.pyx 2008-06-20 04:17:54 UTC (rev 5301) @@ -2,16 +2,15 @@ """Cython access to Numpy arrays - simple example. """ -# Import the pieces of the Python C API we need to use (from c_python.pxd): +# Load the pieces of the Python C API we need to use (from c_python.pxd). Note +# that a 'cimport' is similart to a Python 'import' statement, but it provides +# access to the C part of a library instead of its Python-visible API. Please +# consult the Pyrex/Cython documentation for further details. cimport c_python as py -# Import the NumPy C API (from c_numpy.pxd) +# (C)Import the NumPy C API (from c_numpy.pxd) cimport c_numpy as cnp -################################################ -# Initialize numpy - this MUST be done before any other code is executed. -cnp.import_array() - # Import the NumPy module for access to its usual Python API import numpy as np From numpy-svn at scipy.org Fri Jun 20 02:04:30 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 20 Jun 2008 01:04:30 -0500 (CDT) Subject: [Numpy-svn] r5302 - in branches/cdavid: . numpy numpy/core numpy/core/code_generators numpy/core/src numpy/core/tests numpy/distutils numpy/distutils/command numpy/distutils/tests numpy/distutils/tests/f2py_ext/tests numpy/distutils/tests/f2py_f90_ext/tests numpy/distutils/tests/gen_ext/tests numpy/distutils/tests/pyrex_ext/tests numpy/distutils/tests/swig_ext/tests numpy/doc numpy/doc/cython numpy/f2py/lib/parser numpy/f2py/lib/tests numpy/f2py/tests/array_from_pyobj/tests numpy/fft numpy/fft/tests numpy/lib numpy/lib/tests numpy/linalg numpy/linalg/tests numpy/ma numpy/ma/tests numpy/numarray numpy/oldnumeric numpy/oldnumeric/tests numpy/random numpy/random/tests numpy/testing numpy/testing/tests numpy/tests tools/win32build tools/win32build/cpuid tools/win32build/nsis_scripts Message-ID: <20080620060430.50A5A39CB1E@scipy.org> Author: cdavid Date: 2008-06-20 00:59:26 -0500 (Fri, 20 Jun 2008) New Revision: 5302 Added: branches/cdavid/numpy/core/SConscript branches/cdavid/numpy/core/SConstruct branches/cdavid/numpy/core/code_generators/docstrings.py branches/cdavid/numpy/core/code_generators/generate_numpy_api.py branches/cdavid/numpy/core/code_generators/numpy_api_order.txt branches/cdavid/numpy/doc/cython/c_numpy.pxd branches/cdavid/numpy/doc/cython/c_python.pxd branches/cdavid/numpy/fft/SConscript branches/cdavid/numpy/fft/SConstruct branches/cdavid/numpy/lib/SConscript branches/cdavid/numpy/lib/SConstruct branches/cdavid/numpy/linalg/SConscript branches/cdavid/numpy/linalg/SConstruct branches/cdavid/numpy/numarray/SConscript branches/cdavid/numpy/numarray/SConstruct branches/cdavid/numpy/random/SConscript branches/cdavid/numpy/random/SConstruct branches/cdavid/numpy/testing/decorators.py branches/cdavid/numpy/testing/nosetester.py branches/cdavid/numpy/testing/nulltester.py 
branches/cdavid/numpy/testing/pkgtester.py branches/cdavid/tools/win32build/README.txt branches/cdavid/tools/win32build/cpuid/ branches/cdavid/tools/win32build/cpuid/SConstruct branches/cdavid/tools/win32build/cpuid/cpuid.c branches/cdavid/tools/win32build/cpuid/cpuid.h branches/cdavid/tools/win32build/cpuid/test.c branches/cdavid/tools/win32build/nsis_scripts/ branches/cdavid/tools/win32build/nsis_scripts/numpy-superinstaller-2.4.nsi branches/cdavid/tools/win32build/nsis_scripts/numpy-superinstaller-2.5.nsi Removed: branches/cdavid/numpy/core/SConstruct branches/cdavid/numpy/core/code_generators/array_api_order.txt branches/cdavid/numpy/core/code_generators/generate_array_api.py branches/cdavid/numpy/core/code_generators/multiarray_api_order.txt branches/cdavid/numpy/doc/cython/Python.pxi branches/cdavid/numpy/doc/cython/numpy.pxi branches/cdavid/numpy/fft/SConstruct branches/cdavid/numpy/lib/SConstruct branches/cdavid/numpy/linalg/SConstruct branches/cdavid/numpy/numarray/SConstruct branches/cdavid/numpy/random/SConstruct branches/cdavid/numpy/testing/info.py branches/cdavid/numpy/testing/parametric.py branches/cdavid/tools/win32build/cpuid/SConstruct branches/cdavid/tools/win32build/cpuid/cpuid.c branches/cdavid/tools/win32build/cpuid/cpuid.h branches/cdavid/tools/win32build/cpuid/test.c branches/cdavid/tools/win32build/nsis_scripts/numpy-superinstaller-2.4.nsi branches/cdavid/tools/win32build/nsis_scripts/numpy-superinstaller-2.5.nsi Modified: branches/cdavid/ branches/cdavid/README.txt branches/cdavid/numpy/__init__.py branches/cdavid/numpy/_import_tools.py branches/cdavid/numpy/core/__init__.py branches/cdavid/numpy/core/code_generators/generate_umath.py branches/cdavid/numpy/core/scons_support.py branches/cdavid/numpy/core/setup.py branches/cdavid/numpy/core/setupscons.py branches/cdavid/numpy/core/src/_sortmodule.c.src branches/cdavid/numpy/core/src/arraymethods.c branches/cdavid/numpy/core/src/arrayobject.c branches/cdavid/numpy/core/src/arraytypes.inc.src branches/cdavid/numpy/core/src/multiarraymodule.c branches/cdavid/numpy/core/src/scalartypes.inc.src branches/cdavid/numpy/core/src/ufuncobject.c branches/cdavid/numpy/core/tests/test_defmatrix.py branches/cdavid/numpy/core/tests/test_errstate.py branches/cdavid/numpy/core/tests/test_memmap.py branches/cdavid/numpy/core/tests/test_multiarray.py branches/cdavid/numpy/core/tests/test_numeric.py branches/cdavid/numpy/core/tests/test_numerictypes.py branches/cdavid/numpy/core/tests/test_records.py branches/cdavid/numpy/core/tests/test_regression.py branches/cdavid/numpy/core/tests/test_scalarmath.py branches/cdavid/numpy/core/tests/test_ufunc.py branches/cdavid/numpy/core/tests/test_umath.py branches/cdavid/numpy/core/tests/test_unicode.py branches/cdavid/numpy/ctypeslib.py branches/cdavid/numpy/distutils/__init__.py branches/cdavid/numpy/distutils/command/scons.py branches/cdavid/numpy/distutils/conv_template.py branches/cdavid/numpy/distutils/tests/f2py_ext/tests/test_fib2.py branches/cdavid/numpy/distutils/tests/f2py_f90_ext/tests/test_foo.py branches/cdavid/numpy/distutils/tests/gen_ext/tests/test_fib3.py branches/cdavid/numpy/distutils/tests/pyrex_ext/tests/test_primes.py branches/cdavid/numpy/distutils/tests/swig_ext/tests/test_example.py branches/cdavid/numpy/distutils/tests/swig_ext/tests/test_example2.py branches/cdavid/numpy/distutils/tests/test_fcompiler_gnu.py branches/cdavid/numpy/distutils/tests/test_misc_util.py branches/cdavid/numpy/doc/DISTUTILS.txt branches/cdavid/numpy/doc/HOWTO_BUILD_DOCS.txt 
branches/cdavid/numpy/doc/HOWTO_DOCUMENT.txt branches/cdavid/numpy/doc/cython/Makefile branches/cdavid/numpy/doc/cython/numpyx.pyx branches/cdavid/numpy/doc/example.py branches/cdavid/numpy/f2py/lib/parser/test_Fortran2003.py branches/cdavid/numpy/f2py/lib/parser/test_parser.py branches/cdavid/numpy/f2py/lib/tests/test_derived_scalar.py branches/cdavid/numpy/f2py/lib/tests/test_module_module.py branches/cdavid/numpy/f2py/lib/tests/test_module_scalar.py branches/cdavid/numpy/f2py/lib/tests/test_scalar_function_in.py branches/cdavid/numpy/f2py/lib/tests/test_scalar_in_out.py branches/cdavid/numpy/f2py/tests/array_from_pyobj/tests/test_array_from_pyobj.py branches/cdavid/numpy/fft/__init__.py branches/cdavid/numpy/fft/tests/test_fftpack.py branches/cdavid/numpy/fft/tests/test_helper.py branches/cdavid/numpy/lib/__init__.py branches/cdavid/numpy/lib/function_base.py branches/cdavid/numpy/lib/tests/test__datasource.py branches/cdavid/numpy/lib/tests/test_arraysetops.py branches/cdavid/numpy/lib/tests/test_financial.py branches/cdavid/numpy/lib/tests/test_format.py branches/cdavid/numpy/lib/tests/test_function_base.py branches/cdavid/numpy/lib/tests/test_getlimits.py branches/cdavid/numpy/lib/tests/test_index_tricks.py branches/cdavid/numpy/lib/tests/test_io.py branches/cdavid/numpy/lib/tests/test_machar.py branches/cdavid/numpy/lib/tests/test_polynomial.py branches/cdavid/numpy/lib/tests/test_regression.py branches/cdavid/numpy/lib/tests/test_shape_base.py branches/cdavid/numpy/lib/tests/test_twodim_base.py branches/cdavid/numpy/lib/tests/test_type_check.py branches/cdavid/numpy/lib/tests/test_ufunclike.py branches/cdavid/numpy/linalg/__init__.py branches/cdavid/numpy/linalg/tests/test_linalg.py branches/cdavid/numpy/linalg/tests/test_regression.py branches/cdavid/numpy/ma/ branches/cdavid/numpy/ma/__init__.py branches/cdavid/numpy/ma/core.py branches/cdavid/numpy/ma/extras.py branches/cdavid/numpy/ma/mrecords.py branches/cdavid/numpy/ma/tests/test_core.py branches/cdavid/numpy/ma/tests/test_extras.py branches/cdavid/numpy/ma/tests/test_mrecords.py branches/cdavid/numpy/ma/tests/test_old_ma.py branches/cdavid/numpy/ma/tests/test_subclassing.py branches/cdavid/numpy/ma/testutils.py branches/cdavid/numpy/numarray/__init__.py branches/cdavid/numpy/oldnumeric/__init__.py branches/cdavid/numpy/oldnumeric/tests/test_oldnumeric.py branches/cdavid/numpy/random/__init__.py branches/cdavid/numpy/random/tests/test_random.py branches/cdavid/numpy/testing/__init__.py branches/cdavid/numpy/testing/numpytest.py branches/cdavid/numpy/testing/tests/test_utils.py branches/cdavid/numpy/testing/utils.py branches/cdavid/numpy/tests/test_ctypeslib.py branches/cdavid/numpy/version.py branches/cdavid/setup.py Log: Merged revisions 5205-5301 via svnmerge from http://svn.scipy.org/svn/numpy/trunk ................ r5211 | stefan | 2008-05-21 01:07:23 +0900 (Wed, 21 May 2008) | 2 lines Fix unit test capturing under Python 2.6. ................ r5212 | rkern | 2008-05-22 05:18:40 +0900 (Thu, 22 May 2008) | 1 line Try again to fix the endianness tests. ................ r5213 | charris | 2008-05-22 05:25:14 +0900 (Thu, 22 May 2008) | 2 lines Fix one small error in test(all=1). ................ r5214 | charris | 2008-05-22 06:38:11 +0900 (Thu, 22 May 2008) | 2 lines Fix ordering assumption in regression test. ................ r5215 | oliphant | 2008-05-22 06:53:36 +0900 (Thu, 22 May 2008) | 1 line Fix the logic testing for potential problems with array subclasses. ................ 
r5216 | oliphant | 2008-05-22 06:54:28 +0900 (Thu, 22 May 2008) | 1 line Fix comments in tests. ................ r5217 | charris | 2008-05-22 08:16:02 +0900 (Thu, 22 May 2008) | 2 lines Make test(all=True) the default. ................ r5218 | cdavid | 2008-05-22 11:02:36 +0900 (Thu, 22 May 2008) | 1 line Fix #789 by Alan Mcintyre. ................ r5219 | cdavid | 2008-05-22 12:14:53 +0900 (Thu, 22 May 2008) | 1 line Remove trailing space. ................ r5220 | oliphant | 2008-05-22 12:43:55 +0900 (Thu, 22 May 2008) | 1 line Fix ticket #789 again. ................ r5221 | oliphant | 2008-05-22 15:34:33 +0900 (Thu, 22 May 2008) | 2 lines Fix bug reported on SciPy mailing list which arose when the results of a broadcast were too large to fit in memory and the simple MultiplyList function is not doing overflow detection. Create a new funtion that does Overflow detection but apply it sparingly. morarge broadcast results could caus ................ r5222 | jarrod.millman | 2008-05-22 15:43:22 +0900 (Thu, 22 May 2008) | 2 lines fixed whitespace w/ reindent ................ r5223 | oliphant | 2008-05-23 00:09:28 +0900 (Fri, 23 May 2008) | 1 line Add one-more test case using getmap to supplement the setmap test. ................ r5224 | pierregm | 2008-05-23 02:18:16 +0900 (Fri, 23 May 2008) | 1 line test_set_fields: filter out the warning ................ r5225 | charris | 2008-05-23 03:06:53 +0900 (Fri, 23 May 2008) | 2 lines Add PyArray_CompareString to the API. ................ r5226 | charris | 2008-05-23 07:07:16 +0900 (Fri, 23 May 2008) | 3 lines Add PyArray_CompareString to multiarray_api instead of array_api so as not to disturb the current order of the API. ................ r5228 | jarrod.millman | 2008-05-24 17:19:21 +0900 (Sat, 24 May 2008) | 2 lines trunk open for 1.2 development series ................ r5229 | charris | 2008-05-24 23:19:36 +0900 (Sat, 24 May 2008) | 2 lines Merge OBJECT_API and MULTIARRAY_API as NUMPY_API. ................ r5230 | charris | 2008-05-25 00:07:31 +0900 (Sun, 25 May 2008) | 2 lines Remove now unused files. They have been merged into numpy_api_order.txt. ................ r5231 | charris | 2008-05-25 01:41:19 +0900 (Sun, 25 May 2008) | 2 lines Define copy_string to memcpy. Closes ticket #666. ................ r5232 | charris | 2008-05-25 07:44:09 +0900 (Sun, 25 May 2008) | 2 lines Start work on testing ufuncs. ................ r5233 | charris | 2008-05-25 08:02:56 +0900 (Sun, 25 May 2008) | 2 lines Rename generate_array_api to generate_numpy_api. ................ r5234 | charris | 2008-05-25 08:04:06 +0900 (Sun, 25 May 2008) | 2 lines Delete generate_array_api.py ................ r5235 | charris | 2008-05-25 15:35:51 +0900 (Sun, 25 May 2008) | 2 lines Save preliminary work on testing ufuncs. ................ r5236 | cdavid | 2008-05-25 18:27:57 +0900 (Sun, 25 May 2008) | 1 line Handle library with extension in their name for ctypes.load_library. ................ r5240 | cdavid | 2008-05-26 20:29:37 +0900 (Mon, 26 May 2008) | 1 line Add cpuid + nsis scripts to build win32 installer. ................ r5241 | charris | 2008-05-27 07:08:35 +0900 (Tue, 27 May 2008) | 2 lines Fix regression in dtype='c' array creation. ................ r5242 | pierregm | 2008-05-27 07:15:29 +0900 (Tue, 27 May 2008) | 2 lines core : __new__: keep the fill_value of the initializing object by default mrecords: force _guessvartypes to return numpy.dtypes instead of types ................ 
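The r5236 entry above concerns ``numpy.ctypeslib.load_library``. A minimal sketch of the behaviour it describes follows; the library name ``_mylib`` and the search path are hypothetical, and the comments only restate the log message::

    import numpy as np
    from numpy.ctypeslib import load_library

    # Before r5236 the library had to be named without its extension; the
    # change is meant to let a name that already carries the platform
    # extension resolve to the same file.
    lib_a = load_library('_mylib', '.')      # extension appended automatically
    lib_b = load_library('_mylib.so', '.')   # explicit extension now accepted
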
r5244 | pierregm | 2008-05-29 11:31:28 +0900 (Thu, 29 May 2008) | 3 lines mrecords : Make sure a field shares its mask with the whole array mrecords : IMPORTANT : the mask of a field is no longer set to nomask when it's full of False, which simplifies masking specific fields. extras : Reorganized personal comments ................ r5245 | oliphant | 2008-05-30 00:15:45 +0900 (Fri, 30 May 2008) | 1 line Use memmove when memory areas can overlap. ................ r5246 | ptvirtan | 2008-06-01 09:53:50 +0900 (Sun, 01 Jun 2008) | 1 line Spell out namespace convention in Examples and See Also sections in docstrings ................ r5247 | stefan | 2008-06-03 17:50:08 +0900 (Tue, 03 Jun 2008) | 2 lines Update documentation standard. ................ r5248 | pierregm | 2008-06-04 06:23:15 +0900 (Wed, 04 Jun 2008) | 23 lines core: * use the "import numpy as np" convention * use np.function instead of (from)numeric.function * CHANGE : when using named fields, the fill_value is now a void-ndarray (and no longer a tuple) * _check_fill_value now checks that an existing fill_value is compatible with a new dtype (bug #806) * fix_invalid now accepts the mask keyword * MaskedArray.__new__ doesn't run _check_fill_value when the fill_value is None * add the astype method, to support the conversion of fill_value when needed. * arange/empty/empty_like/ones/zeros are now available through _convert2ma test_core: * modified test_filled_value to reflect that fill_value is a void-ndrecord when using named fields * added test_check_fill_value/test_check_fill_value_with_records testutils: * use the "import numpy as np" convention * assert_equal_records now uses getitem instead of getattr * assert_array_compare now calls numpy.testing.utils.assert_array_compare on filled data * the assert_xxx functions now accept the verbose keyword mrecords: * MaskedRecords inherit get_fill_value and set_fill_value from MaskedArray * In filled, force the filling value to be a void-ndarray ................ r5249 | pierregm | 2008-06-04 06:24:24 +0900 (Wed, 04 Jun 2008) | 1 line ................ r5250 | stefan | 2008-06-04 07:36:09 +0900 (Wed, 04 Jun 2008) | 2 lines Update examples section. ................ r5251 | pierregm | 2008-06-04 08:14:16 +0900 (Wed, 04 Jun 2008) | 4 lines core * masked_values now accept a shrink argument * fixed the divide_tolerance to numpy.finfo(float).tiny (bug #807) * in MaskedArray.__idiv__, use np.where instead of np.putmask to mask the denominator ................ r5252 | pierregm | 2008-06-04 08:41:20 +0900 (Wed, 04 Jun 2008) | 1 line use tempfile.mkstemp for the creation of temporary files ................ r5253 | pierregm | 2008-06-05 00:54:28 +0900 (Thu, 05 Jun 2008) | 1 line simplified MaskedArray.__setitem__ to fix setting object-ndarray elements ................ r5254 | dhuard | 2008-06-06 02:40:15 +0900 (Fri, 06 Jun 2008) | 1 line added verbose argument to assert_array_equal in assert_equal. Fixes ticket #810. ................ r5255 | oliphant | 2008-06-06 08:27:52 +0900 (Fri, 06 Jun 2008) | 1 line Fix more in ticket #791. ................ r5256 | pierregm | 2008-06-07 11:17:17 +0900 (Sat, 07 Jun 2008) | 3 lines * revamped choose to accept the out and mode keywords * revamped argmin/argmax to accept the out keyword * revamped all/any to accept the out keyword ................ r5257 | christoph.weidemann | 2008-06-07 14:08:06 +0900 (Sat, 07 Jun 2008) | 1 line Testcases for ticket #791 ................ 
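Two of the keyword additions recorded above (the ``mask`` argument to ``fix_invalid`` from r5248 and the ``shrink`` argument to ``masked_values`` from r5251) are easiest to read as a short sketch; the data values are invented and the comments merely restate the log entries::

    import numpy as np
    import numpy.ma as ma

    # r5251: masked_values gains a shrink argument.  With shrink=False the
    # mask should stay a full boolean array even when no element matches
    # the given value, instead of collapsing to nomask.
    a = ma.masked_values([1.0, 2.0, 3.0], value=9.0, shrink=False)

    # r5248: fix_invalid now accepts a mask keyword, so NaNs and infs can
    # be replaced and additional positions masked in the same call.
    b = ma.fix_invalid([1.0, np.nan, np.inf, 4.0], mask=[0, 0, 0, 1])
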
r5258 | cdavid | 2008-06-08 00:57:45 +0900 (Sun, 08 Jun 2008) | 36 lines Merged revisions 5204-5257 via svnmerge from http://svn.scipy.org/svn/numpy/branches/cdavid ........ r5205 | cdavid | 2008-05-20 17:14:30 +0900 (Tue, 20 May 2008) | 3 lines Initialized merge tracking via "svnmerge" with revisions "1-5204" from http://svn.scipy.org/svn/numpy/trunk ........ r5206 | cdavid | 2008-05-20 17:17:27 +0900 (Tue, 20 May 2008) | 7 lines Current handling of bootstrapping is flawed: I should handle it at the distutils level, not at the scons level. This is the first step to detect bootstrapping at distutils level, and pass its state to scons through command line. ........ r5207 | cdavid | 2008-05-20 17:35:01 +0900 (Tue, 20 May 2008) | 1 line Fix typo when passing bootstrapping option to scons. ........ r5208 | cdavid | 2008-05-20 17:41:11 +0900 (Tue, 20 May 2008) | 5 lines Do not mess with __NUMPY_SETUP__ in scons scripts anymore: this is handled in numscons. ........ r5209 | cdavid | 2008-05-20 17:43:46 +0900 (Tue, 20 May 2008) | 1 line Forgot one file in lapack_lite when no LAPACK is available. ........ r5210 | cdavid | 2008-05-20 18:24:38 +0900 (Tue, 20 May 2008) | 1 line Handle fortran compiler on open-solaris ........ ................ r5259 | charris | 2008-06-08 07:43:03 +0900 (Sun, 08 Jun 2008) | 2 lines Fix missing return value, closes ticket #813. ................ r5260 | pierregm | 2008-06-08 12:57:56 +0900 (Sun, 08 Jun 2008) | 3 lines * revamped the functions min/max so that the methods are called * revamped the methods sum/prod/var/std/min/max/round to accept an explicit out argument * Force var to return masked when a masked scalar was returned ................ r5261 | cdavid | 2008-06-08 18:20:03 +0900 (Sun, 08 Jun 2008) | 1 line MSVC compiler does not have compiler_cxx member. ................ r5262 | ptvirtan | 2008-06-08 21:18:37 +0900 (Sun, 08 Jun 2008) | 1 line Move umath docstrings to a separate file. Make the automatic ufunc signature compatible with the documentation standard. ................ r5263 | pierregm | 2008-06-09 03:10:55 +0900 (Mon, 09 Jun 2008) | 4 lines * make_mask_none now accepts a fields argument to construct record-like masks easily * revamped where ................ r5264 | pierregm | 2008-06-09 08:04:42 +0900 (Mon, 09 Jun 2008) | 18 lines CHANGES: core: * When creating a masked array with named fields, the mask has now a flexible type [(n,bool) for n in fields], which allows individual fields to be masked. * When a masked array has named fields, setting the mask to a sequence of booleans will set the mask of all the fields of the corresponding record. * A new property, recordmask, returns either the mask (when no named fields) or a boolean array where values are True if all the fields of one record are masked, False otherwise. * A new private attribute, _isfield, has been introduced to keep track whether an array is a field of a record-like masked array or not, and make sure that the mask is properly propagated. * Setting an existing mask to nomask will only fill the mask with False, not transform it to nomask mrecords: * _fieldmask is now only a synonym for _mask, kept for convenience * revamped __getattribute__ to the example of numpy.core.records.recarray.__getattribute__ * __setslice__ and filled are now inhertied from MaskedArray tests * The tests in test_core have been reorganized to improve clarity and avoid duplication. * test_extras now uses the convention "import numpy as np" ................ 
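The flexible record masks described in r5263/r5264 above can be pictured with a small example; the field names ``a`` and ``b`` are invented, and the comments only restate what the log entries claim::

    import numpy.ma as ma

    x = ma.array([(1, 1.0), (2, 2.0)], dtype=[('a', int), ('b', float)])

    # r5264: with named fields the mask itself has a flexible dtype,
    # roughly [(name, bool) for name in ('a', 'b')], so a single field
    # of a record can be masked on its own.
    x['a'][0] = ma.masked      # masks field 'a' of the first record only

    # recordmask is True only for records whose fields are *all* masked.
    rm = x.recordmask

    # r5248 (previous log block): with named fields, fill_value is now a
    # void-ndarray rather than a tuple.
    fv = x.fill_value
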
r5265 | stefan | 2008-06-12 03:38:20 +0900 (Thu, 12 Jun 2008) | 2 lines How to use variables in math markup. ................ r5266 | cdavid | 2008-06-12 14:45:18 +0900 (Thu, 12 Jun 2008) | 1 line Ignore python and vim junk in numpy/ma. ................ r5267 | cdavid | 2008-06-12 15:35:22 +0900 (Thu, 12 Jun 2008) | 1 line scons command: set distutils libdir relatively to build directory. ................ r5268 | cdavid | 2008-06-12 16:20:28 +0900 (Thu, 12 Jun 2008) | 1 line Remove distutils_dirs_emitter hacks: no need anymore since we use variant_dir. ................ r5269 | cdavid | 2008-06-12 16:23:31 +0900 (Thu, 12 Jun 2008) | 1 line variant_dir: Rename SConscript for numpy.core. ................ r5270 | cdavid | 2008-06-12 16:24:09 +0900 (Thu, 12 Jun 2008) | 1 line Add boilerplate SConstruct to set variant dir transparantly. ................ r5271 | cdavid | 2008-06-12 16:28:27 +0900 (Thu, 12 Jun 2008) | 1 line Adapt SConscript to new architecture for build dir. ................ r5272 | cdavid | 2008-06-12 16:43:27 +0900 (Thu, 12 Jun 2008) | 1 line Adapt numpyconfig.h location in setup.py file. ................ r5273 | cdavid | 2008-06-12 17:59:20 +0900 (Thu, 12 Jun 2008) | 1 line When src_dir is not null, takes it into account to retrieve distutils libdir. ................ r5274 | cdavid | 2008-06-12 18:01:13 +0900 (Thu, 12 Jun 2008) | 1 line Adapt numpy.lib to new scons build_dir behavior. ................ r5275 | cdavid | 2008-06-12 18:48:42 +0900 (Thu, 12 Jun 2008) | 1 line Set numpy include path relatively to top setup callee when bootstrapping. ................ r5276 | cdavid | 2008-06-12 18:49:16 +0900 (Thu, 12 Jun 2008) | 1 line Adapat numpy.lib scons build to new build_dir conventions. ................ r5277 | cdavid | 2008-06-12 18:55:30 +0900 (Thu, 12 Jun 2008) | 1 line Adapt numpy.numarray to new build dir convention. ................ r5278 | cdavid | 2008-06-12 18:56:55 +0900 (Thu, 12 Jun 2008) | 1 line Adapt numpy.fft to new build dir conventions. ................ r5279 | cdavid | 2008-06-12 19:00:37 +0900 (Thu, 12 Jun 2008) | 1 line adapt numpy.linalg to new scons build_dir architecture. ................ r5280 | cdavid | 2008-06-12 19:05:12 +0900 (Thu, 12 Jun 2008) | 1 line adapt numpy.random to new scons build_dir architecture. ................ r5281 | cdavid | 2008-06-12 19:56:20 +0900 (Thu, 12 Jun 2008) | 1 line Make sure we are using numscons 0.8.0 or above. ................ r5282 | cdavid | 2008-06-13 00:16:07 +0900 (Fri, 13 Jun 2008) | 1 line Do not fail scons command when cxx compiler is not available. ................ r5283 | cdavid | 2008-06-14 15:06:13 +0900 (Sat, 14 Jun 2008) | 1 line Fix dotblas compilation on mac os X: scons scanner is not smart enough to interpret #include CPP_MACRO. ................ r5284 | pierregm | 2008-06-17 02:29:28 +0900 (Tue, 17 Jun 2008) | 2 lines core.MaskedArray.__new__ * Force a mask to be created from a list of masked arrays when mask=nomask and keep_mask=True ................ r5285 | jarrod.millman | 2008-06-17 09:08:31 +0900 (Tue, 17 Jun 2008) | 2 lines t ................ r5286 | alan.mcintyre | 2008-06-17 09:11:02 +0900 (Tue, 17 Jun 2008) | 2 lines test ................ r5287 | alan.mcintyre | 2008-06-17 09:23:20 +0900 (Tue, 17 Jun 2008) | 2 lines Switched to use nose to run tests. Added test and bench functions to all modules. ................ r5288 | rkern | 2008-06-17 10:11:43 +0900 (Tue, 17 Jun 2008) | 1 line When using PackageLoader, do not add subpackage names to __all__. ................ 
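The nose-based test entry points added in r5287 above (and wired up in the ``numpy/__init__.py`` diff further down) are used roughly as follows; this is a sketch only, and the keyword arguments ``test()`` accepts in this revision are not spelled out in the logs::

    import numpy as np

    # r5287: the top-level package and each subpackage expose test() and
    # bench() functions driven by nose (version 0.10 or later, per the
    # README change below).
    np.test()        # run the whole test suite
    np.lib.test()    # run the tests of a single subpackage
    np.bench()       # run the benchmarks
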
r5289 | alan.mcintyre | 2008-06-17 11:17:34 +0900 (Tue, 17 Jun 2008) | 5 lines Update README.txt to indicate nose version dependency, and port SciPy r4424 to NumPy (prevent import of nose until actual execution of tests). Restored "raises" function to numpy/testing/utils.py until it can be replaced with the function of the same name from nose.tools after the lazy import. ................ r5290 | stefan | 2008-06-17 22:06:08 +0900 (Tue, 17 Jun 2008) | 2 lines Update documentation standard. ................ r5291 | oliphant | 2008-06-18 05:08:28 +0900 (Wed, 18 Jun 2008) | 1 line Fix piecewise to handle 0-d inputs. ................ r5292 | pierregm | 2008-06-18 07:54:05 +0900 (Wed, 18 Jun 2008) | 1 line fixed dictionary update for compatibility with Python 2.3 ................ r5293 | stefan | 2008-06-19 00:22:24 +0900 (Thu, 19 Jun 2008) | 2 lines Add `ma` to __all__. ................ r5294 | stefan | 2008-06-19 00:31:50 +0900 (Thu, 19 Jun 2008) | 2 lines Add `Methods` section to documentation standard. ................ r5296 | rkern | 2008-06-19 07:53:44 +0900 (Thu, 19 Jun 2008) | 1 line PyPI metadata fixes. ................ r5298 | fperez | 2008-06-19 15:16:48 +0900 (Thu, 19 Jun 2008) | 5 lines Updated Cython code to use .pxd files with cimport instead of .pxi/include. Using cimport/pxd is the currently recommended approach by the Cython team. ................ r5299 | stefan | 2008-06-19 21:09:38 +0900 (Thu, 19 Jun 2008) | 2 lines Use a colon instead of a semi-colon to separate index levels. ................ r5300 | pierregm | 2008-06-20 09:18:32 +0900 (Fri, 20 Jun 2008) | 1 line * put maximum/minimum_fill_value back in __all__ ................ r5301 | fperez | 2008-06-20 13:17:54 +0900 (Fri, 20 Jun 2008) | 5 lines Move the import_array() call directly into c_numpy.pxd. This makes the user-visible API for Cython usage simpler and closer to the Python one. ................ Property changes on: branches/cdavid ___________________________________________________________________ Name: svnmerge-integrated - /branches/aligned_alloca:1-5127 /branches/build_with_scons:1-4676 /branches/cleanconfig_rtm:1-4677 /branches/distutils-revamp:1-2752 /branches/distutils_scons_command:1-4619 /branches/multicore:1-3687 /branches/numpy.scons:1-4484 /trunk:1-5204 + /branches/aligned_alloca:1-5127 /branches/build_with_scons:1-4676 /branches/cleanconfig_rtm:1-4677 /branches/distutils-revamp:1-2752 /branches/distutils_scons_command:1-4619 /branches/multicore:1-3687 /branches/numpy.scons:1-4484 /trunk:1-5301 Modified: branches/cdavid/README.txt =================================================================== --- branches/cdavid/README.txt 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/README.txt 2008-06-20 05:59:26 UTC (rev 5302) @@ -9,11 +9,17 @@ If fast BLAS and LAPACK cannot be found, then a slower default version is used. After installation, tests can be run (from outside the source -directory) with +directory) with: python -c 'import numpy; numpy.test()' -The most current development version is always available from our +Please note that you must have version 0.10 or later of the 'nose' test +framework installed in order to run the tests. 
More information about nose is +available here: + +http://somethingaboutorange.com/mrl/projects/nose/ + +The most current development version of NumPy is always available from our subversion repository: http://svn.scipy.org/svn/numpy/trunk Modified: branches/cdavid/numpy/__init__.py =================================================================== --- branches/cdavid/numpy/__init__.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/__init__.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -94,8 +94,11 @@ __all__ = ['add_newdocs'] pkgload.__doc__ = PackageLoader.__call__.__doc__ - import testing - from testing import ScipyTest, NumpyTest + + from testing.pkgtester import Tester + test = Tester().test + bench = Tester().bench + import core from core import * import lib @@ -113,16 +116,8 @@ from core import round, abs, max, min __all__.extend(['__version__', 'pkgload', 'PackageLoader', - 'ScipyTest', 'NumpyTest', 'show_config']) + 'show_config']) __all__.extend(core.__all__) __all__.extend(lib.__all__) - __all__.extend(['linalg', 'fft', 'random', 'ctypeslib']) + __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma']) - def test(*args, **kw): - import os, sys - print 'Numpy is installed in %s' % (os.path.split(__file__)[0],) - print 'Numpy version %s' % (__version__,) - print 'Python version %s' % (sys.version.replace('\n', '',),) - return NumpyTest().test(*args, **kw) - test.__doc__ = NumpyTest.test.__doc__ - Modified: branches/cdavid/numpy/_import_tools.py =================================================================== --- branches/cdavid/numpy/_import_tools.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/_import_tools.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -152,7 +152,7 @@ Parameters ---------- - *packges : arg-tuple + *packages : arg-tuple the names (one or more strings) of all the modules one wishes to load into the top-level namespace. verbose= : integer @@ -183,9 +183,6 @@ postpone_import = getattr(info_module,'postpone_import',False) if (postpone and not global_symbols) \ or (postpone_import and postpone is not None): - self.log('__all__.append(%r)' % (package_name)) - if '.' 
not in package_name: - self.parent_export_names.append(package_name) continue old_object = frame.f_locals.get(package_name,None) Copied: branches/cdavid/numpy/core/SConscript (from rev 5301, trunk/numpy/core/SConscript) Deleted: branches/cdavid/numpy/core/SConstruct =================================================================== --- branches/cdavid/numpy/core/SConstruct 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/SConstruct 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,270 +0,0 @@ -# Last Change: Mon Apr 21 07:00 PM 2008 J -# vim:syntax=python -import os -import sys -from os.path import join as pjoin, basename as pbasename, dirname as pdirname -from copy import deepcopy - -from numscons import get_python_inc, get_pythonlib_dir -from numscons import GetNumpyEnvironment -from numscons import CheckCBLAS -from numscons import write_info - -from scons_support import CheckBrokenMathlib, define_no_smp, \ - check_mlib, check_mlibs, is_npy_no_signal -from scons_support import array_api_gen_bld, ufunc_api_gen_bld, template_bld, \ - umath_bld - - -env = GetNumpyEnvironment(ARGUMENTS) -env.Append(CPPPATH = [get_python_inc()]) -if os.name == 'nt': - # NT needs the pythonlib to run any code importing Python.h, including - # simple code using only typedef and so on, so we need it for configuration - # checks - env.AppendUnique(LIBPATH = [get_pythonlib_dir()]) - -#======================= -# Starting Configuration -#======================= -config = env.NumpyConfigure(custom_tests = {'CheckBrokenMathlib' : CheckBrokenMathlib, - 'CheckCBLAS' : CheckCBLAS}, config_h = pjoin(env['build_dir'], 'config.h')) - -# numpyconfig_sym will keep the values of some configuration variables, the one -# needed for the public numpy API. - -# Convention: list of tuples (definition, value). value: -# - 0: #undef definition -# - 1: #define definition -# - string: #define definition value -numpyconfig_sym = [] - -#--------------- -# Checking Types -#--------------- -if not config.CheckHeader("Python.h"): - raise RuntimeError("Error: Python.h header is not found (or cannot be " -"compiled). On linux, check that you have python-dev/python-devel packages. On" -" windows, check \ that you have the platform SDK.") - -def check_type(type, include = None): - st = config.CheckTypeSize(type, includes = include) - type = type.replace(' ', '_') - if st: - numpyconfig_sym.append(('SIZEOF_%s' % type.upper(), '%d' % st)) - else: - numpyconfig_sym.append(('SIZEOF_%s' % type.upper(), 0)) - -for type in ('short', 'int', 'long', 'float', 'double', 'long double'): - check_type(type) - -for type in ('Py_intptr_t',): - check_type(type, include = "#include \n") - -# We check declaration AND type because that's how distutils does it. 
-if config.CheckDeclaration('PY_LONG_LONG', includes = '#include \n'): - st = config.CheckTypeSize('PY_LONG_LONG', - includes = '#include \n') - assert not st == 0 - numpyconfig_sym.append(('DEFINE_NPY_SIZEOF_LONGLONG', - '#define NPY_SIZEOF_LONGLONG %d' % st)) - numpyconfig_sym.append(('DEFINE_NPY_SIZEOF_PY_LONG_LONG', - '#define NPY_SIZEOF_PY_LONG_LONG %d' % st)) -else: - numpyconfig_sym.append(('DEFINE_NPY_SIZEOF_LONGLONG', '')) - numpyconfig_sym.append(('DEFINE_NPY_SIZEOF_PY_LONG_LONG', '')) - -if not config.CheckDeclaration('CHAR_BIT', includes= '#include \n'): - raise RuntimeError(\ -"""Config wo CHAR_BIT is not supported with scons: please contact the -maintainer (cdavid)""") - -#---------------------- -# Checking signal stuff -#---------------------- -if is_npy_no_signal(): - numpyconfig_sym.append(('DEFINE_NPY_NO_SIGNAL', '#define NPY_NO_SIGNAL\n')) - config.Define('__NPY_PRIVATE_NO_SIGNAL', - comment = "define to 1 to disable SMP support ") -else: - numpyconfig_sym.append(('DEFINE_NPY_NO_SIGNAL', '')) - -#--------------------- -# Checking SMP option -#--------------------- -if define_no_smp(): - nosmp = 1 -else: - nosmp = 0 -numpyconfig_sym.append(('NPY_NO_SMP', nosmp)) - -#---------------------- -# Checking the mathlib -#---------------------- -mlibs = [[], ['m'], ['cpml']] -mathlib = os.environ.get('MATHLIB') -if mathlib: - mlibs.insert(0, mathlib) - -mlib = check_mlibs(config, mlibs) - -# XXX: this is ugly: mathlib has nothing to do in a public header file -numpyconfig_sym.append(('MATHLIB', ','.join(mlib))) - -#---------------------------------- -# Checking the math funcs available -#---------------------------------- -# Function to check: -mfuncs = ('expl', 'expf', 'log1p', 'expm1', 'asinh', 'atanhf', 'atanhl', - 'isnan', 'isinf', 'rint') - -# Set value to 1 for each defined function (in math lib) -mfuncs_defined = dict([(f, 0) for f in mfuncs]) - -# TODO: checklib vs checkfunc ? -def check_func(f): - """Check that f is available in mlib, and add the symbol appropriately. 
""" - st = config.CheckDeclaration(f, language = 'C', includes = "#include ") - if st: - st = config.CheckFunc(f, language = 'C') - if st: - mfuncs_defined[f] = 1 - else: - mfuncs_defined[f] = 0 - -for f in mfuncs: - check_func(f) - -if mfuncs_defined['expl'] == 1: - config.Define('HAVE_LONGDOUBLE_FUNCS', - comment = 'Define to 1 if long double funcs are available') -if mfuncs_defined['expf'] == 1: - config.Define('HAVE_FLOAT_FUNCS', - comment = 'Define to 1 if long double funcs are available') -if mfuncs_defined['asinh'] == 1: - config.Define('HAVE_INVERSE_HYPERBOLIC', - comment = 'Define to 1 if inverse hyperbolic funcs are '\ - 'available') -if mfuncs_defined['atanhf'] == 1: - config.Define('HAVE_INVERSE_HYPERBOLIC_FLOAT', - comment = 'Define to 1 if inverse hyperbolic float funcs '\ - 'are available') -if mfuncs_defined['atanhl'] == 1: - config.Define('HAVE_INVERSE_HYPERBOLIC_LONGDOUBLE', - comment = 'Define to 1 if inverse hyperbolic long double '\ - 'funcs are available') - -#------------------------------------------------------- -# Define the function PyOS_ascii_strod if not available -#------------------------------------------------------- -if not config.CheckDeclaration('PyOS_ascii_strtod', - includes = "#include "): - if config.CheckFunc('strtod'): - config.Define('PyOS_ascii_strtod', 'strtod', - "Define to a function to use as a replacement for "\ - "PyOS_ascii_strtod if not available in python header") - -#------------------------------------ -# DISTUTILS Hack on AMD64 on windows -#------------------------------------ -# XXX: this is ugly -if sys.platform=='win32' or os.name=='nt': - from distutils.msvccompiler import get_build_architecture - a = get_build_architecture() - print 'BUILD_ARCHITECTURE: %r, os.name=%r, sys.platform=%r' % \ - (a, os.name, sys.platform) - if a == 'AMD64': - distutils_use_sdk = 1 - config.Define('DISTUTILS_USE_SDK', distutils_use_sdk, - "define to 1 to disable SMP support ") - -#-------------- -# Checking Blas -#-------------- -if config.CheckCBLAS(): - build_blasdot = 1 -else: - build_blasdot = 0 - -config.Finish() -write_info(env) - -#========== -# Build -#========== - -#--------------------------------------- -# Generate the public configuration file -#--------------------------------------- -config_dict = {} -# XXX: this is ugly, make the API for config.h and numpyconfig.h similar -for key, value in numpyconfig_sym: - config_dict['@%s@' % key] = str(value) -env['SUBST_DICT'] = config_dict - -include_dir = 'include/numpy' -env.SubstInFile(pjoin(env['build_dir'], 'numpyconfig.h'), - pjoin(env['src_dir'], include_dir, 'numpyconfig.h.in')) - -env['CONFIG_H_GEN'] = numpyconfig_sym - -#--------------------------- -# Builder for generated code -#--------------------------- -env.Append(BUILDERS = {'GenerateMultiarrayApi' : array_api_gen_bld, - 'GenerateUfuncApi' : ufunc_api_gen_bld, - 'GenerateFromTemplate' : template_bld, - 'GenerateUmath' : umath_bld}) - -#------------------------ -# Generate generated code -#------------------------ -scalartypes_src = env.GenerateFromTemplate(pjoin('src', 'scalartypes.inc.src')) -arraytypes_src = env.GenerateFromTemplate(pjoin('src', 'arraytypes.inc.src')) -sortmodule_src = env.GenerateFromTemplate(pjoin('src', '_sortmodule.c.src')) -umathmodule_src = env.GenerateFromTemplate(pjoin('src', 'umathmodule.c.src')) -scalarmathmodule_src = env.GenerateFromTemplate( - pjoin('src', 'scalarmathmodule.c.src')) - -umath = env.GenerateUmath('__umath_generated', - pjoin('code_generators', 'generate_umath.py')) - -multiarray_api = 
env.GenerateMultiarrayApi('multiarray_api', - [ pjoin('code_generators', 'array_api_order.txt'), - pjoin('code_generators', 'multiarray_api_order.txt')]) - -ufunc_api = env.GenerateUfuncApi('ufunc_api', - pjoin('code_generators', 'ufunc_api_order.txt')) - -env.Append(CPPPATH = [pjoin(env['src_dir'], 'include'), env['build_dir']]) - -#----------------- -# Build multiarray -#----------------- -multiarray_src = [pjoin('src', 'multiarraymodule.c')] -multiarray = env.NumpyPythonExtension('multiarray', source = multiarray_src) - -#------------------ -# Build sort module -#------------------ -sort = env.NumpyPythonExtension('_sort', source = sortmodule_src) - -#------------------- -# Build umath module -#------------------- -umathmodule = env.NumpyPythonExtension('umath', source = umathmodule_src) - -#------------------------ -# Build scalarmath module -#------------------------ -scalarmathmodule = env.NumpyPythonExtension('scalarmath', - source = scalarmathmodule_src) - -#---------------------- -# Build _dotblas module -#---------------------- -if build_blasdot: - dotblas_src = [pjoin('blasdot', i) for i in ['_dotblas.c']] - blasenv = env.Clone() - blasenv.Append(CPPPATH = pjoin(env['src_dir'], 'blasdot')) - dotblas = blasenv.NumpyPythonExtension('_dotblas', source = dotblas_src) Copied: branches/cdavid/numpy/core/SConstruct (from rev 5301, trunk/numpy/core/SConstruct) Modified: branches/cdavid/numpy/core/__init__.py =================================================================== --- branches/cdavid/numpy/core/__init__.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/__init__.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -31,7 +31,6 @@ __all__ += char.__all__ - -def test(level=1, verbosity=1): - from numpy.testing import NumpyTest - return NumpyTest().test(level, verbosity) +from numpy.testing.pkgtester import Tester +test = Tester().test +bench = Tester().bench Deleted: branches/cdavid/numpy/core/code_generators/array_api_order.txt =================================================================== --- branches/cdavid/numpy/core/code_generators/array_api_order.txt 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/code_generators/array_api_order.txt 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,85 +0,0 @@ -# The functions in the numpy_core C API -# They are defined here so that the order is set. 
-PyArray_SetNumericOps -PyArray_GetNumericOps -PyArray_INCREF -PyArray_XDECREF -PyArray_SetStringFunction -PyArray_DescrFromType -PyArray_TypeObjectFromType -PyArray_Zero -PyArray_One -PyArray_CastToType -PyArray_CastTo -PyArray_CastAnyTo -PyArray_CanCastSafely -PyArray_CanCastTo -PyArray_ObjectType -PyArray_DescrFromObject -PyArray_ConvertToCommonType -PyArray_DescrFromScalar -PyArray_DescrFromTypeObject -PyArray_Size -PyArray_Scalar -PyArray_FromScalar -PyArray_ScalarAsCtype -PyArray_CastScalarToCtype -PyArray_CastScalarDirect -PyArray_ScalarFromObject -PyArray_GetCastFunc -PyArray_FromDims -PyArray_FromDimsAndDataAndDescr -PyArray_FromAny -PyArray_EnsureArray -PyArray_EnsureAnyArray -PyArray_FromFile -PyArray_FromString -PyArray_FromBuffer -PyArray_FromIter -PyArray_Return -PyArray_GetField -PyArray_SetField -PyArray_Byteswap -PyArray_Resize -PyArray_MoveInto -PyArray_CopyInto -PyArray_CopyAnyInto -PyArray_CopyObject -PyArray_NewCopy -PyArray_ToList -PyArray_ToString -PyArray_ToFile -PyArray_Dump -PyArray_Dumps -PyArray_ValidType -PyArray_UpdateFlags -PyArray_New -PyArray_NewFromDescr -PyArray_DescrNew -PyArray_DescrNewFromType -PyArray_GetPriority -PyArray_IterNew -PyArray_MultiIterNew -PyArray_PyIntAsInt -PyArray_PyIntAsIntp -PyArray_Broadcast -PyArray_FillObjectArray -PyArray_FillWithScalar -PyArray_CheckStrides -PyArray_DescrNewByteorder -PyArray_IterAllButAxis -PyArray_CheckFromAny -PyArray_FromArray -PyArray_FromInterface -PyArray_FromStructInterface -PyArray_FromArrayAttr -PyArray_ScalarKind -PyArray_CanCoerceScalar -PyArray_NewFlagsObject -PyArray_CanCastScalar -PyArray_CompareUCS4 -PyArray_RemoveSmallest -PyArray_ElementStrides -PyArray_Item_INCREF -PyArray_Item_XDECREF -PyArray_FieldNames Copied: branches/cdavid/numpy/core/code_generators/docstrings.py (from rev 5301, trunk/numpy/core/code_generators/docstrings.py) Deleted: branches/cdavid/numpy/core/code_generators/generate_array_api.py =================================================================== --- branches/cdavid/numpy/core/code_generators/generate_array_api.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/code_generators/generate_array_api.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,213 +0,0 @@ -import os -import genapi - -types = ['Generic','Number','Integer','SignedInteger','UnsignedInteger', - 'Inexact', - 'Floating', 'ComplexFloating', 'Flexible', 'Character', - 'Byte','Short','Int', 'Long', 'LongLong', 'UByte', 'UShort', - 'UInt', 'ULong', 'ULongLong', 'Float', 'Double', 'LongDouble', - 'CFloat', 'CDouble', 'CLongDouble', 'Object', 'String', 'Unicode', - 'Void'] - -h_template = r""" -#ifdef _MULTIARRAYMODULE - -typedef struct { - PyObject_HEAD - npy_bool obval; -} PyBoolScalarObject; - - -static unsigned int PyArray_GetNDArrayCVersion (void); -static PyTypeObject PyBigArray_Type; -static PyTypeObject PyArray_Type; -static PyTypeObject PyArrayDescr_Type; -static PyTypeObject PyArrayFlags_Type; -static PyTypeObject PyArrayIter_Type; -static PyTypeObject PyArrayMapIter_Type; -static PyTypeObject PyArrayMultiIter_Type; -static int NPY_NUMUSERTYPES=0; -static PyTypeObject PyBoolArrType_Type; -static PyBoolScalarObject _PyArrayScalar_BoolValues[2]; - -%s - -#else - -#if defined(PY_ARRAY_UNIQUE_SYMBOL) -#define PyArray_API PY_ARRAY_UNIQUE_SYMBOL -#endif - -#if defined(NO_IMPORT) || defined(NO_IMPORT_ARRAY) -extern void **PyArray_API; -#else -#if defined(PY_ARRAY_UNIQUE_SYMBOL) -void **PyArray_API; -#else -static void **PyArray_API=NULL; -#endif -#endif - -#define PyArray_GetNDArrayCVersion 
(*(unsigned int (*)(void)) PyArray_API[0]) -#define PyBigArray_Type (*(PyTypeObject *)PyArray_API[1]) -#define PyArray_Type (*(PyTypeObject *)PyArray_API[2]) -#define PyArrayDescr_Type (*(PyTypeObject *)PyArray_API[3]) -#define PyArrayFlags_Type (*(PyTypeObject *)PyArray_API[4]) -#define PyArrayIter_Type (*(PyTypeObject *)PyArray_API[5]) -#define PyArrayMultiIter_Type (*(PyTypeObject *)PyArray_API[6]) -#define NPY_NUMUSERTYPES (*(int *)PyArray_API[7]) -#define PyBoolArrType_Type (*(PyTypeObject *)PyArray_API[8]) -#define _PyArrayScalar_BoolValues ((PyBoolScalarObject *)PyArray_API[9]) - -%s - -#if !defined(NO_IMPORT_ARRAY) && !defined(NO_IMPORT) -static int -_import_array(void) -{ - PyObject *numpy = PyImport_ImportModule("numpy.core.multiarray"); - PyObject *c_api = NULL; - if (numpy == NULL) return -1; - c_api = PyObject_GetAttrString(numpy, "_ARRAY_API"); - if (c_api == NULL) {Py_DECREF(numpy); return -1;} - if (PyCObject_Check(c_api)) { - PyArray_API = (void **)PyCObject_AsVoidPtr(c_api); - } - Py_DECREF(c_api); - Py_DECREF(numpy); - if (PyArray_API == NULL) return -1; - /* Perform runtime check of C API version */ - if (NPY_VERSION != PyArray_GetNDArrayCVersion()) { - PyErr_Format(PyExc_RuntimeError, "module compiled against "\ - "version %%x of C-API but this version of numpy is %%x", \ - (int) NPY_VERSION, (int) PyArray_GetNDArrayCVersion()); - return -1; - } - return 0; -} - -#define import_array() {if (_import_array() < 0) {PyErr_Print(); PyErr_SetString(PyExc_ImportError, "numpy.core.multiarray failed to import"); return; } } - -#define import_array1(ret) {if (_import_array() < 0) {PyErr_Print(); PyErr_SetString(PyExc_ImportError, "numpy.core.multiarray failed to import"); return ret; } } - -#define import_array2(msg, ret) {if (_import_array() < 0) {PyErr_Print(); PyErr_SetString(PyExc_ImportError, msg); return ret; } } - -#endif - -#endif -""" - - -c_template = r""" -/* These pointers will be stored in the C-object for use in other - extension modules -*/ - -void *PyArray_API[] = { - (void *) PyArray_GetNDArrayCVersion, - (void *) &PyBigArray_Type, - (void *) &PyArray_Type, - (void *) &PyArrayDescr_Type, - (void *) &PyArrayFlags_Type, - (void *) &PyArrayIter_Type, - (void *) &PyArrayMultiIter_Type, - (int *) &NPY_NUMUSERTYPES, - (void *) &PyBoolArrType_Type, - (void *) &_PyArrayScalar_BoolValues, -%s -}; -""" - -def generate_api(output_dir, force=False): - basename = 'multiarray_api' - - h_file = os.path.join(output_dir, '__%s.h' % basename) - c_file = os.path.join(output_dir, '__%s.c' % basename) - d_file = os.path.join(output_dir, '%s.txt' % basename) - targets = (h_file, c_file, d_file) - sources = ['array_api_order.txt', 'multiarray_api_order.txt'] - - if (not force and not genapi.should_rebuild(targets, sources + [__file__])): - return targets - else: - do_generate_api(targets, sources) - - return targets - -def do_generate_api(targets, sources): - header_file = targets[0] - c_file = targets[1] - doc_file = targets[2] - - objectapi_list = genapi.get_api_functions('OBJECT_API', - sources[0]) - multiapi_list = genapi.get_api_functions('MULTIARRAY_API', - sources[1]) - # API fixes for __arrayobject_api.h - - fixed = 10 - numtypes = len(types) + fixed - numobject = len(objectapi_list) + numtypes - nummulti = len(multiapi_list) - numtotal = numobject + nummulti - - module_list = [] - extension_list = [] - init_list = [] - - # setup types - for k, atype in enumerate(types): - num = fixed + k - astr = " (void *) &Py%sArrType_Type," % types[k] - init_list.append(astr) - astr = 
"static PyTypeObject Py%sArrType_Type;" % types[k] - module_list.append(astr) - astr = "#define Py%sArrType_Type (*(PyTypeObject *)PyArray_API[%d])" % \ - (types[k], num) - extension_list.append(astr) - - # set up object API - genapi.add_api_list(numtypes, 'PyArray_API', objectapi_list, - module_list, extension_list, init_list) - - # set up multiarray module API - genapi.add_api_list(numobject, 'PyArray_API', multiapi_list, - module_list, extension_list, init_list) - - - # Write to header - fid = open(header_file, 'w') - s = h_template % ('\n'.join(module_list), '\n'.join(extension_list)) - fid.write(s) - fid.close() - - # Write to c-code - fid = open(c_file, 'w') - s = c_template % '\n'.join(init_list) - fid.write(s) - fid.close() - - # write to documentation - fid = open(doc_file, 'w') - fid.write(''' -=========== -Numpy C-API -=========== - -Object API -========== -''') - for func in objectapi_list: - fid.write(func.to_ReST()) - fid.write('\n\n') - fid.write(''' - -Multiarray API -============== -''') - for func in multiapi_list: - fid.write(func.to_ReST()) - fid.write('\n\n') - fid.close() - - return targets Copied: branches/cdavid/numpy/core/code_generators/generate_numpy_api.py (from rev 5301, trunk/numpy/core/code_generators/generate_numpy_api.py) Modified: branches/cdavid/numpy/core/code_generators/generate_umath.py =================================================================== --- branches/cdavid/numpy/core/code_generators/generate_umath.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/code_generators/generate_umath.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,4 +1,8 @@ -import re +import re, textwrap +import sys, os +sys.path.insert(0, os.path.dirname(__file__)) +import docstrings +sys.path.pop(0) Zero = "PyUFunc_Zero" One = "PyUFunc_One" @@ -153,37 +157,37 @@ defdict = { 'add' : Ufunc(2, 1, Zero, - 'adds the arguments elementwise.', + docstrings.get('numpy.core.umath.add'), TD(noobj), TD(O, f='PyNumber_Add'), ), 'subtract' : Ufunc(2, 1, Zero, - 'subtracts the arguments elementwise.', + docstrings.get('numpy.core.umath.subtract'), TD(noobj), TD(O, f='PyNumber_Subtract'), ), 'multiply' : Ufunc(2, 1, One, - 'multiplies the arguments elementwise.', + docstrings.get('numpy.core.umath.multiply'), TD(noobj), TD(O, f='PyNumber_Multiply'), ), 'divide' : Ufunc(2, 1, One, - 'divides the arguments elementwise.', + docstrings.get('numpy.core.umath.divide'), TD(intfltcmplx), TD(O, f='PyNumber_Divide'), ), 'floor_divide' : Ufunc(2, 1, One, - 'floor divides the arguments elementwise.', + docstrings.get('numpy.core.umath.floor_divide'), TD(intfltcmplx), TD(O, f='PyNumber_FloorDivide'), ), 'true_divide' : Ufunc(2, 1, One, - 'true divides the arguments elementwise.', + docstrings.get('numpy.core.umath.true_divide'), TD('bBhH', out='f'), TD('iIlLqQ', out='d'), TD(flts+cmplx), @@ -191,346 +195,346 @@ ), 'conjugate' : Ufunc(1, 1, None, - 'takes the conjugate of x elementwise.', + docstrings.get('numpy.core.umath.conjugate'), TD(nobool_or_obj), TD(M, f='conjugate'), ), 'fmod' : Ufunc(2, 1, Zero, - 'computes (C-like) x1 % x2 elementwise.', + docstrings.get('numpy.core.umath.fmod'), TD(ints), TD(flts, f='fmod'), TD(M, f='fmod'), ), 'square' : Ufunc(1, 1, None, - 'compute x**2.', + docstrings.get('numpy.core.umath.square'), TD(nobool_or_obj), TD(O, f='Py_square'), ), 'reciprocal' : Ufunc(1, 1, None, - 'compute 1/x', + docstrings.get('numpy.core.umath.reciprocal'), TD(nobool_or_obj), TD(O, f='Py_reciprocal'), ), 'ones_like' : Ufunc(1, 1, None, - 'returns an array of ones of the 
shape and typecode of x.', + docstrings.get('numpy.core.umath.ones_like'), TD(nobool_or_obj), TD(O, f='Py_get_one'), ), 'power' : Ufunc(2, 1, One, - 'computes x1**x2 elementwise.', + docstrings.get('numpy.core.umath.power'), TD(ints), TD(inexact, f='pow'), TD(O, f='PyNumber_Power'), ), 'absolute' : Ufunc(1, 1, None, - 'takes |x| elementwise.', + docstrings.get('numpy.core.umath.absolute'), TD(nocmplx), TD(cmplx, out=('f', 'd', 'g')), TD(O, f='PyNumber_Absolute'), ), 'negative' : Ufunc(1, 1, None, - 'determines -x elementwise', + docstrings.get('numpy.core.umath.negative'), TD(nocmplx), TD(cmplx, f='neg'), TD(O, f='PyNumber_Negative'), ), 'sign' : Ufunc(1, 1, None, - 'returns -1 if x < 0 and 0 if x==0 and 1 if x > 0', + docstrings.get('numpy.core.umath.sign'), TD(nobool), ), 'greater' : Ufunc(2, 1, None, - 'returns elementwise x1 > x2 in a bool array.', + docstrings.get('numpy.core.umath.greater'), TD(all, out='?'), ), 'greater_equal' : Ufunc(2, 1, None, - 'returns elementwise x1 >= x2 in a bool array.', + docstrings.get('numpy.core.umath.greater_equal'), TD(all, out='?'), ), 'less' : Ufunc(2, 1, None, - 'returns elementwise x1 < x2 in a bool array.', + docstrings.get('numpy.core.umath.less'), TD(all, out='?'), ), 'less_equal' : Ufunc(2, 1, None, - 'returns elementwise x1 <= x2 in a bool array', + docstrings.get('numpy.core.umath.less_equal'), TD(all, out='?'), ), 'equal' : Ufunc(2, 1, None, - 'returns elementwise x1 == x2 in a bool array', + docstrings.get('numpy.core.umath.equal'), TD(all, out='?'), ), 'not_equal' : Ufunc(2, 1, None, - 'returns elementwise x1 |= x2', + docstrings.get('numpy.core.umath.not_equal'), TD(all, out='?'), ), 'logical_and' : Ufunc(2, 1, One, - 'returns x1 and x2 elementwise.', + docstrings.get('numpy.core.umath.logical_and'), TD(noobj, out='?'), TD(M, f='logical_and'), ), 'logical_not' : Ufunc(1, 1, None, - 'returns not x elementwise.', + docstrings.get('numpy.core.umath.logical_not'), TD(noobj, out='?'), TD(M, f='logical_not'), ), 'logical_or' : Ufunc(2, 1, Zero, - 'returns x1 or x2 elementwise.', + docstrings.get('numpy.core.umath.logical_or'), TD(noobj, out='?'), TD(M, f='logical_or'), ), 'logical_xor' : Ufunc(2, 1, None, - 'returns x1 xor x2 elementwise.', + docstrings.get('numpy.core.umath.logical_xor'), TD(noobj, out='?'), TD(M, f='logical_xor'), ), 'maximum' : Ufunc(2, 1, None, - 'returns maximum (if x1 > x2: x1; else: x2) elementwise.', + docstrings.get('numpy.core.umath.maximum'), TD(noobj), TD(O, f='_npy_ObjectMax') ), 'minimum' : Ufunc(2, 1, None, - 'returns minimum (if x1 < x2: x1; else: x2) elementwise', + docstrings.get('numpy.core.umath.minimum'), TD(noobj), TD(O, f='_npy_ObjectMin') ), 'bitwise_and' : Ufunc(2, 1, One, - 'computes x1 & x2 elementwise.', + docstrings.get('numpy.core.umath.bitwise_and'), TD(bints), TD(O, f='PyNumber_And'), ), 'bitwise_or' : Ufunc(2, 1, Zero, - 'computes x1 | x2 elementwise.', + docstrings.get('numpy.core.umath.bitwise_or'), TD(bints), TD(O, f='PyNumber_Or'), ), 'bitwise_xor' : Ufunc(2, 1, None, - 'computes x1 ^ x2 elementwise.', + docstrings.get('numpy.core.umath.bitwise_xor'), TD(bints), TD(O, f='PyNumber_Xor'), ), 'invert' : Ufunc(1, 1, None, - 'computes ~x (bit inversion) elementwise.', + docstrings.get('numpy.core.umath.invert'), TD(bints), TD(O, f='PyNumber_Invert'), ), 'left_shift' : Ufunc(2, 1, None, - 'computes x1 << x2 (x1 shifted to left by x2 bits) elementwise.', + docstrings.get('numpy.core.umath.left_shift'), TD(ints), TD(O, f='PyNumber_Lshift'), ), 'right_shift' : Ufunc(2, 1, None, - 'computes x1 >> x2 
(x1 shifted to right by x2 bits) elementwise.', + docstrings.get('numpy.core.umath.right_shift'), TD(ints), TD(O, f='PyNumber_Rshift'), ), 'degrees' : Ufunc(1, 1, None, - 'converts angle from radians to degrees', + docstrings.get('numpy.core.umath.degrees'), TD(fltsM, f='degrees'), ), 'radians' : Ufunc(1, 1, None, - 'converts angle from degrees to radians', + docstrings.get('numpy.core.umath.radians'), TD(fltsM, f='radians'), ), 'arccos' : Ufunc(1, 1, None, - 'inverse cosine elementwise.', + docstrings.get('numpy.core.umath.arccos'), TD(inexact, f='acos'), TD(M, f='arccos'), ), 'arccosh' : Ufunc(1, 1, None, - 'inverse hyperbolic cosine elementwise.', + docstrings.get('numpy.core.umath.arccosh'), TD(inexact, f='acosh'), TD(M, f='arccosh'), ), 'arcsin' : Ufunc(1, 1, None, - 'inverse sine elementwise.', + docstrings.get('numpy.core.umath.arcsin'), TD(inexact, f='asin'), TD(M, f='arcsin'), ), 'arcsinh' : Ufunc(1, 1, None, - 'inverse hyperbolic sine elementwise.', + docstrings.get('numpy.core.umath.arcsinh'), TD(inexact, f='asinh'), TD(M, f='arcsinh'), ), 'arctan' : Ufunc(1, 1, None, - 'inverse tangent elementwise.', + docstrings.get('numpy.core.umath.arctan'), TD(inexact, f='atan'), TD(M, f='arctan'), ), 'arctanh' : Ufunc(1, 1, None, - 'inverse hyperbolic tangent elementwise.', + docstrings.get('numpy.core.umath.arctanh'), TD(inexact, f='atanh'), TD(M, f='arctanh'), ), 'cos' : Ufunc(1, 1, None, - 'cosine elementwise.', + docstrings.get('numpy.core.umath.cos'), TD(inexact, f='cos'), TD(M, f='cos'), ), 'sin' : Ufunc(1, 1, None, - 'sine elementwise.', + docstrings.get('numpy.core.umath.sin'), TD(inexact, f='sin'), TD(M, f='sin'), ), 'tan' : Ufunc(1, 1, None, - 'tangent elementwise.', + docstrings.get('numpy.core.umath.tan'), TD(inexact, f='tan'), TD(M, f='tan'), ), 'cosh' : Ufunc(1, 1, None, - 'hyperbolic cosine elementwise.', + docstrings.get('numpy.core.umath.cosh'), TD(inexact, f='cosh'), TD(M, f='cosh'), ), 'sinh' : Ufunc(1, 1, None, - 'hyperbolic sine elementwise.', + docstrings.get('numpy.core.umath.sinh'), TD(inexact, f='sinh'), TD(M, f='sinh'), ), 'tanh' : Ufunc(1, 1, None, - 'hyperbolic tangent elementwise.', + docstrings.get('numpy.core.umath.tanh'), TD(inexact, f='tanh'), TD(M, f='tanh'), ), 'exp' : Ufunc(1, 1, None, - 'e**x elementwise.', + docstrings.get('numpy.core.umath.exp'), TD(inexact, f='exp'), TD(M, f='exp'), ), 'expm1' : Ufunc(1, 1, None, - 'e**x-1 elementwise.', + docstrings.get('numpy.core.umath.expm1'), TD(inexact, f='expm1'), TD(M, f='expm1'), ), 'log' : Ufunc(1, 1, None, - 'logarithm base e elementwise.', + docstrings.get('numpy.core.umath.log'), TD(inexact, f='log'), TD(M, f='log'), ), 'log10' : Ufunc(1, 1, None, - 'logarithm base 10 elementwise.', + docstrings.get('numpy.core.umath.log10'), TD(inexact, f='log10'), TD(M, f='log10'), ), 'log1p' : Ufunc(1, 1, None, - 'log(1+x) to base e elementwise.', + docstrings.get('numpy.core.umath.log1p'), TD(inexact, f='log1p'), TD(M, f='log1p'), ), 'sqrt' : Ufunc(1, 1, None, - 'square-root elementwise. 
For real x, the domain is restricted to x>=0.', + docstrings.get('numpy.core.umath.sqrt'), TD(inexact, f='sqrt'), TD(M, f='sqrt'), ), 'ceil' : Ufunc(1, 1, None, - 'elementwise smallest integer >= x.', + docstrings.get('numpy.core.umath.ceil'), TD(flts, f='ceil'), TD(M, f='ceil'), ), 'fabs' : Ufunc(1, 1, None, - 'absolute values.', + docstrings.get('numpy.core.umath.fabs'), TD(flts, f='fabs'), TD(M, f='fabs'), ), 'floor' : Ufunc(1, 1, None, - 'elementwise largest integer <= x', + docstrings.get('numpy.core.umath.floor'), TD(flts, f='floor'), TD(M, f='floor'), ), 'rint' : Ufunc(1, 1, None, - 'round x elementwise to the nearest integer, round halfway cases away from zero', + docstrings.get('numpy.core.umath.rint'), TD(inexact, f='rint'), TD(M, f='rint'), ), 'arctan2' : Ufunc(2, 1, None, - 'a safe and correct arctan(x1/x2)', + docstrings.get('numpy.core.umath.arctan2'), TD(flts, f='atan2'), TD(M, f='arctan2'), ), 'remainder' : Ufunc(2, 1, None, - 'computes x1-n*x2 where n is floor(x1 / x2)', + docstrings.get('numpy.core.umath.remainder'), TD(intflt), TD(O, f='PyNumber_Remainder'), ), 'hypot' : Ufunc(2, 1, None, - 'sqrt(x1**2 + x2**2) elementwise', + docstrings.get('numpy.core.umath.hypot'), TD(flts, f='hypot'), TD(M, f='hypot'), ), 'isnan' : Ufunc(1, 1, None, - 'returns True where x is Not-A-Number', + docstrings.get('numpy.core.umath.isnan'), TD(inexact, out='?'), ), 'isinf' : Ufunc(1, 1, None, - 'returns True where x is +inf or -inf', + docstrings.get('numpy.core.umath.isinf'), TD(inexact, out='?'), ), 'isfinite' : Ufunc(1, 1, None, - 'returns True where x is finite', + docstrings.get('numpy.core.umath.isfinite'), TD(inexact, out='?'), ), 'signbit' : Ufunc(1, 1, None, - 'returns True where signbit of x is set (x<0).', + docstrings.get('numpy.core.umath.signbit'), TD(flts, out='?'), ), 'modf' : Ufunc(1, 2, None, - 'breaks x into fractional (y1) and integral (y2) parts.\\n\\n Each output has the same sign as the input.', + docstrings.get('numpy.core.umath.modf'), TD(flts), ), } @@ -667,6 +671,8 @@ for name in names: uf = funcdict[name] mlist = [] + docstring = textwrap.dedent(uf.docstring).strip() + docstring = docstring.encode('string-escape').replace(r'"', r'\"') mlist.append(\ r"""f = PyUFunc_FromFuncAndData(%s_functions, %s_data, %s_signatures, %d, %d, %d, %s, "%s", @@ -674,7 +680,7 @@ len(uf.type_descriptions), uf.nin, uf.nout, uf.identity, - name, uf.docstring)) + name, docstring)) mlist.append(r"""PyDict_SetItemString(dictionary, "%s", f);""" % name) mlist.append(r"""Py_DECREF(f);""") code3list.append('\n'.join(mlist)) Deleted: branches/cdavid/numpy/core/code_generators/multiarray_api_order.txt =================================================================== --- branches/cdavid/numpy/core/code_generators/multiarray_api_order.txt 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/code_generators/multiarray_api_order.txt 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,84 +0,0 @@ -PyArray_Transpose -PyArray_TakeFrom -PyArray_PutTo -PyArray_PutMask -PyArray_Repeat -PyArray_Choose -PyArray_Sort -PyArray_ArgSort -PyArray_SearchSorted -PyArray_ArgMax -PyArray_ArgMin -PyArray_Reshape -PyArray_Newshape -PyArray_Squeeze -PyArray_View -PyArray_SwapAxes -PyArray_Max -PyArray_Min -PyArray_Ptp -PyArray_Mean -PyArray_Trace -PyArray_Diagonal -PyArray_Clip -PyArray_Conjugate -PyArray_Nonzero -PyArray_Std -PyArray_Sum -PyArray_CumSum -PyArray_Prod -PyArray_CumProd -PyArray_All -PyArray_Any -PyArray_Compress -PyArray_Flatten -PyArray_Ravel -PyArray_MultiplyList -PyArray_MultiplyIntList 
-PyArray_GetPtr -PyArray_CompareLists -PyArray_AsCArray -PyArray_As1D -PyArray_As2D -PyArray_Free -PyArray_Converter -PyArray_IntpFromSequence -PyArray_Concatenate -PyArray_InnerProduct -PyArray_MatrixProduct -PyArray_CopyAndTranspose -PyArray_Correlate -PyArray_TypestrConvert -PyArray_DescrConverter -PyArray_DescrConverter2 -PyArray_IntpConverter -PyArray_BufferConverter -PyArray_AxisConverter -PyArray_BoolConverter -PyArray_ByteorderConverter -PyArray_OrderConverter -PyArray_EquivTypes -PyArray_Zeros -PyArray_Empty -PyArray_Where -PyArray_Arange -PyArray_ArangeObj -PyArray_SortkindConverter -PyArray_LexSort -PyArray_Round -PyArray_EquivTypenums -PyArray_RegisterDataType -PyArray_RegisterCastFunc -PyArray_RegisterCanCast -PyArray_InitArrFuncs -PyArray_IntTupleFromIntp -PyArray_TypeNumFromName -PyArray_ClipmodeConverter -PyArray_OutputConverter -PyArray_BroadcastToShape -_PyArray_SigintHandler -_PyArray_GetSigintBuf -PyArray_DescrAlignConverter -PyArray_DescrAlignConverter2 -PyArray_SearchsideConverter -PyArray_CheckAxis Copied: branches/cdavid/numpy/core/code_generators/numpy_api_order.txt (from rev 5301, trunk/numpy/core/code_generators/numpy_api_order.txt) Modified: branches/cdavid/numpy/core/scons_support.py =================================================================== --- branches/cdavid/numpy/core/scons_support.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/scons_support.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,4 +1,4 @@ -#! Last Change: Mon Apr 21 07:00 PM 2008 J +#! Last Change: Thu Jun 12 02:00 PM 2008 J """Code to support special facilities to scons which are only useful for numpy.core, hence not put into numpy.distutils.scons""" @@ -9,17 +9,13 @@ from os.path import join as pjoin, dirname as pdirname, basename as pbasename from copy import deepcopy -from code_generators.generate_array_api import \ - do_generate_api as nowrap_do_generate_array_api +from code_generators.generate_numpy_api import \ + do_generate_api as nowrap_do_generate_numpy_api from code_generators.generate_ufunc_api import \ do_generate_api as nowrap_do_generate_ufunc_api from numscons.numdist import process_c_str as process_str from numscons.core.utils import rsplit, isstring -try: - from numscons import distutils_dirs_emitter -except ImportError: - raise ImportError("You need numscons >= 0.5.2") import SCons.Node import SCons @@ -35,8 +31,8 @@ #------------------------------------ # Ufunc and multiarray API generators #------------------------------------ -def do_generate_array_api(target, source, env): - nowrap_do_generate_array_api([str(i) for i in target], +def do_generate_numpy_api(target, source, env): + nowrap_do_generate_numpy_api([str(i) for i in target], [str(i) for i in source]) return 0 @@ -188,17 +184,15 @@ nosmp = 0 return nosmp == 1 -array_api_gen_bld = Builder(action = Action(do_generate_array_api, '$ARRAPIGENCOMSTR'), - emitter = [generate_api_emitter, - distutils_dirs_emitter]) +array_api_gen_bld = Builder(action = Action(do_generate_numpy_api, '$ARRAPIGENCOMSTR'), + emitter = generate_api_emitter) + -ufunc_api_gen_bld = Builder(action = Action(do_generate_ufunc_api, '$UFUNCAPIGENCOMSTR'), - emitter = [generate_api_emitter, - distutils_dirs_emitter]) +ufunc_api_gen_bld = Builder(action = Action(do_generate_ufunc_api, '$UFUNCAPIGENCOMSTR'), + emitter = generate_api_emitter) -template_bld = Builder(action = Action(generate_from_template, '$TEMPLATECOMSTR'), - emitter = [generate_from_template_emitter, - distutils_dirs_emitter]) +template_bld = Builder(action = 
Action(generate_from_template, '$TEMPLATECOMSTR'), + emitter = generate_from_template_emitter) -umath_bld = Builder(action = Action(generate_umath, '$UMATHCOMSTR'), - emitter = [generate_umath_emitter, distutils_dirs_emitter]) +umath_bld = Builder(action = Action(generate_umath, '$UMATHCOMSTR'), + emitter = generate_umath_emitter) Modified: branches/cdavid/numpy/core/setup.py =================================================================== --- branches/cdavid/numpy/core/setup.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/setup.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -206,7 +206,7 @@ return (h_file,) return generate_api - generate_array_api = generate_api_func('generate_array_api') + generate_numpy_api = generate_api_func('generate_numpy_api') generate_ufunc_api = generate_api_func('generate_ufunc_api') def generate_umath_c(ext,build_dir): @@ -246,10 +246,10 @@ sources = [join('src','multiarraymodule.c'), generate_config_h, generate_numpyconfig_h, - generate_array_api, + generate_numpy_api, join('src','scalartypes.inc.src'), join('src','arraytypes.inc.src'), - join(codegen_dir,'generate_array_api.py'), + join(codegen_dir,'generate_numpy_api.py'), join('*.py') ], depends = deps, @@ -274,7 +274,7 @@ sources=[join('src','_sortmodule.c.src'), generate_config_h, generate_numpyconfig_h, - generate_array_api, + generate_numpy_api, ], ) @@ -282,7 +282,7 @@ sources=[join('src','scalarmathmodule.c.src'), generate_config_h, generate_numpyconfig_h, - generate_array_api, + generate_numpy_api, generate_ufunc_api], ) Modified: branches/cdavid/numpy/core/setupscons.py =================================================================== --- branches/cdavid/numpy/core/setupscons.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/setupscons.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -50,7 +50,7 @@ # XXX: I really have to think about how to communicate path info # between scons and distutils, and set the options at one single # location. - target = join(scons_build_dir, local_dir, 'numpyconfig.h') + target = join(scons_build_dir, local_dir, 'include/numpy/numpyconfig.h') incl_dir = os.path.dirname(target) if incl_dir not in config.numpy_include_dirs: config.numpy_include_dirs.append(incl_dir) Modified: branches/cdavid/numpy/core/src/_sortmodule.c.src =================================================================== --- branches/cdavid/numpy/core/src/_sortmodule.c.src 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/src/_sortmodule.c.src 2008-06-20 05:59:26 UTC (rev 5302) @@ -389,35 +389,10 @@ * for strings and unicode is compiled with proper flags. */ -static int -compare_string(char *s1, char *s2, size_t len) -{ - const unsigned char *c1 = (unsigned char *)s1; - const unsigned char *c2 = (unsigned char *)s2; - size_t i; +#define copy_string memcpy - for(i = 0; i < len; ++i) { - if (c1[i] != c2[i]) { - return (c1[i] > c2[i]) ? 1 : -1; - } - } - return 0; -} static void -copy_string(char *s1, char *s2, size_t len) -{ - if (len < SMALL_STRING) { - while(len--) { - *s1++ = *s2++; - } - } - else { - memcpy(s1, s2, len); - } -} - -static void swap_string(char *s1, char *s2, size_t len) { while(len--) { @@ -429,13 +404,15 @@ static int -compare_ucs4(npy_ucs4 *s1, npy_ucs4 *s2, size_t len) +compare_string(char *s1, char *s2, size_t len) { + const unsigned char *c1 = (unsigned char *)s1; + const unsigned char *c2 = (unsigned char *)s2; size_t i; for(i = 0; i < len; ++i) { - if (s1[i] != s2[i]) { - return (s1[i] > s2[i]) ? 
1 : -1; + if (c1[i] != c2[i]) { + return (c1[i] > c2[i]) ? 1 : -1; } } return 0; @@ -461,6 +438,21 @@ } } + +static int +compare_ucs4(npy_ucs4 *s1, npy_ucs4 *s2, size_t len) +{ + size_t i; + + for(i = 0; i < len; ++i) { + if (s1[i] != s2[i]) { + return (s1[i] > s2[i]) ? 1 : -1; + } + } + return 0; +} + + /**begin repeat #TYPE=STRING, UNICODE# #type=char, PyArray_UCS4# Modified: branches/cdavid/numpy/core/src/arraymethods.c =================================================================== --- branches/cdavid/numpy/core/src/arraymethods.c 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/src/arraymethods.c 2008-06-20 05:59:26 UTC (rev 5302) @@ -246,7 +246,7 @@ /* steals typed reference */ -/*OBJECT_API +/*NUMPY_API Get a subset of bytes from each element of the array */ static PyObject * @@ -295,7 +295,7 @@ } -/*OBJECT_API +/*NUMPY_API Set a subset of bytes from each element of the array */ static int @@ -351,7 +351,7 @@ /* This doesn't change the descriptor just the actual data... */ -/*OBJECT_API*/ +/*NUMPY_API*/ static PyObject * PyArray_Byteswap(PyArrayObject *self, Bool inplace) { @@ -1351,7 +1351,7 @@ return Py_None; } -/*OBJECT_API*/ +/*NUMPY_API*/ static int PyArray_Dump(PyObject *self, PyObject *file, int protocol) { @@ -1376,7 +1376,7 @@ return 0; } -/*OBJECT_API*/ +/*NUMPY_API*/ static PyObject * PyArray_Dumps(PyObject *self, int protocol) { Modified: branches/cdavid/numpy/core/src/arrayobject.c =================================================================== --- branches/cdavid/numpy/core/src/arrayobject.c 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/src/arrayobject.c 2008-06-20 05:59:26 UTC (rev 5302) @@ -13,7 +13,8 @@ Travis Oliphant, oliphant at ee.byu.edu Brigham Young Univeristy - maintainer email: oliphant.travis at ieee.org +:8613 +maintainer email: oliphant.travis at ieee.org Numarray design (which provided guidance) by Space Science Telescope Institute @@ -21,7 +22,7 @@ */ /*#include */ -/*OBJECT_API +/*NUMPY_API * Get Priority from object */ static double @@ -68,7 +69,7 @@ */ -/*OBJECT_API +/*NUMPY_API Get pointer to zero of correct type for array. */ static char * @@ -103,7 +104,7 @@ return zeroval; } -/*OBJECT_API +/*NUMPY_API Get pointer to one of correct type for array */ static char * @@ -149,7 +150,7 @@ /* Incref all objects found at this record */ -/*OBJECT_API +/*NUMPY_API */ static void PyArray_Item_INCREF(char *data, PyArray_Descr *descr) @@ -181,7 +182,7 @@ } /* XDECREF all objects found at this record */ -/*OBJECT_API +/*NUMPY_API */ static void PyArray_Item_XDECREF(char *data, PyArray_Descr *descr) @@ -216,7 +217,7 @@ /* Used for arrays of python objects to increment the reference count of */ /* every python object in the array. */ -/*OBJECT_API +/*NUMPY_API For object arrays, increment all internal references. */ static int @@ -272,7 +273,7 @@ return 0; } -/*OBJECT_API +/*NUMPY_API Decrement all internal references for object arrays. 
(or arrays with object fields) */ @@ -535,7 +536,7 @@ /* Helper functions */ -/*OBJECT_API*/ +/*NUMPY_API*/ static intp PyArray_PyIntAsIntp(PyObject *o) { @@ -635,7 +636,7 @@ static PyObject *array_int(PyArrayObject *v); -/*OBJECT_API*/ +/*NUMPY_API*/ static int PyArray_PyIntAsInt(PyObject *o) { @@ -745,7 +746,7 @@ return NULL; } -/*OBJECT_API +/*NUMPY_API Compute the size of an array (in number of items) */ static intp @@ -1137,7 +1138,7 @@ } } -/*OBJECT_API +/*NUMPY_API * Copy an Array into another array -- memory must not overlap * Does not require src and dest to have "broadcastable" shapes * (only the same number of elements). @@ -1216,7 +1217,7 @@ return 0; } -/*OBJECT_API +/*NUMPY_API * Copy an Array into another array -- memory must not overlap. */ static int @@ -1226,7 +1227,7 @@ } -/*OBJECT_API +/*NUMPY_API * Move the memory of one array into another. */ static int @@ -1236,7 +1237,7 @@ } -/*OBJECT_API*/ +/*NUMPY_API*/ static int PyArray_CopyObject(PyArrayObject *dest, PyObject *src_object) { @@ -1300,7 +1301,7 @@ /* They all zero-out the memory as previously done */ /* steals reference to descr -- and enforces native byteorder on it.*/ -/*OBJECT_API +/*NUMPY_API Like FromDimsAndData but uses the Descr structure instead of typecode as input. */ @@ -1333,7 +1334,7 @@ return ret; } -/*OBJECT_API +/*NUMPY_API Construct an empty array from dimensions and typenum */ static PyObject * @@ -1356,7 +1357,7 @@ /* end old calls */ -/*OBJECT_API +/*NUMPY_API Copy an array. */ static PyObject * @@ -1388,7 +1389,7 @@ static PyObject *array_big_item(PyArrayObject *, intp); /* Does nothing with descr (cannot be NULL) */ -/*OBJECT_API +/*NUMPY_API Get scalar-equivalent to a region of memory described by a descriptor. */ static PyObject * @@ -1542,7 +1543,7 @@ /* Return Array Scalar if 0-d array object is encountered */ -/*OBJECT_API +/*NUMPY_API Return either an array or the appropriate Python object if the array is 0d and matches a Python type. */ @@ -1572,7 +1573,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API Initialize arrfuncs to NULL */ static void @@ -1643,7 +1644,7 @@ found. Only works for user-defined data-types. */ -/*MULTIARRAY_API +/*NUMPY_API */ static int PyArray_TypeNumFromName(char *str) @@ -1665,7 +1666,7 @@ needs the userdecrs table and PyArray_NUMUSER variables defined in arraytypes.inc */ -/*MULTIARRAY_API +/*NUMPY_API Register Data type Does not change the reference count of descr */ @@ -1717,7 +1718,7 @@ return typenum; } -/*MULTIARRAY_API +/*NUMPY_API Register Casting Function Replaces any function currently stored. */ @@ -1762,7 +1763,7 @@ return newtypes; } -/*MULTIARRAY_API +/*NUMPY_API Register a type number indicating that a descriptor can be cast to it safely */ @@ -1811,7 +1812,7 @@ This will need the addition of a Fortran-order iterator. 
*/ -/*OBJECT_API +/*NUMPY_API To File */ static int @@ -1952,7 +1953,7 @@ return 0; } -/*OBJECT_API +/*NUMPY_API * To List */ static PyObject * @@ -1972,27 +1973,22 @@ sz = self->dimensions[0]; lp = PyList_New(sz); for(i = 0; i < sz; i++) { - if (PyArray_CheckExact(self)) { - v=(PyArrayObject *)array_big_item(self, i); + v = (PyArrayObject *)array_big_item(self, i); + if (PyArray_Check(v) && (v->nd >= self->nd)) { + PyErr_SetString(PyExc_RuntimeError, + "array_item not returning smaller-" \ + "dimensional array"); + Py_DECREF(v); + Py_DECREF(lp); + return NULL; } - else { - v = (PyArrayObject *)PySequence_GetItem((PyObject *)self, i); - if ((!PyArray_Check(v)) || (v->nd >= self->nd)) { - PyErr_SetString(PyExc_RuntimeError, - "array_item not returning smaller-" \ - "dimensional array"); - Py_DECREF(v); - Py_DECREF(lp); - return NULL; - } - } PyList_SetItem(lp, i, PyArray_ToList(v)); Py_DECREF(v); } return lp; } -/*OBJECT_API*/ +/*NUMPY_API*/ static PyObject * PyArray_ToString(PyArrayObject *self, NPY_ORDER order) { @@ -3370,7 +3366,7 @@ } -/*OBJECT_API +/*NUMPY_API Set internal structure with number functions that all arrays will use */ int @@ -3418,7 +3414,7 @@ (PyDict_SetItemString(dict, #op, n_ops.op)==-1)) \ goto fail; -/*OBJECT_API +/*NUMPY_API Get dictionary showing number functions that all arrays will use */ static PyObject * @@ -4367,7 +4363,7 @@ static PyObject *PyArray_StrFunction=NULL; static PyObject *PyArray_ReprFunction=NULL; -/*OBJECT_API +/*NUMPY_API Set the array print function to be a Python function. */ static void @@ -4422,7 +4418,7 @@ -/*OBJECT_API +/*NUMPY_API */ static int PyArray_CompareUCS4(npy_ucs4 *s1, npy_ucs4 *s2, register size_t len) @@ -4438,7 +4434,7 @@ return 0; } -/* +/*NUMPY_API */ static int PyArray_CompareString(char *s1, char *s2, size_t len) @@ -5033,7 +5029,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API PyArray_CheckAxis */ static PyObject * @@ -5084,7 +5080,7 @@ #include "arraymethods.c" /* Lifted from numarray */ -/*MULTIARRAY_API +/*NUMPY_API PyArray_IntTupleFromIntp */ static PyObject * @@ -5112,7 +5108,7 @@ /* Returns the number of dimensions or -1 if an error occurred */ /* vals must be large enough to hold maxvals */ -/*MULTIARRAY_API +/*NUMPY_API PyArray_IntpFromSequence */ static int @@ -5260,7 +5256,7 @@ } -/*OBJECT_API +/*NUMPY_API */ static int PyArray_ElementStrides(PyObject *arr) @@ -5276,7 +5272,7 @@ return 1; } -/*OBJECT_API +/*NUMPY_API Update Several Flags at once. */ static void @@ -5326,7 +5322,7 @@ or negative). */ -/*OBJECT_API*/ +/*NUMPY_API*/ static Bool PyArray_CheckStrides(int elsize, int nd, intp numbytes, intp offset, intp *dims, intp *newstrides) @@ -5394,7 +5390,7 @@ return itemsize; } -/*OBJECT_API +/*NUMPY_API Generic new array creation routine. */ static PyObject * @@ -5497,7 +5493,7 @@ /* steals a reference to descr (even on failure) */ -/*OBJECT_API +/*NUMPY_API Generic new array creation routine. */ static PyObject * @@ -5569,7 +5565,7 @@ return NULL; } size *= dims[i]; - if (size > largest) { + if (size > largest || size < 0) { PyErr_SetString(PyExc_ValueError, "dimensions too large."); Py_DECREF(descr); @@ -5723,7 +5719,7 @@ } -/*OBJECT_API +/*NUMPY_API Resize (reallocate data). Only works if nothing else is referencing this array and it is contiguous. 
If refcheck is 0, then the reference count is not checked @@ -5893,7 +5889,7 @@ } /* Assumes contiguous */ -/*OBJECT_API*/ +/*NUMPY_API*/ static void PyArray_FillObjectArray(PyArrayObject *arr, PyObject *obj) { @@ -5925,7 +5921,7 @@ } } -/*OBJECT_API*/ +/*NUMPY_API*/ static int PyArray_FillWithScalar(PyArrayObject *arr, PyObject *obj) { @@ -6645,6 +6641,27 @@ } } + +static int +_zerofill(PyArrayObject *ret) +{ + if (PyDataType_REFCHK(ret->descr)) { + PyObject *zero = PyInt_FromLong(0); + PyArray_FillObjectArray(ret, zero); + Py_DECREF(zero); + if (PyErr_Occurred()) { + Py_DECREF(ret); + return -1; + } + } + else { + intp n = PyArray_NBYTES(ret); + memset(ret->data, 0, n); + } + return 0; +} + + /* Create a view of a complex array with an equivalent data-type except it is real instead of complex. */ @@ -6726,29 +6743,26 @@ array_imag_get(PyArrayObject *self) { PyArrayObject *ret; - PyArray_Descr *type; if (PyArray_ISCOMPLEX(self)) { ret = _get_part(self, 1); - return (PyObject *) ret; } else { - type = self->descr; - Py_INCREF(type); - ret = (PyArrayObject *)PyArray_Zeros(self->nd, - self->dimensions, - type, - PyArray_ISFORTRAN(self)); + Py_INCREF(self->descr); + ret = (PyArrayObject *)PyArray_NewFromDescr(self->ob_type, + self->descr, + self->nd, + self->dimensions, + NULL, NULL, + PyArray_ISFORTRAN(self), + (PyObject *)self); + if (ret == NULL) return NULL; + + if (_zerofill(ret) < 0) return NULL; + ret->flags &= ~WRITEABLE; - if (PyArray_CheckExact(self)) - return (PyObject *)ret; - else { - PyObject *newret; - newret = PyArray_View(ret, NULL, self->ob_type); - Py_DECREF(ret); - return newret; - } } + return (PyObject *) ret; } static int @@ -7743,7 +7757,7 @@ } -/*OBJECT_API +/*NUMPY_API Is the typenum valid? */ static int @@ -7763,7 +7777,7 @@ /* For backward compatibility */ /* steals reference to at --- cannot be NULL*/ -/*OBJECT_API +/*NUMPY_API *Cast an array using typecode structure. */ static PyObject * @@ -7824,7 +7838,7 @@ } -/*OBJECT_API +/*NUMPY_API Get a cast function to cast from the input descriptor to the output type_number (must be a registered data-type). Returns NULL if un-successful. @@ -8022,7 +8036,7 @@ * as the size of the casting buffer. */ -/*OBJECT_API +/*NUMPY_API * Cast to an already created array. */ static int @@ -8186,7 +8200,7 @@ return retval; } -/*OBJECT_API +/*NUMPY_API Cast to an already created array. Arrays don't have to be "broadcastable" Only requirement is they have the same number of elements. */ @@ -8236,7 +8250,7 @@ /* steals reference to newtype --- acc. 
NULL */ -/*OBJECT_API*/ +/*NUMPY_API*/ static PyObject * PyArray_FromArray(PyArrayObject *arr, PyArray_Descr *newtype, int flags) { @@ -8494,7 +8508,7 @@ return descr; } -/*OBJECT_API */ +/*NUMPY_API */ static PyObject * PyArray_FromStructInterface(PyObject *input) { @@ -8550,7 +8564,7 @@ #define PyIntOrLong_Check(obj) (PyInt_Check(obj) || PyLong_Check(obj)) -/*OBJECT_API*/ +/*NUMPY_API*/ static PyObject * PyArray_FromInterface(PyObject *input) { @@ -8701,7 +8715,7 @@ return NULL; } -/*OBJECT_API*/ +/*NUMPY_API*/ static PyObject * PyArray_FromArrayAttr(PyObject *op, PyArray_Descr *typecode, PyObject *context) { @@ -8750,7 +8764,7 @@ /* Does not check for ENSURECOPY and NOTSWAPPED in flags */ /* Steals a reference to newtype --- which can be NULL */ -/*OBJECT_API*/ +/*NUMPY_API*/ static PyObject * PyArray_FromAny(PyObject *op, PyArray_Descr *newtype, int min_depth, int max_depth, int flags, PyObject *context) @@ -8806,7 +8820,7 @@ else if (newtype->type_num == PyArray_OBJECT) { isobject = 1; } - if (!PyString_Check(op) && PySequence_Check(op)) { + if (PySequence_Check(op)) { PyObject *thiserr = NULL; /* necessary but not sufficient */ @@ -8878,14 +8892,14 @@ } /* new reference -- accepts NULL for mintype*/ -/*OBJECT_API*/ +/*NUMPY_API*/ static PyArray_Descr * PyArray_DescrFromObject(PyObject *op, PyArray_Descr *mintype) { return _array_find_type(op, mintype, MAX_DIMS); } -/*OBJECT_API +/*NUMPY_API Return the typecode of the array a Python object would be converted to */ @@ -8948,7 +8962,7 @@ /* steals a reference to descr -- accepts NULL */ -/*OBJECT_API*/ +/*NUMPY_API*/ static PyObject * PyArray_CheckFromAny(PyObject *op, PyArray_Descr *descr, int min_depth, int max_depth, int requires, PyObject *context) @@ -8989,7 +9003,7 @@ /* Because it decrefs op if any conversion needs to take place so it can be used like PyArray_EnsureArray(some_function(...)) */ -/*OBJECT_API*/ +/*NUMPY_API*/ static PyObject * PyArray_EnsureArray(PyObject *op) { @@ -9014,7 +9028,7 @@ return new; } -/*OBJECT_API*/ +/*NUMPY_API*/ static PyObject * PyArray_EnsureAnyArray(PyObject *op) { @@ -9022,7 +9036,7 @@ return PyArray_EnsureArray(op); } -/*OBJECT_API +/*NUMPY_API Check the type coercion rules. */ static int @@ -9129,7 +9143,7 @@ } /* leaves reference count alone --- cannot be NULL*/ -/*OBJECT_API*/ +/*NUMPY_API*/ static Bool PyArray_CanCastTo(PyArray_Descr *from, PyArray_Descr *to) { @@ -9161,7 +9175,7 @@ return ret; } -/*OBJECT_API +/*NUMPY_API See if array scalars can be cast. */ static Bool @@ -9182,7 +9196,7 @@ /* Aided by Peter J. Verveer's nd_image package and numpy's arraymap ****/ /* and Python's array iterator ***/ -/*OBJECT_API +/*NUMPY_API Get Iterator. */ static PyObject * @@ -9226,7 +9240,7 @@ return (PyObject *)it; } -/*MULTIARRAY_API +/*NUMPY_API Get Iterator broadcast to a particular shape */ static PyObject * @@ -9293,7 +9307,7 @@ -/*OBJECT_API +/*NUMPY_API Get Iterator that iterates over all but one axis (don't use this with PyArray_ITER_GOTO1D). The axis will be over-written if negative with the axis having the smallest stride. @@ -9343,7 +9357,7 @@ /* don't use with PyArray_ITER_GOTO1D because factors are not adjusted */ -/*OBJECT_API +/*NUMPY_API Adjusts previously broadcasted iterators so that the axis with the smallest sum of iterator strides is not iterated over. Returns dimension which is smallest in the range [0,multi->nd). @@ -10124,7 +10138,7 @@ /* Adjust dimensionality and strides for index object iterators --- i.e. 
broadcast */ -/*OBJECT_API*/ +/*NUMPY_API*/ static int PyArray_Broadcast(PyArrayMultiIterObject *mit) { @@ -10163,8 +10177,13 @@ /* Reset the iterator dimensions and strides of each iterator object -- using 0 valued strides for broadcasting */ - - tmp = PyArray_MultiplyList(mit->dimensions, mit->nd); + /* Need to check for overflow */ + tmp = PyArray_OverflowMultiplyList(mit->dimensions, mit->nd); + if (tmp < 0) { + PyErr_SetString(PyExc_ValueError, + "broadcast dimensions too large."); + return -1; + } mit->size = tmp; for(i=0; inumiter; i++) { it = mit->iters[i]; @@ -10413,7 +10432,12 @@ } finish: /* Here check the indexes (now that we have iteraxes) */ - mit->size = PyArray_MultiplyList(mit->dimensions, mit->nd); + mit->size = PyArray_OverflowMultiplyList(mit->dimensions, mit->nd); + if (mit->size < 0) { + PyErr_SetString(PyExc_ValueError, + "dimensions too large in fancy indexing"); + goto fail; + } if (mit->ait->size == 0 && mit->size != 0) { PyErr_SetString(PyExc_ValueError, "invalid index into a 0-size array"); @@ -10756,7 +10780,7 @@ /** END of Subscript Iterator **/ -/*OBJECT_API +/*NUMPY_API Get MultiIterator, */ static PyObject * @@ -11028,7 +11052,7 @@ 0 /* tp_weaklist */ }; -/*OBJECT_API*/ +/*NUMPY_API*/ static PyArray_Descr * PyArray_DescrNewFromType(int type_num) { @@ -11055,7 +11079,7 @@ **/ /* base cannot be NULL */ -/*OBJECT_API*/ +/*NUMPY_API*/ static PyArray_Descr * PyArray_DescrNew(PyArray_Descr *base) { @@ -11696,7 +11720,7 @@ byte-order is not changed but any fields are: */ -/*OBJECT_API +/*NUMPY_API Deep bytorder change of a data-type descriptor *** Leaves reference count of self unchanged --- does not DECREF self *** */ @@ -12083,7 +12107,7 @@ /** Array Flags Object **/ -/*OBJECT_API +/*NUMPY_API Get New ArrayFlagsObject */ static PyObject * Modified: branches/cdavid/numpy/core/src/arraytypes.inc.src =================================================================== --- branches/cdavid/numpy/core/src/arraytypes.inc.src 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/src/arraytypes.inc.src 2008-06-20 05:59:26 UTC (rev 5302) @@ -2458,7 +2458,7 @@ &VOID_Descr, }; -/*OBJECT_API +/*NUMPY_API Get the PyArray_Descr structure for a type. */ static PyArray_Descr * Modified: branches/cdavid/numpy/core/src/multiarraymodule.c =================================================================== --- branches/cdavid/numpy/core/src/multiarraymodule.c 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/src/multiarraymodule.c 2008-06-20 05:59:26 UTC (rev 5302) @@ -100,7 +100,7 @@ /* An Error object -- rarely used? */ static PyObject *MultiArrayError; -/*MULTIARRAY_API +/*NUMPY_API Multiply a List of ints */ static int @@ -111,7 +111,7 @@ return s; } -/*MULTIARRAY_API +/*NUMPY_API Multiply a List */ static intp @@ -122,7 +122,23 @@ return s; } -/*MULTIARRAY_API +/*NUMPY_API + Multiply a List of Non-negative numbers with over-flow detection. 
+*/ +static intp +PyArray_OverflowMultiplyList(register intp *l1, register int n) +{ + register intp s=1; + while (n--) { + if (*l1 == 0) return 0; + if ((s > MAX_INTP / *l1) || (*l1 > MAX_INTP / s)) + return -1; + s *= (*l1++); + } + return s; +} + +/*NUMPY_API Produce a pointer into array */ static void * @@ -136,7 +152,7 @@ return (void *)dptr; } -/*MULTIARRAY_API +/*NUMPY_API Get axis from an object (possibly None) -- a converter function, */ static int @@ -154,7 +170,7 @@ return PY_SUCCEED; } -/*MULTIARRAY_API +/*NUMPY_API Compare Lists */ static int @@ -168,7 +184,7 @@ } /* steals a reference to type -- accepts NULL */ -/*MULTIARRAY_API +/*NUMPY_API View */ static PyObject * @@ -206,7 +222,7 @@ /* Returns a contiguous array */ -/*MULTIARRAY_API +/*NUMPY_API Ravel */ static PyObject * @@ -244,7 +260,7 @@ return ret; } -/*MULTIARRAY_API +/*NUMPY_API Round */ static PyObject * @@ -366,7 +382,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API Flatten */ static PyObject * @@ -400,7 +416,7 @@ / * Not recommended */ -/*MULTIARRAY_API +/*NUMPY_API Reshape an array */ static PyObject * @@ -608,7 +624,7 @@ copy-only-if-necessary */ -/*MULTIARRAY_API +/*NUMPY_API New shape for an array */ static PyObject * @@ -755,7 +771,7 @@ return the same array. */ -/*MULTIARRAY_API*/ +/*NUMPY_API*/ static PyObject * PyArray_Squeeze(PyArrayObject *self) { @@ -795,7 +811,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API Mean */ static PyObject * @@ -827,7 +843,7 @@ } /* Set variance to 1 to by-pass square-root calculation and return variance */ -/*MULTIARRAY_API +/*NUMPY_API Std */ static PyObject * @@ -946,7 +962,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API Sum */ static PyObject * @@ -962,7 +978,7 @@ return ret; } -/*MULTIARRAY_API +/*NUMPY_API Prod */ static PyObject * @@ -978,7 +994,7 @@ return ret; } -/*MULTIARRAY_API +/*NUMPY_API CumSum */ static PyObject * @@ -994,7 +1010,7 @@ return ret; } -/*MULTIARRAY_API +/*NUMPY_API CumProd */ static PyObject * @@ -1011,7 +1027,7 @@ return ret; } -/*MULTIARRAY_API +/*NUMPY_API Any */ static PyObject * @@ -1028,7 +1044,7 @@ return ret; } -/*MULTIARRAY_API +/*NUMPY_API All */ static PyObject * @@ -1046,7 +1062,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API Compress */ static PyObject * @@ -1075,7 +1091,7 @@ return ret; } -/*MULTIARRAY_API +/*NUMPY_API Nonzero */ static PyObject * @@ -1172,7 +1188,7 @@ return res2; } -/*MULTIARRAY_API +/*NUMPY_API Clip */ static PyObject * @@ -1329,7 +1345,7 @@ self->dimensions, NULL, NULL, PyArray_ISFORTRAN(self), - NULL); + (PyObject *)self); if (out == NULL) goto fail; outgood = 1; } @@ -1401,7 +1417,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API Conjugate */ static PyObject * @@ -1431,7 +1447,7 @@ } } -/*MULTIARRAY_API +/*NUMPY_API Trace */ static PyObject * @@ -1447,7 +1463,7 @@ return ret; } -/*MULTIARRAY_API +/*NUMPY_API Diagonal */ static PyObject * @@ -1582,7 +1598,7 @@ */ /* steals a reference to typedescr -- can be NULL*/ -/*MULTIARRAY_API +/*NUMPY_API Simulat a C-array */ static int @@ -1641,7 +1657,7 @@ /* Deprecated --- Use PyArray_AsCArray instead */ -/*MULTIARRAY_API +/*NUMPY_API Convert to a 1D C-array */ static int @@ -1657,7 +1673,7 @@ return 0; } -/*MULTIARRAY_API +/*NUMPY_API Convert to a 2D C-array */ static int @@ -1677,7 +1693,7 @@ /* End Deprecated */ -/*MULTIARRAY_API +/*NUMPY_API Free pointers created if As2D is called */ static int @@ -1732,7 +1748,7 @@ /* If axis is MAX_DIMS or bigger, then each sequence object will be flattened before concatenation */ -/*MULTIARRAY_API +/*NUMPY_API Concatenate an arbitrary Python sequence into an array. 
*/ static PyObject * @@ -1843,7 +1859,7 @@ return NULL; } -/*MULTIARRAY_API +/*NUMPY_API SwapAxes */ static PyObject * @@ -1890,7 +1906,7 @@ return ret; } -/*MULTIARRAY_API +/*NUMPY_API Return Transpose. */ static PyObject * @@ -1962,7 +1978,7 @@ return (PyObject *)ret; } -/*MULTIARRAY_API +/*NUMPY_API Repeat the array. */ static PyObject * @@ -2085,7 +2101,7 @@ } -/*OBJECT_API*/ +/*NUMPY_API*/ static NPY_SCALARKIND PyArray_ScalarKind(int typenum, PyArrayObject **arr) { @@ -2112,7 +2128,7 @@ return PyArray_OBJECT_SCALAR; } -/*OBJECT_API*/ +/*NUMPY_API*/ static int PyArray_CanCoerceScalar(int thistype, int neededtype, NPY_SCALARKIND scalar) @@ -2152,7 +2168,7 @@ } -/*OBJECT_API*/ +/*NUMPY_API*/ static PyArrayObject ** PyArray_ConvertToCommonType(PyObject *op, int *retn) { @@ -2268,7 +2284,7 @@ return NULL; } -/*MULTIARRAY_API +/*NUMPY_API */ static PyObject * PyArray_Choose(PyArrayObject *ip, PyObject *op, PyArrayObject *ret, @@ -2629,7 +2645,7 @@ } \ } -/*MULTIARRAY_API +/*NUMPY_API Sort an array in-place */ static int @@ -2715,7 +2731,7 @@ global_obj); } -/*MULTIARRAY_API +/*NUMPY_API ArgSort an array */ static PyObject * @@ -2805,7 +2821,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API LexSort an array providing indices that will sort a collection of arrays lexicographically. The first key is sorted on first, followed by the second key -- requires that arg"merge"sort is available for each sort_key @@ -3070,7 +3086,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API Convert object to searchsorted side */ static int @@ -3098,7 +3114,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API Numeric.searchsorted(a,v) */ static PyObject * @@ -3198,7 +3214,7 @@ /* Could perhaps be redone to not make contiguous arrays */ -/*MULTIARRAY_API +/*NUMPY_API Numeric.innerproduct(a,v) */ static PyObject * @@ -3310,7 +3326,7 @@ /* just like inner product but does the swapaxes stuff on the fly */ -/*MULTIARRAY_API +/*NUMPY_API Numeric.matrixproduct(a,v) */ static PyObject * @@ -3442,7 +3458,7 @@ return NULL; } -/*MULTIARRAY_API +/*NUMPY_API Fast Copy and Transpose */ static PyObject * @@ -3505,7 +3521,7 @@ return ret; } -/*MULTIARRAY_API +/*NUMPY_API Numeric.correlate(a1,a2,mode) */ static PyObject * @@ -3615,7 +3631,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API ArgMin */ static PyObject * @@ -3645,7 +3661,7 @@ return ret; } -/*MULTIARRAY_API +/*NUMPY_API Max */ static PyObject * @@ -3662,7 +3678,7 @@ return ret; } -/*MULTIARRAY_API +/*NUMPY_API Min */ static PyObject * @@ -3679,7 +3695,7 @@ return ret; } -/*MULTIARRAY_API +/*NUMPY_API Ptp */ static PyObject * @@ -3714,7 +3730,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API ArgMax */ static PyObject * @@ -3823,7 +3839,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API Take */ static PyObject * @@ -3897,6 +3913,7 @@ flags); if (obj != ret) copyret = 1; ret = obj; + if (ret == NULL) goto fail; } max_item = self->dimensions[axis]; @@ -3984,7 +4001,7 @@ return NULL; } -/*MULTIARRAY_API +/*NUMPY_API Put values into an array */ static PyObject * @@ -4150,7 +4167,7 @@ return PyArray_PutMask((PyArrayObject *)array, values, mask); } -/*MULTIARRAY_API +/*NUMPY_API Put values into an array according to a mask. */ static PyObject * @@ -4259,7 +4276,7 @@ as you get a new reference to it. */ -/*MULTIARRAY_API +/*NUMPY_API Useful to pass as converter function for O& processing in PyArgs_ParseTuple. 
*/ @@ -4278,7 +4295,7 @@ } } -/*MULTIARRAY_API +/*NUMPY_API Useful to pass as converter function for O& processing in PyArgs_ParseTuple for output arrays */ @@ -4302,7 +4319,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API Convert an object to true / false */ static int @@ -4316,7 +4333,7 @@ return PY_SUCCEED; } -/*MULTIARRAY_API +/*NUMPY_API Convert an object to FORTRAN / C / ANY */ static int @@ -4355,7 +4372,7 @@ return PY_SUCCEED; } -/*MULTIARRAY_API +/*NUMPY_API Convert an object to NPY_RAISE / NPY_CLIP / NPY_WRAP */ static int @@ -4401,7 +4418,7 @@ -/*MULTIARRAY_API +/*NUMPY_API Typestr converter */ static int @@ -4534,7 +4551,7 @@ */ -/*MULTIARRAY_API +/*NUMPY_API Get buffer chunk from object */ static int @@ -4574,7 +4591,7 @@ PyDimMem_FREE(seq.ptr)** */ -/*MULTIARRAY_API +/*NUMPY_API Get intp chunk from sequence */ static int @@ -5221,7 +5238,7 @@ */ -/*MULTIARRAY_API +/*NUMPY_API Get type-descriptor from an object forcing alignment if possible None goes to DEFAULT type. */ @@ -5250,7 +5267,7 @@ return PY_SUCCEED; } -/*MULTIARRAY_API +/*NUMPY_API Get type-descriptor from an object forcing alignment if possible None goes to NULL. */ @@ -5280,7 +5297,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API Get typenum from an object -- None goes to NULL */ static int @@ -5304,7 +5321,7 @@ */ /* new reference in *at */ -/*MULTIARRAY_API +/*NUMPY_API Get typenum from an object -- None goes to PyArray_DEFAULT */ static int @@ -5488,7 +5505,7 @@ return PY_FAIL; } -/*MULTIARRAY_API +/*NUMPY_API Convert object to endian */ static int @@ -5526,7 +5543,7 @@ return PY_SUCCEED; } -/*MULTIARRAY_API +/*NUMPY_API Convert object to sort kind */ static int @@ -5579,7 +5596,7 @@ equivalent (same basic kind and same itemsize). */ -/*MULTIARRAY_API*/ +/*NUMPY_API*/ static unsigned char PyArray_EquivTypes(PyArray_Descr *typ1, PyArray_Descr *typ2) { @@ -5601,7 +5618,7 @@ return (typ1->kind == typ2->kind); } -/*MULTIARRAY_API*/ +/*NUMPY_API*/ static unsigned char PyArray_EquivTypenums(int typenum1, int typenum2) { @@ -5751,7 +5768,7 @@ /* accepts NULL type */ /* steals referenct to type */ -/*MULTIARRAY_API +/*NUMPY_API Empty */ static PyObject * @@ -5869,17 +5886,15 @@ return ret; } - /* steal a reference */ /* accepts NULL type */ -/*MULTIARRAY_API +/*NUMPY_API Zeros */ static PyObject * PyArray_Zeros(int nd, intp *dims, PyArray_Descr *type, int fortran) { PyArrayObject *ret; - intp n; if (!type) type = PyArray_DescrFromType(PyArray_DEFAULT); ret = (PyArrayObject *)PyArray_NewFromDescr(&PyArray_Type, @@ -5889,16 +5904,7 @@ fortran, NULL); if (ret == NULL) return NULL; - if (PyDataType_REFCHK(type)) { - PyObject *zero = PyInt_FromLong(0); - PyArray_FillObjectArray(ret, zero); - Py_DECREF(zero); - if (PyErr_Occurred()) {Py_DECREF(ret); return NULL;} - } - else { - n = PyArray_NBYTES(ret); - memset(ret->data, 0, n); - } + if (_zerofill(ret) < 0) return NULL; return (PyObject *)ret; } @@ -6155,7 +6161,7 @@ } #undef FROM_BUFFER_SIZE -/*OBJECT_API +/*NUMPY_API Given a pointer to a string ``data``, a string length ``slen``, and a ``PyArray_Descr``, return an array corresponding to the data @@ -6315,7 +6321,7 @@ return r; } -/*OBJECT_API +/*NUMPY_API Given a ``FILE *`` pointer ``fp``, and a ``PyArray_Descr``, return an array corresponding to the data encoded in that file. 
@@ -6435,7 +6441,7 @@ /* steals a reference to dtype (which cannot be NULL) */ -/*OBJECT_API */ +/*NUMPY_API */ static PyObject * PyArray_FromIter(PyObject *obj, PyArray_Descr *dtype, intp count) { @@ -6550,7 +6556,7 @@ } -/*OBJECT_API*/ +/*NUMPY_API*/ static PyObject * PyArray_FromBuffer(PyObject *buf, PyArray_Descr *type, intp count, intp offset) @@ -6720,7 +6726,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API Arange, */ static PyObject * @@ -6824,7 +6830,7 @@ } /* this doesn't change the references */ -/*MULTIARRAY_API +/*NUMPY_API ArangeObj, */ static PyObject * @@ -7048,7 +7054,7 @@ } -/*MULTIARRAY_API +/*NUMPY_API Where */ static PyObject * @@ -7356,7 +7362,7 @@ SIGJMP_BUF _NPY_SIGINT_BUF; -/*MULTIARRAY_API +/*NUMPY_API */ static void _PyArray_SigintHandler(int signum) @@ -7365,7 +7371,7 @@ SIGLONGJMP(_NPY_SIGINT_BUF, signum); } -/*MULTIARRAY_API +/*NUMPY_API */ static void* _PyArray_GetSigintBuf(void) Modified: branches/cdavid/numpy/core/src/scalartypes.inc.src =================================================================== --- branches/cdavid/numpy/core/src/scalartypes.inc.src 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/src/scalartypes.inc.src 2008-06-20 05:59:26 UTC (rev 5302) @@ -131,7 +131,7 @@ /* no error checking is performed -- ctypeptr must be same type as scalar */ /* in case of flexible type, the data is not copied into ctypeptr which is expected to be a pointer to pointer */ -/*OBJECT_API +/*NUMPY_API Convert to c-type */ static void @@ -160,7 +160,7 @@ /* This may not work right on narrow builds for NumPy unicode scalars. */ -/*OBJECT_API +/*NUMPY_API Cast Scalar to c-type */ static int @@ -197,7 +197,7 @@ return 0; } -/*OBJECT_API +/*NUMPY_API Cast Scalar to c-type */ static int @@ -220,7 +220,7 @@ */ /* steals reference to outcode */ -/*OBJECT_API +/*NUMPY_API Get 0-dim array from scalar */ static PyObject * @@ -292,7 +292,7 @@ return ret; } -/*OBJECT_API +/*NUMPY_API Get an Array Scalar From a Python Object Returns NULL if unsuccessful but error is only set if another error occurred. Currently only Numeric-like @@ -2720,7 +2720,7 @@ } /*New reference */ -/*OBJECT_API +/*NUMPY_API */ static PyArray_Descr * PyArray_DescrFromTypeObject(PyObject *type) @@ -2785,7 +2785,7 @@ return _descr_from_subtype(type); } -/*OBJECT_API +/*NUMPY_API Return the tuple of ordered field names from a dictionary. */ static PyObject * @@ -2812,7 +2812,7 @@ } /* New reference */ -/*OBJECT_API +/*NUMPY_API Return descr object from array scalar. */ static PyArray_Descr * @@ -2856,7 +2856,7 @@ } /* New reference */ -/*OBJECT_API +/*NUMPY_API Get a typeobject from a type-number -- can return NULL. 
*/ static PyObject * Modified: branches/cdavid/numpy/core/src/ufuncobject.c =================================================================== --- branches/cdavid/numpy/core/src/ufuncobject.c 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/src/ufuncobject.c 2008-06-20 05:59:26 UTC (rev 5302) @@ -2742,7 +2742,7 @@ while(loop->index < loop->size) { if (loop->obj) Py_INCREF(*((PyObject **)loop->it->dataptr)); - memcpy(loop->bufptr[0], loop->it->dataptr, + memmove(loop->bufptr[0], loop->it->dataptr, loop->outsize); PyArray_ITER_NEXT(loop->it); loop->bufptr[0] += loop->outsize; @@ -2755,7 +2755,7 @@ /* Copy first element to output */ if (loop->obj) Py_INCREF(*((PyObject **)loop->it->dataptr)); - memcpy(loop->bufptr[0], loop->it->dataptr, + memmove(loop->bufptr[0], loop->it->dataptr, loop->outsize); /* Adjust input pointer */ loop->bufptr[1] = loop->it->dataptr+loop->steps[1]; @@ -4007,7 +4007,7 @@ PyObject *outargs, *inargs, *doc; outargs = _makeargs(self->nout, "y"); inargs = _makeargs(self->nin, "x"); - doc = PyString_FromFormat("%s = %s(%s) %s", + doc = PyString_FromFormat("%s = %s(%s)\n\n%s", PyString_AS_STRING(outargs), self->name, PyString_AS_STRING(inargs), Modified: branches/cdavid/numpy/core/tests/test_defmatrix.py =================================================================== --- branches/cdavid/numpy/core/tests/test_defmatrix.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/tests/test_defmatrix.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,12 +1,12 @@ +import sys from numpy.testing import * set_package_path() -import numpy.core;reload(numpy.core) from numpy.core import * import numpy as np restore_path() -class TestCtor(NumpyTestCase): - def check_basic(self): +class TestCtor(TestCase): + def test_basic(self): A = array([[1,2],[3,4]]) mA = matrix(A) assert all(mA.A == A) @@ -24,8 +24,9 @@ mvec = matrix(vec) assert mvec.shape == (1,5) -class TestProperties(NumpyTestCase): - def check_sum(self): + +class TestProperties(TestCase): + def test_sum(self): """Test whether matrix.sum(axis=1) preserves orientation. Fails in NumPy <= 0.9.6.2127. 
""" @@ -40,7 +41,7 @@ assert_array_equal(sum1, M.sum(axis=1)) assert sumall == M.sum() - def check_basic(self): + def test_basic(self): import numpy.linalg as linalg A = array([[1., 2.], @@ -57,7 +58,7 @@ assert all(array(transpose(B) == mB.T)) assert all(array(conjugate(transpose(B)) == mB.H)) - def check_comparisons(self): + def test_comparisons(self): A = arange(100).reshape(10,10) mA = matrix(A) mB = matrix(A) + 0.1 @@ -81,19 +82,20 @@ assert not all(abs(mA) > 0) assert all(abs(mB > 0)) - def check_asmatrix(self): + def test_asmatrix(self): A = arange(100).reshape(10,10) mA = asmatrix(A) A[0,0] = -10 assert A[0,0] == mA[0,0] - def check_noaxis(self): + def test_noaxis(self): A = matrix([[1,0],[0,1]]) assert A.sum() == matrix(2) assert A.mean() == matrix(0.5) -class TestCasting(NumpyTestCase): - def check_basic(self): + +class TestCasting(TestCase): + def test_basic(self): A = arange(100).reshape(10,10) mA = matrix(A) @@ -110,8 +112,9 @@ assert mC.dtype.type == complex128 assert all(mA != mB) -class TestAlgebra(NumpyTestCase): - def check_basic(self): + +class TestAlgebra(TestCase): + def test_basic(self): import numpy.linalg as linalg A = array([[1., 2.], @@ -133,8 +136,9 @@ assert allclose((mA + mA).A, (A + A)) assert allclose((3*mA).A, (3*A)) -class TestMatrixReturn(NumpyTestCase): - def check_instance_methods(self): + +class TestMatrixReturn(TestCase): + def test_instance_methods(self): a = matrix([1.0], dtype='f8') methodargs = { 'astype' : ('intc',), @@ -172,33 +176,35 @@ assert type(c) is matrix assert type(d) is matrix -class TestIndexing(NumpyTestCase): - def check_basic(self): + +class TestIndexing(TestCase): + def test_basic(self): x = asmatrix(zeros((3,2),float)) y = zeros((3,1),float) y[:,0] = [0.8,0.2,0.3] x[:,1] = y>0.5 assert_equal(x, [[0,1],[0,0],[0,0]]) -class TestNewScalarIndexing(NumpyTestCase): + +class TestNewScalarIndexing(TestCase): def setUp(self): self.a = matrix([[1, 2],[3,4]]) - def check_dimesions(self): + def test_dimesions(self): a = self.a x = a[0] assert_equal(x.ndim, 2) - def check_array_from_matrix_list(self): + def test_array_from_matrix_list(self): a = self.a x = array([a, a]) assert_equal(x.shape, [2,2,2]) - def check_array_to_list(self): + def test_array_to_list(self): a = self.a assert_equal(a.tolist(),[[1, 2], [3, 4]]) - def check_fancy_indexing(self): + def test_fancy_indexing(self): a = self.a x = a[1, [0,1,0]] assert isinstance(x, matrix) @@ -216,30 +222,36 @@ ## assert_equal(x[0].shape,(1,3)) ## assert_equal(x[:,0].shape,(2,1)) -## x = matrix(0) -## assert_equal(x[0,0],0) -## assert_equal(x[0],0) -## assert_equal(x[:,0].shape,x.shape) + def test_matrix_element(self): + x = matrix([[1,2,3],[4,5,6]]) + assert_equal(x[0][0].shape,(1,3)) + assert_equal(x[0].shape,(1,3)) + assert_equal(x[:,0].shape,(2,1)) - def check_scalar_indexing(self): + x = matrix(0) + assert_equal(x[0,0],0) + assert_equal(x[0],0) + assert_equal(x[:,0].shape,x.shape) + + def test_scalar_indexing(self): x = asmatrix(zeros((3,2),float)) assert_equal(x[0,0],x[0][0]) - def check_row_column_indexing(self): + def test_row_column_indexing(self): x = asmatrix(np.eye(2)) assert_array_equal(x[0,:],[[1,0]]) assert_array_equal(x[1,:],[[0,1]]) assert_array_equal(x[:,0],[[1],[0]]) assert_array_equal(x[:,1],[[0],[1]]) - def check_boolean_indexing(self): + def test_boolean_indexing(self): A = arange(6) A.shape = (3,2) x = asmatrix(A) assert_array_equal(x[:,array([True,False])],x[:,0]) assert_array_equal(x[array([True,False,False]),:],x[0,:]) - def check_list_indexing(self): + def 
test_list_indexing(self): A = arange(6) A.shape = (3,2) x = asmatrix(A) @@ -247,6 +259,5 @@ assert_array_equal(x[[2,1,0],:],x[::-1,:]) - if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/core/tests/test_errstate.py =================================================================== --- branches/cdavid/numpy/core/tests/test_errstate.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/tests/test_errstate.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -12,11 +12,7 @@ from numpy.random import rand, randint from numpy.testing import * - - -class TestErrstate(NumpyTestCase): - - +class TestErrstate(TestCase): def test_invalid(self): with errstate(all='raise', under='ignore'): a = -arange(3) @@ -57,6 +53,5 @@ """ -if __name__ == '__main__': - from numpy.testing import * - NumpyTest().run() +if __name__ == "__main__": + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/core/tests/test_memmap.py =================================================================== --- branches/cdavid/numpy/core/tests/test_memmap.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/tests/test_memmap.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,13 +1,12 @@ from tempfile import NamedTemporaryFile, mktemp import os +import warnings from numpy.core import memmap from numpy import arange, allclose from numpy.testing import * -import warnings - -class TestMemmap(NumpyTestCase): +class TestMemmap(TestCase): def setUp(self): self.tmpfp = NamedTemporaryFile(prefix='mmap') self.shape = (3,4) @@ -46,5 +45,6 @@ fp.sync() warnings.simplefilter('default', DeprecationWarning) -if __name__ == '__main__': - NumpyTest().run() + +if __name__ == "__main__": + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/core/tests/test_multiarray.py =================================================================== --- branches/cdavid/numpy/core/tests/test_multiarray.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/tests/test_multiarray.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,15 +1,14 @@ import tempfile - +import sys import numpy as np from numpy.testing import * from numpy.core import * - -class TestFlags(NumpyTestCase): +class TestFlags(TestCase): def setUp(self): self.a = arange(10) - def check_writeable(self): + def test_writeable(self): mydict = locals() self.a.flags.writeable = False self.assertRaises(RuntimeError, runstring, 'self.a[0] = 3', mydict) @@ -17,7 +16,7 @@ self.a[0] = 5 self.a[0] = 0 - def check_otherflags(self): + def test_otherflags(self): assert_equal(self.a.flags.carray, True) assert_equal(self.a.flags.farray, False) assert_equal(self.a.flags.behaved, True) @@ -29,13 +28,13 @@ assert_equal(self.a.flags.updateifcopy, False) -class TestAttributes(NumpyTestCase): +class TestAttributes(TestCase): def setUp(self): self.one = arange(10) self.two = arange(20).reshape(4,5) self.three = arange(60,dtype=float64).reshape(2,5,6) - def check_attributes(self): + def test_attributes(self): assert_equal(self.one.shape, (10,)) assert_equal(self.two.shape, (4,5)) assert_equal(self.three.shape, (2,5,6)) @@ -56,7 +55,7 @@ assert_equal(self.two.itemsize, self.two.dtype.itemsize) assert_equal(self.two.base, arange(20)) - def check_dtypeattr(self): + def test_dtypeattr(self): assert_equal(self.one.dtype, dtype(int_)) assert_equal(self.three.dtype, dtype(float_)) assert_equal(self.one.dtype.char, 'l') @@ -65,7 +64,7 @@ assert_equal(self.one.dtype.str[1], 'i') assert_equal(self.three.dtype.str[1], 'f') - def 
check_stridesattr(self): + def test_stridesattr(self): x = self.one def make_array(size, offset, strides): return ndarray([size], buffer=x, dtype=int, @@ -79,7 +78,7 @@ #self.failUnlessRaises(ValueError, lambda: ndarray([1], strides=4)) - def check_set_stridesattr(self): + def test_set_stridesattr(self): x = self.one def make_array(size, offset, strides): try: @@ -94,7 +93,7 @@ self.failUnlessRaises(ValueError, make_array, 8, 3, 1) #self.failUnlessRaises(ValueError, make_array, 8, 3, 0) - def check_fill(self): + def test_fill(self): for t in "?bhilqpBHILQPfdgFDGO": x = empty((3,2,1), t) y = empty((3,2,1), t) @@ -106,82 +105,85 @@ x.fill(x[0]) assert_equal(x['f1'][1], x['f1'][0]) -class TestDtypedescr(NumpyTestCase): - def check_construction(self): + +class TestDtypedescr(TestCase): + def test_construction(self): d1 = dtype('i4') assert_equal(d1, dtype(int32)) d2 = dtype('f8') assert_equal(d2, dtype(float64)) -class TestFromstring(NumpyTestCase): - def check_binary(self): + +class TestFromstring(TestCase): + def test_binary(self): a = fromstring('\x00\x00\x80?\x00\x00\x00@\x00\x00@@\x00\x00\x80@',dtype=' g2, [g1[i] > g2[i] for i in [0,1,2]]) - def check_mixed(self): + def test_mixed(self): g1 = array(["spam","spa","spammer","and eggs"]) g2 = "spam" assert_array_equal(g1 == g2, [x == g2 for x in g1]) @@ -575,7 +582,7 @@ assert_array_equal(g1 >= g2, [x >= g2 for x in g1]) - def check_unicode(self): + def test_unicode(self): g1 = array([u"This",u"is",u"example"]) g2 = array([u"This",u"was",u"example"]) assert_array_equal(g1 == g2, [g1[i] == g2[i] for i in [0,1,2]]) @@ -586,8 +593,8 @@ assert_array_equal(g1 > g2, [g1[i] > g2[i] for i in [0,1,2]]) -class TestArgmax(NumpyTestCase): - def check_all(self): +class TestArgmax(TestCase): + def test_all(self): a = np.random.normal(0,1,(4,5,6,7,8)) for i in xrange(a.ndim): amax = a.max(i) @@ -596,13 +603,15 @@ axes.remove(i) assert all(amax == aargmax.choose(*a.transpose(i,*axes))) -class TestNewaxis(NumpyTestCase): - def check_basic(self): + +class TestNewaxis(TestCase): + def test_basic(self): sk = array([0,-0.1,0.1]) res = 250*sk[:,newaxis] assert_almost_equal(res.ravel(),250*sk) -class TestClip(NumpyTestCase): + +class TestClip(TestCase): def _check_range(self,x,cmin,cmax): assert np.all(x >= cmin) assert np.all(x <= cmax) @@ -636,7 +645,7 @@ self._check_range(x,expected_min,expected_max) return x - def check_basic(self): + def test_basic(self): for inplace in [False, True]: self._clip_type('float',1024,-12.8,100.2, inplace=inplace) self._clip_type('float',1024,0,0, inplace=inplace) @@ -647,13 +656,13 @@ x = self._clip_type('uint',1024,-120,100,expected_min=0, inplace=inplace) x = self._clip_type('uint',1024,0,0, inplace=inplace) - def check_record_array(self): + def test_record_array(self): rec = np.array([(-5, 2.0, 3.0), (5.0, 4.0, 3.0)], dtype=[('x', '= 3) @@ -662,24 +671,24 @@ x = val.clip(max=4) assert np.all(x <= 4) -class TestPutmask(ParametricTestCase): + +class TestPutmask(TestCase): def tst_basic(self,x,T,mask,val): np.putmask(x,mask,val) assert np.all(x[mask] == T(val)) assert x.dtype == T - def testip_types(self): + def test_ip_types(self): unchecked_types = [str, unicode, np.void, object] x = np.random.random(1000)*100 mask = x < 40 - tests = [] for val in [-100,0,15]: for types in np.sctypes.itervalues(): - tests.extend([(self.tst_basic,x.copy().astype(T),T,mask,val) - for T in types if T not in unchecked_types]) - return tests + for T in types: + if T not in unchecked_types: + yield self.tst_basic,x.copy().astype(T),T,mask,val def 
test_mask_size(self): self.failUnlessRaises(ValueError, np.putmask, @@ -690,8 +699,9 @@ np.putmask(x,[True,False,True],-1) assert_array_equal(x,[-1,2,-1]) - def testip_byteorder(self): - return [(self.tst_byteorder,dtype) for dtype in ('>i4','i4','i4','i4','']: for dtype in [float,int,np.complex]: dt = np.dtype(dtype).newbyteorder(byteorder) x = (np.random.random((4,7))*5).astype(dt) buf = x.tostring() - tests.append((self.tst_basic,buf,x.flat,{'dtype':dt})) - return tests + yield self.tst_basic,buf,x.flat,{'dtype':dt} -class TestResize(NumpyTestCase): + +class TestResize(TestCase): def test_basic(self): x = np.eye(3) x.resize((5,5)) @@ -827,20 +840,23 @@ y = x self.failUnlessRaises(ValueError,x.resize,(5,1)) -class TestRecord(NumpyTestCase): + +class TestRecord(TestCase): def test_field_rename(self): dt = np.dtype([('f',float),('i',int)]) dt.names = ['p','q'] assert_equal(dt.names,['p','q']) -class TestView(NumpyTestCase): + +class TestView(TestCase): def test_basic(self): x = np.array([(1,2,3,4),(5,6,7,8)],dtype=[('r',np.int8),('g',np.int8), ('b',np.int8),('a',np.int8)]) # We must be specific about the endianness here: y = x.view(dtype='0],a[1][V>0],a[2][V>0]]) == a[:,V>0]).all() -class TestBinaryRepr(NumpyTestCase): + +class TestBinaryRepr(TestCase): def test_zero(self): assert_equal(binary_repr(0),'0') @@ -252,6 +255,7 @@ assert_equal(binary_repr(-1), '-1') assert_equal(binary_repr(-1, width=8), '11111111') + def assert_array_strict_equal(x, y): assert_array_equal(x, y) # Check flags @@ -260,7 +264,7 @@ assert x.dtype.isnative == y.dtype.isnative -class TestClip(NumpyTestCase): +class TestClip(TestCase): def setUp(self): self.nr = 5 self.nc = 3 @@ -509,7 +513,7 @@ ac = self.clip(a,m,M) assert_array_strict_equal(ac, act) - def test_type_cast_04(self): + def test_type_cast_05(self): "Test native int32 with double arrays min/max." a = self._generate_int_data(self.nr, self.nc) m = -0.5 @@ -518,7 +522,7 @@ act = self.clip(a, m * zeros(a.shape), M) assert_array_strict_equal(ac, act) - def test_type_cast_05(self): + def test_type_cast_06(self): "Test native with NON native scalar min/max." a = self._generate_data(self.nr, self.nc) m = 0.5 @@ -528,7 +532,7 @@ ac = self.fastclip(a, m_s, M) assert_array_strict_equal(ac, act) - def test_type_cast_06(self): + def test_type_cast_07(self): "Test NON native with native array min/max." a = self._generate_data(self.nr, self.nc) m = -0.5 * ones(a.shape) @@ -539,7 +543,7 @@ ac = self.fastclip(a_s, m, M) assert_array_strict_equal(ac, act) - def test_type_cast_07(self): + def test_type_cast_08(self): "Test NON native with native scalar min/max." a = self._generate_data(self.nr, self.nc) m = -0.5 @@ -550,7 +554,7 @@ act = a_s.clip(m, M) assert_array_strict_equal(ac, act) - def test_type_cast_08(self): + def test_type_cast_09(self): "Test native with NON native array min/max." 
a = self._generate_data(self.nr, self.nc) m = -0.5 * ones(a.shape) @@ -561,7 +565,7 @@ act = self.clip(a, m_s, M) assert_array_strict_equal(ac, act) - def test_type_cast_09(self): + def test_type_cast_10(self): """Test native int32 with float min/max and float out for output argument.""" a = self._generate_int_data(self.nr, self.nc) b = zeros(a.shape, dtype = float32) @@ -571,7 +575,7 @@ ac = self.fastclip(a, m , M, out = b) assert_array_strict_equal(ac, act) - def test_type_cast_10(self): + def test_type_cast_11(self): "Test non native with native scalar, min/max, out non native" a = self._generate_non_native_data(self.nr, self.nc) b = a.copy() @@ -583,7 +587,7 @@ self.clip(a, m, M, out = bt) assert_array_strict_equal(b, bt) - def test_type_cast_11(self): + def test_type_cast_12(self): "Test native int32 input and min/max and float out" a = self._generate_int_data(self.nr, self.nc) b = zeros(a.shape, dtype = float32) @@ -681,7 +685,7 @@ self.assert_(a2 is a) -class test_allclose_inf(ParametricTestCase): +class test_allclose_inf(TestCase): rtol = 1e-5 atol = 1e-8 @@ -691,7 +695,7 @@ def tst_not_allclose(self,x,y): assert not allclose(x,y), "%s and %s shouldn't be close" % (x,y) - def testip_allclose(self): + def test_ip_allclose(self): """Parametric test factory.""" arr = array([100,1000]) aran = arange(125).reshape((5,5,5)) @@ -709,7 +713,7 @@ for (x,y) in data: yield (self.tst_allclose,x,y) - def testip_not_allclose(self): + def test_ip_not_allclose(self): """Parametric test factory.""" aran = arange(125).reshape((5,5,5)) @@ -737,7 +741,8 @@ assert_array_equal(x,array([inf,1])) assert_array_equal(y,array([0,inf])) -class TestStdVar(NumpyTestCase): + +class TestStdVar(TestCase): def setUp(self): self.A = array([1,-1,1,-1]) self.real_var = 1 @@ -745,25 +750,27 @@ def test_basic(self): assert_almost_equal(var(self.A),self.real_var) assert_almost_equal(std(self.A)**2,self.real_var) + def test_ddof1(self): - assert_almost_equal(var(self.A,ddof=1),self.real_var*len(self.A)/float(len(self.A)-1)) - assert_almost_equal(std(self.A,ddof=1)**2,self.real_var*len(self.A)/float(len(self.A)-1)) + assert_almost_equal(var(self.A,ddof=1), + self.real_var*len(self.A)/float(len(self.A)-1)) + assert_almost_equal(std(self.A,ddof=1)**2, + self.real_var*len(self.A)/float(len(self.A)-1)) + def test_ddof2(self): - assert_almost_equal(var(self.A,ddof=2),self.real_var*len(self.A)/float(len(self.A)-2)) - assert_almost_equal(std(self.A,ddof=2)**2,self.real_var*len(self.A)/float(len(self.A)-2)) + assert_almost_equal(var(self.A,ddof=2), + self.real_var*len(self.A)/float(len(self.A)-2)) + assert_almost_equal(std(self.A,ddof=2)**2, + self.real_var*len(self.A)/float(len(self.A)-2)) -class TestStdVarComplex(NumpyTestCase): + +class TestStdVarComplex(TestCase): def test_basic(self): A = array([1,1.j,-1,-1.j]) real_var = 1 assert_almost_equal(var(A),real_var) assert_almost_equal(std(A)**2,real_var) -import sys -if sys.version_info[:2] >= (2, 5): - set_local_path() - from test_errstate import * - restore_path() -if __name__ == '__main__': - NumpyTest().run() +if __name__ == "__main__": + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/core/tests/test_numerictypes.py =================================================================== --- branches/cdavid/numpy/core/tests/test_numerictypes.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/tests/test_numerictypes.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -3,7 +3,6 @@ import numpy from numpy import zeros, ones, array - # This is the structure of the 
table used for plain objects: # # +-+-+-+ @@ -102,7 +101,7 @@ class create_zeros: """Check the creation of heterogeneous arrays zero-valued""" - def check_zeros0D(self): + def test_zeros0D(self): """Check creation of 0-dimensional objects""" h = zeros((), dtype=self._descr) self.assert_(normalize_descr(self._descr) == h.dtype.descr) @@ -112,7 +111,7 @@ # A small check that data is ok assert_equal(h['z'], zeros((), dtype='u1')) - def check_zerosSD(self): + def test_zerosSD(self): """Check creation of single-dimensional objects""" h = zeros((2,), dtype=self._descr) self.assert_(normalize_descr(self._descr) == h.dtype.descr) @@ -122,7 +121,7 @@ # A small check that data is ok assert_equal(h['z'], zeros((2,), dtype='u1')) - def check_zerosMD(self): + def test_zerosMD(self): """Check creation of multi-dimensional objects""" h = zeros((2,3), dtype=self._descr) self.assert_(normalize_descr(self._descr) == h.dtype.descr) @@ -133,11 +132,11 @@ assert_equal(h['z'], zeros((2,3), dtype='u1')) -class test_create_zeros_plain(create_zeros, NumpyTestCase): +class test_create_zeros_plain(create_zeros, TestCase): """Check the creation of heterogeneous arrays zero-valued (plain)""" _descr = Pdescr -class test_create_zeros_nested(create_zeros, NumpyTestCase): +class test_create_zeros_nested(create_zeros, TestCase): """Check the creation of heterogeneous arrays zero-valued (nested)""" _descr = Ndescr @@ -145,7 +144,7 @@ class create_values: """Check the creation of heterogeneous arrays with values""" - def check_tuple(self): + def test_tuple(self): """Check creation from tuples""" h = array(self._buffer, dtype=self._descr) self.assert_(normalize_descr(self._descr) == h.dtype.descr) @@ -154,7 +153,7 @@ else: self.assert_(h.shape == ()) - def check_list_of_tuple(self): + def test_list_of_tuple(self): """Check creation from list of tuples""" h = array([self._buffer], dtype=self._descr) self.assert_(normalize_descr(self._descr) == h.dtype.descr) @@ -163,7 +162,7 @@ else: self.assert_(h.shape == (1,)) - def check_list_of_list_of_tuple(self): + def test_list_of_list_of_tuple(self): """Check creation from list of list of tuples""" h = array([[self._buffer]], dtype=self._descr) self.assert_(normalize_descr(self._descr) == h.dtype.descr) @@ -173,25 +172,25 @@ self.assert_(h.shape == (1,1)) -class test_create_values_plain_single(create_values, NumpyTestCase): +class test_create_values_plain_single(create_values, TestCase): """Check the creation of heterogeneous arrays (plain, single row)""" _descr = Pdescr multiple_rows = 0 _buffer = PbufferT[0] -class test_create_values_plain_multiple(create_values, NumpyTestCase): +class test_create_values_plain_multiple(create_values, TestCase): """Check the creation of heterogeneous arrays (plain, multiple rows)""" _descr = Pdescr multiple_rows = 1 _buffer = PbufferT -class test_create_values_nested_single(create_values, NumpyTestCase): +class test_create_values_nested_single(create_values, TestCase): """Check the creation of heterogeneous arrays (nested, single row)""" _descr = Ndescr multiple_rows = 0 _buffer = NbufferT[0] -class test_create_values_nested_multiple(create_values, NumpyTestCase): +class test_create_values_nested_multiple(create_values, TestCase): """Check the creation of heterogeneous arrays (nested, multiple rows)""" _descr = Ndescr multiple_rows = 1 @@ -205,7 +204,7 @@ class read_values_plain: """Check the reading of values in heterogeneous arrays (plain)""" - def check_access_fields(self): + def test_access_fields(self): h = array(self._buffer, dtype=self._descr) 
if not self.multiple_rows: self.assert_(h.shape == ()) @@ -222,13 +221,13 @@ self._buffer[1][2]], dtype='u1')) -class test_read_values_plain_single(read_values_plain, NumpyTestCase): +class test_read_values_plain_single(read_values_plain, TestCase): """Check the creation of heterogeneous arrays (plain, single row)""" _descr = Pdescr multiple_rows = 0 _buffer = PbufferT[0] -class test_read_values_plain_multiple(read_values_plain, NumpyTestCase): +class test_read_values_plain_multiple(read_values_plain, TestCase): """Check the values of heterogeneous arrays (plain, multiple rows)""" _descr = Pdescr multiple_rows = 1 @@ -238,7 +237,7 @@ """Check the reading of values in heterogeneous arrays (nested)""" - def check_access_top_fields(self): + def test_access_top_fields(self): """Check reading the top fields of a nested array""" h = array(self._buffer, dtype=self._descr) if not self.multiple_rows: @@ -256,7 +255,7 @@ self._buffer[1][5]], dtype='u1')) - def check_nested1_acessors(self): + def test_nested1_acessors(self): """Check reading the nested fields of a nested array (1st level)""" h = array(self._buffer, dtype=self._descr) if not self.multiple_rows: @@ -286,7 +285,7 @@ self._buffer[1][3][1]], dtype='c16')) - def check_nested2_acessors(self): + def test_nested2_acessors(self): """Check reading the nested fields of a nested array (2nd level)""" h = array(self._buffer, dtype=self._descr) if not self.multiple_rows: @@ -304,7 +303,7 @@ self._buffer[1][1][2][3]], dtype='u4')) - def check_nested1_descriptor(self): + def test_nested1_descriptor(self): """Check access nested descriptors of a nested array (1st level)""" h = array(self._buffer, dtype=self._descr) self.assert_(h.dtype['Info']['value'].name == 'complex128') @@ -312,53 +311,49 @@ self.assert_(h.dtype['info']['Name'].name == 'unicode256') self.assert_(h.dtype['info']['Value'].name == 'complex128') - def check_nested2_descriptor(self): + def test_nested2_descriptor(self): """Check access nested descriptors of a nested array (2nd level)""" h = array(self._buffer, dtype=self._descr) self.assert_(h.dtype['Info']['Info2']['value'].name == 'void256') self.assert_(h.dtype['Info']['Info2']['z3'].name == 'void64') -class test_read_values_nested_single(read_values_nested, NumpyTestCase): +class test_read_values_nested_single(read_values_nested, TestCase): """Check the values of heterogeneous arrays (nested, single row)""" _descr = Ndescr multiple_rows = False _buffer = NbufferT[0] -class test_read_values_nested_multiple(read_values_nested, NumpyTestCase): +class test_read_values_nested_multiple(read_values_nested, TestCase): """Check the values of heterogeneous arrays (nested, multiple rows)""" _descr = Ndescr multiple_rows = True _buffer = NbufferT -class TestEmptyField(NumpyTestCase): - def check_assign(self): +class TestEmptyField(TestCase): + def test_assign(self): a = numpy.arange(10, dtype=numpy.float32) a.dtype = [("int", "<0i4"),("float", "<2f4")] assert(a['int'].shape == (5,0)) assert(a['float'].shape == (5,2)) -class TestCommonType(NumpyTestCase): - def check_scalar_loses1(self): +class TestCommonType(TestCase): + def test_scalar_loses1(self): res = numpy.find_common_type(['f4','f4','i4'],['f8']) assert(res == 'f4') - def check_scalar_loses2(self): + def test_scalar_loses2(self): res = numpy.find_common_type(['f4','f4'],['i8']) assert(res == 'f4') - def check_scalar_wins(self): + def test_scalar_wins(self): res = numpy.find_common_type(['f4','f4','i4'],['c8']) assert(res == 'c8') - def check_scalar_wins2(self): + def 
test_scalar_wins2(self): res = numpy.find_common_type(['u4','i4','i4'],['f4']) assert(res == 'f8') - def check_scalar_wins3(self): # doesn't go up to 'f16' on purpose + def test_scalar_wins3(self): # doesn't go up to 'f16' on purpose res = numpy.find_common_type(['u8','i8','i8'],['f8']) assert(res == 'f8') - - - - if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/core/tests/test_records.py =================================================================== --- branches/cdavid/numpy/core/tests/test_records.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/tests/test_records.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,29 +1,33 @@ - +from os import path from numpy.testing import * set_package_path() -from os import path -import numpy.core;reload(numpy.core) +import numpy.core +reload(numpy.core) +import numpy from numpy.core import * restore_path() -class TestFromrecords(NumpyTestCase): - def check_fromrecords(self): - r = rec.fromrecords([[456,'dbe',1.2],[2,'de',1.3]],names='col1,col2,col3') +class TestFromrecords(TestCase): + def test_fromrecords(self): + r = rec.fromrecords([[456,'dbe',1.2],[2,'de',1.3]], + names='col1,col2,col3') assert_equal(r[0].item(),(456, 'dbe', 1.2)) - def check_method_array(self): + def test_method_array(self): r = rec.array('abcdefg'*100,formats='i2,a3,i4',shape=3,byteorder='big') assert_equal(r[1].item(),(25444, 'efg', 1633837924)) - def check_method_array2(self): - r=rec.array([(1,11,'a'),(2,22,'b'),(3,33,'c'),(4,44,'d'),(5,55,'ex'),(6,66,'f'),(7,77,'g')],formats='u1,f4,a1') + def test_method_array2(self): + r=rec.array([(1,11,'a'),(2,22,'b'),(3,33,'c'),(4,44,'d'),(5,55,'ex'), + (6,66,'f'),(7,77,'g')],formats='u1,f4,a1') assert_equal(r[1].item(),(2, 22.0, 'b')) - def check_recarray_slices(self): - r=rec.array([(1,11,'a'),(2,22,'b'),(3,33,'c'),(4,44,'d'),(5,55,'ex'),(6,66,'f'),(7,77,'g')],formats='u1,f4,a1') + def test_recarray_slices(self): + r=rec.array([(1,11,'a'),(2,22,'b'),(3,33,'c'),(4,44,'d'),(5,55,'ex'), + (6,66,'f'),(7,77,'g')],formats='u1,f4,a1') assert_equal(r[1::2][1].item(),(4, 44.0, 'd')) - def check_recarray_fromarrays(self): + def test_recarray_fromarrays(self): x1 = array([1,2,3,4]) x2 = array(['a','dd','xyz','12']) x3 = array([1.1,2,3,4]) @@ -32,14 +36,14 @@ x1[1] = 34 assert_equal(r.a,array([1,2,3,4])) - def check_recarray_fromfile(self): + def test_recarray_fromfile(self): data_dir = path.join(path.dirname(__file__),'data') filename = path.join(data_dir,'recarray_from_file.fits') fd = open(filename) fd.seek(2880*2) r = rec.fromfile(fd, formats='f8,i4,a5', shape=3, byteorder='big') - def check_recarray_from_obj(self): + def test_recarray_from_obj(self): count = 10 a = zeros(count, dtype='O') b = zeros(count, dtype='f8') @@ -54,7 +58,7 @@ assert(mine.data1[i]==0.0) assert(mine.data2[i]==0.0) - def check_recarray_from_names(self): + def test_recarray_from_names(self): ra = rec.array([ (1, 'abc', 3.7000002861022949, 0), (2, 'xy', 6.6999998092651367, 1), @@ -70,7 +74,7 @@ for k in xrange(len(ra)): assert ra[k].item() == pa[k].item() - def check_recarray_conflict_fields(self): + def test_recarray_conflict_fields(self): ra = rec.array([(1,'abc',2.3),(2,'xyz',4.2), (3,'wrs',1.3)], names='field, shape, mean') @@ -85,7 +89,7 @@ assert_array_equal(ra['field'], [[5,5,5]]) assert callable(ra.field) -class TestRecord(NumpyTestCase): +class TestRecord(TestCase): def setUp(self): self.data = rec.fromrecords([(1,2,3),(4,5,6)], dtype=[("col1", "= rc) -class 
TestRegression(NumpyTestCase): - def check_invalid_round(self,level=rlevel): +class TestRegression(TestCase): + def test_invalid_round(self,level=rlevel): """Ticket #3""" v = 4.7599999999999998 assert_array_equal(np.array([v]),np.array(v)) - def check_mem_empty(self,level=rlevel): + def test_mem_empty(self,level=rlevel): """Ticket #7""" np.empty((1,),dtype=[('x',np.int64)]) - def check_pickle_transposed(self,level=rlevel): + def test_pickle_transposed(self,level=rlevel): """Ticket #16""" a = np.transpose(np.array([[2,9],[7,0],[3,8]])) f = StringIO() @@ -44,43 +42,43 @@ f.close() assert_array_equal(a,b) - def check_masked_array_create(self,level=rlevel): + def test_masked_array_create(self,level=rlevel): """Ticket #17""" x = np.ma.masked_array([0,1,2,3,0,4,5,6],mask=[0,0,0,1,1,1,0,0]) assert_array_equal(np.ma.nonzero(x),[[1,2,6,7]]) - def check_poly1d(self,level=rlevel): + def test_poly1d(self,level=rlevel): """Ticket #28""" assert_equal(np.poly1d([1]) - np.poly1d([1,0]), np.poly1d([-1,1])) - def check_typeNA(self,level=rlevel): + def test_typeNA(self,level=rlevel): """Ticket #31""" assert_equal(np.typeNA[np.int64],'Int64') assert_equal(np.typeNA[np.uint64],'UInt64') - def check_dtype_names(self,level=rlevel): + def test_dtype_names(self,level=rlevel): """Ticket #35""" dt = np.dtype([(('name','label'),np.int32,3)]) - def check_reduce(self,level=rlevel): + def test_reduce(self,level=rlevel): """Ticket #40""" assert_almost_equal(np.add.reduce([1.,.5],dtype=None), 1.5) - def check_zeros_order(self,level=rlevel): + def test_zeros_order(self,level=rlevel): """Ticket #43""" np.zeros([3], int, 'C') np.zeros([3], order='C') np.zeros([3], int, order='C') - def check_sort_bigendian(self,level=rlevel): + def test_sort_bigendian(self,level=rlevel): """Ticket #47""" a = np.linspace(0, 10, 11) c = a.astype(np.dtype('f8') b = np.arange(10.,dtype=' 0.5)) assert(np.all(b[yb] > 0.5)) - def check_mem_dot(self,level=rlevel): + def test_mem_dot(self,level=rlevel): """Ticket #106""" x = np.random.randn(0,1) y = np.random.randn(10,1) z = np.dot(x, np.transpose(y)) - def check_arange_endian(self,level=rlevel): + def test_arange_endian(self,level=rlevel): """Ticket #111""" ref = np.arange(10) x = np.arange(10,dtype=' 8: # a = np.exp(np.array([1000],dtype=np.longfloat)) # assert(str(a)[1:9] == str(a[0])[:8]) - def check_argmax(self,level=rlevel): + def test_argmax(self,level=rlevel): """Ticket #119""" a = np.random.normal(0,1,(4,5,6,7,8)) for i in xrange(a.ndim): aargmax = a.argmax(i) - def check_matrix_properties(self,level=rlevel): + def test_matrix_properties(self,level=rlevel): """Ticket #125""" a = np.matrix([1.0],dtype=float) assert(type(a.real) is np.matrix) @@ -252,34 +250,34 @@ assert(type(c) is np.matrix) assert(type(d) is np.matrix) - def check_mem_divmod(self,level=rlevel): + def test_mem_divmod(self,level=rlevel): """Ticket #126""" for i in range(10): divmod(np.array([i])[0],10) - def check_hstack_invalid_dims(self,level=rlevel): + def test_hstack_invalid_dims(self,level=rlevel): """Ticket #128""" x = np.arange(9).reshape((3,3)) y = np.array([0,0,0]) self.failUnlessRaises(ValueError,np.hstack,(x,y)) - def check_squeeze_type(self,level=rlevel): + def test_squeeze_type(self,level=rlevel): """Ticket #133""" a = np.array([3]) b = np.array(3) assert(type(a.squeeze()) is np.ndarray) assert(type(b.squeeze()) is np.ndarray) - def check_add_identity(self,level=rlevel): + def test_add_identity(self,level=rlevel): """Ticket #143""" assert_equal(0,np.add.identity) - def check_binary_repr_0(self,level=rlevel): + def 
test_binary_repr_0(self,level=rlevel): """Ticket #151""" assert_equal('0',np.binary_repr(0)) - def check_rec_iterate(self,level=rlevel): + def test_rec_iterate(self,level=rlevel): """Ticket #160""" descr = np.dtype([('i',int),('f',float),('s','|S3')]) x = np.rec.array([(1,1.1,'1.0'), @@ -287,19 +285,19 @@ x[0].tolist() [i for i in x[0]] - def check_unicode_string_comparison(self,level=rlevel): + def test_unicode_string_comparison(self,level=rlevel): """Ticket #190""" a = np.array('hello',np.unicode_) b = np.array('world') a == b - def check_tostring_FORTRANORDER_discontiguous(self,level=rlevel): + def test_tostring_FORTRANORDER_discontiguous(self,level=rlevel): """Fix in r2836""" # Create discontiguous Fortran-ordered array x = np.array(np.random.rand(3,3),order='F')[:,:2] assert_array_almost_equal(x.ravel(),np.fromstring(x.tostring())) - def check_flat_assignment(self,level=rlevel): + def test_flat_assignment(self,level=rlevel): """Correct behaviour of ticket #194""" x = np.empty((3,1)) x.flat = np.arange(3) @@ -307,7 +305,7 @@ x.flat = np.arange(3,dtype=float) assert_array_almost_equal(x,[[0],[1],[2]]) - def check_broadcast_flat_assignment(self,level=rlevel): + def test_broadcast_flat_assignment(self,level=rlevel): """Ticket #194""" x = np.empty((3,1)) def bfa(): x[:] = np.arange(3) @@ -315,7 +313,7 @@ self.failUnlessRaises(ValueError, bfa) self.failUnlessRaises(ValueError, bfb) - def check_unpickle_dtype_with_object(self,level=rlevel): + def test_unpickle_dtype_with_object(self,level=rlevel): """Implemented in r2840""" dt = np.dtype([('x',int),('y',np.object_),('z','O')]) f = StringIO() @@ -325,7 +323,7 @@ f.close() assert_equal(dt,dt_) - def check_mem_array_creation_invalid_specification(self,level=rlevel): + def test_mem_array_creation_invalid_specification(self,level=rlevel): """Ticket #196""" dt = np.dtype([('x',int),('y',np.object_)]) # Wrong way @@ -333,7 +331,7 @@ # Correct way np.array([(1,'object')],dt) - def check_recarray_single_element(self,level=rlevel): + def test_recarray_single_element(self,level=rlevel): """Ticket #202""" a = np.array([1,2,3],dtype=np.int32) b = a.copy() @@ -341,24 +339,24 @@ assert_array_equal(a,b) assert_equal(a,r[0][0]) - def check_zero_sized_array_indexing(self,level=rlevel): + def test_zero_sized_array_indexing(self,level=rlevel): """Ticket #205""" tmp = np.array([]) def index_tmp(): tmp[np.array(10)] self.failUnlessRaises(IndexError, index_tmp) - def check_unique_zero_sized(self,level=rlevel): + def test_unique_zero_sized(self,level=rlevel): """Ticket #205""" assert_array_equal([], np.unique(np.array([]))) - def check_chararray_rstrip(self,level=rlevel): + def test_chararray_rstrip(self,level=rlevel): """Ticket #222""" x = np.chararray((1,),5) x[0] = 'a ' x = x.rstrip() assert_equal(x[0], 'a') - def check_object_array_shape(self,level=rlevel): + def test_object_array_shape(self,level=rlevel): """Ticket #239""" assert_equal(np.array([[1,2],3,4],dtype=object).shape, (3,)) assert_equal(np.array([[1,2],[3,4]],dtype=object).shape, (2,2)) @@ -367,29 +365,29 @@ assert_equal(np.array([[],[],[]],dtype=object).shape, (3,0)) assert_equal(np.array([[3,4],[5,6],None],dtype=object).shape, (3,)) - def check_mem_around(self,level=rlevel): + def test_mem_around(self,level=rlevel): """Ticket #243""" x = np.zeros((1,)) y = [0] decimal = 6 np.around(abs(x-y),decimal) <= 10.0**(-decimal) - def check_character_array_strip(self,level=rlevel): + def test_character_array_strip(self,level=rlevel): """Ticket #246""" x = np.char.array(("x","x ","x ")) for c in x: 
assert_equal(c,"x") - def check_lexsort(self,level=rlevel): + def test_lexsort(self,level=rlevel): """Lexsort memory error""" v = np.array([1,2,3,4,5,6,7,8,9,10]) assert_equal(np.lexsort(v),0) - def check_pickle_dtype(self,level=rlevel): + def test_pickle_dtype(self,level=rlevel): """Ticket #251""" import pickle pickle.dumps(np.float) - def check_masked_array_multiply(self,level=rlevel): + def test_masked_array_multiply(self,level=rlevel): """Ticket #254""" a = np.ma.zeros((4,1)) a[2,0] = np.ma.masked @@ -397,41 +395,41 @@ a*b b*a - def check_swap_real(self, level=rlevel): + def test_swap_real(self, level=rlevel): """Ticket #265""" assert_equal(np.arange(4,dtype='>c8').imag.max(),0.0) assert_equal(np.arange(4,dtype=' 1 and x['two'] > 2) - def check_method_args(self, level=rlevel): + def test_method_args(self, level=rlevel): # Make sure methods and functions have same default axis # keyword and arguments funcs1= ['argmax', 'argmin', 'sum', ('product', 'prod'), @@ -470,17 +468,17 @@ res2 = getattr(np, func)(arr1, arr2) assert abs(res1-res2).max() < 1e-8, func - def check_mem_lexsort_strings(self, level=rlevel): + def test_mem_lexsort_strings(self, level=rlevel): """Ticket #298""" lst = ['abc','cde','fgh'] np.lexsort((lst,)) - def check_fancy_index(self, level=rlevel): + def test_fancy_index(self, level=rlevel): """Ticket #302""" x = np.array([1,2])[np.array([0])] assert_equal(x.shape,(1,)) - def check_recarray_copy(self, level=rlevel): + def test_recarray_copy(self, level=rlevel): """Ticket #312""" dt = [('x',np.int16),('y',np.float64)] ra = np.array([(1,2.3)], dtype=dt) @@ -488,68 +486,68 @@ rb['x'] = 2. assert ra['x'] != rb['x'] - def check_rec_fromarray(self, level=rlevel): + def test_rec_fromarray(self, level=rlevel): """Ticket #322""" x1 = np.array([[1,2],[3,4],[5,6]]) x2 = np.array(['a','dd','xyz']) x3 = np.array([1.1,2,3]) np.rec.fromarrays([x1,x2,x3], formats="(2,)i4,a3,f8") - def check_object_array_assign(self, level=rlevel): + def test_object_array_assign(self, level=rlevel): x = np.empty((2,2),object) x.flat[2] = (1,2,3) assert_equal(x.flat[2],(1,2,3)) - def check_ndmin_float64(self, level=rlevel): + def test_ndmin_float64(self, level=rlevel): """Ticket #324""" x = np.array([1,2,3],dtype=np.float64) assert_equal(np.array(x,dtype=np.float32,ndmin=2).ndim,2) assert_equal(np.array(x,dtype=np.float64,ndmin=2).ndim,2) - def check_mem_vectorise(self, level=rlevel): + def test_mem_vectorise(self, level=rlevel): """Ticket #325""" vt = np.vectorize(lambda *args: args) vt(np.zeros((1,2,1)), np.zeros((2,1,1)), np.zeros((1,1,2))) vt(np.zeros((1,2,1)), np.zeros((2,1,1)), np.zeros((1,1,2)), np.zeros((2,2))) - def check_mem_axis_minimization(self, level=rlevel): + def test_mem_axis_minimization(self, level=rlevel): """Ticket #327""" data = np.arange(5) data = np.add.outer(data,data) - def check_mem_float_imag(self, level=rlevel): + def test_mem_float_imag(self, level=rlevel): """Ticket #330""" np.float64(1.0).imag - def check_dtype_tuple(self, level=rlevel): + def test_dtype_tuple(self, level=rlevel): """Ticket #334""" assert np.dtype('i4') == np.dtype(('i4',())) - def check_dtype_posttuple(self, level=rlevel): + def test_dtype_posttuple(self, level=rlevel): """Ticket #335""" np.dtype([('col1', '()i4')]) - def check_mgrid_single_element(self, level=rlevel): + def test_mgrid_single_element(self, level=rlevel): """Ticket #339""" assert_array_equal(np.mgrid[0:0:1j],[0]) assert_array_equal(np.mgrid[0:0],[]) - def check_numeric_carray_compare(self, level=rlevel): + def 
test_numeric_carray_compare(self, level=rlevel): """Ticket #341""" assert_equal(np.array([ 'X' ], 'c'),'X') - def check_string_array_size(self, level=rlevel): + def test_string_array_size(self, level=rlevel): """Ticket #342""" self.failUnlessRaises(ValueError, np.array,[['X'],['X','X','X']],'|S1') - def check_dtype_repr(self, level=rlevel): + def test_dtype_repr(self, level=rlevel): """Ticket #344""" dt1=np.dtype(('uint32', 2)) dt2=np.dtype(('uint32', (2,))) assert_equal(dt1.__repr__(), dt2.__repr__()) - def check_reshape_order(self, level=rlevel): + def test_reshape_order(self, level=rlevel): """Make sure reshape order works.""" a = np.arange(6).reshape(2,3,order='F') assert_equal(a,[[0,2,4],[1,3,5]]) @@ -557,22 +555,22 @@ b = a[:,1] assert_equal(b.reshape(2,2,order='F'), [[2,6],[4,8]]) - def check_repeat_discont(self, level=rlevel): + def test_repeat_discont(self, level=rlevel): """Ticket #352""" a = np.arange(12).reshape(4,3)[:,2] assert_equal(a.repeat(3), [2,2,2,5,5,5,8,8,8,11,11,11]) - def check_array_index(self, level=rlevel): + def test_array_index(self, level=rlevel): """Make sure optimization is not called in this case.""" a = np.array([1,2,3]) a2 = np.array([[1,2,3]]) assert_equal(a[np.where(a==3)], a2[np.where(a2==3)]) - def check_object_argmax(self, level=rlevel): + def test_object_argmax(self, level=rlevel): a = np.array([1,2,3],dtype=object) assert a.argmax() == 2 - def check_recarray_fields(self, level=rlevel): + def test_recarray_fields(self, level=rlevel): """Ticket #372""" dt0 = np.dtype([('f0','i4'),('f1','i4')]) dt1 = np.dtype([('f0','i8'),('f1','i8')]) @@ -583,33 +581,33 @@ np.rec.fromarrays([(1,2),(3,4)])]: assert(a.dtype in [dt0,dt1]) - def check_random_shuffle(self, level=rlevel): + def test_random_shuffle(self, level=rlevel): """Ticket #374""" a = np.arange(5).reshape((5,1)) b = a.copy() np.random.shuffle(b) assert_equal(np.sort(b, axis=0),a) - def check_refcount_vectorize(self, level=rlevel): + def test_refcount_vectorize(self, level=rlevel): """Ticket #378""" def p(x,y): return 123 v = np.vectorize(p) assert_valid_refcount(v) - def check_poly1d_nan_roots(self, level=rlevel): + def test_poly1d_nan_roots(self, level=rlevel): """Ticket #396""" p = np.poly1d([np.nan,np.nan,1], r=0) self.failUnlessRaises(np.linalg.LinAlgError,getattr,p,"r") - def check_refcount_vdot(self, level=rlevel): + def test_refcount_vdot(self, level=rlevel): """Changeset #3443""" assert_valid_refcount(np.vdot) - def check_startswith(self, level=rlevel): + def test_startswith(self, level=rlevel): ca = np.char.array(['Hi','There']) assert_equal(ca.startswith('H'),[True,False]) - def check_noncommutative_reduce_accumulate(self, level=rlevel): + def test_noncommutative_reduce_accumulate(self, level=rlevel): """Ticket #413""" tosubtract = np.arange(5) todivide = np.array([2.0, 0.5, 0.25]) @@ -620,44 +618,44 @@ assert_array_equal(np.divide.accumulate(todivide), np.array([2., 4., 16.])) - def check_mem_polymul(self, level=rlevel): + def test_mem_polymul(self, level=rlevel): """Ticket #448""" np.polymul([],[1.]) - def check_convolve_empty(self, level=rlevel): + def test_convolve_empty(self, level=rlevel): """Convolve should raise an error for empty input array.""" self.failUnlessRaises(AssertionError,np.convolve,[],[1]) self.failUnlessRaises(AssertionError,np.convolve,[1],[]) - def check_multidim_byteswap(self, level=rlevel): + def test_multidim_byteswap(self, level=rlevel): """Ticket #449""" r=np.array([(1,(0,1,2))], dtype="i2,3i2") assert_array_equal(r.byteswap(), 
np.array([(256,(0,256,512))],r.dtype)) - def check_string_NULL(self, level=rlevel): + def test_string_NULL(self, level=rlevel): """Changeset 3557""" assert_equal(np.array("a\x00\x0b\x0c\x00").item(), 'a\x00\x0b\x0c') - def check_mem_string_concat(self, level=rlevel): + def test_mem_string_concat(self, level=rlevel): """Ticket #469""" x = np.array([]) np.append(x,'asdasd\tasdasd') - def check_matrix_multiply_by_1d_vector(self, level=rlevel) : + def test_matrix_multiply_by_1d_vector(self, level=rlevel) : """Ticket #473""" def mul() : np.mat(np.eye(2))*np.ones(2) self.failUnlessRaises(ValueError,mul) - def check_junk_in_string_fields_of_recarray(self, level=rlevel): + def test_junk_in_string_fields_of_recarray(self, level=rlevel): """Ticket #483""" r = np.array([['abc']], dtype=[('var1', '|S20')]) assert str(r['var1'][0][0]) == 'abc' - def check_take_output(self, level=rlevel): + def test_take_output(self, level=rlevel): """Ensure that 'take' honours output parameter.""" x = np.arange(12).reshape((3,4)) a = np.take(x,[0,2],axis=1) @@ -665,7 +663,7 @@ np.take(x,[0,2],axis=1,out=b) assert_array_equal(a,b) - def check_array_str_64bit(self, level=rlevel): + def test_array_str_64bit(self, level=rlevel): """Ticket #501""" s = np.array([1, np.nan],dtype=np.float64) errstate = np.seterr(all='raise') @@ -674,7 +672,7 @@ finally: np.seterr(**errstate) - def check_frompyfunc_endian(self, level=rlevel): + def test_frompyfunc_endian(self, level=rlevel): """Ticket #503""" from math import radians uradians = np.frompyfunc(radians, 1, 1) @@ -683,66 +681,66 @@ assert_almost_equal(uradians(big_endian).astype(float), uradians(little_endian).astype(float)) - def check_mem_string_arr(self, level=rlevel): + def test_mem_string_arr(self, level=rlevel): """Ticket #514""" s = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" t = [] np.hstack((t, s )) - def check_arr_transpose(self, level=rlevel): + def test_arr_transpose(self, level=rlevel): """Ticket #516""" x = np.random.rand(*(2,)*16) y = x.transpose(range(16)) - def check_string_mergesort(self, level=rlevel): + def test_string_mergesort(self, level=rlevel): """Ticket #540""" x = np.array(['a']*32) assert_array_equal(x.argsort(kind='m'), np.arange(32)) - def check_argmax_byteorder(self, level=rlevel): + def test_argmax_byteorder(self, level=rlevel): """Ticket #546""" a = np.arange(3, dtype='>f') assert a[a.argmax()] == a.max() - def check_numeric_random(self, level=rlevel): + def test_numeric_random(self, level=rlevel): """Ticket #552""" from numpy.oldnumeric.random_array import randint randint(0,50,[2,3]) - def check_poly_div(self, level=rlevel): + def test_poly_div(self, level=rlevel): """Ticket #553""" u = np.poly1d([1,2,3]) v = np.poly1d([1,2,3,4,5]) q,r = np.polydiv(u,v) assert_equal(q*v + r, u) - def check_poly_eq(self, level=rlevel): + def test_poly_eq(self, level=rlevel): """Ticket #554""" x = np.poly1d([1,2,3]) y = np.poly1d([3,4]) assert x != y assert x == x - def check_rand_seed(self, level=rlevel): + def test_rand_seed(self, level=rlevel): """Ticket #555""" for l in np.arange(4): np.random.seed(l) - def check_mem_deallocation_leak(self, level=rlevel): + def test_mem_deallocation_leak(self, level=rlevel): """Ticket #562""" a = np.zeros(5,dtype=float) b = np.array(a,dtype=float) del a, b - def check_mem_insert(self, level=rlevel): + def test_mem_insert(self, level=rlevel): """Ticket #572""" np.lib.place(1,1,1) - def check_mem_on_invalid_dtype(self): + def test_mem_on_invalid_dtype(self): "Ticket #583" self.failUnlessRaises(ValueError, np.fromiter, 
[['12',''],['13','']], str) - def check_dot_negative_stride(self, level=rlevel): + def test_dot_negative_stride(self, level=rlevel): """Ticket #588""" x = np.array([[1,5,25,125.,625]]) y = np.array([[20.],[160.],[640.],[1280.],[1024.]]) @@ -750,14 +748,14 @@ y2 = y[::-1] assert_equal(np.dot(x,z),np.dot(x,y2)) - def check_object_casting(self, level=rlevel): + def test_object_casting(self, level=rlevel): def rs(): x = np.ones([484,286]) y = np.zeros([484,286]) x |= y self.failUnlessRaises(TypeError,rs) - def check_unicode_scalar(self, level=rlevel): + def test_unicode_scalar(self, level=rlevel): """Ticket #600""" import cPickle x = np.array(["DROND", "DROND1"], dtype="U6") @@ -765,7 +763,7 @@ new = cPickle.loads(cPickle.dumps(el)) assert_equal(new, el) - def check_arange_non_native_dtype(self, level=rlevel): + def test_arange_non_native_dtype(self, level=rlevel): """Ticket #616""" for T in ('>f4','0)]=1.0 self.failUnlessRaises(ValueError,ia,x,s) - def check_mem_scalar_indexing(self, level=rlevel): + def test_mem_scalar_indexing(self, level=rlevel): """Ticket #603""" x = np.array([0],dtype=float) index = np.array(0,dtype=np.int32) x[index] - def check_binary_repr_0_width(self, level=rlevel): + def test_binary_repr_0_width(self, level=rlevel): assert_equal(np.binary_repr(0,width=3),'000') - def check_fromstring(self, level=rlevel): + def test_fromstring(self, level=rlevel): assert_equal(np.fromstring("12:09:09", dtype=int, sep=":"), [12,9,9]) - def check_searchsorted_variable_length(self, level=rlevel): + def test_searchsorted_variable_length(self, level=rlevel): x = np.array(['a','aa','b']) y = np.array(['d','e']) assert_equal(x.searchsorted(y), [3,3]) - def check_string_argsort_with_zeros(self, level=rlevel): + def test_string_argsort_with_zeros(self, level=rlevel): """Check argsort for strings containing zeros.""" x = np.fromstring("\x00\x02\x00\x01", dtype="|S2") assert_array_equal(x.argsort(kind='m'), np.array([1,0])) assert_array_equal(x.argsort(kind='q'), np.array([1,0])) - def check_string_sort_with_zeros(self, level=rlevel): + def test_string_sort_with_zeros(self, level=rlevel): """Check sort for strings containing zeros.""" x = np.fromstring("\x00\x02\x00\x01", dtype="|S2") y = np.fromstring("\x00\x01\x00\x02", dtype="|S2") assert_array_equal(np.sort(x, kind="q"), y) - def check_hist_bins_as_list(self, level=rlevel): + def test_hist_bins_as_list(self, level=rlevel): """Ticket #632""" hist,edges = np.histogram([1,2,3,4],[1,2]) assert_array_equal(hist,[1,3]) assert_array_equal(edges,[1,2]) - def check_copy_detection_zero_dim(self, level=rlevel): + def test_copy_detection_zero_dim(self, level=rlevel): """Ticket #658""" np.indices((0,3,4)).T.reshape(-1,3) - def check_flat_byteorder(self, level=rlevel): + def test_flat_byteorder(self, level=rlevel): """Ticket #657""" x = np.arange(10) assert_array_equal(x.astype('>i4'),x.astype('i4').flat[:],x.astype('i4')): x = np.array([-1,0,1],dtype=dt) assert_equal(x.flat[0].dtype, x[0].dtype) - def check_copy_detection_corner_case(self, level=rlevel): + def test_copy_detection_corner_case(self, level=rlevel): """Ticket #658""" np.indices((0,3,4)).T.reshape(-1,3) - def check_object_array_refcounting(self, level=rlevel): + def test_object_array_refcounting(self, level=rlevel): """Ticket #633""" if not hasattr(sys, 'getrefcount'): return @@ -942,7 +940,7 @@ assert cnt(a) == cnt0_a + 5 + 2 assert cnt(b) == cnt0_b + 5 + 3 - def check_mem_custom_float_to_array(self, level=rlevel): + def test_mem_custom_float_to_array(self, level=rlevel): """Ticket 702""" 
class MyFloat: def __float__(self): @@ -951,7 +949,7 @@ tmp = np.atleast_1d([MyFloat()]) tmp2 = tmp.astype(float) - def check_object_array_refcount_self_assign(self, level=rlevel): + def test_object_array_refcount_self_assign(self, level=rlevel): """Ticket #711""" class VictimObject(object): deleted = False @@ -966,23 +964,23 @@ arr[:] = arr # trying to induce a segfault by doing it again... assert not arr[0].deleted - def check_mem_fromiter_invalid_dtype_string(self, level=rlevel): + def test_mem_fromiter_invalid_dtype_string(self, level=rlevel): x = [1,2,3] self.failUnlessRaises(ValueError, np.fromiter, [xi for xi in x], dtype='S') - def check_reduce_big_object_array(self, level=rlevel): + def test_reduce_big_object_array(self, level=rlevel): """Ticket #713""" oldsize = np.setbufsize(10*16) a = np.array([None]*161, object) assert not np.any(a) np.setbufsize(oldsize) - def check_mem_0d_array_index(self, level=rlevel): + def test_mem_0d_array_index(self, level=rlevel): """Ticket #714""" np.zeros(10)[np.array(0)] - def check_floats_from_string(self, level=rlevel): + def test_floats_from_string(self, level=rlevel): """Ticket #640, floats from string""" fsingle = np.single('1.234') fdouble = np.double('1.234') @@ -991,7 +989,7 @@ assert_almost_equal(fdouble, 1.234) assert_almost_equal(flongdouble, 1.234) - def check_complex_dtype_printing(self, level=rlevel): + def test_complex_dtype_printing(self, level=rlevel): dt = np.dtype([('top', [('tiles', ('>f4', (64, 64)), (1,)), ('rtile', '>f4', (64, 36))], (3,)), ('bottom', [('bleft', ('>f4', (8, 64)), (1,)), @@ -1002,7 +1000,7 @@ "('bottom', [('bleft', ('>f4', (8, 64)), (1,)), " "('bright', '>f4', (8, 36))])]") - def check_nonnative_endian_fill(self, level=rlevel): + def test_nonnative_endian_fill(self, level=rlevel): """ Non-native endian arrays were incorrectly filled with scalars before r5034. """ @@ -1014,11 +1012,11 @@ x.fill(1) assert_equal(x, np.array([1], dtype=dtype)) - def check_asfarray_none(self, level=rlevel): + def test_asfarray_none(self, level=rlevel): """Test for changeset r5065""" assert_array_equal(np.array([np.nan]), np.asfarray([None])) - def check_dot_alignment_sse2(self, level=rlevel): + def test_dot_alignment_sse2(self, level=rlevel): """Test for ticket #551, changeset r5140""" x = np.zeros((30,40)) y = pickle.loads(pickle.dumps(x)) @@ -1027,8 +1025,8 @@ # This shouldn't cause a segmentation fault: np.dot(z, y) - def check_astype_copy(self, level=rlevel): - """Ticket 788, changeset r5155""" + def test_astype_copy(self, level=rlevel): + """Ticket #788, changeset r5155""" # The test data file was generated by scipy.io.savemat. # The dtype is float64, but the isbuiltin attribute is 0. data_dir = path.join(path.dirname(__file__), 'data') @@ -1038,5 +1036,122 @@ assert (xp.__array_interface__['data'][0] != xpd.__array_interface__['data'][0]) + def test_compress_small_type(self, level=rlevel): + """Ticket #789, changeset 5217. 
+ """ + # compress with out argument segfaulted if cannot cast safely + import numpy as np + a = np.array([[1, 2], [3, 4]]) + b = np.zeros((2, 1), dtype = np.single) + try: + a.compress([True, False], axis = 1, out = b) + raise AssertionError("compress with an out which cannot be " \ + "safely casted should not return "\ + "successfully") + except TypeError: + pass + + def test_attributes(self, level=rlevel): + """Ticket #791 + """ + import numpy as np + class TestArray(np.ndarray): + def __new__(cls, data, info): + result = np.array(data) + result = result.view(cls) + result.info = info + return result + def __array_finalize__(self, obj): + self.info = getattr(obj, 'info', '') + dat = TestArray([[1,2,3,4],[5,6,7,8]],'jubba') + assert dat.info == 'jubba' + dat.resize((4,2)) + assert dat.info == 'jubba' + dat.sort() + assert dat.info == 'jubba' + dat.fill(2) + assert dat.info == 'jubba' + dat.put([2,3,4],[6,3,4]) + assert dat.info == 'jubba' + dat.setfield(4, np.int32,0) + assert dat.info == 'jubba' + dat.setflags() + assert dat.info == 'jubba' + assert dat.all(1).info == 'jubba' + assert dat.any(1).info == 'jubba' + assert dat.argmax(1).info == 'jubba' + assert dat.argmin(1).info == 'jubba' + assert dat.argsort(1).info == 'jubba' + assert dat.astype(TestArray).info == 'jubba' + assert dat.byteswap().info == 'jubba' + assert dat.clip(2,7).info == 'jubba' + assert dat.compress([0,1,1]).info == 'jubba' + assert dat.conj().info == 'jubba' + assert dat.conjugate().info == 'jubba' + assert dat.copy().info == 'jubba' + dat2 = TestArray([2, 3, 1, 0],'jubba') + choices = [[0, 1, 2, 3], [10, 11, 12, 13], + [20, 21, 22, 23], [30, 31, 32, 33]] + assert dat2.choose(choices).info == 'jubba' + assert dat.cumprod(1).info == 'jubba' + assert dat.cumsum(1).info == 'jubba' + assert dat.diagonal().info == 'jubba' + assert dat.flatten().info == 'jubba' + assert dat.getfield(np.int32,0).info == 'jubba' + assert dat.imag.info == 'jubba' + assert dat.max(1).info == 'jubba' + assert dat.mean(1).info == 'jubba' + assert dat.min(1).info == 'jubba' + assert dat.newbyteorder().info == 'jubba' + assert dat.nonzero()[0].info == 'jubba' + assert dat.nonzero()[1].info == 'jubba' + assert dat.prod(1).info == 'jubba' + assert dat.ptp(1).info == 'jubba' + assert dat.ravel().info == 'jubba' + assert dat.real.info == 'jubba' + assert dat.repeat(2).info == 'jubba' + assert dat.reshape((2,4)).info == 'jubba' + assert dat.round().info == 'jubba' + assert dat.squeeze().info == 'jubba' + assert dat.std(1).info == 'jubba' + assert dat.sum(1).info == 'jubba' + assert dat.swapaxes(0,1).info == 'jubba' + assert dat.take([2,3,5]).info == 'jubba' + assert dat.transpose().info == 'jubba' + assert dat.T.info == 'jubba' + assert dat.var(1).info == 'jubba' + assert dat.view(TestArray).info == 'jubba' + + def test_recarray_tolist(self, level=rlevel): + """Ticket #793, changeset r5215 + """ + a = np.recarray(2, formats="i4,f8,f8", names="id,x,y") + b = a.tolist() + assert( a[0].tolist() == b[0]) + assert( a[1].tolist() == b[1]) + + def test_large_fancy_indexing(self, level=rlevel): + # Large enough to fail on 64-bit. 
+ nbits = np.dtype(np.intp).itemsize * 8 + thesize = int((2**nbits)**(1.0/5.0)+1) + def dp(): + n = 3 + a = np.ones((n,)*5) + i = np.random.randint(0,n,size=thesize) + a[np.ix_(i,i,i,i,i)] = 0 + def dp2(): + n = 3 + a = np.ones((n,)*5) + i = np.random.randint(0,n,size=thesize) + g = a[np.ix_(i,i,i,i,i)] + self.failUnlessRaises(ValueError, dp) + self.failUnlessRaises(ValueError, dp2) + + def test_char_array_creation(self, level=rlevel): + a = np.array('123', dtype='c') + b = np.array(['1','2','3']) + assert_equal(a,b) + + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/core/tests/test_scalarmath.py =================================================================== --- branches/cdavid/numpy/core/tests/test_scalarmath.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/tests/test_scalarmath.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -10,13 +10,13 @@ # This compares scalarmath against ufuncs. -class TestTypes(NumpyTestCase): - def check_types(self, level=1): +class TestTypes(TestCase): + def test_types(self, level=1): for atype in types: a = atype(1) assert a == 1, "error with %r: got %r" % (atype,a) - def check_type_add(self, level=1): + def test_type_add(self, level=1): # list of types for k, atype in enumerate(types): vala = atype(3) @@ -30,20 +30,21 @@ val.dtype.char == valo.dtype.char, \ "error with (%d,%d)" % (k,l) - def check_type_create(self, level=1): + def test_type_create(self, level=1): for k, atype in enumerate(types): a = array([1,2,3],atype) b = atype([1,2,3]) assert_equal(a,b) -class TestPower(NumpyTestCase): - def check_small_types(self): + +class TestPower(TestCase): + def test_small_types(self): for t in [np.int8, np.int16]: a = t(3) b = a ** 4 assert b == 81, "error with %r: got %r" % (t,b) - def check_large_types(self): + def test_large_types(self): for t in [np.int32, np.int64, np.float32, np.float64, np.longdouble]: a = t(51) b = a ** 4 @@ -53,7 +54,8 @@ else: assert_almost_equal(b, 6765201, err_msg=msg) -class TestConversion(NumpyTestCase): + +class TestConversion(TestCase): def test_int_from_long(self): l = [1e6, 1e12, 1e18, -1e6, -1e12, -1e18] li = [10**6, 10**12, 10**18, -10**6, -10**12, -10**18] @@ -64,6 +66,7 @@ a = np.array(l[:3], dtype=np.uint64) assert_equal(map(int,a), li[:3]) + #class TestRepr(NumpyTestCase): # def check_repr(self): # for t in types: @@ -72,8 +75,9 @@ # val2 = eval(val_repr) # assert_equal( val, val2 ) -class TestRepr(NumpyTestCase): - def check_float_repr(self): + +class TestRepr(TestCase): + def test_float_repr(self): from numpy import nan, inf for t in [np.float32, np.float64, np.longdouble]: if t is np.longdouble: # skip it for now. 
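The repr test above builds its inputs from raw byte patterns and then checks that the printed value survives an eval round trip. A minimal sketch of that round-trip idea, assuming a little-endian float64 layout (the byte pattern below is illustrative and not taken from the test)::

    import numpy as np

    # Smallest positive float64 denormal, built from its raw little-endian
    # byte pattern (0x0000000000000001).
    raw = np.array([0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00],
                   dtype=np.uint8)
    val = raw.view(np.dtype('<f8'))[0]

    # For float64 the printed representation round-trips exactly.
    val2 = np.float64(eval(repr(val)))
    assert val == val2
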
@@ -82,7 +86,8 @@ last_fraction_bit_idx = finfo.nexp + finfo.nmant last_exponent_bit_idx = finfo.nexp storage_bytes = np.dtype(t).itemsize*8 - for which in ['small denorm','small norm']: # could add some more types here + # could add some more types to the list below + for which in ['small denorm','small norm']: # Values from http://en.wikipedia.org/wiki/IEEE_754 constr = array([0x00]*storage_bytes,dtype=np.uint8) if which == 'small denorm': @@ -106,5 +111,6 @@ if not (val2 == 0 and val < 1e-100): assert_equal(val, val2) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/core/tests/test_ufunc.py =================================================================== --- branches/cdavid/numpy/core/tests/test_ufunc.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/tests/test_ufunc.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,14 +1,14 @@ import numpy as np from numpy.testing import * -class TestUfunc(NumpyTestCase): +class TestUfunc(TestCase): def test_reduceat_shifting_sum(self) : L = 6 x = np.arange(L) idx = np.array(zip(np.arange(L-2), np.arange(L-2)+2)).ravel() assert_array_equal(np.add.reduceat(x,idx)[::2], [1,3,5,7]) - def check_generic_loops(self) : + def test_generic_loops(self) : """Test generic loops. The loops to be tested are: @@ -147,5 +147,91 @@ # check PyUFunc_On_Om # fixme -- I don't know how to do this yet + def test_all_ufunc(self) : + """Try to check presence and results of all ufuncs. + + The list of ufuncs comes from generate_umath.py and is as follows: + + ===== ============= =============== ======================== + done function types notes + ===== ============= =============== ======================== + n add bool + nums + O boolean + is || + n subtract bool + nums + O boolean - is ^ + n multiply bool + nums + O boolean * is & + n divide nums + O + n floor_divide nums + O + n true_divide nums + O bBhH -> f, iIlLqQ -> d + n conjugate nums + O + n fmod nums + M + n square nums + O + n reciprocal nums + O + n ones_like nums + O + n power nums + O + n absolute nums + O complex -> real + n negative nums + O + n sign nums + O -> int + n greater bool + nums + O -> bool + n greater_equal bool + nums + O -> bool + n less bool + nums + O -> bool + n less_equal bool + nums + O -> bool + n equal bool + nums + O -> bool + n not_equal bool + nums + O -> bool + n logical_and bool + nums + M -> bool + n logical_not bool + nums + M -> bool + n logical_or bool + nums + M -> bool + n logical_xor bool + nums + M -> bool + n maximum bool + nums + O + n minimum bool + nums + O + n bitwise_and bool + ints + O flts raise an error + n bitwise_or bool + ints + O flts raise an error + n bitwise_xor bool + ints + O flts raise an error + n invert bool + ints + O flts raise an error + n left_shift ints + O flts raise an error + n right_shift ints + O flts raise an error + n degrees real + M cmplx raise an error + n radians real + M cmplx raise an error + n arccos flts + M + n arccosh flts + M + n arcsin flts + M + n arcsinh flts + M + n arctan flts + M + n arctanh flts + M + n cos flts + M + n sin flts + M + n tan flts + M + n cosh flts + M + n sinh flts + M + n tanh flts + M + n exp flts + M + n expm1 flts + M + n log flts + M + n log10 flts + M + n log1p flts + M + n sqrt flts + M real x < 0 raises error + n ceil real + M + n floor real + M + n fabs real + M + n rint flts + M + n arctan2 real + M + n remainder ints + real + O + n hypot real + M + n isnan flts -> bool + n isinf flts -> bool + n isfinite flts -> bool + n 
signbit real -> bool + n modf real -> (frac, int) + ===== ============= =============== ======================== + + Types other than those listed will be accepted, but they are cast to + the smallest compatible type for which the function is defined. The + casting rules are: + + bool -> int8 -> float32 + ints -> double + + """ + pass + + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/core/tests/test_umath.py =================================================================== --- branches/cdavid/numpy/core/tests/test_umath.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/tests/test_umath.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -6,16 +6,16 @@ import numpy as np restore_path() -class TestDivision(NumpyTestCase): - def check_division_int(self): +class TestDivision(TestCase): + def test_division_int(self): # int division should return the floor of the result, a la Python x = array([5, 10, 90, 100, -5, -10, -90, -100, -120]) assert_equal(x / 100, [0, 0, 0, 1, -1, -1, -1, -1, -2]) assert_equal(x // 100, [0, 0, 0, 1, -1, -1, -1, -1, -2]) assert_equal(x % 100, [5, 10, 90, 0, 95, 90, 10, 0, 80]) -class TestPower(NumpyTestCase): - def check_power_float(self): +class TestPower(TestCase): + def test_power_float(self): x = array([1., 2., 3.]) assert_equal(x**0, [1., 1., 1.]) assert_equal(x**1, x) @@ -26,7 +26,7 @@ assert_almost_equal(x**(-1), [1., 0.5, 1./3]) assert_almost_equal(x**(0.5), [1., ncu.sqrt(2), ncu.sqrt(3)]) - def check_power_complex(self): + def test_power_complex(self): x = array([1+2j, 2+3j, 3+4j]) assert_equal(x**0, [1., 1., 1.]) assert_equal(x**1, x) @@ -39,41 +39,42 @@ assert_almost_equal(x**14, [-76443+16124j, 23161315+58317492j, 5583548873 + 2465133864j]) -class TestLog1p(NumpyTestCase): - def check_log1p(self): + +class TestLog1p(TestCase): + def test_log1p(self): assert_almost_equal(ncu.log1p(0.2), ncu.log(1.2)) assert_almost_equal(ncu.log1p(1e-6), ncu.log(1+1e-6)) -class TestExpm1(NumpyTestCase): - def check_expm1(self): +class TestExpm1(TestCase): + def test_expm1(self): assert_almost_equal(ncu.expm1(0.2), ncu.exp(0.2)-1) assert_almost_equal(ncu.expm1(1e-6), ncu.exp(1e-6)-1) -class TestMaximum(NumpyTestCase): - def check_reduce_complex(self): +class TestMaximum(TestCase): + def test_reduce_complex(self): assert_equal(maximum.reduce([1,2j]),1) assert_equal(maximum.reduce([1+3j,2j]),1+3j) -class TestMinimum(NumpyTestCase): - def check_reduce_complex(self): +class TestMinimum(TestCase): + def test_reduce_complex(self): assert_equal(minimum.reduce([1,2j]),2j) -class TestFloatingPoint(NumpyTestCase): - def check_floating_point(self): +class TestFloatingPoint(TestCase): + def test_floating_point(self): assert_equal(ncu.FLOATING_POINT_SUPPORT, 1) -def TestDegrees(NumpyTestCase): - def check_degrees(self): +class TestDegrees(TestCase): + def test_degrees(self): assert_almost_equal(ncu.degrees(pi), 180.0) assert_almost_equal(ncu.degrees(-0.5*pi), -90.0) -def TestRadians(NumpyTestCase): - def check_radians(self): +class TestRadians(TestCase): + def test_radians(self): assert_almost_equal(ncu.radians(180.0), pi) - assert_almost_equal(ncu.degrees(-90.0), -0.5*pi) + assert_almost_equal(ncu.radians(-90.0), -0.5*pi) -class TestSpecialMethods(NumpyTestCase): - def check_wrap(self): +class TestSpecialMethods(TestCase): + def test_wrap(self): class with_wrap(object): def __array__(self): return zeros(1) @@ -92,7 +93,7 @@ assert_equal(args[1], a) self.failUnlessEqual(i, 0) - def check_old_wrap(self): + def 
test_old_wrap(self): class with_wrap(object): def __array__(self): return zeros(1) @@ -104,7 +105,7 @@ x = minimum(a, a) assert_equal(x.arr, zeros(1)) - def check_priority(self): + def test_priority(self): class A(object): def __array__(self): return zeros(1) @@ -142,7 +143,7 @@ self.failUnless(type(exp(b) is B)) self.failUnless(type(exp(c) is C)) - def check_failing_wrap(self): + def test_failing_wrap(self): class A(object): def __array__(self): return zeros(1) @@ -151,7 +152,7 @@ a = A() self.failUnlessRaises(RuntimeError, maximum, a, a) - def check_array_with_context(self): + def test_array_with_context(self): class A(object): def __array__(self, dtype=None, context=None): func, args, i = context @@ -174,19 +175,20 @@ assert_equal(maximum(a, B()), 0) assert_equal(maximum(a, C()), 0) -class TestChoose(NumpyTestCase): - def check_mixed(self): + +class TestChoose(TestCase): + def test_mixed(self): c = array([True,True]) a = array([True,True]) assert_equal(choose(c, (a, 1)), array([1,1])) -class TestComplexFunctions(NumpyTestCase): +class TestComplexFunctions(TestCase): funcs = [np.arcsin , np.arccos , np.arctan, np.arcsinh, np.arccosh, np.arctanh, np.sin , np.cos , np.tan , np.exp, np.log , np.sqrt , np.log10] - def check_it(self): + def test_it(self): for f in self.funcs: if f is np.arccosh : x = 1.5 @@ -197,7 +199,7 @@ assert_almost_equal(fz.real, fr, err_msg='real part %s'%f) assert_almost_equal(fz.imag, 0., err_msg='imag part %s'%f) - def check_precisions_consistent(self) : + def test_precisions_consistent(self) : z = 1 + 1j for f in self.funcs : fcf = f(np.csingle(z)) @@ -207,16 +209,17 @@ assert_almost_equal(fcl, fcd, decimal=15, err_msg='fch-fcl %s'%f) -class TestChoose(NumpyTestCase): - def check_attributes(self): +class TestAttributes(TestCase): + def test_attributes(self): add = ncu.add assert_equal(add.__name__, 'add') - assert_equal(add.__doc__, 'y = add(x1,x2) adds the arguments elementwise.') + assert add.__doc__.startswith('y = add(x1,x2)\n\n') self.failUnless(add.ntypes >= 18) # don't fail if types added self.failUnless('ii->i' in add.types) assert_equal(add.nin, 2) assert_equal(add.nout, 1) assert_equal(add.identity, 0) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/core/tests/test_unicode.py =================================================================== --- branches/cdavid/numpy/core/tests/test_unicode.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/core/tests/test_unicode.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -18,10 +18,10 @@ # Creation tests ############################################################ -class create_zeros(NumpyTestCase): +class create_zeros: """Check the creation of zero-valued arrays""" - def content_test(self, ua, ua_scalar, nbytes): + def content_check(self, ua, ua_scalar, nbytes): # Check the length of the unicode base type self.assert_(int(ua.dtype.str[2:]) == self.ulen) @@ -37,41 +37,43 @@ else: self.assert_(len(buffer(ua_scalar)) == 0) - def check_zeros0D(self): + def test_zeros0D(self): """Check creation of 0-dimensional objects""" ua = zeros((), dtype='U%s' % self.ulen) - self.content_test(ua, ua[()], 4*self.ulen) + self.content_check(ua, ua[()], 4*self.ulen) - def check_zerosSD(self): + def test_zerosSD(self): """Check creation of single-dimensional objects""" ua = zeros((2,), dtype='U%s' % self.ulen) - self.content_test(ua, ua[0], 4*self.ulen*2) - self.content_test(ua, ua[1], 4*self.ulen*2) + self.content_check(ua, ua[0], 4*self.ulen*2) + 
self.content_check(ua, ua[1], 4*self.ulen*2) - def check_zerosMD(self): + def test_zerosMD(self): """Check creation of multi-dimensional objects""" ua = zeros((2,3,4), dtype='U%s' % self.ulen) - self.content_test(ua, ua[0,0,0], 4*self.ulen*2*3*4) - self.content_test(ua, ua[-1,-1,-1], 4*self.ulen*2*3*4) + self.content_check(ua, ua[0,0,0], 4*self.ulen*2*3*4) + self.content_check(ua, ua[-1,-1,-1], 4*self.ulen*2*3*4) -class test_create_zeros_1(create_zeros): +class test_create_zeros_1(create_zeros, TestCase): """Check the creation of zero-valued arrays (size 1)""" ulen = 1 -class test_create_zeros_2(create_zeros): + +class test_create_zeros_2(create_zeros, TestCase): """Check the creation of zero-valued arrays (size 2)""" ulen = 2 -class test_create_zeros_1009(create_zeros): + +class test_create_zeros_1009(create_zeros, TestCase): """Check the creation of zero-valued arrays (size 1009)""" ulen = 1009 -class create_values(NumpyTestCase): +class create_values: """Check the creation of unicode arrays with values""" - def content_test(self, ua, ua_scalar, nbytes): + def content_check(self, ua, ua_scalar, nbytes): # Check the length of the unicode base type self.assert_(int(ua.dtype.str[2:]) == self.ulen) @@ -95,50 +97,55 @@ # regular 2-byte word self.assert_(len(buffer(ua_scalar)) == 2*self.ulen) - def check_values0D(self): + def test_values0D(self): """Check creation of 0-dimensional objects with values""" ua = array(self.ucs_value*self.ulen, dtype='U%s' % self.ulen) - self.content_test(ua, ua[()], 4*self.ulen) + self.content_check(ua, ua[()], 4*self.ulen) - def check_valuesSD(self): + def test_valuesSD(self): """Check creation of single-dimensional objects with values""" ua = array([self.ucs_value*self.ulen]*2, dtype='U%s' % self.ulen) - self.content_test(ua, ua[0], 4*self.ulen*2) - self.content_test(ua, ua[1], 4*self.ulen*2) + self.content_check(ua, ua[0], 4*self.ulen*2) + self.content_check(ua, ua[1], 4*self.ulen*2) - def check_valuesMD(self): + def test_valuesMD(self): """Check creation of multi-dimensional objects with values""" ua = array([[[self.ucs_value*self.ulen]*2]*3]*4, dtype='U%s' % self.ulen) - self.content_test(ua, ua[0,0,0], 4*self.ulen*2*3*4) - self.content_test(ua, ua[-1,-1,-1], 4*self.ulen*2*3*4) + self.content_check(ua, ua[0,0,0], 4*self.ulen*2*3*4) + self.content_check(ua, ua[-1,-1,-1], 4*self.ulen*2*3*4) -class test_create_values_1_ucs2(create_values): +class test_create_values_1_ucs2(create_values, TestCase): """Check the creation of valued arrays (size 1, UCS2 values)""" ulen = 1 ucs_value = ucs2_value -class test_create_values_1_ucs4(create_values): + +class test_create_values_1_ucs4(create_values, TestCase): """Check the creation of valued arrays (size 1, UCS4 values)""" ulen = 1 ucs_value = ucs4_value -class test_create_values_2_ucs2(create_values): + +class test_create_values_2_ucs2(create_values, TestCase): """Check the creation of valued arrays (size 2, UCS2 values)""" ulen = 2 ucs_value = ucs2_value -class test_create_values_2_ucs4(create_values): + +class test_create_values_2_ucs4(create_values, TestCase): """Check the creation of valued arrays (size 2, UCS4 values)""" ulen = 2 ucs_value = ucs4_value -class test_create_values_1009_ucs2(create_values): + +class test_create_values_1009_ucs2(create_values, TestCase): """Check the creation of valued arrays (size 1009, UCS2 values)""" ulen = 1009 ucs_value = ucs2_value -class test_create_values_1009_ucs4(create_values): + +class test_create_values_1009_ucs4(create_values, TestCase): """Check the creation of valued 
arrays (size 1009, UCS4 values)""" ulen = 1009 ucs_value = ucs4_value @@ -148,10 +155,10 @@ # Assignment tests ############################################################ -class assign_values(NumpyTestCase): +class assign_values: """Check the assignment of unicode arrays with values""" - def content_test(self, ua, ua_scalar, nbytes): + def content_check(self, ua, ua_scalar, nbytes): # Check the length of the unicode base type self.assert_(int(ua.dtype.str[2:]) == self.ulen) @@ -175,68 +182,74 @@ # regular 2-byte word self.assert_(len(buffer(ua_scalar)) == 2*self.ulen) - def check_values0D(self): + def test_values0D(self): """Check assignment of 0-dimensional objects with values""" ua = zeros((), dtype='U%s' % self.ulen) ua[()] = self.ucs_value*self.ulen - self.content_test(ua, ua[()], 4*self.ulen) + self.content_check(ua, ua[()], 4*self.ulen) - def check_valuesSD(self): + def test_valuesSD(self): """Check assignment of single-dimensional objects with values""" ua = zeros((2,), dtype='U%s' % self.ulen) ua[0] = self.ucs_value*self.ulen - self.content_test(ua, ua[0], 4*self.ulen*2) + self.content_check(ua, ua[0], 4*self.ulen*2) ua[1] = self.ucs_value*self.ulen - self.content_test(ua, ua[1], 4*self.ulen*2) + self.content_check(ua, ua[1], 4*self.ulen*2) - def check_valuesMD(self): + def test_valuesMD(self): """Check assignment of multi-dimensional objects with values""" ua = zeros((2,3,4), dtype='U%s' % self.ulen) ua[0,0,0] = self.ucs_value*self.ulen - self.content_test(ua, ua[0,0,0], 4*self.ulen*2*3*4) + self.content_check(ua, ua[0,0,0], 4*self.ulen*2*3*4) ua[-1,-1,-1] = self.ucs_value*self.ulen - self.content_test(ua, ua[-1,-1,-1], 4*self.ulen*2*3*4) + self.content_check(ua, ua[-1,-1,-1], 4*self.ulen*2*3*4) -class test_assign_values_1_ucs2(assign_values): +class test_assign_values_1_ucs2(assign_values, TestCase): """Check the assignment of valued arrays (size 1, UCS2 values)""" ulen = 1 ucs_value = ucs2_value -class test_assign_values_1_ucs4(assign_values): + +class test_assign_values_1_ucs4(assign_values, TestCase): """Check the assignment of valued arrays (size 1, UCS4 values)""" ulen = 1 ucs_value = ucs4_value + -class test_assign_values_2_ucs2(assign_values): +class test_assign_values_2_ucs2(assign_values, TestCase): """Check the assignment of valued arrays (size 2, UCS2 values)""" ulen = 2 ucs_value = ucs2_value + -class test_assign_values_2_ucs4(assign_values): +class test_assign_values_2_ucs4(assign_values, TestCase): """Check the assignment of valued arrays (size 2, UCS4 values)""" ulen = 2 ucs_value = ucs4_value + -class test_assign_values_1009_ucs2(assign_values): +class test_assign_values_1009_ucs2(assign_values, TestCase): """Check the assignment of valued arrays (size 1009, UCS2 values)""" ulen = 1009 ucs_value = ucs2_value + -class test_assign_values_1009_ucs4(assign_values): +class test_assign_values_1009_ucs4(assign_values, TestCase): """Check the assignment of valued arrays (size 1009, UCS4 values)""" ulen = 1009 ucs_value = ucs4_value + ############################################################ # Byteorder tests ############################################################ -class byteorder_values(NumpyTestCase): +class byteorder_values: """Check the byteorder of unicode arrays in round-trip conversions""" - def check_values0D(self): + def test_values0D(self): """Check byteorder of 0-dimensional objects""" ua = array(self.ucs_value*self.ulen, dtype='U%s' % self.ulen) ua2 = ua.newbyteorder() @@ -248,7 +261,7 @@ # Arrays must be equal after the round-trip assert_equal(ua, 
ua3) - def check_valuesSD(self): + def test_valuesSD(self): """Check byteorder of single-dimensional objects""" ua = array([self.ucs_value*self.ulen]*2, dtype='U%s' % self.ulen) ua2 = ua.newbyteorder() @@ -258,7 +271,7 @@ # Arrays must be equal after the round-trip assert_equal(ua, ua3) - def check_valuesMD(self): + def test_valuesMD(self): """Check byteorder of multi-dimensional objects""" ua = array([[[self.ucs_value*self.ulen]*2]*3]*4, dtype='U%s' % self.ulen) @@ -269,36 +282,43 @@ # Arrays must be equal after the round-trip assert_equal(ua, ua3) -class test_byteorder_1_ucs2(byteorder_values): + +class test_byteorder_1_ucs2(byteorder_values, TestCase): """Check the byteorder in unicode (size 1, UCS2 values)""" ulen = 1 ucs_value = ucs2_value + -class test_byteorder_1_ucs4(byteorder_values): +class test_byteorder_1_ucs4(byteorder_values, TestCase): """Check the byteorder in unicode (size 1, UCS4 values)""" ulen = 1 ucs_value = ucs4_value + -class test_byteorder_2_ucs2(byteorder_values): +class test_byteorder_2_ucs2(byteorder_values, TestCase): """Check the byteorder in unicode (size 2, UCS2 values)""" ulen = 2 ucs_value = ucs2_value + -class test_byteorder_2_ucs4(byteorder_values): +class test_byteorder_2_ucs4(byteorder_values, TestCase): """Check the byteorder in unicode (size 2, UCS4 values)""" ulen = 2 ucs_value = ucs4_value + -class test_byteorder_1009_ucs2(byteorder_values): +class test_byteorder_1009_ucs2(byteorder_values, TestCase): """Check the byteorder in unicode (size 1009, UCS2 values)""" ulen = 1009 ucs_value = ucs2_value + -class test_byteorder_1009_ucs4(byteorder_values): +class test_byteorder_1009_ucs4(byteorder_values, TestCase): """Check the byteorder in unicode (size 1009, UCS4 values)""" ulen = 1009 ucs_value = ucs4_value if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) + Modified: branches/cdavid/numpy/ctypeslib.py =================================================================== --- branches/cdavid/numpy/ctypeslib.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/ctypeslib.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -29,7 +29,10 @@ import warnings warnings.warn("All features of ctypes interface may not work " \ "with ctypes < 1.0.1") - if '.' not in libname: + + ext = os.path.splitext(libname)[1] + + if not ext: # Try to load library with platform-specific name, otherwise # default to libname.[so|pyd]. Sometimes, these files are built # erroneously on non-linux platforms. 
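The ``load_library`` change above keys the lookup on whether the requested name already carries an extension. A rough sketch of that selection logic (a simplified reimplementation, not NumPy's actual code; ``mylib`` is a placeholder name)::

    import os
    import sys

    def candidate_names(libname):
        """Return the names to try, in order: platform-specific name first
        when no extension is given, otherwise the name exactly as passed."""
        ext = os.path.splitext(libname)[1]
        if not ext:
            if sys.platform == 'win32':
                return ['%s.dll' % libname, libname]
            elif sys.platform == 'darwin':
                return ['%s.dylib' % libname, libname]
            return ['%s.so' % libname, libname]
        return [libname]

    print(candidate_names('mylib'))     # e.g. ['mylib.so', 'mylib'] on Linux
    print(candidate_names('mylib.so'))  # ['mylib.so']
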
@@ -38,6 +41,8 @@ libname_ext.insert(0, '%s.dll' % libname) elif sys.platform == 'darwin': libname_ext.insert(0, '%s.dylib' % libname) + else: + libname_ext = [libname] loader_path = os.path.abspath(loader_path) if not os.path.isdir(loader_path): Modified: branches/cdavid/numpy/distutils/__init__.py =================================================================== --- branches/cdavid/numpy/distutils/__init__.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/distutils/__init__.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -15,6 +15,6 @@ _INSTALLED = False if _INSTALLED: - def test(level=1, verbosity=1): - from numpy.testing import NumpyTest - return NumpyTest().test(level, verbosity) + from numpy.testing.pkgtester import Tester + test = Tester().test + bench = Tester().bench Modified: branches/cdavid/numpy/distutils/command/scons.py =================================================================== --- branches/cdavid/numpy/distutils/command/scons.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/distutils/command/scons.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -10,7 +10,6 @@ from numpy.distutils.fcompiler import FCompiler from numpy.distutils.exec_command import find_executable from numpy.distutils import log -from numpy.distutils.misc_util import get_numpy_include_dirs def get_scons_build_dir(): """Return the top path where everything produced by scons will be put. @@ -38,6 +37,14 @@ from numscons import get_scons_path return get_scons_path() +def get_distutils_libdir(cmd, sconscript_path): + """Returns the path where distutils install libraries, relatively to the + scons build directory.""" + from numscons import get_scons_build_dir + scdir = pjoin(get_scons_build_dir(), pdirname(sconscript_path)) + n = scdir.count(os.sep) + return pjoin(os.sep.join([os.pardir for i in range(n+1)]), cmd.build_lib) + def get_python_exec_invoc(): """This returns the python executable from which this file is invocated.""" # Do we need to take into account the PYTHONPATH, in a cross platform way, @@ -49,6 +56,21 @@ import sys return sys.executable +def get_numpy_include_dirs(sconscript_path): + """Return include dirs for numpy. + + The paths are relatively to the setup.py script path.""" + from numpy.distutils.misc_util import get_numpy_include_dirs as _incdir + from numscons import get_scons_build_dir + scdir = pjoin(get_scons_build_dir(), pdirname(sconscript_path)) + n = scdir.count(os.sep) + + dirs = _incdir() + rdirs = [] + for d in dirs: + rdirs.append(pjoin(os.sep.join([os.pardir for i in range(n+1)]), d)) + return rdirs + def dirl_to_str(dirlist): """Given a list of directories, returns a string where the paths are concatenated by the path separator. @@ -94,6 +116,9 @@ def dist2sconscxx(compiler): """This converts the name passed to distutils to scons name convention (C++ compiler). The argument should be a Compiler instance.""" + if compiler.compiler_type == 'msvc': + return compiler.compiler_type + return compiler.compiler_cxx[0] def get_compiler_executable(compiler): @@ -182,7 +207,7 @@ """Given two list, return the index of the common items. The index are relative to seq1. 
- + Note: do not handle duplicate items.""" dict2 = dict([(i, None) for i in seq2]) @@ -298,8 +323,11 @@ cxxcompiler.customize(self.distribution, need_cxx = 1) cxxcompiler.customize_cmd(self) self.cxxcompiler = cxxcompiler.cxx_compiler() - #print self.cxxcompiler.compiler_cxx[0] - + try: + get_cxx_tool_path(self.cxxcompiler) + except DistutilsSetupError: + self.cxxcompiler = None + if self.package_list: self.package_list = parse_package_list(self.package_list) @@ -311,6 +339,16 @@ raise RuntimeError("importing numscons failed (error was %s), using " \ "scons within distutils is not possible without " "this package " % str(e)) + + try: + from numscons import get_version + if get_version() < '0.8.0': + raise ValueError() + except ImportError, ValueError: + raise RuntimeError("You need numscons >= 0.8.0 to build numpy "\ + "with numscons (imported numscons path " \ + "is %s)." % numscons.__file__) + else: # nothing to do, just leave it here. return @@ -327,7 +365,7 @@ scons_exec = get_python_exec_invoc() scons_exec += ' ' + protect_path(pjoin(get_scons_local_path(), 'scons.py')) - if self.package_list is not None: + if self.package_list is not None: id = select_packages(self.pkg_names, self.package_list) sconscripts = [self.sconscripts[i] for i in id] pre_hooks = [self.pre_hooks[i] for i in id] @@ -358,7 +396,8 @@ cmd.append('pkg_name="%s"' % pkg_name) #cmd.append('distutils_libdir=%s' % protect_path(pjoin(self.build_lib, # pdirname(sconscript)))) - cmd.append('distutils_libdir=%s' % protect_path(pjoin(self.build_lib))) + cmd.append('distutils_libdir=%s' % + protect_path(get_distutils_libdir(self, sconscript))) if not self._bypass_distutils_cc: cmd.append('cc_opt=%s' % self.scons_compiler) @@ -375,7 +414,7 @@ cmd.append('cxx_opt=%s' % dist2sconscxx(self.cxxcompiler)) cmd.append('cxx_opt_path=%s' % protect_path(get_cxx_tool_path(self.cxxcompiler))) - cmd.append('include_bootstrap=%s' % dirl_to_str(get_numpy_include_dirs())) + cmd.append('include_bootstrap=%s' % dirl_to_str(get_numpy_include_dirs(sconscript))) if self.silent: if int(self.silent) == 2: cmd.append('-Q') Modified: branches/cdavid/numpy/distutils/conv_template.py =================================================================== --- branches/cdavid/numpy/distutils/conv_template.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/distutils/conv_template.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -286,4 +286,3 @@ allstr = fid.read() writestr = process_str(allstr) outfile.write(writestr) - Modified: branches/cdavid/numpy/distutils/tests/f2py_ext/tests/test_fib2.py =================================================================== --- branches/cdavid/numpy/distutils/tests/f2py_ext/tests/test_fib2.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/distutils/tests/f2py_ext/tests/test_fib2.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -4,10 +4,10 @@ from f2py_ext import fib2 del sys.path[0] -class TestFib2(NumpyTestCase): +class TestFib2(TestCase): - def check_fib(self): + def test_fib(self): assert_array_equal(fib2.fib(6),[0,1,1,2,3,5]) if __name__ == "__main__": - NumpyTest(fib2).run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/distutils/tests/f2py_f90_ext/tests/test_foo.py =================================================================== --- branches/cdavid/numpy/distutils/tests/f2py_f90_ext/tests/test_foo.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/distutils/tests/f2py_f90_ext/tests/test_foo.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -4,10 +4,10 @@ from f2py_f90_ext import 
foo del sys.path[0] -class TestFoo(NumpyTestCase): +class TestFoo(TestCase): - def check_foo_free(self): + def test_foo_free(self): assert_equal(foo.foo_free.bar13(),13) if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/distutils/tests/gen_ext/tests/test_fib3.py =================================================================== --- branches/cdavid/numpy/distutils/tests/gen_ext/tests/test_fib3.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/distutils/tests/gen_ext/tests/test_fib3.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -4,10 +4,10 @@ from gen_ext import fib3 del sys.path[0] -class TestFib3(NumpyTestCase): +class TestFib3(TestCase): - def check_fib(self): + def test_fib(self): assert_array_equal(fib3.fib(6),[0,1,1,2,3,5]) if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/distutils/tests/pyrex_ext/tests/test_primes.py =================================================================== --- branches/cdavid/numpy/distutils/tests/pyrex_ext/tests/test_primes.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/distutils/tests/pyrex_ext/tests/test_primes.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -5,9 +5,9 @@ from pyrex_ext.primes import primes restore_path() -class TestPrimes(NumpyTestCase): - def check_simple(self, level=1): +class TestPrimes(TestCase): + def test_simple(self, level=1): l = primes(10) assert_equal(l, [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]) if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/distutils/tests/swig_ext/tests/test_example.py =================================================================== --- branches/cdavid/numpy/distutils/tests/swig_ext/tests/test_example.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/distutils/tests/swig_ext/tests/test_example.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -4,15 +4,15 @@ from swig_ext import example restore_path() -class TestExample(NumpyTestCase): +class TestExample(TestCase): - def check_fact(self): + def test_fact(self): assert_equal(example.fact(10),3628800) - def check_cvar(self): + def test_cvar(self): assert_equal(example.cvar.My_variable,3.0) example.cvar.My_variable = 5 assert_equal(example.cvar.My_variable,5.0) if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/distutils/tests/swig_ext/tests/test_example2.py =================================================================== --- branches/cdavid/numpy/distutils/tests/swig_ext/tests/test_example2.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/distutils/tests/swig_ext/tests/test_example2.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -4,9 +4,9 @@ from swig_ext import example2 restore_path() -class TestExample2(NumpyTestCase): +class TestExample2(TestCase): - def check_zoo(self): + def test_zoo(self): z = example2.Zoo() z.shut_up('Tiger') z.shut_up('Lion') @@ -14,4 +14,4 @@ if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/distutils/tests/test_fcompiler_gnu.py =================================================================== --- branches/cdavid/numpy/distutils/tests/test_fcompiler_gnu.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/distutils/tests/test_fcompiler_gnu.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -21,7 +21,7 @@ ('GNU Fortran (GCC) 4.3.0 20070316 (experimental)', '4.3.0'), ] -class 
TestG77Versions(NumpyTestCase): +class TestG77Versions(TestCase): def test_g77_version(self): fc = numpy.distutils.fcompiler.new_fcompiler(compiler='gnu') for vs, version in g77_version_strings: @@ -34,7 +34,7 @@ v = fc.version_match(vs) assert v is None, (vs, v) -class TestGortranVersions(NumpyTestCase): +class TestGortranVersions(TestCase): def test_gfortran_version(self): fc = numpy.distutils.fcompiler.new_fcompiler(compiler='gnu95') for vs, version in gfortran_version_strings: @@ -49,4 +49,4 @@ if __name__ == '__main__': - NumpyTest.run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/distutils/tests/test_misc_util.py =================================================================== --- branches/cdavid/numpy/distutils/tests/test_misc_util.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/distutils/tests/test_misc_util.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -8,15 +8,15 @@ ajoin = lambda *paths: join(*((sep,)+paths)) -class TestAppendpath(NumpyTestCase): +class TestAppendpath(TestCase): - def check_1(self): + def test_1(self): assert_equal(appendpath('prefix','name'),join('prefix','name')) assert_equal(appendpath('/prefix','name'),ajoin('prefix','name')) assert_equal(appendpath('/prefix','/name'),ajoin('prefix','name')) assert_equal(appendpath('prefix','/name'),join('prefix','name')) - def check_2(self): + def test_2(self): assert_equal(appendpath('prefix/sub','name'), join('prefix','sub','name')) assert_equal(appendpath('prefix/sub','sup/name'), @@ -24,7 +24,7 @@ assert_equal(appendpath('/prefix/sub','/prefix/name'), ajoin('prefix','sub','name')) - def check_3(self): + def test_3(self): assert_equal(appendpath('/prefix/sub','/prefix/sup/name'), ajoin('prefix','sub','sup','name')) assert_equal(appendpath('/prefix/sub/sub2','/prefix/sup/sup2/name'), @@ -32,9 +32,9 @@ assert_equal(appendpath('/prefix/sub/sub2','/prefix/sub/sup/name'), ajoin('prefix','sub','sub2','sup','name')) -class TestMinrelpath(NumpyTestCase): +class TestMinrelpath(TestCase): - def check_1(self): + def test_1(self): import os n = lambda path: path.replace('/',os.path.sep) assert_equal(minrelpath(n('aa/bb')),n('aa/bb')) @@ -47,14 +47,15 @@ assert_equal(minrelpath(n('.././..')),n('../..')) assert_equal(minrelpath(n('aa/bb/.././../dd')),n('dd')) -class TestGpaths(NumpyTestCase): +class TestGpaths(TestCase): - def check_gpaths(self): + def test_gpaths(self): local_path = minrelpath(os.path.join(os.path.dirname(__file__),'..')) ls = gpaths('command/*.py', local_path) assert os.path.join(local_path,'command','build_src.py') in ls,`ls` f = gpaths('system_info.py', local_path) assert os.path.join(local_path,'system_info.py')==f[0],`f` + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/doc/DISTUTILS.txt =================================================================== --- branches/cdavid/numpy/doc/DISTUTILS.txt 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/doc/DISTUTILS.txt 2008-06-20 05:59:26 UTC (rev 5302) @@ -465,12 +465,10 @@ Ideally, every Python code, extension module, or subpackage in Scipy package directory should have the corresponding ``test_.py`` file in ``tests/`` directory. This file should define classes -derived from ``NumpyTestCase`` (or from ``unittest.TestCase``) class -and have names starting with ``test``. The methods of these classes -which names start with ``bench``, ``check``, or ``test``, are passed -on to unittest machinery. 
In addition, the value of the first optional -argument of these methods determine the level of the corresponding -test. Default level is 1. +derived from the ``numpy.testing.TestCase`` class (or from +``unittest.TestCase``) and have names starting with ``test``. The methods +of these classes whose names contain ``test`` or start with ``bench`` are +automatically picked up by the test machinery. A minimal example of a ``test_yyy.py`` file that implements tests for a Scipy package module ``numpy.xxx.yyy`` containing a function @@ -489,20 +487,16 @@ # import modules that are located in the same directory as this file. restore_path() - class test_zzz(NumpyTestCase): - def check_simple(self, level=1): + class test_zzz(TestCase): + def test_simple(self, level=1): assert zzz()=='Hello from zzz' #... if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) -``NumpyTestCase`` is derived from ``unittest.TestCase`` and it -basically only implements an additional method ``measure(self, -code_str, times=1)``. - Note that all classes that are inherited from ``TestCase`` class, are -picked up by the test runner when using ``testoob``. +automatically picked up by the test runner. ``numpy.testing`` module provides also the following convenience functions:: @@ -514,25 +508,15 @@ assert_array_almost_equal(x,y,decimal=6,err_msg='') rand(*shape) # returns random array with a given shape -``NumpyTest`` can be used for running ``tests/test_*.py`` scripts. -For instance, to run all test scripts of the module ``xxx``, execute -in Python: +To run all test scripts of the module ``xxx``, execute in Python: - >>> NumpyTest('xxx').test(level=1,verbosity=1) + >>> import numpy + >>> numpy.xxx.test() -or equivalently, - - >>> import xxx - >>> NumpyTest(xxx).test(level=1,verbosity=1) - To run only tests for ``xxx.yyy`` module, execute: >>> NumpyTest('xxx.yyy').test(level=1,verbosity=1) -To take the level and verbosity parameters for tests from -``sys.argv``, use ``NumpyTest.run()`` method (this is supported only -when ``optparse`` is installed). - Extra features in NumPy Distutils ''''''''''''''''''''''''''''''''' Modified: branches/cdavid/numpy/doc/HOWTO_BUILD_DOCS.txt =================================================================== --- branches/cdavid/numpy/doc/HOWTO_BUILD_DOCS.txt 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/doc/HOWTO_BUILD_DOCS.txt 2008-06-20 05:59:26 UTC (rev 5302) @@ -2,6 +2,12 @@ Building the NumPy API and reference docs ========================================= +Using Sphinx_ +------------- +`Download `_ +the builder. Follow the instructions in ``README.txt``. + + Using Epydoc_ ------------- @@ -58,3 +64,8 @@ The output is placed in ``./html``, and may be viewed by loading the ``index.html`` file into your browser. + + + +.. _epydoc: http://epydoc.sourceforge.net/ +.. _sphinx: http://sphinx.pocoo.org Modified: branches/cdavid/numpy/doc/HOWTO_DOCUMENT.txt =================================================================== --- branches/cdavid/numpy/doc/HOWTO_DOCUMENT.txt 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/doc/HOWTO_DOCUMENT.txt 2008-06-20 05:59:26 UTC (rev 5302) @@ -25,15 +25,24 @@ * `pyflakes` easy_install pyflakes * `pep8.py `_ -For documentation purposes, use unabbreviated module names. 
If you -prefer the use of abbreviated module names in code (*not* the -docstrings), we suggest the import conventions used by NumPy itself:: +The following import conventions are used throughout the NumPy source +and documentation:: import numpy as np import scipy as sp import matplotlib as mpl import matplotlib.pyplot as plt +It is not necessary to do ``import numpy as np`` at the beginning of +an example. However, some sub-modules, such as ``fft``, are not +imported by default, and you have to include them explicitly:: + + import numpy.fft + +after which you may use it:: + + np.fft.fft2(...) + Docstring Standard ------------------ A documentation string (docstring) is a string that describes a module, @@ -65,15 +74,15 @@ A guiding principle is that human readers of the text are given precedence over contorting docstrings so our tools produce nice output. Rather than sacrificing the readability of the docstrings, we -have chosen to write pre-processors to assist tools like epydoc_ or -sphinx_ in their task. +have written pre-processors to assist tools like epydoc_ and sphinx_ in +their task. Status ------ We are busy converting existing docstrings to the new format, expanding them where they are lacking, as well as writing new ones for undocumented functions. Volunteers are welcome to join the effort on -our new wiki-based documentation system (see the `Developer Zone +our new documentation system (see the `Developer Zone `_). Sections @@ -179,6 +188,29 @@ -------- average : Weighted average + When referring to functions in the same sub-module, no prefix is + needed, and the tree is searched upwards for a match. + + Prefix functions from other sub-modules appropriately. E.g., + whilst documenting the ``random`` module, refer to a function in + ``fft`` by + + :: + + fft.fft2 : 2-D fast discrete Fourier transform + + When referring to an entirely different module:: + + scipy.random.norm : Random variates, PDFs, etc. + + Functions may be listed without descriptions:: + + See Also + -------- + func_a : Function a with its description. + func_b, func_c_, func_d + func_e + 8. **Notes** An optional section that provides additional information about the @@ -203,8 +235,14 @@ :: - The value of :math:`omega` is larger than 5. + The value of :math:`\omega` is larger than 5. + Variable names are displayed in typewriter font, obtained by using + ``\mathtt{var}``:: + + We square the input parameter `alpha` to obtain + :math:`\mathtt{alpha}^2`. + Note that LaTeX is not particularly easy to read, so use equations sparingly. @@ -277,6 +315,11 @@ b + The examples may assume that ``import numpy as np`` is executed before + the example code in *numpy*, and ``import scipy as sp`` in *scipy*. + Additional examples may make use of *matplotlib* for plotting, but should + import it explicitly, e.g., ``import matplotlib.pyplot as plt``. + 11. **Indexing tags*** Each function needs to be categorised for indexing purposes. Use @@ -286,17 +329,67 @@ :refguide: ufunc, trigonometry To index a function as a sub-category of a class, separate index - entries by a semi-colon, e.g. + entries by a colon, e.g. :: - :refguide: ufunc, numpy;reshape, other + :refguide: ufunc, numpy:reshape, other A `list of available categories `_ is available. +Documenting classes +------------------- +Class docstring +``````````````` +Use the same sections as outlined above (all except ``Returns`` are +applicable). The constructor (``__init__``) should also be documented +here. 
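For instance, a small class whose constructor takes two coordinates could be
documented entirely in the class docstring (a minimal illustration only, not
taken from the NumPy sources)::

    class Point(object):
        """
        A point in the plane.

        Parameters
        ----------
        x : float
            The X coordinate.
        y : float
            The Y coordinate.

        """
        def __init__(self, x, y):
            self.x = x
            self.y = y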
+ +An ``Attributes`` section may be used to describe class variables:: + + Attributes + ---------- + x : float + The X coordinate. + y : float + The Y coordinate. + +In general, it is not necessary to list class methods. Those that are +not part of the public API have names that start with an underscore. +In some cases, however, a class may have a great many methods, of +which only a few are relevant (e.g., subclasses of ndarray). Then, it +becomes useful to have an additional ``Methods`` section:: + + class Photo(ndarray): + """ + Array with associated photographic information. + + ... + + Attributes + ---------- + exposure : float + Exposure in seconds. + + Methods + ------- + colorspace(c='rgb') + Represent the photo in the given colorspace. + gamma(n=1.0) + Change the photo's gamma exposure. + + """ + +Note that `self` is *not* listed as the first parameter of methods. + +Method docstrings +````````````````` +Document these as you would any other function. Do not include +``self`` in the list of parameters. + Common reST concepts -------------------- For paragraphs, indentation is significant and indicates indentation in the @@ -322,8 +415,9 @@ `An example `_ of the format shown here is available. Refer to `How to Build API/Reference -Documentation `_ on how to use epydoc_ or sphinx_ to -construct a manual and web page. +Documentation +`_ +on how to use epydoc_ or sphinx_ to construct a manual and web page. This document itself was written in ReStructuredText, and may be converted to HTML using:: Modified: branches/cdavid/numpy/doc/cython/Makefile =================================================================== --- branches/cdavid/numpy/doc/cython/Makefile 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/doc/cython/Makefile 2008-06-20 05:59:26 UTC (rev 5302) @@ -24,7 +24,7 @@ numpyx.pyx.html: numpyx.pyx cython -a numpyx.pyx - @echo "Annotated HTML of the C code generated in numpy.pyx.html" + @echo "Annotated HTML of the C code generated in numpyx.html" # Phony targets for cleanup and similar uses Deleted: branches/cdavid/numpy/doc/cython/Python.pxi =================================================================== --- branches/cdavid/numpy/doc/cython/Python.pxi 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/doc/cython/Python.pxi 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,62 +0,0 @@ -# :Author: Robert Kern -# :Copyright: 2004, Enthought, Inc. -# :License: BSD Style - - -cdef extern from "Python.h": - # Not part of the Python API, but we might as well define it here. - # Note that the exact type doesn't actually matter for Pyrex. 
- ctypedef int size_t - - # Some type declarations we need - ctypedef int Py_intptr_t - - - # String API - char* PyString_AsString(object string) - char* PyString_AS_STRING(object string) - object PyString_FromString(char* c_string) - object PyString_FromStringAndSize(char* c_string, int length) - object PyString_InternFromString(char *v) - - # Float API - object PyFloat_FromDouble(double v) - double PyFloat_AsDouble(object ob) - long PyInt_AsLong(object ob) - - - # Memory API - void* PyMem_Malloc(size_t n) - void* PyMem_Realloc(void* buf, size_t n) - void PyMem_Free(void* buf) - - void Py_DECREF(object obj) - void Py_XDECREF(object obj) - void Py_INCREF(object obj) - void Py_XINCREF(object obj) - - # CObject API - ctypedef void (*destructor1)(void* cobj) - ctypedef void (*destructor2)(void* cobj, void* desc) - int PyCObject_Check(object p) - object PyCObject_FromVoidPtr(void* cobj, destructor1 destr) - object PyCObject_FromVoidPtrAndDesc(void* cobj, void* desc, - destructor2 destr) - void* PyCObject_AsVoidPtr(object self) - void* PyCObject_GetDesc(object self) - int PyCObject_SetVoidPtr(object self, void* cobj) - - # TypeCheck API - int PyFloat_Check(object obj) - int PyInt_Check(object obj) - - # Error API - int PyErr_Occurred() - void PyErr_Clear() - int PyErr_CheckSignals() - -cdef extern from "string.h": - void *memcpy(void *s1, void *s2, int n) - -cdef extern from "math.h": - double fabs(double x) Copied: branches/cdavid/numpy/doc/cython/c_numpy.pxd (from rev 5301, trunk/numpy/doc/cython/c_numpy.pxd) Copied: branches/cdavid/numpy/doc/cython/c_python.pxd (from rev 5301, trunk/numpy/doc/cython/c_python.pxd) Deleted: branches/cdavid/numpy/doc/cython/numpy.pxi =================================================================== --- branches/cdavid/numpy/doc/cython/numpy.pxi 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/doc/cython/numpy.pxi 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,133 +0,0 @@ -# :Author: Travis Oliphant - -cdef extern from "numpy/arrayobject.h": - - cdef enum NPY_TYPES: - NPY_BOOL - NPY_BYTE - NPY_UBYTE - NPY_SHORT - NPY_USHORT - NPY_INT - NPY_UINT - NPY_LONG - NPY_ULONG - NPY_LONGLONG - NPY_ULONGLONG - NPY_FLOAT - NPY_DOUBLE - NPY_LONGDOUBLE - NPY_CFLOAT - NPY_CDOUBLE - NPY_CLONGDOUBLE - NPY_OBJECT - NPY_STRING - NPY_UNICODE - NPY_VOID - NPY_NTYPES - NPY_NOTYPE - - cdef enum requirements: - NPY_CONTIGUOUS - NPY_FORTRAN - NPY_OWNDATA - NPY_FORCECAST - NPY_ENSURECOPY - NPY_ENSUREARRAY - NPY_ELEMENTSTRIDES - NPY_ALIGNED - NPY_NOTSWAPPED - NPY_WRITEABLE - NPY_UPDATEIFCOPY - NPY_ARR_HAS_DESCR - - NPY_BEHAVED - NPY_BEHAVED_NS - NPY_CARRAY - NPY_CARRAY_RO - NPY_FARRAY - NPY_FARRAY_RO - NPY_DEFAULT - - NPY_IN_ARRAY - NPY_OUT_ARRAY - NPY_INOUT_ARRAY - NPY_IN_FARRAY - NPY_OUT_FARRAY - NPY_INOUT_FARRAY - - NPY_UPDATE_ALL - - cdef enum defines: - NPY_MAXDIMS - - ctypedef struct npy_cdouble: - double real - double imag - - ctypedef struct npy_cfloat: - double real - double imag - - ctypedef int npy_intp - - ctypedef extern class numpy.dtype [object PyArray_Descr]: - cdef int type_num, elsize, alignment - cdef char type, kind, byteorder, hasobject - cdef object fields, typeobj - - ctypedef extern class numpy.ndarray [object PyArrayObject]: - cdef char *data - cdef int nd - cdef npy_intp *dimensions - cdef npy_intp *strides - cdef object base - cdef dtype descr - cdef int flags - - ctypedef extern class numpy.flatiter [object PyArrayIterObject]: - cdef int nd_m1 - cdef npy_intp index, size - cdef ndarray ao - cdef char *dataptr - - ctypedef extern class numpy.broadcast 
[object PyArrayMultiIterObject]: - cdef int numiter - cdef npy_intp size, index - cdef int nd - cdef npy_intp *dimensions - cdef void **iters - - object PyArray_ZEROS(int ndims, npy_intp* dims, NPY_TYPES type_num, int fortran) - object PyArray_EMPTY(int ndims, npy_intp* dims, NPY_TYPES type_num, int fortran) - dtype PyArray_DescrFromTypeNum(NPY_TYPES type_num) - object PyArray_SimpleNew(int ndims, npy_intp* dims, NPY_TYPES type_num) - int PyArray_Check(object obj) - object PyArray_ContiguousFromAny(object obj, NPY_TYPES type, - int mindim, int maxdim) - object PyArray_ContiguousFromObject(object obj, NPY_TYPES type, - int mindim, int maxdim) - npy_intp PyArray_SIZE(ndarray arr) - npy_intp PyArray_NBYTES(ndarray arr) - void *PyArray_DATA(ndarray arr) - object PyArray_FromAny(object obj, dtype newtype, int mindim, int maxdim, - int requirements, object context) - object PyArray_FROMANY(object obj, NPY_TYPES type_num, int min, - int max, int requirements) - object PyArray_NewFromDescr(object subtype, dtype newtype, int nd, - npy_intp* dims, npy_intp* strides, void* data, - int flags, object parent) - - object PyArray_FROM_OTF(object obj, NPY_TYPES type, int flags) - object PyArray_EnsureArray(object) - - object PyArray_MultiIterNew(int n, ...) - - char *PyArray_MultiIter_DATA(broadcast multi, int i) - void PyArray_MultiIter_NEXTi(broadcast multi, int i) - void PyArray_MultiIter_NEXT(broadcast multi) - - object PyArray_IterNew(object arr) - void PyArray_ITER_NEXT(flatiter it) - - void import_array() Modified: branches/cdavid/numpy/doc/cython/numpyx.pyx =================================================================== --- branches/cdavid/numpy/doc/cython/numpyx.pyx 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/doc/cython/numpyx.pyx 2008-06-20 05:59:26 UTC (rev 5302) @@ -2,21 +2,20 @@ """Cython access to Numpy arrays - simple example. """ -# Includes from the python headers -include "Python.pxi" -# Include the Numpy C API for use via Cython extension code -include "numpy.pxi" +# Load the pieces of the Python C API we need to use (from c_python.pxd). Note +# that a 'cimport' is similart to a Python 'import' statement, but it provides +# access to the C part of a library instead of its Python-visible API. Please +# consult the Pyrex/Cython documentation for further details. +cimport c_python as py -################################################ -# Initialize numpy - this MUST be done before any other code is executed. -import_array() +# (C)Import the NumPy C API (from c_numpy.pxd) +cimport c_numpy as cnp -# Import the Numpy module for access to its usual Python API +# Import the NumPy module for access to its usual Python API import numpy as np - # A 'def' function is visible in the Python-imported module -def print_array_info(ndarray arr): +def print_array_info(cnp.ndarray arr): """Simple information printer about an array. Code meant to illustrate Cython/NumPy integration only.""" @@ -24,19 +23,19 @@ cdef int i print '-='*10 - # Note: the double cast here (void * first, then Py_intptr_t) is needed in - # Cython but not in Pyrex, since the casting behavior of cython is slightly - # different (and generally safer) than that of Pyrex. 
In this case, we - # just want the memory address of the actual Array object, so we cast it to - # void before doing the Py_intptr_t cast: + # Note: the double cast here (void * first, then py.Py_intptr_t) is needed + # in Cython but not in Pyrex, since the casting behavior of cython is + # slightly different (and generally safer) than that of Pyrex. In this + # case, we just want the memory address of the actual Array object, so we + # cast it to void before doing the py.Py_intptr_t cast: print 'Printing array info for ndarray at 0x%0lx'% \ - (arr,) + (arr,) print 'number of dimensions:',arr.nd - print 'address of strides: 0x%0lx'%(arr.strides,) + print 'address of strides: 0x%0lx'%(arr.strides,) print 'strides:' for i from 0<=iarr.strides[i] + print ' stride %d:'%i,arr.strides[i] print 'memory dump:' print_elements( arr.data, arr.strides, arr.dimensions, arr.nd, sizeof(double), arr.dtype ) @@ -46,12 +45,12 @@ # A 'cdef' function is NOT visible to the python side, but it is accessible to # the rest of this Cython module cdef print_elements(char *data, - Py_intptr_t* strides, - Py_intptr_t* dimensions, + py.Py_intptr_t* strides, + py.Py_intptr_t* dimensions, int nd, int elsize, object dtype): - cdef Py_intptr_t i,j + cdef py.Py_intptr_t i,j cdef void* elptr if dtype not in [np.dtype(np.object_), @@ -78,7 +77,7 @@ print_elements(data, strides+1, dimensions+1, nd-1, elsize, dtype) data = data + strides[0] -def test_methods(ndarray arr): +def test_methods(cnp.ndarray arr): """Test a few attribute accesses for an array. This illustrates how the pyrex-visible object is in practice a strange Modified: branches/cdavid/numpy/doc/example.py =================================================================== --- branches/cdavid/numpy/doc/example.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/doc/example.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -80,7 +80,9 @@ See Also -------- otherfunc : relationship (optional) - newfunc : relationship (optional) + newfunc : Relationship (optional), which could be fairly long, in which + case the line wraps here. 
+ thirdfunc, fourthfunc, fifthfunc Notes ----- @@ -121,4 +123,3 @@ """ pass - Modified: branches/cdavid/numpy/f2py/lib/parser/test_Fortran2003.py =================================================================== --- branches/cdavid/numpy/f2py/lib/parser/test_Fortran2003.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/f2py/lib/parser/test_Fortran2003.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -7,9 +7,9 @@ ############################### SECTION 2 #################################### ############################################################################### -class TestProgram(NumpyTestCase): # R201 +class TestProgram(TestCase): # R201 - def check_simple(self): + def test_simple(self): reader = get_reader('''\ subroutine foo end subroutine foo @@ -21,9 +21,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a), 'SUBROUTINE foo\nEND SUBROUTINE foo\nSUBROUTINE bar\nEND SUBROUTINE bar') -class TestSpecificationPart(NumpyTestCase): # R204 +class TestSpecificationPart(TestCase): # R204 - def check_simple(self): + def test_simple(self): from api import get_reader reader = get_reader('''\ integer a''') @@ -37,9 +37,9 @@ ############################### SECTION 3 #################################### ############################################################################### -class TestName(NumpyTestCase): # R304 +class TestName(TestCase): # R304 - def check_name(self): + def test_name(self): a = Name('a') assert isinstance(a,Name),`a` a = Name('a2') @@ -55,9 +55,9 @@ ############################### SECTION 4 #################################### ############################################################################### -class TestTypeParamValue(NumpyTestCase): # 402 +class TestTypeParamValue(TestCase): # 402 - def check_type_param_value(self): + def test_type_param_value(self): cls = Type_Param_Value a = cls('*') assert isinstance(a,cls),`a` @@ -72,9 +72,9 @@ assert isinstance(a,Level_2_Expr),`a` assert_equal(str(a),'1 + 2') -class TestIntrinsicTypeSpec(NumpyTestCase): # R403 +class TestIntrinsicTypeSpec(TestCase): # R403 - def check_intrinsic_type_spec(self): + def test_intrinsic_type_spec(self): cls = Intrinsic_Type_Spec a = cls('INTEGER') assert isinstance(a,cls),`a` @@ -109,9 +109,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'DOUBLE PRECISION') -class TestKindSelector(NumpyTestCase): # R404 +class TestKindSelector(TestCase): # R404 - def check_kind_selector(self): + def test_kind_selector(self): cls = Kind_Selector a = cls('(1)') assert isinstance(a,cls),`a` @@ -126,9 +126,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'*1') -class TestSignedIntLiteralConstant(NumpyTestCase): # R405 +class TestSignedIntLiteralConstant(TestCase): # R405 - def check_int_literal_constant(self): + def test_int_literal_constant(self): cls = Signed_Int_Literal_Constant a = cls('1') assert isinstance(a,cls),`a` @@ -152,9 +152,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'+1976354279568241_8') -class TestIntLiteralConstant(NumpyTestCase): # R406 +class TestIntLiteralConstant(TestCase): # R406 - def check_int_literal_constant(self): + def test_int_literal_constant(self): cls = Int_Literal_Constant a = cls('1') assert isinstance(a,cls),`a` @@ -178,9 +178,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'1976354279568241_8') -class TestBinaryConstant(NumpyTestCase): # R412 +class TestBinaryConstant(TestCase): # R412 - def check_boz_literal_constant(self): + def test_boz_literal_constant(self): cls = Boz_Literal_Constant bcls = Binary_Constant a = cls('B"01"') @@ 
-188,9 +188,9 @@ assert_equal(str(a),'B"01"') assert_equal(repr(a),"%s('B\"01\"')" % (bcls.__name__)) -class TestOctalConstant(NumpyTestCase): # R413 +class TestOctalConstant(TestCase): # R413 - def check_boz_literal_constant(self): + def test_boz_literal_constant(self): cls = Boz_Literal_Constant ocls = Octal_Constant a = cls('O"017"') @@ -198,9 +198,9 @@ assert_equal(str(a),'O"017"') assert_equal(repr(a),"%s('O\"017\"')" % (ocls.__name__)) -class TestHexConstant(NumpyTestCase): # R414 +class TestHexConstant(TestCase): # R414 - def check_boz_literal_constant(self): + def test_boz_literal_constant(self): cls = Boz_Literal_Constant zcls = Hex_Constant a = cls('Z"01A"') @@ -208,9 +208,9 @@ assert_equal(str(a),'Z"01A"') assert_equal(repr(a),"%s('Z\"01A\"')" % (zcls.__name__)) -class TestSignedRealLiteralConstant(NumpyTestCase): # R416 +class TestSignedRealLiteralConstant(TestCase): # R416 - def check_signed_real_literal_constant(self): + def test_signed_real_literal_constant(self): cls = Signed_Real_Literal_Constant a = cls('12.78') assert isinstance(a,cls),`a` @@ -265,9 +265,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'-10.9E-17_quad') -class TestRealLiteralConstant(NumpyTestCase): # R417 +class TestRealLiteralConstant(TestCase): # R417 - def check_real_literal_constant(self): + def test_real_literal_constant(self): cls = Real_Literal_Constant a = cls('12.78') assert isinstance(a,cls),`a` @@ -326,9 +326,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'0.0D+0') -class TestCharSelector(NumpyTestCase): # R424 +class TestCharSelector(TestCase): # R424 - def check_char_selector(self): + def test_char_selector(self): cls = Char_Selector a = cls('(len=2, kind=8)') assert isinstance(a,cls),`a` @@ -352,9 +352,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'(LEN = 2, KIND = 8)') -class TestComplexLiteralConstant(NumpyTestCase): # R421 +class TestComplexLiteralConstant(TestCase): # R421 - def check_complex_literal_constant(self): + def test_complex_literal_constant(self): cls = Complex_Literal_Constant a = cls('(1.0, -1.0)') assert isinstance(a,cls),`a` @@ -374,9 +374,9 @@ assert_equal(str(a),'(0., PI)') -class TestTypeName(NumpyTestCase): # C424 +class TestTypeName(TestCase): # C424 - def check_simple(self): + def test_simple(self): cls = Type_Name a = cls('a') assert isinstance(a,cls),`a` @@ -386,9 +386,9 @@ self.assertRaises(NoMatchError,cls,'integer') self.assertRaises(NoMatchError,cls,'doubleprecision') -class TestLengthSelector(NumpyTestCase): # R425 +class TestLengthSelector(TestCase): # R425 - def check_length_selector(self): + def test_length_selector(self): cls = Length_Selector a = cls('( len = *)') assert isinstance(a,cls),`a` @@ -399,9 +399,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'*2') -class TestCharLength(NumpyTestCase): # R426 +class TestCharLength(TestCase): # R426 - def check_char_length(self): + def test_char_length(self): cls = Char_Length a = cls('(1)') assert isinstance(a,cls),`a` @@ -420,9 +420,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'(:)') -class TestCharLiteralConstant(NumpyTestCase): # R427 +class TestCharLiteralConstant(TestCase): # R427 - def check_char_literal_constant(self): + def test_char_literal_constant(self): cls = Char_Literal_Constant a = cls('NIH_"DO"') assert isinstance(a,cls),`a` @@ -454,9 +454,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'"hey ha(ada)\t"') -class TestLogicalLiteralConstant(NumpyTestCase): # R428 +class TestLogicalLiteralConstant(TestCase): # R428 - def 
check_logical_literal_constant(self): + def test_logical_literal_constant(self): cls = Logical_Literal_Constant a = cls('.TRUE.') assert isinstance(a,cls),`a` @@ -475,9 +475,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'.TRUE._HA') -class TestDerivedTypeStmt(NumpyTestCase): # R430 +class TestDerivedTypeStmt(TestCase): # R430 - def check_simple(self): + def test_simple(self): cls = Derived_Type_Stmt a = cls('type a') assert isinstance(a, cls),`a` @@ -492,18 +492,18 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'TYPE, PRIVATE, ABSTRACT :: a(b, c)') -class TestTypeName(NumpyTestCase): # C423 +class TestTypeName(TestCase): # C423 - def check_simple(self): + def test_simple(self): cls = Type_Name a = cls('a') assert isinstance(a, cls),`a` assert_equal(str(a),'a') assert_equal(repr(a),"Type_Name('a')") -class TestTypeAttrSpec(NumpyTestCase): # R431 +class TestTypeAttrSpec(TestCase): # R431 - def check_simple(self): + def test_simple(self): cls = Type_Attr_Spec a = cls('abstract') assert isinstance(a, cls),`a` @@ -523,9 +523,9 @@ assert_equal(str(a),'PRIVATE') -class TestEndTypeStmt(NumpyTestCase): # R433 +class TestEndTypeStmt(TestCase): # R433 - def check_simple(self): + def test_simple(self): cls = End_Type_Stmt a = cls('end type') assert isinstance(a, cls),`a` @@ -536,18 +536,18 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'END TYPE a') -class TestSequenceStmt(NumpyTestCase): # R434 +class TestSequenceStmt(TestCase): # R434 - def check_simple(self): + def test_simple(self): cls = Sequence_Stmt a = cls('sequence') assert isinstance(a, cls),`a` assert_equal(str(a),'SEQUENCE') assert_equal(repr(a),"Sequence_Stmt('SEQUENCE')") -class TestTypeParamDefStmt(NumpyTestCase): # R435 +class TestTypeParamDefStmt(TestCase): # R435 - def check_simple(self): + def test_simple(self): cls = Type_Param_Def_Stmt a = cls('integer ,kind :: a') assert isinstance(a, cls),`a` @@ -558,9 +558,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'INTEGER*2, LEN :: a = 3, b = 2 + c') -class TestTypeParamDecl(NumpyTestCase): # R436 +class TestTypeParamDecl(TestCase): # R436 - def check_simple(self): + def test_simple(self): cls = Type_Param_Decl a = cls('a=2') assert isinstance(a, cls),`a` @@ -571,9 +571,9 @@ assert isinstance(a, Name),`a` assert_equal(str(a),'a') -class TestTypeParamAttrSpec(NumpyTestCase): # R437 +class TestTypeParamAttrSpec(TestCase): # R437 - def check_simple(self): + def test_simple(self): cls = Type_Param_Attr_Spec a = cls('kind') assert isinstance(a, cls),`a` @@ -584,9 +584,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'LEN') -class TestComponentAttrSpec(NumpyTestCase): # R441 +class TestComponentAttrSpec(TestCase): # R441 - def check_simple(self): + def test_simple(self): cls = Component_Attr_Spec a = cls('pointer') assert isinstance(a, cls),`a` @@ -605,9 +605,9 @@ assert isinstance(a, Access_Spec),`a` assert_equal(str(a),'PRIVATE') -class TestComponentDecl(NumpyTestCase): # R442 +class TestComponentDecl(TestCase): # R442 - def check_simple(self): + def test_simple(self): cls = Component_Decl a = cls('a(1)') assert isinstance(a, cls),`a` @@ -626,9 +626,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'a(1) => NULL') -class TestFinalBinding(NumpyTestCase): # R454 +class TestFinalBinding(TestCase): # R454 - def check_simple(self): + def test_simple(self): cls = Final_Binding a = cls('final a, b') assert isinstance(a,cls),`a` @@ -639,9 +639,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'FINAL :: a') -class TestDerivedTypeSpec(NumpyTestCase): # R455 
+class TestDerivedTypeSpec(TestCase): # R455 - def check_simple(self): + def test_simple(self): cls = Derived_Type_Spec a = cls('a(b)') assert isinstance(a,cls),`a` @@ -660,9 +660,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'a()') -class TestTypeParamSpec(NumpyTestCase): # R456 +class TestTypeParamSpec(TestCase): # R456 - def check_type_param_spec(self): + def test_type_param_spec(self): cls = Type_Param_Spec a = cls('a=1') assert isinstance(a,cls),`a` @@ -677,9 +677,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'k = :') -class TestTypeParamSpecList(NumpyTestCase): # R456-list +class TestTypeParamSpecList(TestCase): # R456-list - def check_type_param_spec_list(self): + def test_type_param_spec_list(self): cls = Type_Param_Spec_List a = cls('a,b') @@ -694,9 +694,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'k = a, c, g = 1') -class TestStructureConstructor2(NumpyTestCase): # R457.b +class TestStructureConstructor2(TestCase): # R457.b - def check_simple(self): + def test_simple(self): cls = Structure_Constructor_2 a = cls('k=a') assert isinstance(a,cls),`a` @@ -707,9 +707,9 @@ assert isinstance(a,Name),`a` assert_equal(str(a),'a') -class TestStructureConstructor(NumpyTestCase): # R457 +class TestStructureConstructor(TestCase): # R457 - def check_structure_constructor(self): + def test_structure_constructor(self): cls = Structure_Constructor a = cls('t()') assert isinstance(a,cls),`a` @@ -729,9 +729,9 @@ assert isinstance(a,Name),`a` assert_equal(str(a),'a') -class TestComponentSpec(NumpyTestCase): # R458 +class TestComponentSpec(TestCase): # R458 - def check_simple(self): + def test_simple(self): cls = Component_Spec a = cls('k=a') assert isinstance(a,cls),`a` @@ -750,9 +750,9 @@ assert isinstance(a, Component_Spec),`a` assert_equal(str(a),'s = a % b') -class TestComponentSpecList(NumpyTestCase): # R458-list +class TestComponentSpecList(TestCase): # R458-list - def check_simple(self): + def test_simple(self): cls = Component_Spec_List a = cls('k=a, b') assert isinstance(a,cls),`a` @@ -763,9 +763,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'k = a, c') -class TestArrayConstructor(NumpyTestCase): # R465 +class TestArrayConstructor(TestCase): # R465 - def check_simple(self): + def test_simple(self): cls = Array_Constructor a = cls('(/a/)') assert isinstance(a,cls),`a` @@ -785,9 +785,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'[INTEGER :: a, b]') -class TestAcSpec(NumpyTestCase): # R466 +class TestAcSpec(TestCase): # R466 - def check_ac_spec(self): + def test_ac_spec(self): cls = Ac_Spec a = cls('integer ::') assert isinstance(a,cls),`a` @@ -806,9 +806,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'INTEGER :: a, (a, b, n = 1, 5)') -class TestAcValueList(NumpyTestCase): # R469-list +class TestAcValueList(TestCase): # R469-list - def check_ac_value_list(self): + def test_ac_value_list(self): cls = Ac_Value_List a = cls('a, b') assert isinstance(a,cls),`a` @@ -819,18 +819,18 @@ assert isinstance(a,Name),`a` assert_equal(str(a),'a') -class TestAcImpliedDo(NumpyTestCase): # R470 +class TestAcImpliedDo(TestCase): # R470 - def check_ac_implied_do(self): + def test_ac_implied_do(self): cls = Ac_Implied_Do a = cls('( a, b, n = 1, 5 )') assert isinstance(a,cls),`a` assert_equal(str(a),'(a, b, n = 1, 5)') assert_equal(repr(a),"Ac_Implied_Do(Ac_Value_List(',', (Name('a'), Name('b'))), Ac_Implied_Do_Control(Name('n'), [Int_Literal_Constant('1', None), Int_Literal_Constant('5', None)]))") -class TestAcImpliedDoControl(NumpyTestCase): # R471 +class 
TestAcImpliedDoControl(TestCase): # R471 - def check_ac_implied_do_control(self): + def test_ac_implied_do_control(self): cls = Ac_Implied_Do_Control a = cls('n = 3, 5') assert isinstance(a,cls),`a` @@ -845,9 +845,9 @@ ############################### SECTION 5 #################################### ############################################################################### -class TestTypeDeclarationStmt(NumpyTestCase): # R501 +class TestTypeDeclarationStmt(TestCase): # R501 - def check_simple(self): + def test_simple(self): cls = Type_Declaration_Stmt a = cls('integer a') assert isinstance(a, cls),`a` @@ -869,9 +869,9 @@ a = cls('DOUBLE PRECISION ALPHA, BETA') assert isinstance(a, cls),`a` -class TestDeclarationTypeSpec(NumpyTestCase): # R502 +class TestDeclarationTypeSpec(TestCase): # R502 - def check_simple(self): + def test_simple(self): cls = Declaration_Type_Spec a = cls('Integer*2') assert isinstance(a, Intrinsic_Type_Spec),`a` @@ -882,9 +882,9 @@ assert_equal(str(a), 'TYPE(foo)') assert_equal(repr(a), "Declaration_Type_Spec('TYPE', Type_Name('foo'))") -class TestAttrSpec(NumpyTestCase): # R503 +class TestAttrSpec(TestCase): # R503 - def check_simple(self): + def test_simple(self): cls = Attr_Spec a = cls('allocatable') assert isinstance(a, cls),`a` @@ -894,27 +894,27 @@ assert isinstance(a, Dimension_Attr_Spec),`a` assert_equal(str(a),'DIMENSION(a)') -class TestDimensionAttrSpec(NumpyTestCase): # R503.d +class TestDimensionAttrSpec(TestCase): # R503.d - def check_simple(self): + def test_simple(self): cls = Dimension_Attr_Spec a = cls('dimension(a)') assert isinstance(a, cls),`a` assert_equal(str(a),'DIMENSION(a)') assert_equal(repr(a),"Dimension_Attr_Spec('DIMENSION', Explicit_Shape_Spec(None, Name('a')))") -class TestIntentAttrSpec(NumpyTestCase): # R503.f +class TestIntentAttrSpec(TestCase): # R503.f - def check_simple(self): + def test_simple(self): cls = Intent_Attr_Spec a = cls('intent(in)') assert isinstance(a, cls),`a` assert_equal(str(a),'INTENT(IN)') assert_equal(repr(a),"Intent_Attr_Spec('INTENT', Intent_Spec('IN'))") -class TestEntityDecl(NumpyTestCase): # 504 +class TestEntityDecl(TestCase): # 504 - def check_simple(self): + def test_simple(self): cls = Entity_Decl a = cls('a(1)') assert isinstance(a, cls),`a` @@ -929,9 +929,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'a(1)*(3) = 2') -class TestAccessSpec(NumpyTestCase): # R508 +class TestAccessSpec(TestCase): # R508 - def check_simple(self): + def test_simple(self): cls = Access_Spec a = cls('private') assert isinstance(a, cls),`a` @@ -942,9 +942,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'PUBLIC') -class TestLanguageBindingSpec(NumpyTestCase): # R509 +class TestLanguageBindingSpec(TestCase): # R509 - def check_simple(self): + def test_simple(self): cls = Language_Binding_Spec a = cls('bind(c)') assert isinstance(a, cls),`a` @@ -955,9 +955,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'BIND(C, NAME = "hey")') -class TestExplicitShapeSpec(NumpyTestCase): # R511 +class TestExplicitShapeSpec(TestCase): # R511 - def check_simple(self): + def test_simple(self): cls = Explicit_Shape_Spec a = cls('a:b') assert isinstance(a, cls),`a` @@ -968,9 +968,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'a') -class TestUpperBound(NumpyTestCase): # R513 +class TestUpperBound(TestCase): # R513 - def check_simple(self): + def test_simple(self): cls = Upper_Bound a = cls('a') assert isinstance(a, Name),`a` @@ -978,9 +978,9 @@ self.assertRaises(NoMatchError,cls,'*') -class 
TestAssumedShapeSpec(NumpyTestCase): # R514 +class TestAssumedShapeSpec(TestCase): # R514 - def check_simple(self): + def test_simple(self): cls = Assumed_Shape_Spec a = cls(':') assert isinstance(a, cls),`a` @@ -991,9 +991,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'a :') -class TestDeferredShapeSpec(NumpyTestCase): # R515 +class TestDeferredShapeSpec(TestCase): # R515 - def check_simple(self): + def test_simple(self): cls = Deferred_Shape_Spec a = cls(':') assert isinstance(a, cls),`a` @@ -1001,9 +1001,9 @@ assert_equal(repr(a),'Deferred_Shape_Spec(None, None)') -class TestAssumedSizeSpec(NumpyTestCase): # R516 +class TestAssumedSizeSpec(TestCase): # R516 - def check_simple(self): + def test_simple(self): cls = Assumed_Size_Spec a = cls('*') assert isinstance(a, cls),`a` @@ -1022,9 +1022,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'a : b, 1 : *') -class TestAccessStmt(NumpyTestCase): # R518 +class TestAccessStmt(TestCase): # R518 - def check_simple(self): + def test_simple(self): cls = Access_Stmt a = cls('private') assert isinstance(a, cls),`a` @@ -1039,9 +1039,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'PUBLIC :: a') -class TestParameterStmt(NumpyTestCase): # R538 +class TestParameterStmt(TestCase): # R538 - def check_simple(self): + def test_simple(self): cls = Parameter_Stmt a = cls('parameter(a=1)') assert isinstance(a, cls),`a` @@ -1056,18 +1056,18 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'PARAMETER(ONE = 1.0D+0, ZERO = 0.0D+0)') -class TestNamedConstantDef(NumpyTestCase): # R539 +class TestNamedConstantDef(TestCase): # R539 - def check_simple(self): + def test_simple(self): cls = Named_Constant_Def a = cls('a=1') assert isinstance(a, cls),`a` assert_equal(str(a),'a = 1') assert_equal(repr(a),"Named_Constant_Def(Name('a'), Int_Literal_Constant('1', None))") -class TestPointerDecl(NumpyTestCase): # R541 +class TestPointerDecl(TestCase): # R541 - def check_simple(self): + def test_simple(self): cls = Pointer_Decl a = cls('a(:)') assert isinstance(a, cls),`a` @@ -1078,9 +1078,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'a(:, :)') -class TestImplicitStmt(NumpyTestCase): # R549 +class TestImplicitStmt(TestCase): # R549 - def check_simple(self): + def test_simple(self): cls = Implicit_Stmt a = cls('implicitnone') assert isinstance(a, cls),`a` @@ -1091,9 +1091,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'IMPLICIT REAL(A - D), DOUBLE PRECISION(R - T, X), TYPE(a)(Y - Z)') -class TestImplicitSpec(NumpyTestCase): # R550 +class TestImplicitSpec(TestCase): # R550 - def check_simple(self): + def test_simple(self): cls = Implicit_Spec a = cls('integer (a-z)') assert isinstance(a, cls),`a` @@ -1104,9 +1104,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'DOUBLE COMPLEX(R, D - G)') -class TestLetterSpec(NumpyTestCase): # R551 +class TestLetterSpec(TestCase): # R551 - def check_simple(self): + def test_simple(self): cls = Letter_Spec a = cls('a-z') assert isinstance(a, cls),`a` @@ -1117,9 +1117,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'D') -class TestEquivalenceStmt(NumpyTestCase): # R554 +class TestEquivalenceStmt(TestCase): # R554 - def check_simple(self): + def test_simple(self): cls = Equivalence_Stmt a = cls('equivalence (a, b ,z)') assert isinstance(a, cls),`a` @@ -1130,9 +1130,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'EQUIVALENCE(a, b, z), (b, l)') -class TestCommonStmt(NumpyTestCase): # R557 +class TestCommonStmt(TestCase): # R557 - def check_simple(self): + def test_simple(self): cls = 
Common_Stmt a = cls('common a') assert isinstance(a, cls),`a` @@ -1151,9 +1151,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'COMMON /name/ a, b(4, 5) // c /ljuks/ g(2)') -class TestCommonBlockObject(NumpyTestCase): # R558 +class TestCommonBlockObject(TestCase): # R558 - def check_simple(self): + def test_simple(self): cls = Common_Block_Object a = cls('a(2)') assert isinstance(a, cls),`a` @@ -1169,9 +1169,9 @@ ############################### SECTION 6 #################################### ############################################################################### -class TestSubstring(NumpyTestCase): # R609 +class TestSubstring(TestCase): # R609 - def check_simple(self): + def test_simple(self): cls = Substring a = cls('a(:)') assert isinstance(a, cls),`a` @@ -1184,9 +1184,9 @@ assert_equal(repr(a),"Substring(Name('a'), Substring_Range(Int_Literal_Constant('1', None), Int_Literal_Constant('2', None)))") -class TestSubstringRange(NumpyTestCase): # R611 +class TestSubstringRange(TestCase): # R611 - def check_simple(self): + def test_simple(self): cls = Substring_Range a = cls(':') assert isinstance(a, cls),`a` @@ -1215,9 +1215,9 @@ assert_equal(str(a),': b') -class TestDataRef(NumpyTestCase): # R612 +class TestDataRef(TestCase): # R612 - def check_data_ref(self): + def test_data_ref(self): cls = Data_Ref a = cls('a%b') assert isinstance(a,cls),`a` @@ -1228,17 +1228,17 @@ assert isinstance(a,Name),`a` assert_equal(str(a),'a') -class TestPartRef(NumpyTestCase): # R613 +class TestPartRef(TestCase): # R613 - def check_part_ref(self): + def test_part_ref(self): cls = Part_Ref a = cls('a') assert isinstance(a, Name),`a` assert_equal(str(a),'a') -class TestTypeParamInquiry(NumpyTestCase): # R615 +class TestTypeParamInquiry(TestCase): # R615 - def check_simple(self): + def test_simple(self): cls = Type_Param_Inquiry a = cls('a % b') assert isinstance(a,cls),`a` @@ -1246,9 +1246,9 @@ assert_equal(repr(a),"Type_Param_Inquiry(Name('a'), '%', Name('b'))") -class TestArraySection(NumpyTestCase): # R617 +class TestArraySection(TestCase): # R617 - def check_array_section(self): + def test_array_section(self): cls = Array_Section a = cls('a(:)') assert isinstance(a,cls),`a` @@ -1260,9 +1260,9 @@ assert_equal(str(a),'a(2 :)') -class TestSectionSubscript(NumpyTestCase): # R619 +class TestSectionSubscript(TestCase): # R619 - def check_simple(self): + def test_simple(self): cls = Section_Subscript a = cls('1:2') @@ -1273,9 +1273,9 @@ assert isinstance(a, Name),`a` assert_equal(str(a),'zzz') -class TestSectionSubscriptList(NumpyTestCase): # R619-list +class TestSectionSubscriptList(TestCase): # R619-list - def check_simple(self): + def test_simple(self): cls = Section_Subscript_List a = cls('a,2') assert isinstance(a,cls),`a` @@ -1290,9 +1290,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),': : 1, 3') -class TestSubscriptTriplet(NumpyTestCase): # R620 +class TestSubscriptTriplet(TestCase): # R620 - def check_simple(self): + def test_simple(self): cls = Subscript_Triplet a = cls('a:b') assert isinstance(a,cls),`a` @@ -1319,18 +1319,18 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'a + 1 :') -class TestAllocOpt(NumpyTestCase): # R624 +class TestAllocOpt(TestCase): # R624 - def check_simple(self): + def test_simple(self): cls = Alloc_Opt a = cls('stat=a') assert isinstance(a, cls),`a` assert_equal(str(a),'STAT = a') assert_equal(repr(a),"Alloc_Opt('STAT', Name('a'))") -class TestNullifyStmt(NumpyTestCase): # R633 +class TestNullifyStmt(TestCase): # R633 - def check_simple(self): + def 
test_simple(self): cls = Nullify_Stmt a = cls('nullify (a)') assert isinstance(a, cls),`a` @@ -1345,9 +1345,9 @@ ############################### SECTION 7 #################################### ############################################################################### -class TestPrimary(NumpyTestCase): # R701 +class TestPrimary(TestCase): # R701 - def check_simple(self): + def test_simple(self): cls = Primary a = cls('a') assert isinstance(a,Name),`a` @@ -1401,9 +1401,9 @@ assert isinstance(a,Real_Literal_Constant),`a` assert_equal(str(a),'0.0E-1') -class TestParenthesis(NumpyTestCase): # R701.h +class TestParenthesis(TestCase): # R701.h - def check_simple(self): + def test_simple(self): cls = Parenthesis a = cls('(a)') assert isinstance(a,cls),`a` @@ -1422,9 +1422,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'(a + (a + c))') -class TestLevel1Expr(NumpyTestCase): # R702 +class TestLevel1Expr(TestCase): # R702 - def check_simple(self): + def test_simple(self): cls = Level_1_Expr a = cls('.hey. a') assert isinstance(a,cls),`a` @@ -1433,9 +1433,9 @@ self.assertRaises(NoMatchError,cls,'.not. a') -class TestMultOperand(NumpyTestCase): # R704 +class TestMultOperand(TestCase): # R704 - def check_simple(self): + def test_simple(self): cls = Mult_Operand a = cls('a**b') assert isinstance(a,cls),`a` @@ -1454,9 +1454,9 @@ assert isinstance(a,Real_Literal_Constant),`a` assert_equal(str(a),'0.0E-1') -class TestAddOperand(NumpyTestCase): # R705 +class TestAddOperand(TestCase): # R705 - def check_simple(self): + def test_simple(self): cls = Add_Operand a = cls('a*b') assert isinstance(a,cls),`a` @@ -1475,9 +1475,9 @@ assert isinstance(a,Real_Literal_Constant),`a` assert_equal(str(a),'0.0E-1') -class TestLevel2Expr(NumpyTestCase): # R706 +class TestLevel2Expr(TestCase): # R706 - def check_simple(self): + def test_simple(self): cls = Level_2_Expr a = cls('a+b') assert isinstance(a,cls),`a` @@ -1509,9 +1509,9 @@ assert_equal(str(a),'0.0E-1') -class TestLevel2UnaryExpr(NumpyTestCase): +class TestLevel2UnaryExpr(TestCase): - def check_simple(self): + def test_simple(self): cls = Level_2_Unary_Expr a = cls('+a') assert isinstance(a,cls),`a` @@ -1531,9 +1531,9 @@ assert_equal(str(a),'0.0E-1') -class TestLevel3Expr(NumpyTestCase): # R710 +class TestLevel3Expr(TestCase): # R710 - def check_simple(self): + def test_simple(self): cls = Level_3_Expr a = cls('a//b') assert isinstance(a,cls),`a` @@ -1544,9 +1544,9 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'"a" // "b"') -class TestLevel4Expr(NumpyTestCase): # R712 +class TestLevel4Expr(TestCase): # R712 - def check_simple(self): + def test_simple(self): cls = Level_4_Expr a = cls('a.eq.b') assert isinstance(a,cls),`a` @@ -1593,18 +1593,18 @@ assert isinstance(a,cls),`a` assert_equal(str(a),'a > b') -class TestAndOperand(NumpyTestCase): # R714 +class TestAndOperand(TestCase): # R714 - def check_simple(self): + def test_simple(self): cls = And_Operand a = cls('.not.a') assert isinstance(a,cls),`a` assert_equal(str(a),'.NOT. 
a') assert_equal(repr(a),"And_Operand('.NOT.', Name('a'))") -class TestOrOperand(NumpyTestCase): # R715 +class TestOrOperand(TestCase): # R715 - def check_simple(self): + def test_simple(self): cls = Or_Operand a = cls('a.and.b') assert isinstance(a,cls),`a` @@ -1612,9 +1612,9 @@ assert_equal(repr(a),"Or_Operand(Name('a'), '.AND.', Name('b'))") -class TestEquivOperand(NumpyTestCase): # R716 +class TestEquivOperand(TestCase): # R716 - def check_simple(self): + def test_simple(self): cls = Equiv_Operand a = cls('a.or.b') assert isinstance(a,cls),`a` @@ -1622,9 +1622,9 @@ assert_equal(repr(a),"Equiv_Operand(Name('a'), '.OR.', Name('b'))") -class TestLevel5Expr(NumpyTestCase): # R717 +class TestLevel5Expr(TestCase): # R717 - def check_simple(self): + def test_simple(self): cls = Level_5_Expr a = cls('a.eqv.b') assert isinstance(a,cls),`a` @@ -1639,9 +1639,9 @@ assert isinstance(a,Level_4_Expr),`a` assert_equal(str(a),'a .EQ. b') -class TestExpr(NumpyTestCase): # R722 +class TestExpr(TestCase): # R722 - def check_simple(self): + def test_simple(self): cls = Expr a = cls('a .op. b') assert isinstance(a,cls),`a` @@ -1661,9 +1661,9 @@ self.assertRaises(NoMatchError,Scalar_Int_Expr,'a,b') -class TestAssignmentStmt(NumpyTestCase): # R734 +class TestAssignmentStmt(TestCase): # R734 - def check_simple(self): + def test_simple(self): cls = Assignment_Stmt a = cls('a = b') assert isinstance(a, cls),`a` @@ -1678,27 +1678,27 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'a % c = b + c') -class TestProcComponentRef(NumpyTestCase): # R741 +class TestProcComponentRef(TestCase): # R741 - def check_proc_component_ref(self): + def test_proc_component_ref(self): cls = Proc_Component_Ref a = cls('a % b') assert isinstance(a,cls),`a` assert_equal(str(a),'a % b') assert_equal(repr(a),"Proc_Component_Ref(Name('a'), '%', Name('b'))") -class TestWhereStmt(NumpyTestCase): # R743 +class TestWhereStmt(TestCase): # R743 - def check_simple(self): + def test_simple(self): cls = Where_Stmt a = cls('where (a) c=2') assert isinstance(a,cls),`a` assert_equal(str(a),'WHERE (a) c = 2') assert_equal(repr(a),"Where_Stmt(Name('a'), Assignment_Stmt(Name('c'), '=', Int_Literal_Constant('2', None)))") -class TestWhereConstructStmt(NumpyTestCase): # R745 +class TestWhereConstructStmt(TestCase): # R745 - def check_simple(self): + def test_simple(self): cls = Where_Construct_Stmt a = cls('where (a)') assert isinstance(a,cls),`a` @@ -1710,9 +1710,9 @@ ############################### SECTION 8 #################################### ############################################################################### -class TestContinueStmt(NumpyTestCase): # R848 +class TestContinueStmt(TestCase): # R848 - def check_simple(self): + def test_simple(self): cls = Continue_Stmt a = cls('continue') assert isinstance(a, cls),`a` @@ -1723,9 +1723,9 @@ ############################### SECTION 9 #################################### ############################################################################### -class TestIoUnit(NumpyTestCase): # R901 +class TestIoUnit(TestCase): # R901 - def check_simple(self): + def test_simple(self): cls = Io_Unit a = cls('*') assert isinstance(a, cls),`a` @@ -1735,18 +1735,18 @@ assert isinstance(a, Name),`a` assert_equal(str(a),'a') -class TestWriteStmt(NumpyTestCase): # R911 +class TestWriteStmt(TestCase): # R911 - def check_simple(self): + def test_simple(self): cls = Write_Stmt a = cls('write (123)"hey"') assert isinstance(a, cls),`a` assert_equal(str(a),'WRITE(UNIT = 123) "hey"') 
assert_equal(repr(a),'Write_Stmt(Io_Control_Spec_List(\',\', (Io_Control_Spec(\'UNIT\', Int_Literal_Constant(\'123\', None)),)), Char_Literal_Constant(\'"hey"\', None))') -class TestPrintStmt(NumpyTestCase): # R912 +class TestPrintStmt(TestCase): # R912 - def check_simple(self): + def test_simple(self): cls = Print_Stmt a = cls('print 123') assert isinstance(a, cls),`a` @@ -1757,18 +1757,18 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'PRINT *, "a=", a') -class TestIoControlSpec(NumpyTestCase): # R913 +class TestIoControlSpec(TestCase): # R913 - def check_simple(self): + def test_simple(self): cls = Io_Control_Spec a = cls('end=123') assert isinstance(a, cls),`a` assert_equal(str(a),'END = 123') assert_equal(repr(a),"Io_Control_Spec('END', Label('123'))") -class TestIoControlSpecList(NumpyTestCase): # R913-list +class TestIoControlSpecList(TestCase): # R913-list - def check_simple(self): + def test_simple(self): cls = Io_Control_Spec_List a = cls('end=123') assert isinstance(a, cls),`a` @@ -1793,9 +1793,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'UNIT = 123, NML = a') -class TestFormat(NumpyTestCase): # R914 +class TestFormat(TestCase): # R914 - def check_simple(self): + def test_simple(self): cls = Format a = cls('*') assert isinstance(a, cls),`a` @@ -1810,17 +1810,17 @@ assert isinstance(a, Label),`a` assert_equal(str(a),'123') -class TestWaitStmt(NumpyTestCase): # R921 +class TestWaitStmt(TestCase): # R921 - def check_simple(self): + def test_simple(self): cls = Wait_Stmt a = cls('wait (123)') assert isinstance(a, cls),`a` assert_equal(str(a),'WAIT(UNIT = 123)') -class TestWaitSpec(NumpyTestCase): # R922 +class TestWaitSpec(TestCase): # R922 - def check_simple(self): + def test_simple(self): cls = Wait_Spec a = cls('123') assert isinstance(a, cls),`a` @@ -1840,9 +1840,9 @@ ############################### SECTION 11 #################################### ############################################################################### -class TestUseStmt(NumpyTestCase): # R1109 +class TestUseStmt(TestCase): # R1109 - def check_simple(self): + def test_simple(self): cls = Use_Stmt a = cls('use a') assert isinstance(a, cls),`a` @@ -1861,9 +1861,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'USE, INTRINSIC :: a, OPERATOR(.HEY.) 
=> OPERATOR(.HOO.), c => g') -class TestModuleNature(NumpyTestCase): # R1110 +class TestModuleNature(TestCase): # R1110 - def check_simple(self): + def test_simple(self): cls = Module_Nature a = cls('intrinsic') assert isinstance(a, cls),`a` @@ -1878,9 +1878,9 @@ ############################### SECTION 12 #################################### ############################################################################### -class TestFunctionReference(NumpyTestCase): # R1217 +class TestFunctionReference(TestCase): # R1217 - def check_simple(self): + def test_simple(self): cls = Function_Reference a = cls('f()') assert isinstance(a,cls),`a` @@ -1892,18 +1892,18 @@ assert_equal(str(a),'f(2, k = 1, a)') -class TestProcedureDesignator(NumpyTestCase): # R1219 +class TestProcedureDesignator(TestCase): # R1219 - def check_procedure_designator(self): + def test_procedure_designator(self): cls = Procedure_Designator a = cls('a%b') assert isinstance(a,cls),`a` assert_equal(str(a),'a % b') assert_equal(repr(a),"Procedure_Designator(Name('a'), '%', Name('b'))") -class TestActualArgSpec(NumpyTestCase): # R1220 +class TestActualArgSpec(TestCase): # R1220 - def check_simple(self): + def test_simple(self): cls = Actual_Arg_Spec a = cls('k=a') assert isinstance(a,cls),`a` @@ -1914,9 +1914,9 @@ assert isinstance(a,Name),`a` assert_equal(str(a),'a') -class TestActualArgSpecList(NumpyTestCase): +class TestActualArgSpecList(TestCase): - def check_simple(self): + def test_simple(self): cls = Actual_Arg_Spec_List a = cls('a,b') assert isinstance(a,cls),`a` @@ -1935,18 +1935,18 @@ assert isinstance(a,Name),`a` assert_equal(str(a),'a') -class TestAltReturnSpec(NumpyTestCase): # R1222 +class TestAltReturnSpec(TestCase): # R1222 - def check_alt_return_spec(self): + def test_alt_return_spec(self): cls = Alt_Return_Spec a = cls('* 123') assert isinstance(a,cls),`a` assert_equal(str(a),'*123') assert_equal(repr(a),"Alt_Return_Spec(Label('123'))") -class TestPrefix(NumpyTestCase): # R1227 +class TestPrefix(TestCase): # R1227 - def check_simple(self): + def test_simple(self): cls = Prefix a = cls('pure recursive') assert isinstance(a, cls),`a` @@ -1957,9 +1957,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'INTEGER*2 PURE') -class TestPrefixSpec(NumpyTestCase): # R1228 +class TestPrefixSpec(TestCase): # R1228 - def check_simple(self): + def test_simple(self): cls = Prefix_Spec a = cls('pure') assert isinstance(a, cls),`a` @@ -1978,9 +1978,9 @@ assert isinstance(a, Intrinsic_Type_Spec),`a` assert_equal(str(a),'INTEGER*2') -class TestSubroutineSubprogram(NumpyTestCase): # R1231 +class TestSubroutineSubprogram(TestCase): # R1231 - def check_simple(self): + def test_simple(self): from api import get_reader reader = get_reader('''\ subroutine foo @@ -2000,9 +2000,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'SUBROUTINE foo\n INTEGER :: a\nEND SUBROUTINE foo') -class TestSubroutineStmt(NumpyTestCase): # R1232 +class TestSubroutineStmt(TestCase): # R1232 - def check_simple(self): + def test_simple(self): cls = Subroutine_Stmt a = cls('subroutine foo') assert isinstance(a, cls),`a` @@ -2021,9 +2021,9 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'SUBROUTINE foo BIND(C)') -class TestEndSubroutineStmt(NumpyTestCase): # R1234 +class TestEndSubroutineStmt(TestCase): # R1234 - def check_simple(self): + def test_simple(self): cls = End_Subroutine_Stmt a = cls('end subroutine foo') assert isinstance(a, cls),`a` @@ -2038,18 +2038,18 @@ assert isinstance(a, cls),`a` assert_equal(str(a),'END SUBROUTINE') -class 
TestReturnStmt(NumpyTestCase): # R1236 +class TestReturnStmt(TestCase): # R1236 - def check_simple(self): + def test_simple(self): cls = Return_Stmt a = cls('return') assert isinstance(a, cls),`a` assert_equal(str(a), 'RETURN') assert_equal(repr(a), 'Return_Stmt(None)') -class TestContains(NumpyTestCase): # R1237 +class TestContains(TestCase): # R1237 - def check_simple(self): + def test_simple(self): cls = Contains_Stmt a = cls('Contains') assert isinstance(a, cls),`a` @@ -2098,4 +2098,4 @@ print '-----' if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/f2py/lib/parser/test_parser.py =================================================================== --- branches/cdavid/numpy/f2py/lib/parser/test_parser.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/f2py/lib/parser/test_parser.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -34,25 +34,25 @@ return r raise ValueError, 'parsing %r with %s pattern failed' % (line, cls.__name__) -class TestStatements(NumpyTestCase): +class TestStatements(TestCase): - def check_assignment(self): + def test_assignment(self): assert_equal(parse(Assignment,'a=b'), 'a = b') assert_equal(parse(PointerAssignment,'a=>b'), 'a => b') assert_equal(parse(Assignment,'a (2)=b(n,m)'), 'a(2) = b(n,m)') assert_equal(parse(Assignment,'a % 2(2,4)=b(a(i))'), 'a%2(2,4) = b(a(i))') - def check_assign(self): + def test_assign(self): assert_equal(parse(Assign,'assign 10 to a'),'ASSIGN 10 TO a') - def check_call(self): + def test_call(self): assert_equal(parse(Call,'call a'),'CALL a') assert_equal(parse(Call,'call a()'),'CALL a') assert_equal(parse(Call,'call a(1)'),'CALL a(1)') assert_equal(parse(Call,'call a(1,2)'),'CALL a(1, 2)') assert_equal(parse(Call,'call a % 2 ( n , a+1 )'),'CALL a % 2(n, a+1)') - def check_goto(self): + def test_goto(self): assert_equal(parse(Goto,'go to 19'),'GO TO 19') assert_equal(parse(Goto,'goto 19'),'GO TO 19') assert_equal(parse(ComputedGoto,'goto (1, 2 ,3) a+b(2)'), @@ -63,29 +63,29 @@ assert_equal(parse(AssignedGoto,'goto a ( 1 )'),'GO TO a (1)') assert_equal(parse(AssignedGoto,'goto a ( 1 ,2)'),'GO TO a (1, 2)') - def check_continue(self): + def test_continue(self): assert_equal(parse(Continue,'continue'),'CONTINUE') - def check_return(self): + def test_return(self): assert_equal(parse(Return,'return'),'RETURN') assert_equal(parse(Return,'return a'),'RETURN a') assert_equal(parse(Return,'return a+1'),'RETURN a+1') assert_equal(parse(Return,'return a(c, a)'),'RETURN a(c, a)') - def check_stop(self): + def test_stop(self): assert_equal(parse(Stop,'stop'),'STOP') assert_equal(parse(Stop,'stop 1'),'STOP 1') assert_equal(parse(Stop,'stop "a"'),'STOP "a"') assert_equal(parse(Stop,'stop "a b"'),'STOP "a b"') - def check_print(self): + def test_print(self): assert_equal(parse(Print, 'print*'),'PRINT *') assert_equal(parse(Print, 'print "a b( c )"'),'PRINT "a b( c )"') assert_equal(parse(Print, 'print 12, a'),'PRINT 12, a') assert_equal(parse(Print, 'print 12, a , b'),'PRINT 12, a, b') assert_equal(parse(Print, 'print 12, a(c,1) , b'),'PRINT 12, a(c,1), b') - def check_read(self): + def test_read(self): assert_equal(parse(Read, 'read ( 10 )'),'READ (10)') assert_equal(parse(Read, 'read ( 10 ) a '),'READ (10) a') assert_equal(parse(Read, 'read ( 10 ) a , b'),'READ (10) a, b') @@ -98,44 +98,44 @@ assert_equal(parse(Read, 'read * , a , b'),'READ *, a, b') assert_equal(parse(Read, 'read ( unit =10 )'),'READ (UNIT = 10)') - def check_write(self): + def test_write(self): 
assert_equal(parse(Write, 'write ( 10 )'),'WRITE (10)') assert_equal(parse(Write, 'write ( 10 , a )'),'WRITE (10, a)') assert_equal(parse(Write, 'write ( 10 ) b'),'WRITE (10) b') assert_equal(parse(Write, 'write ( 10 ) a(1) , b+2'),'WRITE (10) a(1), b+2') assert_equal(parse(Write, 'write ( unit=10 )'),'WRITE (UNIT = 10)') - def check_flush(self): + def test_flush(self): assert_equal(parse(Flush, 'flush 10'),'FLUSH (10)') assert_equal(parse(Flush, 'flush (10)'),'FLUSH (10)') assert_equal(parse(Flush, 'flush (UNIT = 10)'),'FLUSH (UNIT = 10)') assert_equal(parse(Flush, 'flush (10, err= 23)'),'FLUSH (10, ERR = 23)') - def check_wait(self): + def test_wait(self): assert_equal(parse(Wait, 'wait(10)'),'WAIT (10)') assert_equal(parse(Wait, 'wait(10,err=129)'),'WAIT (10, ERR = 129)') - def check_contains(self): + def test_contains(self): assert_equal(parse(Contains, 'contains'),'CONTAINS') - def check_allocate(self): + def test_allocate(self): assert_equal(parse(Allocate, 'allocate (a)'), 'ALLOCATE (a)') assert_equal(parse(Allocate, \ 'allocate (a, stat=b)'), 'ALLOCATE (a, STAT = b)') assert_equal(parse(Allocate, 'allocate (a,b(:1))'), 'ALLOCATE (a, b(:1))') assert_equal(parse(Allocate, \ 'allocate (real(8)::a)'), 'ALLOCATE (REAL(KIND=8) :: a)') - def check_deallocate(self): + def test_deallocate(self): assert_equal(parse(Deallocate, 'deallocate (a)'), 'DEALLOCATE (a)') assert_equal(parse(Deallocate, 'deallocate (a, stat=b)'), 'DEALLOCATE (a, STAT = b)') - def check_moduleprocedure(self): + def test_moduleprocedure(self): assert_equal(parse(ModuleProcedure,\ 'ModuleProcedure a'), 'MODULE PROCEDURE a') assert_equal(parse(ModuleProcedure,\ 'module procedure a , b'), 'MODULE PROCEDURE a, b') - def check_access(self): + def test_access(self): assert_equal(parse(Public,'Public'),'PUBLIC') assert_equal(parse(Public,'public a'),'PUBLIC a') assert_equal(parse(Public,'public :: a'),'PUBLIC a') @@ -144,45 +144,45 @@ assert_equal(parse(Private,'private'),'PRIVATE') assert_equal(parse(Private,'private :: a'),'PRIVATE a') - def check_close(self): + def test_close(self): assert_equal(parse(Close,'close (12)'),'CLOSE (12)') assert_equal(parse(Close,'close (12, err=99)'),'CLOSE (12, ERR = 99)') assert_equal(parse(Close,'close (12, status = a(1,2))'),'CLOSE (12, STATUS = a(1,2))') - def check_cycle(self): + def test_cycle(self): assert_equal(parse(Cycle,'cycle'),'CYCLE') assert_equal(parse(Cycle,'cycle ab'),'CYCLE ab') - def check_rewind(self): + def test_rewind(self): assert_equal(parse(Rewind,'rewind 1'),'REWIND (1)') assert_equal(parse(Rewind,'rewind (1)'),'REWIND (1)') assert_equal(parse(Rewind,'rewind (1, err = 123)'),'REWIND (1, ERR = 123)') - def check_backspace(self): + def test_backspace(self): assert_equal(parse(Backspace,'backspace 1'),'BACKSPACE (1)') assert_equal(parse(Backspace,'backspace (1)'),'BACKSPACE (1)') assert_equal(parse(Backspace,'backspace (1, err = 123)'),'BACKSPACE (1, ERR = 123)') - def check_endfile(self): + def test_endfile(self): assert_equal(parse(Endfile,'endfile 1'),'ENDFILE (1)') assert_equal(parse(Endfile,'endfile (1)'),'ENDFILE (1)') assert_equal(parse(Endfile,'endfile (1, err = 123)'),'ENDFILE (1, ERR = 123)') - def check_open(self): + def test_open(self): assert_equal(parse(Open,'open (1)'),'OPEN (1)') assert_equal(parse(Open,'open (1, err = 123)'),'OPEN (1, ERR = 123)') - def check_format(self): + def test_format(self): assert_equal(parse(Format,'1: format ()'),'1: FORMAT ()') assert_equal(parse(Format,'199 format (1)'),'199: FORMAT (1)') assert_equal(parse(Format,'2 format 
(1 , SS)'),'2: FORMAT (1, ss)') - def check_save(self): + def test_save(self): assert_equal(parse(Save,'save'), 'SAVE') assert_equal(parse(Save,'save :: a'), 'SAVE a') assert_equal(parse(Save,'save a,b'), 'SAVE a, b') - def check_data(self): + def test_data(self): assert_equal(parse(Data,'data a /b/'), 'DATA a / b /') assert_equal(parse(Data,'data a , c /b/'), 'DATA a, c / b /') assert_equal(parse(Data,'data a /b ,c/'), 'DATA a / b, c /') @@ -190,11 +190,11 @@ assert_equal(parse(Data,'data a(1,2) /b/'), 'DATA a(1,2) / b /') assert_equal(parse(Data,'data a /b, c(1)/'), 'DATA a / b, c(1) /') - def check_nullify(self): + def test_nullify(self): assert_equal(parse(Nullify,'nullify(a)'),'NULLIFY (a)') assert_equal(parse(Nullify,'nullify(a ,b)'),'NULLIFY (a, b)') - def check_use(self): + def test_use(self): assert_equal(parse(Use, 'use a'), 'USE a') assert_equal(parse(Use, 'use :: a'), 'USE a') assert_equal(parse(Use, 'use, intrinsic:: a'), 'USE INTRINSIC :: a') @@ -205,79 +205,79 @@ 'use :: a , only: operator(+) , b'),\ 'USE a, ONLY: operator(+), b') - def check_exit(self): + def test_exit(self): assert_equal(parse(Exit,'exit'),'EXIT') assert_equal(parse(Exit,'exit ab'),'EXIT ab') - def check_parameter(self): + def test_parameter(self): assert_equal(parse(Parameter,'parameter (a = b(1,2))'), 'PARAMETER (a = b(1,2))') assert_equal(parse(Parameter,'parameter (a = b(1,2) , b=1)'), 'PARAMETER (a = b(1,2), b=1)') - def check_equivalence(self): + def test_equivalence(self): assert_equal(parse(Equivalence,'equivalence (a , b)'),'EQUIVALENCE (a, b)') assert_equal(parse(Equivalence,'equivalence (a , b) , ( c, d(1) , g )'), 'EQUIVALENCE (a, b), (c, d(1), g)') - def check_dimension(self): + def test_dimension(self): assert_equal(parse(Dimension,'dimension a(b)'),'DIMENSION a(b)') assert_equal(parse(Dimension,'dimension::a(b)'),'DIMENSION a(b)') assert_equal(parse(Dimension,'dimension a(b) , c(d)'),'DIMENSION a(b), c(d)') assert_equal(parse(Dimension,'dimension a(b,c)'),'DIMENSION a(b,c)') - def check_target(self): + def test_target(self): assert_equal(parse(Target,'target a(b)'),'TARGET a(b)') assert_equal(parse(Target,'target::a(b)'),'TARGET a(b)') assert_equal(parse(Target,'target a(b) , c(d)'),'TARGET a(b), c(d)') assert_equal(parse(Target,'target a(b,c)'),'TARGET a(b,c)') - def check_pointer(self): + def test_pointer(self): assert_equal(parse(Pointer,'pointer a=b'),'POINTER a=b') assert_equal(parse(Pointer,'pointer :: a=b'),'POINTER a=b') assert_equal(parse(Pointer,'pointer a=b, c=d(1,2)'),'POINTER a=b, c=d(1,2)') - def check_protected(self): + def test_protected(self): assert_equal(parse(Protected,'protected a'),'PROTECTED a') assert_equal(parse(Protected,'protected::a'),'PROTECTED a') assert_equal(parse(Protected,'protected a , b'),'PROTECTED a, b') - def check_volatile(self): + def test_volatile(self): assert_equal(parse(Volatile,'volatile a'),'VOLATILE a') assert_equal(parse(Volatile,'volatile::a'),'VOLATILE a') assert_equal(parse(Volatile,'volatile a , b'),'VOLATILE a, b') - def check_value(self): + def test_value(self): assert_equal(parse(Value,'value a'),'VALUE a') assert_equal(parse(Value,'value::a'),'VALUE a') assert_equal(parse(Value,'value a , b'),'VALUE a, b') - def check_arithmeticif(self): + def test_arithmeticif(self): assert_equal(parse(ArithmeticIf,'if (a) 1,2,3'),'IF (a) 1, 2, 3') assert_equal(parse(ArithmeticIf,'if (a(1)) 1,2,3'),'IF (a(1)) 1, 2, 3') assert_equal(parse(ArithmeticIf,'if (a(1,2)) 1,2,3'),'IF (a(1,2)) 1, 2, 3') - def check_intrinsic(self): + def test_intrinsic(self): 
assert_equal(parse(Intrinsic,'intrinsic a'),'INTRINSIC a') assert_equal(parse(Intrinsic,'intrinsic::a'),'INTRINSIC a') assert_equal(parse(Intrinsic,'intrinsic a , b'),'INTRINSIC a, b') - def check_inquire(self): + def test_inquire(self): assert_equal(parse(Inquire, 'inquire (1)'),'INQUIRE (1)') assert_equal(parse(Inquire, 'inquire (1, err=123)'),'INQUIRE (1, ERR = 123)') assert_equal(parse(Inquire, 'inquire (iolength=a) b'),'INQUIRE (IOLENGTH = a) b') assert_equal(parse(Inquire, 'inquire (iolength=a) b ,c(1,2)'), 'INQUIRE (IOLENGTH = a) b, c(1,2)') - def check_sequence(self): + def test_sequence(self): assert_equal(parse(Sequence, 'sequence'),'SEQUENCE') - def check_external(self): + def test_external(self): assert_equal(parse(External,'external a'),'EXTERNAL a') assert_equal(parse(External,'external::a'),'EXTERNAL a') assert_equal(parse(External,'external a , b'),'EXTERNAL a, b') - def check_common(self): + def test_common(self): assert_equal(parse(Common, 'common a'),'COMMON a') assert_equal(parse(Common, 'common a , b'),'COMMON a, b') assert_equal(parse(Common, 'common a , b(1,2)'),'COMMON a, b(1,2)') @@ -289,18 +289,18 @@ assert_equal(parse(Common, 'common / name/ a, /foo/ c(1) ,d'), 'COMMON / name / a / foo / c(1), d') - def check_optional(self): + def test_optional(self): assert_equal(parse(Optional,'optional a'),'OPTIONAL a') assert_equal(parse(Optional,'optional::a'),'OPTIONAL a') assert_equal(parse(Optional,'optional a , b'),'OPTIONAL a, b') - def check_intent(self): + def test_intent(self): assert_equal(parse(Intent,'intent (in) a'),'INTENT (IN) a') assert_equal(parse(Intent,'intent(in)::a'),'INTENT (IN) a') assert_equal(parse(Intent,'intent(in) a , b'),'INTENT (IN) a, b') assert_equal(parse(Intent,'intent (in, out) a'),'INTENT (IN, OUT) a') - def check_entry(self): + def test_entry(self): assert_equal(parse(Entry,'entry a'), 'ENTRY a') assert_equal(parse(Entry,'entry a()'), 'ENTRY a') assert_equal(parse(Entry,'entry a(b)'), 'ENTRY a (b)') @@ -315,13 +315,13 @@ assert_equal(parse(Entry,'entry a(b,*) result (g)'), 'ENTRY a (b, *) RESULT (g)') - def check_import(self): + def test_import(self): assert_equal(parse(Import,'import'),'IMPORT') assert_equal(parse(Import,'import a'),'IMPORT a') assert_equal(parse(Import,'import::a'),'IMPORT a') assert_equal(parse(Import,'import a , b'),'IMPORT a, b') - def check_forall(self): + def test_forall(self): assert_equal(parse(ForallStmt,'forall (i = 1:n(k,:) : 2) a(i) = i*i*b(i)'), 'FORALL (i = 1 : n(k,:) : 2) a(i) = i*i*b(i)') assert_equal(parse(ForallStmt,'forall (i=1:n,j=2:3) a(i) = b(i,i)'), @@ -329,7 +329,7 @@ assert_equal(parse(ForallStmt,'forall (i=1:n,j=2:3, 1+a(1,2)) a(i) = b(i,i)'), 'FORALL (i = 1 : n, j = 2 : 3, 1+a(1,2)) a(i) = b(i,i)') - def check_specificbinding(self): + def test_specificbinding(self): assert_equal(parse(SpecificBinding,'procedure a'),'PROCEDURE a') assert_equal(parse(SpecificBinding,'procedure :: a'),'PROCEDURE a') assert_equal(parse(SpecificBinding,'procedure , NOPASS :: a'),'PROCEDURE , NOPASS :: a') @@ -343,29 +343,29 @@ assert_equal(parse(SpecificBinding,'procedure(n),pass :: a =>c'), 'PROCEDURE (n) , PASS :: a => c') - def check_genericbinding(self): + def test_genericbinding(self): assert_equal(parse(GenericBinding,'generic :: a=>b'),'GENERIC :: a => b') assert_equal(parse(GenericBinding,'generic, public :: a=>b'),'GENERIC, PUBLIC :: a => b') assert_equal(parse(GenericBinding,'generic, public :: a(1,2)=>b ,c'), 'GENERIC, PUBLIC :: a(1,2) => b, c') - def check_finalbinding(self): + def 
test_finalbinding(self): assert_equal(parse(FinalBinding,'final a'),'FINAL a') assert_equal(parse(FinalBinding,'final::a'),'FINAL a') assert_equal(parse(FinalBinding,'final a , b'),'FINAL a, b') - def check_allocatable(self): + def test_allocatable(self): assert_equal(parse(Allocatable,'allocatable a'),'ALLOCATABLE a') assert_equal(parse(Allocatable,'allocatable :: a'),'ALLOCATABLE a') assert_equal(parse(Allocatable,'allocatable a (1,2)'),'ALLOCATABLE a (1,2)') assert_equal(parse(Allocatable,'allocatable a (1,2) ,b'),'ALLOCATABLE a (1,2), b') - def check_asynchronous(self): + def test_asynchronous(self): assert_equal(parse(Asynchronous,'asynchronous a'),'ASYNCHRONOUS a') assert_equal(parse(Asynchronous,'asynchronous::a'),'ASYNCHRONOUS a') assert_equal(parse(Asynchronous,'asynchronous a , b'),'ASYNCHRONOUS a, b') - def check_bind(self): + def test_bind(self): assert_equal(parse(Bind,'bind(c) a'),'BIND (C) a') assert_equal(parse(Bind,'bind(c) :: a'),'BIND (C) a') assert_equal(parse(Bind,'bind(c) a ,b'),'BIND (C) a, b') @@ -373,13 +373,13 @@ assert_equal(parse(Bind,'bind(c) /a/ ,b'),'BIND (C) / a /, b') assert_equal(parse(Bind,'bind(c,name="hey") a'),'BIND (C, NAME = "hey") a') - def check_else(self): + def test_else(self): assert_equal(parse(Else,'else'),'ELSE') assert_equal(parse(ElseIf,'else if (a) then'),'ELSE IF (a) THEN') assert_equal(parse(ElseIf,'else if (a.eq.b(1,2)) then'), 'ELSE IF (a.eq.b(1,2)) THEN') - def check_case(self): + def test_case(self): assert_equal(parse(Case,'case (1)'),'CASE ( 1 )') assert_equal(parse(Case,'case (1:)'),'CASE ( 1 : )') assert_equal(parse(Case,'case (:1)'),'CASE ( : 1 )') @@ -391,56 +391,56 @@ assert_equal(parse(Case,'case (a(1,:):)'),'CASE ( a(1,:) : )') assert_equal(parse(Case,'case default'),'CASE DEFAULT') - def check_where(self): + def test_where(self): assert_equal(parse(WhereStmt,'where (1) a=1'),'WHERE ( 1 ) a = 1') assert_equal(parse(WhereStmt,'where (a(1,2)) a=1'),'WHERE ( a(1,2) ) a = 1') - def check_elsewhere(self): + def test_elsewhere(self): assert_equal(parse(ElseWhere,'else where'),'ELSE WHERE') assert_equal(parse(ElseWhere,'elsewhere (1)'),'ELSE WHERE ( 1 )') assert_equal(parse(ElseWhere,'elsewhere(a(1,2))'),'ELSE WHERE ( a(1,2) )') - def check_enumerator(self): + def test_enumerator(self): assert_equal(parse(Enumerator,'enumerator a'), 'ENUMERATOR a') assert_equal(parse(Enumerator,'enumerator:: a'), 'ENUMERATOR a') assert_equal(parse(Enumerator,'enumerator a,b'), 'ENUMERATOR a, b') assert_equal(parse(Enumerator,'enumerator a=1'), 'ENUMERATOR a=1') assert_equal(parse(Enumerator,'enumerator a=1 , b=c(1,2)'), 'ENUMERATOR a=1, b=c(1,2)') - def check_fortranname(self): + def test_fortranname(self): assert_equal(parse(FortranName,'fortranname a'),'FORTRANNAME a') - def check_threadsafe(self): + def test_threadsafe(self): assert_equal(parse(Threadsafe,'threadsafe'),'THREADSAFE') - def check_depend(self): + def test_depend(self): assert_equal(parse(Depend,'depend( a) b'), 'DEPEND ( a ) b') assert_equal(parse(Depend,'depend( a) ::b'), 'DEPEND ( a ) b') assert_equal(parse(Depend,'depend( a,c) b,e'), 'DEPEND ( a, c ) b, e') - def check_check(self): + def test_check(self): assert_equal(parse(Check,'check(1) a'), 'CHECK ( 1 ) a') assert_equal(parse(Check,'check(1) :: a'), 'CHECK ( 1 ) a') assert_equal(parse(Check,'check(b(1,2)) a'), 'CHECK ( b(1,2) ) a') assert_equal(parse(Check,'check(a>1) :: a'), 'CHECK ( a>1 ) a') - def check_callstatement(self): + def test_callstatement(self): assert_equal(parse(CallStatement,'callstatement 
(*func)()',isstrict=1), 'CALLSTATEMENT (*func)()') assert_equal(parse(CallStatement,'callstatement i=1;(*func)()',isstrict=1), 'CALLSTATEMENT i=1;(*func)()') - def check_callprotoargument(self): + def test_callprotoargument(self): assert_equal(parse(CallProtoArgument,'callprotoargument int(*), double'), 'CALLPROTOARGUMENT int(*), double') - def check_pause(self): + def test_pause(self): assert_equal(parse(Pause,'pause'),'PAUSE') assert_equal(parse(Pause,'pause 1'),'PAUSE 1') assert_equal(parse(Pause,'pause "hey"'),'PAUSE "hey"') assert_equal(parse(Pause,'pause "hey pa"'),'PAUSE "hey pa"') - def check_integer(self): + def test_integer(self): assert_equal(parse(Integer,'integer'),'INTEGER') assert_equal(parse(Integer,'integer*4'),'INTEGER*4') assert_equal(parse(Integer,'integer*4 a'),'INTEGER*4 a') @@ -460,7 +460,7 @@ assert_equal(parse(Integer,'integer(kind=2+2)'),'INTEGER(KIND=2+2)') assert_equal(parse(Integer,'integer(kind=f(4,5))'),'INTEGER(KIND=f(4,5))') - def check_character(self): + def test_character(self): assert_equal(parse(Character,'character'),'CHARACTER') assert_equal(parse(Character,'character*2'),'CHARACTER(LEN=2)') assert_equal(parse(Character,'character**'),'CHARACTER(LEN=*)') @@ -482,7 +482,7 @@ assert_equal(parse(Character,'character(len=3,kind=fA(1,2))'), 'CHARACTER(LEN=3, KIND=fa(1,2))') - def check_implicit(self): + def test_implicit(self): assert_equal(parse(Implicit,'implicit none'),'IMPLICIT NONE') assert_equal(parse(Implicit,'implicit'),'IMPLICIT NONE') assert_equal(parse(Implicit,'implicit integer (i-m)'), @@ -492,5 +492,6 @@ assert_equal(parse(Implicit,'implicit integer (i-m), real (z)'), 'IMPLICIT INTEGER ( i-m ), REAL ( z )') + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/f2py/lib/tests/test_derived_scalar.py =================================================================== --- branches/cdavid/numpy/f2py/lib/tests/test_derived_scalar.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/f2py/lib/tests/test_derived_scalar.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -42,9 +42,9 @@ from numpy import * -class TestM(NumpyTestCase): +class TestM(TestCase): - def check_foo_simple(self, level=1): + def test_foo_simple(self, level=1): a = m.myt(2) assert_equal(a.flag,2) assert isinstance(a,m.myt),`a` @@ -59,7 +59,7 @@ #s = m.foo((5,)) - def check_foo2_simple(self, level=1): + def test_foo2_simple(self, level=1): a = m.myt(2) assert_equal(a.flag,2) assert isinstance(a,m.myt),`a` @@ -71,4 +71,4 @@ if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/f2py/lib/tests/test_module_module.py =================================================================== --- branches/cdavid/numpy/f2py/lib/tests/test_module_module.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/f2py/lib/tests/test_module_module.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -51,11 +51,11 @@ from numpy import * -class TestM(NumpyTestCase): +class TestM(TestCase): - def check_foo_simple(self, level=1): + def test_foo_simple(self, level=1): foo = m.foo foo() if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/f2py/lib/tests/test_module_scalar.py =================================================================== --- branches/cdavid/numpy/f2py/lib/tests/test_module_scalar.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/f2py/lib/tests/test_module_scalar.py 2008-06-20 05:59:26 UTC (rev 5302) @@ 
-40,19 +40,19 @@ from numpy import * -class TestM(NumpyTestCase): +class TestM(TestCase): - def check_foo_simple(self, level=1): + def test_foo_simple(self, level=1): foo = m.foo r = foo(2) assert isinstance(r,int32),`type(r)` assert_equal(r,3) - def check_foo2_simple(self, level=1): + def test_foo2_simple(self, level=1): foo2 = m.foo2 r = foo2(2) assert isinstance(r,int32),`type(r)` assert_equal(r,4) if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/f2py/lib/tests/test_scalar_function_in.py =================================================================== --- branches/cdavid/numpy/f2py/lib/tests/test_scalar_function_in.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/f2py/lib/tests/test_scalar_function_in.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -107,9 +107,9 @@ from numpy import * -class TestM(NumpyTestCase): +class TestM(TestCase): - def check_foo_integer1(self, level=1): + def test_foo_integer1(self, level=1): i = int8(2) e = int8(3) func = m.fooint1 @@ -144,7 +144,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_integer2(self, level=1): + def test_foo_integer2(self, level=1): i = int16(2) e = int16(3) func = m.fooint2 @@ -179,7 +179,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_integer4(self, level=1): + def test_foo_integer4(self, level=1): i = int32(2) e = int32(3) func = m.fooint4 @@ -214,7 +214,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_integer8(self, level=1): + def test_foo_integer8(self, level=1): i = int64(2) e = int64(3) func = m.fooint8 @@ -249,7 +249,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_real4(self, level=1): + def test_foo_real4(self, level=1): i = float32(2) e = float32(3) func = m.foofloat4 @@ -283,7 +283,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_real8(self, level=1): + def test_foo_real8(self, level=1): i = float64(2) e = float64(3) func = m.foofloat8 @@ -317,7 +317,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_complex8(self, level=1): + def test_foo_complex8(self, level=1): i = complex64(2) e = complex64(3) func = m.foocomplex8 @@ -358,7 +358,7 @@ self.assertRaises(TypeError,lambda :func([2,1,3])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_complex16(self, level=1): + def test_foo_complex16(self, level=1): i = complex128(2) e = complex128(3) func = m.foocomplex16 @@ -399,7 +399,7 @@ self.assertRaises(TypeError,lambda :func([2,1,3])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_bool1(self, level=1): + def test_foo_bool1(self, level=1): i = bool8(True) e = bool8(False) func = m.foobool1 @@ -419,7 +419,7 @@ assert isinstance(r,bool8),`type(r)` assert_equal(r,not e) - def check_foo_bool2(self, level=1): + def test_foo_bool2(self, level=1): i = bool8(True) e = bool8(False) func = m.foobool2 @@ -439,7 +439,7 @@ assert isinstance(r,bool8),`type(r)` assert_equal(r,not e) - def check_foo_bool4(self, level=1): + def test_foo_bool4(self, level=1): i = bool8(True) e = bool8(False) func = m.foobool4 @@ -459,7 +459,7 @@ assert isinstance(r,bool8),`type(r)` assert_equal(r,not e) - def check_foo_bool8(self, level=1): + def test_foo_bool8(self, 
level=1): i = bool8(True) e = bool8(False) func = m.foobool8 @@ -479,7 +479,7 @@ assert isinstance(r,bool8),`type(r)` assert_equal(r,not e) - def check_foo_string1(self, level=1): + def test_foo_string1(self, level=1): i = string0('a') e = string0('1') func = m.foostring1 @@ -497,7 +497,7 @@ assert isinstance(r,string0),`type(r)` assert_equal(r,e) - def check_foo_string5(self, level=1): + def test_foo_string5(self, level=1): i = string0('abcde') e = string0('12cde') func = m.foostring5 @@ -528,5 +528,6 @@ r = func('') assert_equal(r,'') + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/f2py/lib/tests/test_scalar_in_out.py =================================================================== --- branches/cdavid/numpy/f2py/lib/tests/test_scalar_in_out.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/f2py/lib/tests/test_scalar_in_out.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -104,9 +104,9 @@ from numpy import * -class TestM(NumpyTestCase): +class TestM(TestCase): - def check_foo_integer1(self, level=1): + def test_foo_integer1(self, level=1): i = int8(2) e = int8(3) func = m.fooint1 @@ -141,7 +141,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_integer2(self, level=1): + def test_foo_integer2(self, level=1): i = int16(2) e = int16(3) func = m.fooint2 @@ -176,7 +176,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_integer4(self, level=1): + def test_foo_integer4(self, level=1): i = int32(2) e = int32(3) func = m.fooint4 @@ -211,7 +211,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_integer8(self, level=1): + def test_foo_integer8(self, level=1): i = int64(2) e = int64(3) func = m.fooint8 @@ -246,7 +246,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_real4(self, level=1): + def test_foo_real4(self, level=1): i = float32(2) e = float32(3) func = m.foofloat4 @@ -280,7 +280,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_real8(self, level=1): + def test_foo_real8(self, level=1): i = float64(2) e = float64(3) func = m.foofloat8 @@ -314,7 +314,7 @@ self.assertRaises(TypeError,lambda :func([2,1])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_complex8(self, level=1): + def test_foo_complex8(self, level=1): i = complex64(2) e = complex64(3) func = m.foocomplex8 @@ -355,7 +355,7 @@ self.assertRaises(TypeError,lambda :func([2,1,3])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_complex16(self, level=1): + def test_foo_complex16(self, level=1): i = complex128(2) e = complex128(3) func = m.foocomplex16 @@ -396,7 +396,7 @@ self.assertRaises(TypeError,lambda :func([2,1,3])) self.assertRaises(TypeError,lambda :func({})) - def check_foo_bool1(self, level=1): + def test_foo_bool1(self, level=1): i = bool8(True) e = bool8(False) func = m.foobool1 @@ -416,7 +416,7 @@ assert isinstance(r,bool8),`type(r)` assert_equal(r,not e) - def check_foo_bool2(self, level=1): + def test_foo_bool2(self, level=1): i = bool8(True) e = bool8(False) func = m.foobool2 @@ -436,7 +436,7 @@ assert isinstance(r,bool8),`type(r)` assert_equal(r,not e) - def check_foo_bool4(self, level=1): + def test_foo_bool4(self, level=1): i = bool8(True) e = bool8(False) func = m.foobool4 @@ -456,7 +456,7 @@ 
assert isinstance(r,bool8),`type(r)` assert_equal(r,not e) - def check_foo_bool8(self, level=1): + def test_foo_bool8(self, level=1): i = bool8(True) e = bool8(False) func = m.foobool8 @@ -476,7 +476,7 @@ assert isinstance(r,bool8),`type(r)` assert_equal(r,not e) - def check_foo_string1(self, level=1): + def test_foo_string1(self, level=1): i = string0('a') e = string0('1') func = m.foostring1 @@ -494,7 +494,7 @@ assert isinstance(r,string0),`type(r)` assert_equal(r,e) - def check_foo_string5(self, level=1): + def test_foo_string5(self, level=1): i = string0('abcde') e = string0('12cde') func = m.foostring5 @@ -516,7 +516,7 @@ assert isinstance(r,string0),`type(r)` assert_equal(r,'12] ') - def check_foo_string0(self, level=1): + def test_foo_string0(self, level=1): i = string0('abcde') e = string0('12cde') func = m.foostringstar @@ -525,5 +525,6 @@ r = func('') assert_equal(r,'') + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/f2py/tests/array_from_pyobj/tests/test_array_from_pyobj.py =================================================================== --- branches/cdavid/numpy/f2py/tests/array_from_pyobj/tests/test_array_from_pyobj.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/f2py/tests/array_from_pyobj/tests/test_array_from_pyobj.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -8,7 +8,7 @@ set_package_path() from array_from_pyobj import wrap -del sys.path[0] +restore_path() def flags_info(arr): flags = wrap.array_attrs(arr)[6] @@ -240,7 +240,7 @@ ################################################## class test_intent(unittest.TestCase): - def check_in_out(self): + def test_in_out(self): assert_equal(str(intent.in_.out),'intent(in,out)') assert intent.in_.c.is_intent('c') assert not intent.in_.c.is_intent_exact('c') @@ -251,11 +251,11 @@ class _test_shared_memory: num2seq = [1,2] num23seq = [[1,2,3],[4,5,6]] - def check_in_from_2seq(self): + def test_in_from_2seq(self): a = self.array([2],intent.in_,self.num2seq) assert not a.has_shared_memory() - def check_in_from_2casttype(self): + def test_in_from_2casttype(self): for t in self.type.cast_types(): obj = array(self.num2seq,dtype=t.dtype) a = self.array([len(self.num2seq)],intent.in_,obj) @@ -264,7 +264,7 @@ else: assert not a.has_shared_memory(),`t.dtype` - def check_inout_2seq(self): + def test_inout_2seq(self): obj = array(self.num2seq,dtype=self.type.dtype) a = self.array([len(self.num2seq)],intent.inout,obj) assert a.has_shared_memory() @@ -277,7 +277,7 @@ else: raise SystemError,'intent(inout) should have failed on sequence' - def check_f_inout_23seq(self): + def test_f_inout_23seq(self): obj = array(self.num23seq,dtype=self.type.dtype,fortran=1) shape = (len(self.num23seq),len(self.num23seq[0])) a = self.array(shape,intent.in_.inout,obj) @@ -293,31 +293,31 @@ else: raise SystemError,'intent(inout) should have failed on improper array' - def check_c_inout_23seq(self): + def test_c_inout_23seq(self): obj = array(self.num23seq,dtype=self.type.dtype) shape = (len(self.num23seq),len(self.num23seq[0])) a = self.array(shape,intent.in_.c.inout,obj) assert a.has_shared_memory() - def check_in_copy_from_2casttype(self): + def test_in_copy_from_2casttype(self): for t in self.type.cast_types(): obj = array(self.num2seq,dtype=t.dtype) a = self.array([len(self.num2seq)],intent.in_.copy,obj) assert not a.has_shared_memory(),`t.dtype` - def check_c_in_from_23seq(self): + def test_c_in_from_23seq(self): a = self.array([len(self.num23seq),len(self.num23seq[0])], 
intent.in_,self.num23seq) assert not a.has_shared_memory() - def check_in_from_23casttype(self): + def test_in_from_23casttype(self): for t in self.type.cast_types(): obj = array(self.num23seq,dtype=t.dtype) a = self.array([len(self.num23seq),len(self.num23seq[0])], intent.in_,obj) assert not a.has_shared_memory(),`t.dtype` - def check_f_in_from_23casttype(self): + def test_f_in_from_23casttype(self): for t in self.type.cast_types(): obj = array(self.num23seq,dtype=t.dtype,fortran=1) a = self.array([len(self.num23seq),len(self.num23seq[0])], @@ -327,7 +327,7 @@ else: assert not a.has_shared_memory(),`t.dtype` - def check_c_in_from_23casttype(self): + def test_c_in_from_23casttype(self): for t in self.type.cast_types(): obj = array(self.num23seq,dtype=t.dtype) a = self.array([len(self.num23seq),len(self.num23seq[0])], @@ -337,21 +337,21 @@ else: assert not a.has_shared_memory(),`t.dtype` - def check_f_copy_in_from_23casttype(self): + def test_f_copy_in_from_23casttype(self): for t in self.type.cast_types(): obj = array(self.num23seq,dtype=t.dtype,fortran=1) a = self.array([len(self.num23seq),len(self.num23seq[0])], intent.in_.copy,obj) assert not a.has_shared_memory(),`t.dtype` - def check_c_copy_in_from_23casttype(self): + def test_c_copy_in_from_23casttype(self): for t in self.type.cast_types(): obj = array(self.num23seq,dtype=t.dtype) a = self.array([len(self.num23seq),len(self.num23seq[0])], intent.in_.c.copy,obj) assert not a.has_shared_memory(),`t.dtype` - def check_in_cache_from_2casttype(self): + def test_in_cache_from_2casttype(self): for t in self.type.all_types(): if t.elsize != self.type.elsize: continue @@ -377,7 +377,7 @@ raise else: raise SystemError,'intent(cache) should have failed on multisegmented array' - def check_in_cache_from_2casttype_failure(self): + def test_in_cache_from_2casttype_failure(self): for t in self.type.all_types(): if t.elsize >= self.type.elsize: continue @@ -391,7 +391,7 @@ else: raise SystemError,'intent(cache) should have failed on smaller array' - def check_cache_hidden(self): + def test_cache_hidden(self): shape = (2,) a = self.array(shape,intent.cache.hide,None) assert a.arr.shape==shape @@ -409,7 +409,7 @@ else: raise SystemError,'intent(cache) should have failed on undefined dimensions' - def check_hidden(self): + def test_hidden(self): shape = (2,) a = self.array(shape,intent.hide,None) assert a.arr.shape==shape @@ -436,7 +436,7 @@ else: raise SystemError,'intent(hide) should have failed on undefined dimensions' - def check_optional_none(self): + def test_optional_none(self): shape = (2,) a = self.array(shape,intent.optional,None) assert a.arr.shape==shape @@ -454,14 +454,14 @@ assert a.arr_equal(a.arr,zeros(shape,dtype=self.type.dtype)) assert not a.arr.flags['FORTRAN'] and a.arr.flags['CONTIGUOUS'] - def check_optional_from_2seq(self): + def test_optional_from_2seq(self): obj = self.num2seq shape = (len(obj),) a = self.array(shape,intent.optional,obj) assert a.arr.shape==shape assert not a.has_shared_memory() - def check_optional_from_23seq(self): + def test_optional_from_23seq(self): obj = self.num23seq shape = (len(obj),len(obj[0])) a = self.array(shape,intent.optional,obj) @@ -472,7 +472,7 @@ assert a.arr.shape==shape assert not a.has_shared_memory() - def check_inplace(self): + def test_inplace(self): obj = array(self.num23seq,dtype=self.type.dtype) assert not obj.flags['FORTRAN'] and obj.flags['CONTIGUOUS'] shape = obj.shape @@ -484,7 +484,7 @@ assert obj.flags['FORTRAN'] # obj attributes are changed inplace! 
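The FORTRAN/CONTIGUOUS assertions in these intent(inplace) tests rely on numpy's array-order flags rather than on anything f2py-specific. A tiny standalone illustration of those flags (the array values are arbitrary and the f2py wrapper module under test is not needed)::

    import numpy as np

    a = np.arange(6).reshape(2, 3)   # freshly created arrays are C-ordered
    assert a.flags['CONTIGUOUS'] and not a.flags['FORTRAN']

    b = np.asfortranarray(a)         # same values, column-major (Fortran) layout
    assert b.flags['FORTRAN'] and not b.flags['CONTIGUOUS']
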
assert not obj.flags['CONTIGUOUS'] - def check_inplace_from_casttype(self): + def test_inplace_from_casttype(self): for t in self.type.cast_types(): if t is self.type: continue @@ -502,6 +502,7 @@ assert not obj.flags['CONTIGUOUS'] assert obj.dtype.type is self.type.dtype # obj type is changed inplace! + for t in Type._type_names: exec '''\ class test_%s_gen(unittest.TestCase, @@ -512,4 +513,4 @@ ''' % (t,t,t) if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Copied: branches/cdavid/numpy/fft/SConscript (from rev 5301, trunk/numpy/fft/SConscript) Deleted: branches/cdavid/numpy/fft/SConstruct =================================================================== --- branches/cdavid/numpy/fft/SConstruct 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/fft/SConstruct 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,10 +0,0 @@ -# Last Change: Tue May 20 05:00 PM 2008 J -# vim:syntax=python -from numscons import GetNumpyEnvironment, scons_get_paths - -env = GetNumpyEnvironment(ARGUMENTS) -env.Append(CPPPATH = scons_get_paths(env['include_bootstrap'])) - -fftpack_lite = env.NumpyPythonExtension('fftpack_lite', - source = ['fftpack_litemodule.c', - 'fftpack.c']) Copied: branches/cdavid/numpy/fft/SConstruct (from rev 5301, trunk/numpy/fft/SConstruct) Modified: branches/cdavid/numpy/fft/__init__.py =================================================================== --- branches/cdavid/numpy/fft/__init__.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/fft/__init__.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -4,6 +4,6 @@ from fftpack import * from helper import * -def test(level=1, verbosity=1): - from numpy.testing import NumpyTest - return NumpyTest().test(level, verbosity) +from numpy.testing.pkgtester import Tester +test = Tester().test +bench = Tester().bench Modified: branches/cdavid/numpy/fft/tests/test_fftpack.py =================================================================== --- branches/cdavid/numpy/fft/tests/test_fftpack.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/fft/tests/test_fftpack.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -10,15 +10,17 @@ phase = np.arange(L).reshape(-1,1) * phase return np.sum(x*np.exp(phase),axis=1) -class TestFFTShift(NumpyTestCase): - def check_fft_n(self): +class TestFFTShift(TestCase): + def test_fft_n(self): self.failUnlessRaises(ValueError,np.fft.fft,[1,2,3],0) -class TestFFT1D(NumpyTestCase): - def check_basic(self): + +class TestFFT1D(TestCase): + def test_basic(self): rand = np.random.random x = rand(30) + 1j*rand(30) assert_array_almost_equal(fft1(x), np.fft.fft(x)) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/fft/tests/test_helper.py =================================================================== --- branches/cdavid/numpy/fft/tests/test_helper.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/fft/tests/test_helper.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -7,16 +7,15 @@ from numpy.testing import * set_package_path() from numpy.fft import fftshift,ifftshift,fftfreq -del sys.path[0] +restore_path() from numpy import pi def random(size): return rand(*size) -class TestFFTShift(NumpyTestCase): - - def check_definition(self): +class TestFFTShift(TestCase): + def test_definition(self): x = [0,1,2,3,4,-4,-3,-2,-1] y = [-4,-3,-2,-1,0,1,2,3,4] assert_array_almost_equal(fftshift(x),y) @@ -26,14 +25,14 @@ assert_array_almost_equal(fftshift(x),y) assert_array_almost_equal(ifftshift(y),x) - def check_inverse(self): + 
def test_inverse(self): for n in [1,4,9,100,211]: x = random((n,)) assert_array_almost_equal(ifftshift(fftshift(x)),x) -class TestFFTFreq(NumpyTestCase): - def check_definition(self): +class TestFFTFreq(TestCase): + def test_definition(self): x = [0,1,2,3,4,-4,-3,-2,-1] assert_array_almost_equal(9*fftfreq(9),x) assert_array_almost_equal(9*pi*fftfreq(9,pi),x) @@ -41,5 +40,6 @@ assert_array_almost_equal(10*fftfreq(10),x) assert_array_almost_equal(10*pi*fftfreq(10,pi),x) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Copied: branches/cdavid/numpy/lib/SConscript (from rev 5301, trunk/numpy/lib/SConscript) Deleted: branches/cdavid/numpy/lib/SConstruct =================================================================== --- branches/cdavid/numpy/lib/SConstruct 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/SConstruct 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,9 +0,0 @@ -# Last Change: Tue May 20 05:00 PM 2008 J -# vim:syntax=python -from numscons import GetNumpyEnvironment, scons_get_paths - -env = GetNumpyEnvironment(ARGUMENTS) -env.Append(CPPPATH = scons_get_paths(env['include_bootstrap'])) - -_compiled_base = env.NumpyPythonExtension('_compiled_base', - source = ['src/_compiled_base.c']) Copied: branches/cdavid/numpy/lib/SConstruct (from rev 5301, trunk/numpy/lib/SConstruct) Modified: branches/cdavid/numpy/lib/__init__.py =================================================================== --- branches/cdavid/numpy/lib/__init__.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/__init__.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -34,6 +34,7 @@ __all__ += io.__all__ __all__ += financial.__all__ -def test(level=1, verbosity=1): - from numpy.testing import NumpyTest - return NumpyTest().test(level, verbosity) +from numpy.testing.pkgtester import Tester +test = Tester().test +bench = Tester().bench + Modified: branches/cdavid/numpy/lib/function_base.py =================================================================== --- branches/cdavid/numpy/lib/function_base.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/function_base.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -574,13 +574,32 @@ n += 1 if (n != n2): raise ValueError, "function list and condition list must be the same" + zerod = False + # This is a hack to work around problems with NumPy's + # handling of 0-d arrays and boolean indexing with + # numpy.bool_ scalars + if x.ndim == 0: + x = x[None] + zerod = True + newcondlist = [] + for k in range(n): + if condlist[k].ndim == 0: + condition = condlist[k][None] + else: + condition = condlist[k] + newcondlist.append(condition) + condlist = newcondlist y = empty(x.shape, x.dtype) for k in range(n): item = funclist[k] if not callable(item): y[condlist[k]] = item else: - y[condlist[k]] = item(x[condlist[k]], *args, **kw) + vals = x[condlist[k]] + if vals.size > 0: + y[condlist[k]] = item(vals, *args, **kw) + if zerod: + y = y.squeeze() return y def select(condlist, choicelist, default=0): Modified: branches/cdavid/numpy/lib/tests/test__datasource.py =================================================================== --- branches/cdavid/numpy/lib/tests/test__datasource.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/tests/test__datasource.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -67,7 +67,7 @@ def invalid_httpfile(): return http_fakefile -class TestDataSourceOpen(NumpyTestCase): +class TestDataSourceOpen(TestCase): def setUp(self): self.tmpdir = mkdtemp() self.ds = datasource.DataSource(self.tmpdir) 
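The piecewise() change above is the main functional fix in this stretch of the revision: a 0-d input is temporarily promoted to 1-d (and each 0-d condition with it) so that boolean indexing with numpy.bool_ scalars works, empty selections are skipped, and the result is squeezed back to 0-d at the end. A small standalone sketch of the behaviour this enables, reconstructed from the new 0-d test added to test_function_base.py further below::

    import numpy as np

    x = np.array(3)                  # a 0-d array
    # One condition and two choices: 4 where x > 3 holds, 0 otherwise.
    y = np.piecewise(x, x > 3, [4, 0])

    assert y.ndim == 0               # the 0-d shape of the input is preserved
    assert y == 0                    # 3 > 3 is False, so the default applies
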
@@ -127,7 +127,7 @@ self.assertEqual(magic_line, result) -class TestDataSourceExists(NumpyTestCase): +class TestDataSourceExists(TestCase): def setUp(self): self.tmpdir = mkdtemp() self.ds = datasource.DataSource(self.tmpdir) @@ -157,7 +157,7 @@ self.assertEqual(self.ds.exists(tmpfile), False) -class TestDataSourceAbspath(NumpyTestCase): +class TestDataSourceAbspath(TestCase): def setUp(self): self.tmpdir = os.path.abspath(mkdtemp()) self.ds = datasource.DataSource(self.tmpdir) @@ -222,7 +222,7 @@ os.sep = orig_os_sep -class TestRepositoryAbspath(NumpyTestCase): +class TestRepositoryAbspath(TestCase): def setUp(self): self.tmpdir = os.path.abspath(mkdtemp()) self.repos = datasource.Repository(valid_baseurl(), self.tmpdir) @@ -255,7 +255,7 @@ os.sep = orig_os_sep -class TestRepositoryExists(NumpyTestCase): +class TestRepositoryExists(TestCase): def setUp(self): self.tmpdir = mkdtemp() self.repos = datasource.Repository(valid_baseurl(), self.tmpdir) @@ -288,7 +288,7 @@ assert self.repos.exists(tmpfile) -class TestOpenFunc(NumpyTestCase): +class TestOpenFunc(TestCase): def setUp(self): self.tmpdir = mkdtemp() @@ -304,4 +304,4 @@ if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/lib/tests/test_arraysetops.py =================================================================== --- branches/cdavid/numpy/lib/tests/test_arraysetops.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/tests/test_arraysetops.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -6,15 +6,14 @@ set_package_path() import numpy from numpy.lib.arraysetops import * -from numpy.lib.arraysetops import ediff1d restore_path() ################################################## -class TestAso(NumpyTestCase): +class TestAso(TestCase): ## # 03.11.2005, c - def check_unique1d( self ): + def test_unique1d( self ): a = numpy.array( [5, 7, 1, 2, 1, 5, 7] ) @@ -26,7 +25,7 @@ ## # 03.11.2005, c - def check_intersect1d( self ): + def test_intersect1d( self ): a = numpy.array( [5, 7, 1, 2] ) b = numpy.array( [2, 4, 3, 1, 5] ) @@ -39,7 +38,7 @@ ## # 03.11.2005, c - def check_intersect1d_nu( self ): + def test_intersect1d_nu( self ): a = numpy.array( [5, 5, 7, 1, 2] ) b = numpy.array( [2, 1, 4, 3, 3, 1, 5] ) @@ -52,7 +51,7 @@ ## # 03.11.2005, c - def check_setxor1d( self ): + def test_setxor1d( self ): a = numpy.array( [5, 7, 1, 2] ) b = numpy.array( [2, 4, 3, 1, 5] ) @@ -77,7 +76,7 @@ assert_array_equal([], setxor1d([],[])) - def check_ediff1d(self): + def test_ediff1d(self): zero_elem = numpy.array([]) one_elem = numpy.array([1]) two_elem = numpy.array([1,2]) @@ -91,7 +90,7 @@ ## # 03.11.2005, c - def check_setmember1d( self ): + def test_setmember1d( self ): a = numpy.array( [5, 7, 1, 2] ) b = numpy.array( [2, 4, 3, 1, 5] ) @@ -114,7 +113,7 @@ ## # 03.11.2005, c - def check_union1d( self ): + def test_union1d( self ): a = numpy.array( [5, 4, 7, 1, 2] ) b = numpy.array( [2, 4, 3, 3, 2, 1, 5] ) @@ -128,7 +127,7 @@ ## # 03.11.2005, c # 09.01.2006 - def check_setdiff1d( self ): + def test_setdiff1d( self ): a = numpy.array( [6, 5, 4, 7, 1, 2] ) b = numpy.array( [2, 4, 3, 3, 2, 1, 5] ) @@ -145,14 +144,14 @@ assert_array_equal([], setdiff1d([],[])) - def check_setdiff1d_char_array(self): + def test_setdiff1d_char_array(self): a = numpy.array(['a','b','c']) b = numpy.array(['a','b','s']) assert_array_equal(setdiff1d(a,b),numpy.array(['c'])) ## # 03.11.2005, c - def check_manyways( self ): + def test_manyways( self ): nItem = 100 a = numpy.fix( nItem / 10 * numpy.random.random( nItem 
) ) @@ -171,5 +170,6 @@ c2 = setdiff1d( aux2, aux1 ) assert_array_equal( c1, c2 ) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/lib/tests/test_financial.py =================================================================== --- branches/cdavid/numpy/lib/tests/test_financial.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/tests/test_financial.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -32,8 +32,9 @@ from numpy.testing import * import numpy as np -class TestDocs(NumpyTestCase): - def check_doctests(self): return self.rundocs() +def test(): + import doctest + doctest.testmod() if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/lib/tests/test_format.py =================================================================== --- branches/cdavid/numpy/lib/tests/test_format.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/tests/test_format.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -506,3 +506,7 @@ for magic in bad_version_magic + malformed_magic: f = StringIO(magic) yield raises(ValueError)(format.read_array), f + + +if __name__ == "__main__": + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/lib/tests/test_function_base.py =================================================================== --- branches/cdavid/numpy/lib/tests/test_function_base.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/tests/test_function_base.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -5,11 +5,10 @@ import numpy.lib;reload(numpy.lib) from numpy.lib import * from numpy.core import * +restore_path() -del sys.path[0] - -class TestAny(NumpyTestCase): - def check_basic(self): +class TestAny(TestCase): + def test_basic(self): y1 = [0,0,1,0] y2 = [0,0,0,0] y3 = [1,0,1,0] @@ -17,14 +16,14 @@ assert(any(y3)) assert(not any(y2)) - def check_nd(self): + def test_nd(self): y1 = [[0,0,0],[0,1,0],[1,1,0]] assert(any(y1)) assert_array_equal(sometrue(y1,axis=0),[1,1,0]) assert_array_equal(sometrue(y1,axis=1),[0,1,1]) -class TestAll(NumpyTestCase): - def check_basic(self): +class TestAll(TestCase): + def test_basic(self): y1 = [0,1,1,0] y2 = [0,0,0,0] y3 = [1,1,1,1] @@ -33,14 +32,14 @@ assert(not all(y2)) assert(all(~array(y2))) - def check_nd(self): + def test_nd(self): y1 = [[0,0,1],[0,1,1],[1,1,1]] assert(not all(y1)) assert_array_equal(alltrue(y1,axis=0),[0,0,1]) assert_array_equal(alltrue(y1,axis=1),[0,0,1]) -class TestAverage(NumpyTestCase): - def check_basic(self): +class TestAverage(TestCase): + def test_basic(self): y1 = array([1,2,3]) assert(average(y1,axis=0) == 2.) y2 = array([1.,2.,3.]) @@ -61,7 +60,7 @@ y6 = matrix(rand(5,5)) assert_array_equal(y6.mean(0), average(y6,0)) - def check_weights(self): + def test_weights(self): y = arange(10) w = arange(10) assert_almost_equal(average(y, weights=w), (arange(10)**2).sum()*1./arange(10).sum()) @@ -89,7 +88,7 @@ assert_equal(average(y1, weights=w2), 5.) 
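Apart from that fix, nearly every hunk in this revision applies the same mechanical recipe for the move to nose-based testing: NumpyTestCase becomes TestCase, check_* methods are renamed test_* so that nose collects them, and the NumpyTest().run() driver at the bottom of each module becomes nose.run(argv=['', __file__]). A minimal before/after sketch of a converted test module (module and test names are invented for illustration, nose is assumed to be installed, and the patched files themselves pick up TestCase and assert_equal via ``from numpy.testing import *``)::

    import nose
    from numpy.testing import TestCase, assert_equal

    class TestExample(TestCase):          # was: class TestExample(NumpyTestCase)

        def test_addition(self):          # was: def check_addition(self)
            assert_equal(1 + 1, 2)

    if __name__ == "__main__":
        nose.run(argv=['', __file__])     # was: NumpyTest().run()

The companion change to each converted sub-package's __init__.py (shown above for numpy.fft and numpy.lib) drops the old test(level, verbosity) helper in favour of test = Tester().test and bench = Tester().bench from numpy.testing.pkgtester.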
- def check_returned(self): + def test_returned(self): y = array([[1,2,3],[4,5,6]]) # No weights @@ -116,14 +115,14 @@ assert_array_equal(scl, array([1.,6.])) -class TestSelect(NumpyTestCase): +class TestSelect(TestCase): def _select(self,cond,values,default=0): output = [] for m in range(len(cond)): output += [V[m] for V,C in zip(values,cond) if C[m]] or [default] return output - def check_basic(self): + def test_basic(self): choices = [array([1,2,3]), array([4,5,6]), array([7,8,9])] @@ -136,8 +135,8 @@ assert_equal(len(choices),3) assert_equal(len(conditions),3) -class TestLogspace(NumpyTestCase): - def check_basic(self): +class TestLogspace(TestCase): + def test_basic(self): y = logspace(0,6) assert(len(y)==50) y = logspace(0,6,num=100) @@ -147,8 +146,8 @@ y = logspace(0,6,num=7) assert_array_equal(y,[1,10,100,1e3,1e4,1e5,1e6]) -class TestLinspace(NumpyTestCase): - def check_basic(self): +class TestLinspace(TestCase): + def test_basic(self): y = linspace(0,10) assert(len(y)==50) y = linspace(2,10,num=100) @@ -159,28 +158,28 @@ assert_almost_equal(st,8/49.0) assert_array_almost_equal(y,mgrid[2:10:50j],13) - def check_corner(self): + def test_corner(self): y = list(linspace(0,1,1)) assert y == [0.0], y y = list(linspace(0,1,2.5)) assert y == [0.0, 1.0] - def check_type(self): + def test_type(self): t1 = linspace(0,1,0).dtype t2 = linspace(0,1,1).dtype t3 = linspace(0,1,2).dtype assert_equal(t1, t2) assert_equal(t2, t3) -class TestInsert(NumpyTestCase): - def check_basic(self): +class TestInsert(TestCase): + def test_basic(self): a = [1,2,3] assert_equal(insert(a,0,1), [1,1,2,3]) assert_equal(insert(a,3,1), [1,2,3,1]) assert_equal(insert(a,[1,1,1],[1,2,3]), [1,1,2,3,2,3]) -class TestAmax(NumpyTestCase): - def check_basic(self): +class TestAmax(TestCase): + def test_basic(self): a = [3,4,5,10,-3,-5,6.0] assert_equal(amax(a),10.0) b = [[3,6.0, 9.0], @@ -189,8 +188,8 @@ assert_equal(amax(b,axis=0),[8.0,10.0,9.0]) assert_equal(amax(b,axis=1),[9.0,10.0,8.0]) -class TestAmin(NumpyTestCase): - def check_basic(self): +class TestAmin(TestCase): + def test_basic(self): a = [3,4,5,10,-3,-5,6.0] assert_equal(amin(a),-5.0) b = [[3,6.0, 9.0], @@ -199,8 +198,8 @@ assert_equal(amin(b,axis=0),[3.0,3.0,2.0]) assert_equal(amin(b,axis=1),[3.0,4.0,2.0]) -class TestPtp(NumpyTestCase): - def check_basic(self): +class TestPtp(TestCase): + def test_basic(self): a = [3,4,5,10,-3,-5,6.0] assert_equal(ptp(a,axis=0),15.0) b = [[3,6.0, 9.0], @@ -209,8 +208,8 @@ assert_equal(ptp(b,axis=0),[5.0,7.0,7.0]) assert_equal(ptp(b,axis=-1),[6.0,6.0,6.0]) -class TestCumsum(NumpyTestCase): - def check_basic(self): +class TestCumsum(TestCase): + def test_basic(self): ba = [1,2,10,11,6,5,4] ba2 = [[1,2,3,4],[5,6,7,9],[10,3,4,5]] for ctype in [int8,uint8,int16,uint16,int32,uint32, @@ -225,8 +224,8 @@ [5,11,18,27], [10,13,17,22]],ctype)) -class TestProd(NumpyTestCase): - def check_basic(self): +class TestProd(TestCase): + def test_basic(self): ba = [1,2,10,11,6,5,4] ba2 = [[1,2,3,4],[5,6,7,9],[10,3,4,5]] for ctype in [int16,uint16,int32,uint32, @@ -243,8 +242,8 @@ array([50,36,84,180],ctype)) assert_array_equal(prod(a2,axis=-1),array([24, 1890, 600],ctype)) -class TestCumprod(NumpyTestCase): - def check_basic(self): +class TestCumprod(TestCase): + def test_basic(self): ba = [1,2,10,11,6,5,4] ba2 = [[1,2,3,4],[5,6,7,9],[10,3,4,5]] for ctype in [int16,uint16,int32,uint32, @@ -268,8 +267,8 @@ [ 5, 30, 210, 1890], [10, 30, 120, 600]],ctype)) -class TestDiff(NumpyTestCase): - def check_basic(self): +class TestDiff(TestCase): + def 
test_basic(self): x = [1,4,6,7,12] out = array([3,2,1,5]) out2 = array([-1,-1,4]) @@ -278,7 +277,7 @@ assert_array_equal(diff(x,n=2),out2) assert_array_equal(diff(x,n=3),out3) - def check_nd(self): + def test_nd(self): x = 20*rand(10,20,30) out1 = x[:,:,1:] - x[:,:,:-1] out2 = out1[:,:,1:] - out1[:,:,:-1] @@ -289,8 +288,8 @@ assert_array_equal(diff(x,axis=0),out3) assert_array_equal(diff(x,n=2,axis=0),out4) -class TestAngle(NumpyTestCase): - def check_basic(self): +class TestAngle(TestCase): + def test_basic(self): x = [1+3j,sqrt(2)/2.0+1j*sqrt(2)/2,1,1j,-1,-1j,1-3j,-1+3j] y = angle(x) yo = [arctan(3.0/1.0),arctan(1.0),0,pi/2,pi,-pi/2.0, @@ -300,33 +299,33 @@ assert_array_almost_equal(y,yo,11) assert_array_almost_equal(z,zo,11) -class TestTrimZeros(NumpyTestCase): +class TestTrimZeros(TestCase): """ only testing for integer splits. """ - def check_basic(self): + def test_basic(self): a= array([0,0,1,2,3,4,0]) res = trim_zeros(a) assert_array_equal(res,array([1,2,3,4])) - def check_leading_skip(self): + def test_leading_skip(self): a= array([0,0,1,0,2,3,4,0]) res = trim_zeros(a) assert_array_equal(res,array([1,0,2,3,4])) - def check_trailing_skip(self): + def test_trailing_skip(self): a= array([0,0,1,0,2,3,0,4,0]) res = trim_zeros(a) assert_array_equal(res,array([1,0,2,3,0,4])) -class TestExtins(NumpyTestCase): - def check_basic(self): +class TestExtins(TestCase): + def test_basic(self): a = array([1,3,2,1,2,3,3]) b = extract(a>1,a) assert_array_equal(b,[3,2,2,3,3]) - def check_place(self): + def test_place(self): a = array([1,4,3,2,5,8,7]) place(a,[0,1,0,1,0,1,0],[2,4,6]) assert_array_equal(a,[1,2,3,4,5,6,7]) - def check_both(self): + def test_both(self): a = rand(10) mask = a > 0.5 ac = a.copy() @@ -335,8 +334,8 @@ place(a,mask,c) assert_array_equal(a,ac) -class TestVectorize(NumpyTestCase): - def check_simple(self): +class TestVectorize(TestCase): + def test_simple(self): def addsubtract(a,b): if a > b: return a - b @@ -345,7 +344,7 @@ f = vectorize(addsubtract) r = f([0,3,6,9],[1,3,5,7]) assert_array_equal(r,[1,6,1,2]) - def check_scalar(self): + def test_scalar(self): def addsubtract(a,b): if a > b: return a - b @@ -354,59 +353,59 @@ f = vectorize(addsubtract) r = f([0,3,6,9],5) assert_array_equal(r,[5,8,1,4]) - def check_large(self): + def test_large(self): x = linspace(-3,2,10000) f = vectorize(lambda x: x) y = f(x) assert_array_equal(y, x) -class TestDigitize(NumpyTestCase): - def check_forward(self): +class TestDigitize(TestCase): + def test_forward(self): x = arange(-6,5) bins = arange(-5,5) assert_array_equal(digitize(x,bins),arange(11)) - def check_reverse(self): + def test_reverse(self): x = arange(5,-6,-1) bins = arange(5,-5,-1) assert_array_equal(digitize(x,bins),arange(11)) - def check_random(self): + def test_random(self): x = rand(10) bin = linspace(x.min(), x.max(), 10) assert all(digitize(x,bin) != 0) -class TestUnwrap(NumpyTestCase): - def check_simple(self): +class TestUnwrap(TestCase): + def test_simple(self): #check that unwrap removes jumps greather that 2*pi assert_array_equal(unwrap([1,1+2*pi]),[1,1]) #check that unwrap maintans continuity assert(all(diff(unwrap(rand(10)*100))3, [4, 0]) + assert y.ndim == 0 + assert y == 0 + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/lib/tests/test_getlimits.py =================================================================== --- branches/cdavid/numpy/lib/tests/test_getlimits.py 2008-06-20 04:17:54 UTC (rev 5301) +++ 
branches/cdavid/numpy/lib/tests/test_getlimits.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -2,39 +2,40 @@ """ from numpy.testing import * -import numpy.lib;reload(numpy.lib) +import numpy.lib +reload(numpy.lib) from numpy.lib.getlimits import finfo, iinfo from numpy import single,double,longdouble import numpy as np ################################################## -class TestPythonFloat(NumpyTestCase): - def check_singleton(self): +class TestPythonFloat(TestCase): + def test_singleton(self): ftype = finfo(float) ftype2 = finfo(float) assert_equal(id(ftype),id(ftype2)) -class TestSingle(NumpyTestCase): - def check_singleton(self): +class TestSingle(TestCase): + def test_singleton(self): ftype = finfo(single) ftype2 = finfo(single) assert_equal(id(ftype),id(ftype2)) -class TestDouble(NumpyTestCase): - def check_singleton(self): +class TestDouble(TestCase): + def test_singleton(self): ftype = finfo(double) ftype2 = finfo(double) assert_equal(id(ftype),id(ftype2)) -class TestLongdouble(NumpyTestCase): - def check_singleton(self,level=2): +class TestLongdouble(TestCase): + def test_singleton(self,level=2): ftype = finfo(longdouble) ftype2 = finfo(longdouble) assert_equal(id(ftype),id(ftype2)) -class TestIinfo(NumpyTestCase): - def check_basic(self): +class TestIinfo(TestCase): + def test_basic(self): dts = zip(['i1', 'i2', 'i4', 'i8', 'u1', 'u2', 'u4', 'u8'], [np.int8, np.int16, np.int32, np.int64, @@ -44,10 +45,11 @@ assert_equal(iinfo(dt1).max, iinfo(dt2).max) self.assertRaises(ValueError, iinfo, 'f4') - def check_unsigned_max(self): + def test_unsigned_max(self): types = np.sctypes['uint'] for T in types: assert_equal(iinfo(T).max, T(-1)) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/lib/tests/test_index_tricks.py =================================================================== --- branches/cdavid/numpy/lib/tests/test_index_tricks.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/tests/test_index_tricks.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -3,8 +3,8 @@ from numpy import array, ones, r_, mgrid restore_path() -class TestGrid(NumpyTestCase): - def check_basic(self): +class TestGrid(TestCase): + def test_basic(self): a = mgrid[-1:1:10j] b = mgrid[-1:1:0.1] assert(a.shape == (10,)) @@ -16,7 +16,7 @@ assert_almost_equal(b[-1],b[0]+19*0.1,11) assert_almost_equal(a[1]-a[0],2.0/9.0,11) - def check_nd(self): + def test_nd(self): c = mgrid[-1:1:10j,-2:2:10j] d = mgrid[-1:1:0.1,-2:2:0.2] assert(c.shape == (2,10,10)) @@ -28,22 +28,22 @@ assert_array_almost_equal(d[0,1,:]-d[0,0,:], 0.1*ones(20,'d'),11) assert_array_almost_equal(d[1,:,1]-d[1,:,0], 0.2*ones(20,'d'),11) -class TestConcatenator(NumpyTestCase): - def check_1d(self): +class TestConcatenator(TestCase): + def test_1d(self): assert_array_equal(r_[1,2,3,4,5,6],array([1,2,3,4,5,6])) b = ones(5) c = r_[b,0,0,b] assert_array_equal(c,[1,1,1,1,1,0,0,1,1,1,1,1]) - def check_mixed_type(self): + def test_mixed_type(self): g = r_[10.1, 1:10] assert(g.dtype == 'f8') - def check_more_mixed_type(self): + def test_more_mixed_type(self): g = r_[-10.1, array([1]), array([2,3,4]), 10.0] assert(g.dtype == 'f8') - def check_2d(self): + def test_2d(self): b = rand(5,5) c = rand(5,5) d = r_['1',b,c] # append columns @@ -55,5 +55,6 @@ assert_array_equal(d[:5,:],b) assert_array_equal(d[5:,:],c) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/lib/tests/test_io.py 
=================================================================== --- branches/cdavid/numpy/lib/tests/test_io.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/tests/test_io.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -2,7 +2,7 @@ import numpy as np import StringIO -class TestSaveTxt(NumpyTestCase): +class TestSaveTxt(TestCase): def test_array(self): a =np.array( [[1,2],[3,4]], float) c = StringIO.StringIO() @@ -62,7 +62,7 @@ assert_equal(lines, ['01 : 2.0\n', '03 : 4.0\n']) -class TestLoadTxt(NumpyTestCase): +class TestLoadTxt(TestCase): def test_record(self): c = StringIO.StringIO() c.write('1 2\n3 4') @@ -164,7 +164,7 @@ assert_array_equal(x, a[:,1:]) -class Testfromregex(NumpyTestCase): +class Testfromregex(TestCase): def test_record(self): c = StringIO.StringIO() c.write('1.312 foo\n1.534 bar\n4.444 qux') @@ -196,5 +196,6 @@ a = np.array([(1312,), (1534,), (4444,)], dtype=dt) assert_array_equal(x, a) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/lib/tests/test_machar.py =================================================================== --- branches/cdavid/numpy/lib/tests/test_machar.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/tests/test_machar.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,10 +1,10 @@ -from numpy.testing import NumpyTestCase, NumpyTest +from numpy.testing import * from numpy.lib.machar import MachAr import numpy.core.numerictypes as ntypes from numpy import seterr, array -class TestMachAr(NumpyTestCase): +class TestMachAr(TestCase): def _run_machar_highprec(self): # Instanciate MachAr instance with high enough precision to cause # underflow @@ -26,5 +26,6 @@ finally: seterr(**serrstate) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/lib/tests/test_polynomial.py =================================================================== --- branches/cdavid/numpy/lib/tests/test_polynomial.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/tests/test_polynomial.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -76,13 +76,14 @@ from numpy.testing import * import numpy as np -class TestDocs(NumpyTestCase): - def check_doctests(self): return self.rundocs() +class TestDocs(TestCase): + def test_doctests(self): + return rundocs() - def check_roots(self): + def test_roots(self): assert_array_equal(np.roots([1,0,0]), [0,0]) - def check_str_leading_zeros(self): + def test_str_leading_zeros(self): p = np.poly1d([4,3,2,1]) p[3] = 0 assert_equal(str(p), @@ -94,7 +95,7 @@ p[1] = 0 assert_equal(str(p), " \n0") - def check_polyfit(self) : + def test_polyfit(self) : c = np.array([3., 2., 1.]) x = np.linspace(0,2,5) y = np.polyval(c,x) @@ -109,5 +110,6 @@ cc = np.concatenate((c,c), axis=1) assert_almost_equal(cc, np.polyfit(x,yy,2)) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/lib/tests/test_regression.py =================================================================== --- branches/cdavid/numpy/lib/tests/test_regression.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/tests/test_regression.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -6,7 +6,7 @@ rlevel = 1 -class TestRegression(NumpyTestCase): +class TestRegression(TestCase): def test_polyfit_build(self,level=rlevel): """Ticket #628""" ref = [-1.06123820e-06, 5.70886914e-04, -1.13822012e-01, @@ -30,4 +30,4 @@ if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', 
__file__]) Modified: branches/cdavid/numpy/lib/tests/test_shape_base.py =================================================================== --- branches/cdavid/numpy/lib/tests/test_shape_base.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/tests/test_shape_base.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,31 +1,34 @@ from numpy.testing import * set_package_path() -import numpy.lib; from numpy.lib import * from numpy.core import * restore_path() -class TestApplyAlongAxis(NumpyTestCase): - def check_simple(self): +class TestApplyAlongAxis(TestCase): + def test_simple(self): a = ones((20,10),'d') assert_array_equal(apply_along_axis(len,0,a),len(a)*ones(shape(a)[1])) - def check_simple101(self,level=11): + + def test_simple101(self,level=11): a = ones((10,101),'d') assert_array_equal(apply_along_axis(len,0,a),len(a)*ones(shape(a)[1])) - def check_3d(self): + def test_3d(self): a = arange(27).reshape((3,3,3)) - assert_array_equal(apply_along_axis(sum,0,a), [[27,30,33],[36,39,42],[45,48,51]]) + assert_array_equal(apply_along_axis(sum,0,a), + [[27,30,33],[36,39,42],[45,48,51]]) -class TestArraySplit(NumpyTestCase): - def check_integer_0_split(self): + +class TestArraySplit(TestCase): + def test_integer_0_split(self): a = arange(10) try: res = array_split(a,0) assert(0) # it should have thrown a value error except ValueError: pass - def check_integer_split(self): + + def test_integer_split(self): a = arange(10) res = array_split(a,1) desired = [arange(10)] @@ -78,19 +81,22 @@ arange(4,5),arange(5,6), arange(6,7), arange(7,8), arange(8,9), arange(9,10),array([])] compare_results(res,desired) - def check_integer_split_2D_rows(self): + + def test_integer_split_2D_rows(self): a = array([arange(10),arange(10)]) res = array_split(a,3,axis=0) desired = [array([arange(10)]),array([arange(10)]),array([])] compare_results(res,desired) - def check_integer_split_2D_cols(self): + + def test_integer_split_2D_cols(self): a = array([arange(10),arange(10)]) res = array_split(a,3,axis=-1) desired = [array([arange(4),arange(4)]), array([arange(4,7),arange(4,7)]), array([arange(7,10),arange(7,10)])] compare_results(res,desired) - def check_integer_split_2D_default(self): + + def test_integer_split_2D_default(self): """ This will fail if we change default axis """ a = array([arange(10),arange(10)]) @@ -99,20 +105,21 @@ compare_results(res,desired) #perhaps should check higher dimensions - def check_index_split_simple(self): + def test_index_split_simple(self): a = arange(10) indices = [1,5,7] res = array_split(a,indices,axis=-1) desired = [arange(0,1),arange(1,5),arange(5,7),arange(7,10)] compare_results(res,desired) - def check_index_split_low_bound(self): + def test_index_split_low_bound(self): a = arange(10) indices = [0,5,7] res = array_split(a,indices,axis=-1) desired = [array([]),arange(0,5),arange(5,7),arange(7,10)] compare_results(res,desired) - def check_index_split_high_bound(self): + + def test_index_split_high_bound(self): a = arange(10) indices = [0,5,7,10,12] res = array_split(a,indices,axis=-1) @@ -120,18 +127,19 @@ array([]),array([])] compare_results(res,desired) -class TestSplit(NumpyTestCase): + +class TestSplit(TestCase): """* This function is essentially the same as array_split, except that it test if splitting will result in an equal split. Only test for this case. 
*""" - def check_equal_split(self): + def test_equal_split(self): a = arange(10) res = split(a,2) desired = [arange(5),arange(5,10)] compare_results(res,desired) - def check_unequal_split(self): + def test_unequal_split(self): a = arange(10) try: res = split(a,3) @@ -139,29 +147,34 @@ except ValueError: pass -class TestAtleast1d(NumpyTestCase): - def check_0D_array(self): + +class TestAtleast1d(TestCase): + def test_0D_array(self): a = array(1); b = array(2); res=map(atleast_1d,[a,b]) desired = [array([1]),array([2])] assert_array_equal(res,desired) - def check_1D_array(self): + + def test_1D_array(self): a = array([1,2]); b = array([2,3]); res=map(atleast_1d,[a,b]) desired = [array([1,2]),array([2,3])] assert_array_equal(res,desired) - def check_2D_array(self): + + def test_2D_array(self): a = array([[1,2],[1,2]]); b = array([[2,3],[2,3]]); res=map(atleast_1d,[a,b]) desired = [a,b] assert_array_equal(res,desired) - def check_3D_array(self): + + def test_3D_array(self): a = array([[1,2],[1,2]]); b = array([[2,3],[2,3]]); a = array([a,a]);b = array([b,b]); res=map(atleast_1d,[a,b]) desired = [a,b] assert_array_equal(res,desired) - def check_r1array(self): + + def test_r1array(self): """ Test to make sure equivalent Travis O's r1array function """ assert(atleast_1d(3).shape == (1,)) @@ -170,114 +183,130 @@ assert(atleast_1d(3.0).shape == (1,)) assert(atleast_1d([[2,3],[4,5]]).shape == (2,2)) -class TestAtleast2d(NumpyTestCase): - def check_0D_array(self): +class TestAtleast2d(TestCase): + def test_0D_array(self): a = array(1); b = array(2); res=map(atleast_2d,[a,b]) desired = [array([[1]]),array([[2]])] assert_array_equal(res,desired) - def check_1D_array(self): + + def test_1D_array(self): a = array([1,2]); b = array([2,3]); res=map(atleast_2d,[a,b]) desired = [array([[1,2]]),array([[2,3]])] assert_array_equal(res,desired) - def check_2D_array(self): + + def test_2D_array(self): a = array([[1,2],[1,2]]); b = array([[2,3],[2,3]]); res=map(atleast_2d,[a,b]) desired = [a,b] assert_array_equal(res,desired) - def check_3D_array(self): + + def test_3D_array(self): a = array([[1,2],[1,2]]); b = array([[2,3],[2,3]]); a = array([a,a]);b = array([b,b]); res=map(atleast_2d,[a,b]) desired = [a,b] assert_array_equal(res,desired) - def check_r2array(self): + + def test_r2array(self): """ Test to make sure equivalent Travis O's r2array function """ assert(atleast_2d(3).shape == (1,1)) assert(atleast_2d([3j,1]).shape == (1,2)) assert(atleast_2d([[[3,1],[4,5]],[[3,5],[1,2]]]).shape == (2,2,2)) -class TestAtleast3d(NumpyTestCase): - def check_0D_array(self): + +class TestAtleast3d(TestCase): + def test_0D_array(self): a = array(1); b = array(2); res=map(atleast_3d,[a,b]) desired = [array([[[1]]]),array([[[2]]])] assert_array_equal(res,desired) - def check_1D_array(self): + + def test_1D_array(self): a = array([1,2]); b = array([2,3]); res=map(atleast_3d,[a,b]) desired = [array([[[1],[2]]]),array([[[2],[3]]])] assert_array_equal(res,desired) - def check_2D_array(self): + + def test_2D_array(self): a = array([[1,2],[1,2]]); b = array([[2,3],[2,3]]); res=map(atleast_3d,[a,b]) desired = [a[:,:,newaxis],b[:,:,newaxis]] assert_array_equal(res,desired) - def check_3D_array(self): + + def test_3D_array(self): a = array([[1,2],[1,2]]); b = array([[2,3],[2,3]]); a = array([a,a]);b = array([b,b]); res=map(atleast_3d,[a,b]) desired = [a,b] assert_array_equal(res,desired) -class TestHstack(NumpyTestCase): - def check_0D_array(self): +class TestHstack(TestCase): + def test_0D_array(self): a = array(1); b = array(2); 
res=hstack([a,b]) desired = array([1,2]) assert_array_equal(res,desired) - def check_1D_array(self): + + def test_1D_array(self): a = array([1]); b = array([2]); res=hstack([a,b]) desired = array([1,2]) assert_array_equal(res,desired) - def check_2D_array(self): + + def test_2D_array(self): a = array([[1],[2]]); b = array([[1],[2]]); res=hstack([a,b]) desired = array([[1,1],[2,2]]) assert_array_equal(res,desired) -class TestVstack(NumpyTestCase): - def check_0D_array(self): +class TestVstack(TestCase): + def test_0D_array(self): a = array(1); b = array(2); res=vstack([a,b]) desired = array([[1],[2]]) assert_array_equal(res,desired) - def check_1D_array(self): + + def test_1D_array(self): a = array([1]); b = array([2]); res=vstack([a,b]) desired = array([[1],[2]]) assert_array_equal(res,desired) - def check_2D_array(self): + + def test_2D_array(self): a = array([[1],[2]]); b = array([[1],[2]]); res=vstack([a,b]) desired = array([[1],[2],[1],[2]]) assert_array_equal(res,desired) - def check_2D_array2(self): + + def test_2D_array2(self): a = array([1,2]); b = array([1,2]); res=vstack([a,b]) desired = array([[1,2],[1,2]]) assert_array_equal(res,desired) -class TestDstack(NumpyTestCase): - def check_0D_array(self): +class TestDstack(TestCase): + def test_0D_array(self): a = array(1); b = array(2); res=dstack([a,b]) desired = array([[[1,2]]]) assert_array_equal(res,desired) - def check_1D_array(self): + + def test_1D_array(self): a = array([1]); b = array([2]); res=dstack([a,b]) desired = array([[[1,2]]]) assert_array_equal(res,desired) - def check_2D_array(self): + + def test_2D_array(self): a = array([[1],[2]]); b = array([[1],[2]]); res=dstack([a,b]) desired = array([[[1,1]],[[2,2,]]]) assert_array_equal(res,desired) - def check_2D_array2(self): + + def test_2D_array2(self): a = array([1,2]); b = array([1,2]); res=dstack([a,b]) desired = array([[[1,1],[2,2]]]) @@ -286,49 +315,54 @@ """ array_split has more comprehensive test of splitting. only do simple test on hsplit, vsplit, and dsplit """ -class TestHsplit(NumpyTestCase): +class TestHsplit(TestCase): """ only testing for integer splits. """ - def check_0D_array(self): + def test_0D_array(self): a= array(1) try: hsplit(a,2) assert(0) except ValueError: pass - def check_1D_array(self): + + def test_1D_array(self): a= array([1,2,3,4]) res = hsplit(a,2) desired = [array([1,2]),array([3,4])] compare_results(res,desired) - def check_2D_array(self): + + def test_2D_array(self): a= array([[1,2,3,4], [1,2,3,4]]) res = hsplit(a,2) desired = [array([[1,2],[1,2]]),array([[3,4],[3,4]])] compare_results(res,desired) -class TestVsplit(NumpyTestCase): + +class TestVsplit(TestCase): """ only testing for integer splits. """ - def check_1D_array(self): + def test_1D_array(self): a= array([1,2,3,4]) try: vsplit(a,2) assert(0) except ValueError: pass - def check_2D_array(self): + + def test_2D_array(self): a= array([[1,2,3,4], [1,2,3,4]]) res = vsplit(a,2) desired = [array([[1,2,3,4]]),array([[1,2,3,4]])] compare_results(res,desired) -class TestDsplit(NumpyTestCase): + +class TestDsplit(TestCase): """ only testing for integer splits. 
""" - def check_2D_array(self): + def test_2D_array(self): a= array([[1,2,3,4], [1,2,3,4]]) try: @@ -336,7 +370,8 @@ assert(0) except ValueError: pass - def check_3D_array(self): + + def test_3D_array(self): a= array([[[1,2,3,4], [1,2,3,4]], [[1,2,3,4], @@ -346,8 +381,9 @@ array([[[3,4],[3,4]],[[3,4],[3,4]]])] compare_results(res,desired) -class TestSqueeze(NumpyTestCase): - def check_basic(self): + +class TestSqueeze(TestCase): + def test_basic(self): a = rand(20,10,10,1,1) b = rand(20,1,10,1,20) c = rand(1,1,20,10) @@ -355,8 +391,9 @@ assert_array_equal(squeeze(b),reshape(b,(20,10,20))) assert_array_equal(squeeze(c),reshape(c,(20,10))) -class TestKron(NumpyTestCase): - def check_return_type(self): + +class TestKron(TestCase): + def test_return_type(self): a = ones([2,2]) m = asmatrix(a) assert_equal(type(kron(a,a)), ndarray) @@ -372,8 +409,8 @@ assert_equal(type(kron(ma,a)), myarray) -class TestTile(NumpyTestCase): - def check_basic(self): +class TestTile(TestCase): + def test_basic(self): a = array([0,1,2]) b = [[1,2],[3,4]] assert_equal(tile(a,2), [0,1,2,0,1,2]) @@ -384,12 +421,12 @@ assert_equal(tile(b,(2,2)),[[1,2,1,2],[3,4,3,4], [1,2,1,2],[3,4,3,4]]) - def check_empty(self): + def test_empty(self): a = array([[[]]]) d = tile(a,(3,2,5)).shape assert_equal(d,(3,2,0)) - def check_kroncompare(self): + def test_kroncompare(self): import numpy.random as nr reps=[(2,),(1,2),(2,1),(2,2),(2,3,2),(3,2)] shape=[(3,),(2,3),(3,4,3),(3,2,3),(4,3,2,4),(2,2)] @@ -401,12 +438,12 @@ klarge = kron(a, b) assert_equal(large, klarge) + # Utility - def compare_results(res,desired): for i in range(len(desired)): assert_array_equal(res[i],desired[i]) if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/lib/tests/test_twodim_base.py =================================================================== --- branches/cdavid/numpy/lib/tests/test_twodim_base.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/tests/test_twodim_base.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -17,8 +17,8 @@ data = add.outer(data,data) return data -class TestEye(NumpyTestCase): - def check_basic(self): +class TestEye(TestCase): + def test_basic(self): assert_equal(eye(4),array([[1,0,0,0], [0,1,0,0], [0,0,1,0], @@ -29,7 +29,7 @@ [0,0,0,1]],'f')) assert_equal(eye(3) == 1, eye(3,dtype=bool)) - def check_diag(self): + def test_diag(self): assert_equal(eye(4,k=1),array([[0,1,0,0], [0,0,1,0], [0,0,0,1], @@ -38,7 +38,7 @@ [1,0,0,0], [0,1,0,0], [0,0,1,0]])) - def check_2d(self): + def test_2d(self): assert_equal(eye(4,3),array([[1,0,0], [0,1,0], [0,0,1], @@ -46,7 +46,7 @@ assert_equal(eye(3,4),array([[1,0,0,0], [0,1,0,0], [0,0,1,0]])) - def check_diag2d(self): + def test_diag2d(self): assert_equal(eye(3,4,k=2),array([[0,0,1,0], [0,0,0,1], [0,0,0,0]])) @@ -55,8 +55,8 @@ [1,0,0], [0,1,0]])) -class TestDiag(NumpyTestCase): - def check_vector(self): +class TestDiag(TestCase): + def test_vector(self): vals = (100*arange(5)).astype('l') b = zeros((5,5)) for k in range(5): @@ -70,7 +70,7 @@ assert_equal(diag(vals,k=2), b) assert_equal(diag(vals,k=-2), c) - def check_matrix(self): + def test_matrix(self): vals = (100*get_mat(5)+1).astype('l') b = zeros((5,)) for k in range(5): @@ -84,8 +84,8 @@ b[k] = vals[k+2,k] assert_equal(diag(vals,-2),b[:3]) -class TestFliplr(NumpyTestCase): - def check_basic(self): +class TestFliplr(TestCase): + def test_basic(self): self.failUnlessRaises(ValueError, fliplr, ones(4)) a = get_mat(4) b = a[:,::-1] @@ -96,8 +96,8 @@ [5,4,3]] 
assert_equal(fliplr(a),b) -class TestFlipud(NumpyTestCase): - def check_basic(self): +class TestFlipud(TestCase): + def test_basic(self): a = get_mat(4) b = a[::-1,:] assert_equal(flipud(a),b) @@ -107,8 +107,8 @@ [0,1,2]] assert_equal(flipud(a),b) -class TestRot90(NumpyTestCase): - def check_basic(self): +class TestRot90(TestCase): + def test_basic(self): self.failUnlessRaises(ValueError, rot90, ones(4)) a = [[0,1,2], @@ -133,12 +133,12 @@ for k in range(0,13,4): assert_equal(rot90(a,k=k),b4) - def check_axes(self): + def test_axes(self): a = ones((50,40,3)) assert_equal(rot90(a).shape,(40,50,3)) -class TestHistogram2d(NumpyTestCase): - def check_simple(self): +class TestHistogram2d(TestCase): + def test_simple(self): x = array([ 0.41702200, 0.72032449, 0.00011437481, 0.302332573, 0.146755891]) y = array([ 0.09233859, 0.18626021, 0.34556073, 0.39676747, 0.53881673]) xedges = np.linspace(0,1,10) @@ -161,7 +161,7 @@ assert_array_equal(xedges, np.linspace(0,9,11)) assert_array_equal(yedges, np.linspace(0,9,11)) - def check_asym(self): + def test_asym(self): x = array([1, 1, 2, 3, 4, 4, 4, 5]) y = array([1, 3, 2, 0, 1, 2, 3, 4]) H, xed, yed = histogram2d(x,y, (6, 5), range = [[0,6],[0,5]], normed=True) @@ -174,7 +174,7 @@ assert_array_almost_equal(H, answer/8., 3) assert_array_equal(xed, np.linspace(0,6,7)) assert_array_equal(yed, np.linspace(0,5,6)) - def check_norm(self): + def test_norm(self): x = array([1,2,3,1,2,3,1,2,3]) y = array([1,1,1,2,2,2,3,3,3]) H, xed, yed = histogram2d(x,y,[[1,2,3,5], [1,2,3,5]], normed=True) @@ -183,12 +183,13 @@ [.5,.5,.25]])/9. assert_array_almost_equal(H, answer, 3) - def check_all_outliers(self): + def test_all_outliers(self): r = rand(100)+1. H, xed, yed = histogram2d(r, r, (4, 5), range=([0,1], [0,1])) assert_array_equal(H, 0) -class TestTri(NumpyTestCase): + +class TestTri(TestCase): def test_dtype(self): out = array([[1,0,0], [1,1,0], @@ -196,5 +197,6 @@ assert_array_equal(tri(3),out) assert_array_equal(tri(3,dtype=bool),out.astype(bool)) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/lib/tests/test_type_check.py =================================================================== --- branches/cdavid/numpy/lib/tests/test_type_check.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/tests/test_type_check.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -10,9 +10,9 @@ def assert_all(x): assert(all(x)), x -class TestMintypecode(NumpyTestCase): +class TestMintypecode(TestCase): - def check_default_1(self): + def test_default_1(self): for itype in '1bcsuwil': assert_equal(mintypecode(itype),'d') assert_equal(mintypecode('f'),'f') @@ -20,7 +20,7 @@ assert_equal(mintypecode('F'),'F') assert_equal(mintypecode('D'),'D') - def check_default_2(self): + def test_default_2(self): for itype in '1bcsuwil': assert_equal(mintypecode(itype+'f'),'f') assert_equal(mintypecode(itype+'d'),'d') @@ -45,7 +45,7 @@ assert_equal(mintypecode('DF'),'D') assert_equal(mintypecode('DD'),'D') - def check_default_3(self): + def test_default_3(self): assert_equal(mintypecode('fdF'),'D') #assert_equal(mintypecode('fdF',savespace=1),'F') assert_equal(mintypecode('fdD'),'D') @@ -59,8 +59,8 @@ #assert_equal(mintypecode('idF',savespace=1),'F') assert_equal(mintypecode('idD'),'D') -class TestIsscalar(NumpyTestCase): - def check_basic(self): +class TestIsscalar(TestCase): + def test_basic(self): assert(isscalar(3)) assert(not isscalar([3])) assert(not isscalar((3,))) @@ -68,145 +68,145 @@ assert(isscalar(10L)) 
assert(isscalar(4.0)) -class TestReal(NumpyTestCase): - def check_real(self): +class TestReal(TestCase): + def test_real(self): y = rand(10,) assert_array_equal(y,real(y)) - def check_cmplx(self): + def test_cmplx(self): y = rand(10,)+1j*rand(10,) assert_array_equal(y.real,real(y)) -class TestImag(NumpyTestCase): - def check_real(self): +class TestImag(TestCase): + def test_real(self): y = rand(10,) assert_array_equal(0,imag(y)) - def check_cmplx(self): + def test_cmplx(self): y = rand(10,)+1j*rand(10,) assert_array_equal(y.imag,imag(y)) -class TestIscomplex(NumpyTestCase): - def check_fail(self): +class TestIscomplex(TestCase): + def test_fail(self): z = array([-1,0,1]) res = iscomplex(z) assert(not sometrue(res,axis=0)) - def check_pass(self): + def test_pass(self): z = array([-1j,1,0]) res = iscomplex(z) assert_array_equal(res,[1,0,0]) -class TestIsreal(NumpyTestCase): - def check_pass(self): +class TestIsreal(TestCase): + def test_pass(self): z = array([-1,0,1j]) res = isreal(z) assert_array_equal(res,[1,1,0]) - def check_fail(self): + def test_fail(self): z = array([-1j,1,0]) res = isreal(z) assert_array_equal(res,[0,1,1]) -class TestIscomplexobj(NumpyTestCase): - def check_basic(self): +class TestIscomplexobj(TestCase): + def test_basic(self): z = array([-1,0,1]) assert(not iscomplexobj(z)) z = array([-1j,0,-1]) assert(iscomplexobj(z)) -class TestIsrealobj(NumpyTestCase): - def check_basic(self): +class TestIsrealobj(TestCase): + def test_basic(self): z = array([-1,0,1]) assert(isrealobj(z)) z = array([-1j,0,-1]) assert(not isrealobj(z)) -class TestIsnan(NumpyTestCase): - def check_goodvalues(self): +class TestIsnan(TestCase): + def test_goodvalues(self): z = array((-1.,0.,1.)) res = isnan(z) == 0 assert_all(alltrue(res,axis=0)) - def check_posinf(self): + def test_posinf(self): olderr = seterr(divide='ignore') assert_all(isnan(array((1.,))/0.) == 0) seterr(**olderr) - def check_neginf(self): + def test_neginf(self): olderr = seterr(divide='ignore') assert_all(isnan(array((-1.,))/0.) == 0) seterr(**olderr) - def check_ind(self): + def test_ind(self): olderr = seterr(divide='ignore', invalid='ignore') assert_all(isnan(array((0.,))/0.) == 1) seterr(**olderr) - #def check_qnan(self): log(-1) return pi*j now + #def test_qnan(self): log(-1) return pi*j now # assert_all(isnan(log(-1.)) == 1) - def check_integer(self): + def test_integer(self): assert_all(isnan(1) == 0) - def check_complex(self): + def test_complex(self): assert_all(isnan(1+1j) == 0) - def check_complex1(self): + def test_complex1(self): olderr = seterr(divide='ignore', invalid='ignore') assert_all(isnan(array(0+0j)/0.) == 1) seterr(**olderr) -class TestIsfinite(NumpyTestCase): - def check_goodvalues(self): +class TestIsfinite(TestCase): + def test_goodvalues(self): z = array((-1.,0.,1.)) res = isfinite(z) == 1 assert_all(alltrue(res,axis=0)) - def check_posinf(self): + def test_posinf(self): olderr = seterr(divide='ignore') assert_all(isfinite(array((1.,))/0.) == 0) seterr(**olderr) - def check_neginf(self): + def test_neginf(self): olderr = seterr(divide='ignore') assert_all(isfinite(array((-1.,))/0.) == 0) seterr(**olderr) - def check_ind(self): + def test_ind(self): olderr = seterr(divide='ignore', invalid='ignore') assert_all(isfinite(array((0.,))/0.) 
== 0) seterr(**olderr) - #def check_qnan(self): + #def test_qnan(self): # assert_all(isfinite(log(-1.)) == 0) - def check_integer(self): + def test_integer(self): assert_all(isfinite(1) == 1) - def check_complex(self): + def test_complex(self): assert_all(isfinite(1+1j) == 1) - def check_complex1(self): + def test_complex1(self): olderr = seterr(divide='ignore', invalid='ignore') assert_all(isfinite(array(1+1j)/0.) == 0) seterr(**olderr) -class TestIsinf(NumpyTestCase): - def check_goodvalues(self): +class TestIsinf(TestCase): + def test_goodvalues(self): z = array((-1.,0.,1.)) res = isinf(z) == 0 assert_all(alltrue(res,axis=0)) - def check_posinf(self): + def test_posinf(self): olderr = seterr(divide='ignore') assert_all(isinf(array((1.,))/0.) == 1) seterr(**olderr) - def check_posinf_scalar(self): + def test_posinf_scalar(self): olderr = seterr(divide='ignore') assert_all(isinf(array(1.,)/0.) == 1) seterr(**olderr) - def check_neginf(self): + def test_neginf(self): olderr = seterr(divide='ignore') assert_all(isinf(array((-1.,))/0.) == 1) seterr(**olderr) - def check_neginf_scalar(self): + def test_neginf_scalar(self): olderr = seterr(divide='ignore') assert_all(isinf(array(-1.)/0.) == 1) seterr(**olderr) - def check_ind(self): + def test_ind(self): olderr = seterr(divide='ignore', invalid='ignore') assert_all(isinf(array((0.,))/0.) == 0) seterr(**olderr) - #def check_qnan(self): + #def test_qnan(self): # assert_all(isinf(log(-1.)) == 0) # assert_all(isnan(log(-1.)) == 1) -class TestIsposinf(NumpyTestCase): - def check_generic(self): +class TestIsposinf(TestCase): + def test_generic(self): olderr = seterr(divide='ignore', invalid='ignore') vals = isposinf(array((-1.,0,1))/0.) seterr(**olderr) @@ -214,8 +214,8 @@ assert(vals[1] == 0) assert(vals[2] == 1) -class TestIsneginf(NumpyTestCase): - def check_generic(self): +class TestIsneginf(TestCase): + def test_generic(self): olderr = seterr(divide='ignore', invalid='ignore') vals = isneginf(array((-1.,0,1))/0.) seterr(**olderr) @@ -223,21 +223,21 @@ assert(vals[1] == 0) assert(vals[2] == 0) -class TestNanToNum(NumpyTestCase): - def check_generic(self): +class TestNanToNum(TestCase): + def test_generic(self): olderr = seterr(divide='ignore', invalid='ignore') vals = nan_to_num(array((-1.,0,1))/0.) seterr(**olderr) assert_all(vals[0] < -1e10) and assert_all(isfinite(vals[0])) assert(vals[1] == 0) assert_all(vals[2] > 1e10) and assert_all(isfinite(vals[2])) - def check_integer(self): + def test_integer(self): vals = nan_to_num(1) assert_all(vals == 1) - def check_complex_good(self): + def test_complex_good(self): vals = nan_to_num(1+1j) assert_all(vals == 1+1j) - def check_complex_bad(self): + def test_complex_bad(self): v = 1+1j olderr = seterr(divide='ignore', invalid='ignore') v += array(0+1.j)/0. @@ -245,7 +245,7 @@ vals = nan_to_num(v) # !! This is actually (unexpectedly) zero assert_all(isfinite(vals)) - def check_complex_bad2(self): + def test_complex_bad2(self): v = 1+1j olderr = seterr(divide='ignore', invalid='ignore') v += array(-1+1.j)/0. 
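# Illustrative sketch, not part of this changeset: the isnan/isinf/isfinite
# tests above deliberately divide by zero, so each one saves NumPy's
# floating-point error state with seterr(...), performs the division, and
# restores the previous state afterwards.  The standalone pattern looks like
# this (the array values are arbitrary examples):
import numpy as np

olderr = np.seterr(divide='ignore', invalid='ignore')   # silence 1./0. and 0./0.
try:
    vals = np.array([-1., 0., 1.]) / 0.                 # gives -inf, nan, +inf
    assert np.isinf(vals[0]) and np.isnan(vals[1]) and np.isinf(vals[2])
finally:
    np.seterr(**olderr)                                  # restore previous settings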
@@ -259,8 +259,8 @@ #assert_all(vals.real < -1e10) and assert_all(isfinite(vals)) -class TestRealIfClose(NumpyTestCase): - def check_basic(self): +class TestRealIfClose(TestCase): + def test_basic(self): a = rand(10) b = real_if_close(a+1e-15j) assert_all(isrealobj(b)) @@ -270,11 +270,12 @@ b = real_if_close(a+1e-7j,tol=1e-6) assert_all(isrealobj(b)) -class TestArrayConversion(NumpyTestCase): - def check_asfarray(self): +class TestArrayConversion(TestCase): + def test_asfarray(self): a = asfarray(array([1,2,3])) assert_equal(a.__class__,ndarray) assert issubdtype(a.dtype,float) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/lib/tests/test_ufunclike.py =================================================================== --- branches/cdavid/numpy/lib/tests/test_ufunclike.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/lib/tests/test_ufunclike.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -59,8 +59,10 @@ from numpy.testing import * -class TestDocs(NumpyTestCase): - def check_doctests(self): return self.rundocs() +class TestDocs(TestCase): + def test_doctests(self): + return rundocs() + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Copied: branches/cdavid/numpy/linalg/SConscript (from rev 5301, trunk/numpy/linalg/SConscript) Deleted: branches/cdavid/numpy/linalg/SConstruct =================================================================== --- branches/cdavid/numpy/linalg/SConstruct 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/linalg/SConstruct 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,29 +0,0 @@ -# Last Change: Tue May 20 05:00 PM 2008 J -# vim:syntax=python -import os.path - -from numscons import GetNumpyEnvironment, scons_get_paths, \ - scons_get_mathlib -from numscons import CheckF77LAPACK -from numscons import write_info - -env = GetNumpyEnvironment(ARGUMENTS) -env.Append(CPPPATH = scons_get_paths(env['include_bootstrap'])) - -config = env.NumpyConfigure(custom_tests = - {'CheckLAPACK' : CheckF77LAPACK}) - -use_lapack = config.CheckLAPACK() - -mlib = scons_get_mathlib(env) -env.AppendUnique(LIBS = mlib) - -config.Finish() -write_info(env) - -sources = ['lapack_litemodule.c'] -if not use_lapack: - sources.extend(['python_xerbla.c', 'zlapack_lite.c', 'dlapack_lite.c', - 'blas_lite.c', 'dlamch.c', 'f2c_lite.c']) -lapack_lite = env.NumpyPythonExtension('lapack_lite', source = sources) - Copied: branches/cdavid/numpy/linalg/SConstruct (from rev 5301, trunk/numpy/linalg/SConstruct) Modified: branches/cdavid/numpy/linalg/__init__.py =================================================================== --- branches/cdavid/numpy/linalg/__init__.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/linalg/__init__.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -3,6 +3,6 @@ from linalg import * -def test(level=1, verbosity=1): - from numpy.testing import NumpyTest - return NumpyTest().test(level, verbosity) +from numpy.testing.pkgtester import Tester +test = Tester().test +bench = Tester().test Modified: branches/cdavid/numpy/linalg/tests/test_linalg.py =================================================================== --- branches/cdavid/numpy/linalg/tests/test_linalg.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/linalg/tests/test_linalg.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -23,28 +23,28 @@ decimal = 12 old_assert_almost_equal(a, b, decimal=decimal, **kw) -class LinalgTestCase(NumpyTestCase): - def check_single(self): +class LinalgTestCase: + def 
test_single(self): a = array([[1.,2.], [3.,4.]], dtype=single) b = array([2., 1.], dtype=single) self.do(a, b) - def check_double(self): + def test_double(self): a = array([[1.,2.], [3.,4.]], dtype=double) b = array([2., 1.], dtype=double) self.do(a, b) - def check_csingle(self): + def test_csingle(self): a = array([[1.+2j,2+3j], [3+4j,4+5j]], dtype=csingle) b = array([2.+1j, 1.+2j], dtype=csingle) self.do(a, b) - def check_cdouble(self): + def test_cdouble(self): a = array([[1.+2j,2+3j], [3+4j,4+5j]], dtype=cdouble) b = array([2.+1j, 1.+2j], dtype=cdouble) self.do(a, b) - def check_empty(self): + def test_empty(self): a = atleast_2d(array([], dtype = double)) b = atleast_2d(array([], dtype = double)) try: @@ -53,79 +53,79 @@ except linalg.LinAlgError, e: pass - def check_nonarray(self): + def test_nonarray(self): a = [[1,2], [3,4]] b = [2, 1] self.do(a,b) - def check_matrix_b_only(self): + def test_matrix_b_only(self): """Check that matrix type is preserved.""" a = array([[1.,2.], [3.,4.]]) b = matrix([2., 1.]).T self.do(a, b) - def check_matrix_a_and_b(self): + def test_matrix_a_and_b(self): """Check that matrix type is preserved.""" a = matrix([[1.,2.], [3.,4.]]) b = matrix([2., 1.]).T self.do(a, b) -class TestSolve(LinalgTestCase): +class TestSolve(LinalgTestCase, TestCase): def do(self, a, b): x = linalg.solve(a, b) assert_almost_equal(b, dot(a, x)) assert imply(isinstance(b, matrix), isinstance(x, matrix)) -class TestInv(LinalgTestCase): +class TestInv(LinalgTestCase, TestCase): def do(self, a, b): a_inv = linalg.inv(a) assert_almost_equal(dot(a, a_inv), identity(asarray(a).shape[0])) assert imply(isinstance(a, matrix), isinstance(a_inv, matrix)) -class TestEigvals(LinalgTestCase): +class TestEigvals(LinalgTestCase, TestCase): def do(self, a, b): ev = linalg.eigvals(a) evalues, evectors = linalg.eig(a) assert_almost_equal(ev, evalues) -class TestEig(LinalgTestCase): +class TestEig(LinalgTestCase, TestCase): def do(self, a, b): evalues, evectors = linalg.eig(a) assert_almost_equal(dot(a, evectors), multiply(evectors, evalues)) assert imply(isinstance(a, matrix), isinstance(evectors, matrix)) -class TestSVD(LinalgTestCase): +class TestSVD(LinalgTestCase, TestCase): def do(self, a, b): u, s, vt = linalg.svd(a, 0) assert_almost_equal(a, dot(multiply(u, s), vt)) assert imply(isinstance(a, matrix), isinstance(u, matrix)) assert imply(isinstance(a, matrix), isinstance(vt, matrix)) -class TestCondSVD(LinalgTestCase): +class TestCondSVD(LinalgTestCase, TestCase): def do(self, a, b): c = asarray(a) # a might be a matrix s = linalg.svd(c, compute_uv=False) old_assert_almost_equal(s[0]/s[-1], linalg.cond(a), decimal=5) -class TestCond2(LinalgTestCase): +class TestCond2(LinalgTestCase, TestCase): def do(self, a, b): c = asarray(a) # a might be a matrix s = linalg.svd(c, compute_uv=False) old_assert_almost_equal(s[0]/s[-1], linalg.cond(a,2), decimal=5) -class TestCondInf(NumpyTestCase): +class TestCondInf(TestCase): def test(self): A = array([[1.,0,0],[0,-2.,0],[0,0,3.]]) assert_almost_equal(linalg.cond(A,inf),3.) 
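# Illustrative sketch, not part of this changeset: in test_linalg.py the shared
# LinalgTestCase loses its NumpyTestCase base and becomes a plain mix-in, and
# each concrete case now inherits from both the mix-in and TestCase so the
# shared test_* methods run under the unittest/nose machinery.  A hypothetical
# minimal version of the same layout (names and data are examples only):
import numpy as np
from numpy.testing import TestCase, assert_almost_equal

class _SolveCases:                         # mix-in only: holds the shared tests
    def test_double(self):
        a = np.array([[1., 2.], [3., 4.]])
        b = np.array([2., 1.])
        self.do(a, b)

class TestSolve(_SolveCases, TestCase):    # concrete case: mix-in plus TestCase
    def do(self, a, b):
        x = np.linalg.solve(a, b)
        assert_almost_equal(np.dot(a, x), b)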
-class TestPinv(LinalgTestCase): +class TestPinv(LinalgTestCase, TestCase): def do(self, a, b): a_ginv = linalg.pinv(a) assert_almost_equal(dot(a, a_ginv), identity(asarray(a).shape[0])) assert imply(isinstance(a, matrix), isinstance(a_ginv, matrix)) -class TestDet(LinalgTestCase): +class TestDet(LinalgTestCase, TestCase): def do(self, a, b): d = linalg.det(a) if asarray(a).dtype.type in (single, double): @@ -135,7 +135,7 @@ ev = linalg.eigvals(ad) assert_almost_equal(d, multiply.reduce(ev)) -class TestLstsq(LinalgTestCase): +class TestLstsq(LinalgTestCase, TestCase): def do(self, a, b): u, s, vt = linalg.svd(a, 0) x, residuals, rank, sv = linalg.lstsq(a, b) @@ -145,7 +145,7 @@ assert imply(isinstance(b, matrix), isinstance(x, matrix)) assert imply(isinstance(b, matrix), isinstance(residuals, matrix)) -class TestMatrixPower(ParametricTestCase): +class TestMatrixPower(TestCase): R90 = array([[0,1],[-1,0]]) Arb22 = array([[4,-7],[-2,10]]) noninv = array([[1,0],[0,0]]) @@ -158,6 +158,7 @@ def test_large_power(self): assert_equal(matrix_power(self.R90,2L**100+2**10+2**5+1),self.R90) + def test_large_power_trailing_zero(self): assert_equal(matrix_power(self.R90,2L**100+2**10+2**5),identity(2)) @@ -197,10 +198,11 @@ self.assertRaises(numpy.linalg.linalg.LinAlgError, lambda: matrix_power(self.noninv,-1)) -class TestBoolPower(NumpyTestCase): - def check_square(self): +class TestBoolPower(TestCase): + def test_square(self): A = array([[True,False],[True,True]]) assert_equal(matrix_power(A,2),A) -if __name__ == '__main__': - NumpyTest().run() + +if __name__ == "__main__": + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/linalg/tests/test_regression.py =================================================================== --- branches/cdavid/numpy/linalg/tests/test_regression.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/linalg/tests/test_regression.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -9,10 +9,10 @@ rlevel = 1 -class TestRegression(NumpyTestCase): +class TestRegression(TestCase): def test_eig_build(self, level = rlevel): """Ticket #652""" - rva = [1.03221168e+02 +0.j, + rva = array([1.03221168e+02 +0.j, -1.91843603e+01 +0.j, -6.04004526e-01+15.84422474j, -6.04004526e-01-15.84422474j, @@ -24,11 +24,13 @@ 7.80732773e+00 +0.j , -7.65390898e-01 +0.j, 1.51971555e-15 +0.j , - -1.51308713e-15 +0.j] + -1.51308713e-15 +0.j]) a = arange(13*13, dtype = float64) a.shape = (13,13) a = a%17 va, ve = linalg.eig(a) + va.sort() + rva.sort() assert_array_almost_equal(va, rva) def test_eigh_build(self, level = rlevel): @@ -52,5 +54,6 @@ assert_array_almost_equal(b, np.zeros((2, 2))) + if __name__ == '__main__': - NumpyTest().run() + nose.run(argv=['', __file__]) Property changes on: branches/cdavid/numpy/ma ___________________________________________________________________ Name: svn:ignore - core_tmp.py ma_old.py + core_tmp.py ma_old.py core_mod.py core_new.py mrecords_new.py *.pyc *.swp Modified: branches/cdavid/numpy/ma/__init__.py =================================================================== --- branches/cdavid/numpy/ma/__init__.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/ma/__init__.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -20,3 +20,7 @@ __all__ = ['core', 'extras'] __all__ += core.__all__ __all__ += extras.__all__ + +from numpy.testing.pkgtester import Tester +test = Tester().test +bench = Tester().bench Modified: branches/cdavid/numpy/ma/core.py =================================================================== --- branches/cdavid/numpy/ma/core.py 
2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/ma/core.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -44,7 +44,8 @@ 'masked_less','masked_less_equal', 'masked_not_equal', 'masked_object','masked_outside', 'masked_print_option', 'masked_singleton','masked_values', 'masked_where', 'max', 'maximum', - 'mean', 'min', 'minimum', 'multiply', + 'maximum_fill_value', 'mean', 'min', 'minimum', 'minimum_fill_value', + 'multiply', 'negative', 'nomask', 'nonzero', 'not_equal', 'ones', 'outer', 'outerproduct', 'power', 'product', 'ptp', 'put', 'putmask', @@ -61,24 +62,23 @@ import cPickle import operator -import numpy -from numpy.core import bool_, complex_, float_, int_, object_, str_ +import numpy as np +from numpy import ndarray, dtype, typecodes, amax, amin, iscomplexobj,\ + bool_, complex_, float_, int_, object_, str_ +from numpy import array as narray + import numpy.core.umath as umath -import numpy.core.fromnumeric as fromnumeric -import numpy.core.numeric as numeric import numpy.core.numerictypes as ntypes -from numpy import bool_, dtype, typecodes, amax, amin, ndarray, iscomplexobj from numpy import expand_dims as n_expand_dims -from numpy import array as narray import warnings -MaskType = bool_ +MaskType = np.bool_ nomask = MaskType(0) -divide_tolerance = 1.e-35 -numpy.seterr(all='ignore') +divide_tolerance = np.finfo(float).tiny +np.seterr(all='ignore') def doc_note(note): return "\nNotes\n-----\n%s" % note @@ -90,7 +90,7 @@ "Class for MA related errors." def __init__ (self, args=None): "Creates an exception." - Exception.__init__(self,args) + Exception.__init__(self, args) self.args = args def __str__(self): "Calculates the string representation." @@ -111,20 +111,21 @@ 'V' : '???', } max_filler = ntypes._minvals -max_filler.update([(k,-numpy.inf) for k in [numpy.float32, numpy.float64]]) +max_filler.update([(k, -np.inf) for k in [np.float32, np.float64]]) min_filler = ntypes._maxvals -min_filler.update([(k,numpy.inf) for k in [numpy.float32, numpy.float64]]) +min_filler.update([(k, +np.inf) for k in [np.float32, np.float64]]) if 'float128' in ntypes.typeDict: - max_filler.update([(numpy.float128,-numpy.inf)]) - min_filler.update([(numpy.float128, numpy.inf)]) + max_filler.update([(np.float128, -np.inf)]) + min_filler.update([(np.float128, +np.inf)]) + def default_fill_value(obj): """Calculate the default fill value for the argument object. """ if hasattr(obj,'dtype'): defval = default_filler[obj.dtype.kind] - elif isinstance(obj, numeric.dtype): + elif isinstance(obj, np.dtype): defval = default_filler[obj.kind] elif isinstance(obj, float): defval = default_filler['f'] @@ -155,7 +156,7 @@ return min_filler[ntypes.typeDict['int_']] elif isinstance(obj, long): return min_filler[ntypes.typeDict['uint']] - elif isinstance(obj, numeric.dtype): + elif isinstance(obj, np.dtype): return min_filler[obj] else: raise TypeError, 'Unsuitable type for calculating minimum.' @@ -177,28 +178,43 @@ return max_filler[ntypes.typeDict['int_']] elif isinstance(obj, long): return max_filler[ntypes.typeDict['uint']] - elif isinstance(obj, numeric.dtype): + elif isinstance(obj, np.dtype): return max_filler[obj] else: raise TypeError, 'Unsuitable type for calculating minimum.' 
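# Illustrative sketch, not part of this changeset: the rewritten
# _check_fill_value below returns the fill value of a structured (named-field)
# dtype as a zero-dimensional void ndarray, one filler per field, instead of a
# list or tuple.  Roughly, the structured default is built like this (the
# dtype and the 1e+20 / 999999 fillers are examples of the per-kind defaults):
import numpy as np

ndtype = np.dtype([('x', float), ('y', int)])
fill_value = np.array((1.e+20, 999999), dtype=ndtype)   # void scalar, not a tuple
assert fill_value.dtype == ndtype
assert fill_value['x'] == 1.e+20 and fill_value['y'] == 999999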
-def _check_fill_value(fill_value, dtype): - descr = numpy.dtype(dtype).descr +def _check_fill_value(fill_value, ndtype): + ndtype = np.dtype(ndtype) + nbfields = len(ndtype) if fill_value is None: - if len(descr) > 1: - fill_value = [default_fill_value(numeric.dtype(d[1])) - for d in descr] + if nbfields >= 1: + fill_value = np.array(tuple([default_fill_value(np.dtype(d)) + for (_, d) in ndtype.descr]), + dtype=ndtype) else: - fill_value = default_fill_value(dtype) + fill_value = default_fill_value(ndtype) + elif nbfields >= 1: + if isinstance(fill_value, ndarray): + try: + fill_value = np.array(fill_value, copy=False, dtype=ndtype) + except ValueError: + err_msg = "Unable to transform %s to dtype %s" + raise ValueError(err_msg % (fill_value,ndtype)) + else: + fval = np.resize(fill_value, nbfields) + fill_value = tuple([np.asarray(f).astype(d).item() + for (f, (_, d)) in zip(fval, ndtype.descr)]) + fill_value = np.array(fill_value, copy=False, dtype=ndtype) else: - fill_value = narray(fill_value).tolist() - fval = numpy.resize(fill_value, len(descr)) - if len(descr) > 1: - fill_value = [numpy.asarray(f).astype(d[1]).item() - for (f,d) in zip(fval, descr)] + if isinstance(fill_value, basestring) and (ndtype.char not in 'SV'): + fill_value = default_fill_value(ndtype) else: - fill_value = narray(fval, copy=False, dtype=dtype).item() + # In case we want to convert 1e+20 to int... + try: + fill_value = np.array(fill_value, copy=False, dtype=ndtype).item() + except OverflowError: + fill_value = default_fill_value(ndtype) return fill_value @@ -263,9 +279,9 @@ # Should we check for contiguity ? and a.flags['CONTIGUOUS']: return a elif isinstance(a, dict): - return narray(a, 'O') + return np.array(a, 'O') else: - return narray(a) + return np.array(a) #####-------------------------------------------------------------------------- def get_masked_subclass(*arrays): @@ -302,13 +318,13 @@ return a subclass of ndarray if approriate (True). """ - data = getattr(a, '_data', numpy.array(a, subok=subok)) + data = getattr(a, '_data', np.array(a, subok=subok)) if not subok: return data.view(ndarray) return data getdata = get_data -def fix_invalid(a, copy=True, fill_value=None): +def fix_invalid(a, mask=nomask, copy=True, fill_value=None): """Return (a copy of) a where invalid data (nan/inf) are masked and replaced by fill_value. @@ -329,9 +345,9 @@ b : MaskedArray """ - a = masked_array(a, copy=copy, subok=True) + a = masked_array(a, copy=copy, mask=mask, subok=True) #invalid = (numpy.isnan(a._data) | numpy.isinf(a._data)) - invalid = numpy.logical_not(numpy.isfinite(a._data)) + invalid = np.logical_not(np.isfinite(a._data)) if not invalid.any(): return a a._mask |= invalid @@ -439,16 +455,16 @@ "Execute the call behavior." # m = getmask(a) - d1 = get_data(a) + d1 = getdata(a) # if self.domain is not None: - dm = narray(self.domain(d1), copy=False) - m = numpy.logical_or(m, dm) + dm = np.array(self.domain(d1), copy=False) + m = np.logical_or(m, dm) # The following two lines control the domain filling methods. d1 = d1.copy() # We could use smart indexing : d1[dm] = self.fill ... - # ... but numpy.putmask looks more efficient, despite the copy. - numpy.putmask(d1, dm, self.fill) + # ... but np.putmask looks more efficient, despite the copy. + np.putmask(d1, dm, self.fill) # Take care of the masked singletong first ... if not m.ndim and m: return masked @@ -500,14 +516,14 @@ "Execute the call behavior." 
m = mask_or(getmask(a), getmask(b)) (d1, d2) = (get_data(a), get_data(b)) - result = self.f(d1, d2, *args, **kwargs).view(get_masked_subclass(a,b)) + result = self.f(d1, d2, *args, **kwargs).view(get_masked_subclass(a, b)) if result.size > 1: if m is not nomask: result._mask = make_mask_none(result.shape) result._mask.flat = m - if isinstance(a,MaskedArray): + if isinstance(a, MaskedArray): result._update_from(a) - if isinstance(b,MaskedArray): + if isinstance(b, MaskedArray): result._update_from(b) elif m: return masked @@ -554,7 +570,7 @@ m = umath.logical_or.outer(ma, mb) if (not m.ndim) and m: return masked - rcls = get_masked_subclass(a,b) + rcls = get_masked_subclass(a, b) # We could fill the arguments first, butis it useful ? # d = self.f.outer(filled(a, self.fillx), filled(b, self.filly)).view(rcls) d = self.f.outer(getdata(a), getdata(b)).view(rcls) @@ -614,16 +630,16 @@ if t.any(None): mb = mask_or(mb, t) # The following line controls the domain filling - d2 = numpy.where(t,self.filly,d2) + d2 = np.where(t,self.filly,d2) m = mask_or(ma, mb) if (not m.ndim) and m: return masked - result = self.f(d1, d2).view(get_masked_subclass(a,b)) + result = self.f(d1, d2).view(get_masked_subclass(a, b)) if result.ndim > 0: result._mask = m - if isinstance(a,MaskedArray): + if isinstance(a, MaskedArray): result._update_from(a) - if isinstance(b,MaskedArray): + if isinstance(b, MaskedArray): result._update_from(b) return result @@ -647,7 +663,7 @@ negative = _MaskedUnaryOperation(umath.negative) floor = _MaskedUnaryOperation(umath.floor) ceil = _MaskedUnaryOperation(umath.ceil) -around = _MaskedUnaryOperation(fromnumeric.round_) +around = _MaskedUnaryOperation(np.round_) logical_not = _MaskedUnaryOperation(umath.logical_not) # Domained unary ufuncs ....................................................... sqrt = _MaskedUnaryOperation(umath.sqrt, 0.0, @@ -716,15 +732,15 @@ return getattr(a, '_mask', nomask) getmask = get_mask -def getmaskarray(a): - """Return the mask of a, if any, or a boolean array of the shape +def getmaskarray(arr): + """Return the mask of arr, if any, or a boolean array of the shape of a, full of False. """ - m = getmask(a) - if m is nomask: - m = make_mask_none(fromnumeric.shape(a)) - return m + mask = getmask(arr) + if mask is nomask: + mask = make_mask_none(np.shape(arr), getdata(arr).dtype.names) + return mask def is_mask(m): """Return True if m is a legal mask. @@ -776,16 +792,21 @@ else: return result -def make_mask_none(s): +def make_mask_none(newshape, fields=None): """Return a mask of shape s, filled with False. Parameters ---------- - s : tuple + news : tuple A tuple indicating the shape of the final mask. + fields: {None, string sequence}, optional + A list of field names, if needed. """ - result = numeric.zeros(s, dtype=MaskType) + if not fields: + result = np.zeros(newshape, dtype=MaskType) + else: + result = np.zeros(newshape, dtype=[(n, MaskType) for n in fields]) return result def mask_or (m1, m2, copy=False, shrink=True): @@ -801,9 +822,9 @@ First mask. m2 : array_like Second mask - copy : bool + copy : {False, True}, optional Whether to return a copy. - shrink : bool + shrink : {True, False}, optional Whether to shrink m to nomask if all its values are False. 
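# Illustrative sketch, not part of this changeset: the core.py hunks in this
# region teach numpy.ma about structured (named-field) data; getmaskarray and
# make_mask_none now produce one boolean flag per field rather than a single
# flat boolean mask, and MaskedArray.__new__ (in the hunks that follow) fills
# such masks field by field.  The per-field mask built by the new
# make_mask_none is essentially the following:
import numpy as np

fields = ('x', 'y')                                          # example field names
rec_mask = np.zeros((3,), dtype=[(n, np.bool_) for n in fields])
assert rec_mask.dtype.names == ('x', 'y')
assert not rec_mask['x'].any() and not rec_mask['y'].any()   # starts all unmasked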
""" @@ -834,7 +855,7 @@ """ cond = make_mask(condition) - a = narray(a, copy=copy, subok=True) + a = np.array(a, copy=copy, subok=True) if hasattr(a, '_mask'): cond = mask_or(cond, a._mask) cls = type(a) @@ -910,27 +931,34 @@ return masked_where(condition, x, copy=copy) # -def masked_object(x, value, copy=True): +def masked_object(x, value, copy=True, shrink=True): """Mask the array x where the data are exactly equal to value. This function is suitable only for object arrays: for floating point, please use ``masked_values`` instead. - Notes - ----- - The mask is set to `nomask` if posible. + Parameters + ---------- + x : array-like + Array to mask + value : var + Comparison value + copy : {True, False}, optional + Whether to return a copy of x. + shrink : {True, False}, optional + Whether to collapse a mask full of False to nomask """ if isMaskedArray(x): condition = umath.equal(x._data, value) mask = x._mask else: - condition = umath.equal(fromnumeric.asarray(x), value) + condition = umath.equal(np.asarray(x), value) mask = nomask - mask = mask_or(mask, make_mask(condition, shrink=True)) + mask = mask_or(mask, make_mask(condition, shrink=shrink)) return masked_array(x, mask=mask, copy=copy, fill_value=value) -def masked_values(x, value, rtol=1.e-5, atol=1.e-8, copy=True): +def masked_values(x, value, rtol=1.e-5, atol=1.e-8, copy=True, shrink=True): """Mask the array x where the data are approximately equal in value, i.e. @@ -945,23 +973,25 @@ Array to fill. value : float Masking value. - rtol : float + rtol : {float}, optional Tolerance parameter. - atol : float + atol : {float}, optional Tolerance parameter (1e-8). - copy : bool + copy : {True, False}, optional Whether to return a copy of x. + shrink : {True, False}, optional + Whether to collapse a mask full of False to nomask """ abs = umath.absolute xnew = filled(x, value) - if issubclass(xnew.dtype.type, numeric.floating): + if issubclass(xnew.dtype.type, np.floating): condition = umath.less_equal(abs(xnew-value), atol+rtol*abs(value)) mask = getattr(x, '_mask', nomask) else: condition = umath.equal(xnew, value) mask = nomask - mask = mask_or(mask, make_mask(condition, shrink=True)) + mask = mask_or(mask, make_mask(condition, shrink=shrink)) return masked_array(xnew, mask=mask, copy=copy, fill_value=value) def masked_invalid(a, copy=True): @@ -969,8 +999,8 @@ preexisting mask is conserved. """ - a = narray(a, copy=copy, subok=True) - condition = ~(numpy.isfinite(a)) + a = np.array(a, copy=copy, subok=True) + condition = ~(np.isfinite(a)) if hasattr(a, '_mask'): condition = mask_or(condition, a._mask) cls = type(a) @@ -1054,7 +1084,7 @@ def getdoc(self): "Return the doc of the function (from the doc of the method)." methdoc = getattr(ndarray, self._name, None) - methdoc = getattr(numpy, self._name, methdoc) + methdoc = getattr(np, self._name, methdoc) if methdoc is not None: return methdoc.__doc__ # @@ -1084,7 +1114,7 @@ "Define an interator." def __init__(self, ma): self.ma = ma - self.ma_iter = numpy.asarray(ma).flat + self.ma_iter = np.asarray(ma).flat if ma._mask is nomask: self.maskiter = None @@ -1106,7 +1136,7 @@ return d -class MaskedArray(numeric.ndarray): +class MaskedArray(ndarray): """Arrays with possibly masked values. Masked values of True exclude the corresponding element from any computation. 
@@ -1151,11 +1181,11 @@ __array_priority__ = 15 _defaultmask = nomask _defaulthardmask = False - _baseclass = numeric.ndarray + _baseclass = ndarray def __new__(cls, data=None, mask=nomask, dtype=None, copy=False, subok=True, ndmin=0, fill_value=None, - keep_mask=True, hard_mask=False, flag=None,shrink=True, + keep_mask=True, hard_mask=False, flag=None, shrink=True, **options): """Create a new masked array from scratch. @@ -1168,7 +1198,7 @@ DeprecationWarning) shrink = flag # Process data............ - _data = narray(data, dtype=dtype, copy=copy, subok=True, ndmin=ndmin) + _data = np.array(data, dtype=dtype, copy=copy, subok=True, ndmin=ndmin) _baseclass = getattr(data, '_baseclass', type(_data)) _basedict = getattr(data, '_basedict', getattr(data, '__dict__', {})) if not isinstance(data, MaskedArray) or not subok: @@ -1179,31 +1209,59 @@ if hasattr(data,'_mask') and not isinstance(data, ndarray): _data._mask = data._mask _sharedmask = True - # Process mask ........... + # Process mask ............................... + # Number of named fields (or zero if none) + names_ = _data.dtype.names or () + # Type of the mask + if names_: + mdtype = [(n, MaskType) for n in names_] + else: + mdtype = MaskType + # Case 1. : no mask in input ............ if mask is nomask: + # Erase the current mask ? if not keep_mask: + # With a reduced version if shrink: _data._mask = nomask + # With full version else: - _data._mask = make_mask_none(_data) - if copy: - _data._mask = _data._mask.copy() - _data._sharedmask = False + _data._mask = np.zeros(_data.shape, dtype=mdtype) + # Check whether we missed something + elif isinstance(data, (tuple,list)): + mask = np.array([getmaskarray(m) for m in data], dtype=mdtype) + # Force shrinking of the mask if needed (and possible) + if (mdtype == MaskType) and mask.any(): + _data._mask = mask + _data._sharedmask = False else: - _data._sharedmask = True + if copy: + _data._mask = _data._mask.copy() + _data._sharedmask = False + else: + _data._sharedmask = True + # Case 2. : With a mask in input ........ else: - mask = narray(mask, dtype=MaskType, copy=copy) + # Read the mask with the current mdtype + try: + mask = np.array(mask, copy=copy, dtype=mdtype) + # Or assume it's a sequence of bool/int + except TypeError: + mask = np.array([tuple([m]*len(mdtype)) for m in mask], + dtype=mdtype) + # Make sure the mask and the data have the same shape if mask.shape != _data.shape: (nd, nm) = (_data.size, mask.size) if nm == 1: - mask = numeric.resize(mask, _data.shape) + mask = np.resize(mask, _data.shape) elif nm == nd: - mask = fromnumeric.reshape(mask, _data.shape) + mask = np.reshape(mask, _data.shape) else: msg = "Mask and data not compatible: data size is %i, "+\ "mask size is %i." raise MAError, msg % (nd, nm) copy = True + # Set the mask to the new value if _data._mask is nomask: _data._mask = mask _data._sharedmask = not copy @@ -1212,11 +1270,18 @@ _data._mask = mask _data._sharedmask = not copy else: - _data._mask = umath.logical_or(mask, _data._mask) + if names_: + for n in names_: + _data._mask[n] |= mask[n] + else: + _data._mask = np.logical_or(mask, _data._mask) _data._sharedmask = False - # Update fill_value....... - _data._fill_value = _check_fill_value(fill_value, _data.dtype) + if fill_value is None: + fill_value = getattr(data, '_fill_value', None) + # But don't run the check unless we have something to check.... + if fill_value is not None: + _data._fill_value = _check_fill_value(fill_value, _data.dtype) # Process extra options .. 
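As a rough, illustrative sketch (not part of the patch) of the flexible-dtype handling added to __new__ above: a mask supplied as a plain sequence of booleans is expanded to one flag per named field:

    import numpy.ma as ma

    # Structured dtype with two named fields; the boolean mask is
    # broadcast to a per-field mask, as in the mdtype logic above
    rec = ma.array([(1, 2.5), (3, 4.5)],
                   dtype=[('a', int), ('b', float)],
                   mask=[True, False])

    rec.mask       # [( True,  True), (False, False)]
    rec['a']       # field access keeps the per-field mask: [--, 3]
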
_data._hardmask = hard_mask _data._baseclass = _baseclass @@ -1225,19 +1290,20 @@ # def _update_from(self, obj): """Copies some attributes of obj to self. - """ - if obj is not None and isinstance(obj,ndarray): + """ + if obj is not None and isinstance(obj, ndarray): _baseclass = type(obj) else: _baseclass = ndarray - _basedict = getattr(obj,'_basedict',getattr(obj,'__dict__',{})) + _basedict = getattr(obj, '_basedict', getattr(obj, '__dict__',{})) _dict = dict(_fill_value=getattr(obj, '_fill_value', None), _hardmask=getattr(obj, '_hardmask', False), _sharedmask=getattr(obj, '_sharedmask', False), - _baseclass=getattr(obj,'_baseclass',_baseclass), + _isfield=getattr(obj, '_isfield', False), + _baseclass=getattr(obj,'_baseclass', _baseclass), _basedict=_basedict,) self.__dict__.update(_dict) - self.__dict__.update(_basedict) + self.__dict__.update(_basedict) return #........................ def __array_finalize__(self,obj): @@ -1280,7 +1346,7 @@ # Domain not recognized, use fill_value instead fill_value = self.fill_value result = result.copy() - numpy.putmask(result, d, fill_value) + np.putmask(result, d, fill_value) # Update the mask if m is nomask: if d is not nomask: @@ -1296,6 +1362,24 @@ #.... return result #............................................. + def astype(self, newtype): + """Returns a copy of the array cast to newtype.""" + newtype = np.dtype(newtype) + output = self._data.astype(newtype).view(type(self)) + output._update_from(self) + names = output.dtype.names + if names is None: + output._mask = self._mask.astype(bool) + else: + if self._mask is nomask: + output._mask = nomask + else: + output._mask = self._mask.astype([(n,bool) for n in names]) + # Don't check _fill_value if it's None, that'll speed things up + if self._fill_value is not None: + output._fill_value = _check_fill_value(self._fill_value, newtype) + return output + #............................................. def __getitem__(self, indx): """x.__getitem__(y) <==> x[y] @@ -1311,10 +1395,10 @@ # But then we would have to modify __array_finalize__ to prevent the # mask of being reshaped if it hasn't been set up properly yet... # So it's easier to stick to the current version - m = self._mask + _mask = self._mask if not getattr(dout,'ndim', False): # Just a scalar............ - if m is not nomask and m[indx]: + if _mask is not nomask and _mask[indx]: return masked else: # Force dout to MA ........ @@ -1323,20 +1407,14 @@ dout._update_from(self) # Check the fill_value .... if isinstance(indx, basestring): - fvindx = list(self.dtype.names).index(indx) - dout._fill_value = self.fill_value[fvindx] + if self._fill_value is not None: + dout._fill_value = self._fill_value[indx] + dout._isfield = True # Update the mask if needed - if m is not nomask: - if isinstance(indx, basestring): - dout._mask = m.reshape(dout.shape) - else: - dout._mask = ndarray.__getitem__(m, indx).reshape(dout.shape) + if _mask is not nomask: + dout._mask = _mask[indx] + dout._sharedmask = True # Note: Don't try to check for m.any(), that'll take too long... -# mask = ndarray.__getitem__(m, indx).reshape(dout.shape) -# if self._shrinkmask and not m.any(): -# dout._mask = nomask -# else: -# dout._mask = mask return dout #........................ def __setitem__(self, indx, value): @@ -1353,46 +1431,64 @@ # msg = "Masked arrays must be filled before they can be used as indices!" 
# raise IndexError, msg if isinstance(indx, basestring): - ndarray.__setitem__(self._data,indx, getdata(value)) - warnings.warn("MaskedArray.__setitem__ on fields: "\ - "The mask is NOT affected!") + ndarray.__setitem__(self._data, indx, value) + ndarray.__setitem__(self._mask, indx, getmask(value)) return - #.... + #........................................ +# ndgetattr = ndarray.__getattribute__ + _names = ndarray.__getattribute__(self,'dtype').names or () + _data = self._data + _mask = ndarray.__getattribute__(self,'_mask') + #........................................ if value is masked: - m = self._mask - if m is nomask: - m = numpy.zeros(self.shape, dtype=MaskType) - m[indx] = True - self._mask = m - self._sharedmask = False + # The mask wasn't set: create a full version... + if _mask is nomask: + _mask = self._mask = make_mask_none(self.shape, _names) + # Now, set the mask to its value. + if _names: + _mask[indx] = tuple([True,] * len(_names)) + else: + _mask[indx] = True + if not self._isfield: + self._sharedmask = False return - #.... - dval = narray(value, copy=False, dtype=self.dtype) - valmask = getmask(value) - if self._mask is nomask: + #........................................ + # Get the _data part of the new value + dval = value + # Get the _mask part of the new value + mval = getattr(value, '_mask', nomask) + if _names and mval is nomask: + mval = tuple([False] * len(_names)) + if _mask is nomask: # Set the data, then the mask - ndarray.__setitem__(self._data,indx,dval) - if valmask is not nomask: - self._mask = numpy.zeros(self.shape, dtype=MaskType) - self._mask[indx] = valmask + ndarray.__setitem__(_data, indx, dval) + if mval is not nomask: + _mask = self._mask = make_mask_none(self.shape, _names) + ndarray.__setitem__(_mask, indx, mval) elif not self._hardmask: # Unshare the mask if necessary to avoid propagation - self.unshare_mask() + if not self._isfield: + self.unshare_mask() + _mask = ndarray.__getattribute__(self,'_mask') # Set the data, then the mask - ndarray.__setitem__(self._data,indx,dval) - self._mask[indx] = valmask - elif hasattr(indx, 'dtype') and (indx.dtype==bool_): - indx = indx * umath.logical_not(self._mask) - ndarray.__setitem__(self._data,indx,dval) + ndarray.__setitem__(_data, indx, dval) + ndarray.__setitem__(_mask, indx, mval) + elif hasattr(indx, 'dtype') and (indx.dtype==MaskType): + indx = indx * umath.logical_not(_mask) + ndarray.__setitem__(_data,indx,dval) else: - mindx = mask_or(self._mask[indx], valmask, copy=True) + if _names: + err_msg = "Flexible 'hard' masks are not yet supported..." + raise NotImplementedError(err_msg) + mindx = mask_or(_mask[indx], mval, copy=True) dindx = self._data[indx] if dindx.size > 1: dindx[~mindx] = dval elif mindx is nomask: dindx = dval - ndarray.__setitem__(self._data,indx,dindx) - self._mask[indx] = mindx + ndarray.__setitem__(_data, indx, dindx) + _mask[indx] = mindx + return #............................................ def __getslice__(self, i, j): """x.__getslice__(i, j) <==> x[i:j] @@ -1416,28 +1512,57 @@ """Set the mask. """ - if mask is not nomask: - mask = narray(mask, copy=copy, dtype=MaskType) - # We could try to check whether shrinking is needed.. - # ... but we would waste some precious time -# if self._shrinkmask and not mask.any(): -# mask = nomask - if self._mask is nomask: - self._mask = mask - elif self._hardmask: - if mask is not nomask: - self._mask.__ior__(mask) - else: - # This one is tricky: if we set the mask that way, we may break the - # propagation. 
But if we don't, we end up with a mask full of False - # and a test on nomask fails... + names = ndarray.__getattribute__(self,'dtype').names + current_mask = ndarray.__getattribute__(self,'_mask') + if mask is masked: + mask = True + # Make sure the mask is set + if (current_mask is nomask): + # Just don't do anything is there's nothing to do... if mask is nomask: - self._mask = nomask + return + current_mask = self._mask = make_mask_none(self.shape, names) + # No named fields......... + if names is None: + # Hardmask: don't unmask the data + if self._hardmask: + current_mask |= mask + # Softmask: set everything to False else: - self.unshare_mask() - self._mask.flat = mask - if self._mask.shape: - self._mask = numeric.reshape(self._mask, self.shape) + current_mask.flat = mask + # Named fields w/ ............ + else: + mdtype = current_mask.dtype + mask = np.array(mask, copy=False) + # Mask is a singleton + if not mask.ndim: + # It's a boolean : make a record + if mask.dtype.kind == 'b': + mask = np.array(tuple([mask.item()]*len(mdtype)), + dtype=mdtype) + # It's a record: make sure the dtype is correct + else: + mask = mask.astype(mdtype) + # Mask is a sequence + else: + # Make sure the new mask is a ndarray with the proper dtype + try: + mask = np.array(mask, copy=copy, dtype=mdtype) + # Or assume it's a sequence of bool/int + except TypeError: + mask = np.array([tuple([m]*len(mdtype)) for m in mask], + dtype=mdtype) + # Hardmask: don't unmask the data + if self._hardmask: + for n in names: + current_mask[n] |= mask[n] + # Softmask: set everything to False + else: + current_mask.flat = mask + # Reshape if needed + if current_mask.shape: + current_mask.shape = self.shape + return _set_mask = __setmask__ #.... def _get_mask(self): @@ -1448,6 +1573,26 @@ # return self._mask.reshape(self.shape) return self._mask mask = property(fget=_get_mask, fset=__setmask__, doc="Mask") + # + def _getrecordmask(self): + """Return the mask of the records. + A record is masked when all the fields are masked. + + """ + if self.dtype.names is None: + return self._mask + elif self.size > 1: + return self._mask.view((bool_, len(self.dtype))).all(1) + else: + return self._mask.view((bool_, len(self.dtype))).all() + + def _setrecordmask(self): + """Return the mask of the records. + A record is masked when all the fields are masked. + + """ + raise NotImplementedError("Coming soon: setting the mask per records!") + recordmask = property(fget=_getrecordmask) #............................................ def harden_mask(self): """Force the mask to hard. @@ -1526,7 +1671,7 @@ If value is None, use a default based on the data type. 
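A small usage sketch of the fill_value machinery above (illustrative only, and assuming the dtype-based defaults are unchanged since this revision):

    import numpy.ma as ma

    a = ma.array([1, 2, 3], mask=[0, 1, 0])

    # With no explicit fill_value, a default based on the dtype is used
    a.fill_value           # 999999 for integer data

    # Assigning a new value goes through the same _check_fill_value check
    a.fill_value = -1
    a.filled()             # array([ 1, -1,  3])
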
""" - self._fill_value = _check_fill_value(value,self.dtype) + self._fill_value = _check_fill_value(value, self.dtype) fill_value = property(fget=get_fill_value, fset=set_fill_value, doc="Filling value.") @@ -1552,28 +1697,36 @@ """ m = self._mask - if m is nomask or not m.any(): + if m is nomask: return self._data # if fill_value is None: fill_value = self.fill_value # if self is masked_singleton: - result = numeric.asanyarray(fill_value) + return np.asanyarray(fill_value) + # + if len(self.dtype): + result = self._data.copy() + for n in result.dtype.names: + field = result[n] + np.putmask(field, self._mask[n], self.fill_value[n]) + elif not m.any(): + return self._data else: result = self._data.copy() try: - numpy.putmask(result, m, fill_value) + np.putmask(result, m, fill_value) except (TypeError, AttributeError): fill_value = narray(fill_value, dtype=object) d = result.astype(object) - result = fromnumeric.choose(m, (d, fill_value)) + result = np.choose(m, (d, fill_value)) except IndexError: #ok, if scalar if self._data.shape: raise elif m: - result = narray(fill_value, dtype=self.dtype) + result = np.array(fill_value, dtype=self.dtype) else: result = self._data return result @@ -1584,7 +1737,7 @@ """ data = ndarray.ravel(self._data) if self._mask is not nomask: - data = data.compress(numpy.logical_not(ndarray.ravel(self._mask))) + data = data.compress(np.logical_not(ndarray.ravel(self._mask))) return data @@ -1605,7 +1758,7 @@ # Get the basic components (_data, _mask) = (self._data, self._mask) # Force the condition to a regular ndarray (forget the missing values...) - condition = narray(condition, copy=False, subok=False) + condition = np.array(condition, copy=False, subok=False) # _new = _data.compress(condition, axis=axis, out=out).view(type(self)) _new._update_from(self) @@ -1632,11 +1785,14 @@ else: return str(self._data) # convert to object array to make filled work -#CHECK: the two lines below seem more robust than the self._data.astype -# res = numeric.empty(self._data.shape, object_) -# numeric.putmask(res,~m,self._data) - res = self._data.astype("|O8") - res[m] = f + names = self.dtype.names + if names is None: + res = self._data.astype("|O8") + res[m] = f + else: + res = self._data.astype([(n,'|O8') for n in names]) + for field in names: + np.putmask(res[field], m[field], f) else: res = self.filled(self.fill_value) return str(res) @@ -1739,8 +1895,8 @@ new_mask = mask_or(other_mask, dom_mask) # The following 3 lines control the domain filling if dom_mask.any(): - other_data = other_data.copy() - numpy.putmask(other_data, dom_mask, 1) + (_, fval) = ufunc_fills[np.divide] + other_data = np.where(dom_mask, fval, other_data) ndarray.__idiv__(self._data, other_data) self._mask = mask_or(self._mask, new_mask) return self @@ -1751,28 +1907,28 @@ other_data = getdata(other) other_mask = getmask(other) ndarray.__ipow__(_data, other_data) - invalid = numpy.logical_not(numpy.isfinite(_data)) - new_mask = mask_or(other_mask,invalid) + invalid = np.logical_not(np.isfinite(_data)) + new_mask = mask_or(other_mask, invalid) self._mask = mask_or(self._mask, new_mask) # The following line is potentially problematic, as we change _data... - numpy.putmask(self._data,invalid,self.fill_value) + np.putmask(self._data,invalid,self.fill_value) return self #............................................ def __float__(self): "Convert to float." 
if self.size > 1: - raise TypeError,\ - "Only length-1 arrays can be converted to Python scalars" + raise TypeError("Only length-1 arrays can be converted "\ + "to Python scalars") elif self._mask: warnings.warn("Warning: converting a masked element to nan.") - return numpy.nan + return np.nan return float(self.item()) def __int__(self): "Convert to int." if self.size > 1: - raise TypeError,\ - "Only length-1 arrays can be converted to Python scalars" + raise TypeError("Only length-1 arrays can be converted "\ + "to Python scalars") elif self._mask: raise MAError, 'Cannot convert masked element to a Python int.' return int(self.item()) @@ -1822,9 +1978,9 @@ n = s[axis] t = list(s) del t[axis] - return numeric.ones(t) * n - n1 = numpy.size(m, axis) - n2 = m.astype(int_).sum(axis) + return np.ones(t) * n + n1 = np.size(m, axis) + n2 = m.astype(int).sum(axis) if axis is None: return (n1-n2) else: @@ -1925,84 +2081,87 @@ return (self.ctypes.data, self._mask.ctypes.data) #............................................ def all(self, axis=None, out=None): - """Return True if all entries along the given axis are True, - False otherwise. Masked values are considered as True during - computation. + """a.all(axis=None, out=None) + + Check if all of the elements of `a` are true. - Parameter - ---------- - axis : int, optional - Axis along which the operation is performed. If None, - the operation is performed on a flatten array - out : {MaskedArray}, optional - Alternate optional output. If not None, out should be - a valid MaskedArray of the same shape as the output of - self._data.all(axis). + Performs a logical_and over the given axis and returns the result. + Masked values are considered as True during computation. + For convenience, the output array is masked where ALL the values along the + current axis are masked: if the output would have been a scalar and that + all the values are masked, then the output is `masked`. - Returns A masked array, where the mask is True if all data along - ------- - the axis are masked. + Parameters + ---------- + axis : {None, integer} + Axis to perform the operation over. + If None, perform over flattened array. + out : {None, array}, optional + Array into which the result can be placed. Its type is preserved + and it must be of the right shape to hold the output. - Notes - ----- - An exception is raised if ``out`` is not None and not of the - same type as self. + See Also + -------- + all : equivalent function + + Example + ------- + >>> array([1,2,3]).all() + True + >>> a = array([1,2,3], mask=True) + >>> (a.all() is masked) + True """ + mask = self._mask.all(axis) if out is None: d = self.filled(True).all(axis=axis).view(type(self)) - if d.ndim > 0: - d.__setmask__(self._mask.all(axis)) + if d.ndim: + d.__setmask__(mask) + elif mask: + return masked return d - elif type(out) is not type(self): - raise TypeError("The external array should have " \ - "a type %s (got %s instead)" %\ - (type(self), type(out))) self.filled(True).all(axis=axis, out=out) - if out.ndim: - out.__setmask__(self._mask.all(axis)) + if isinstance(out, MaskedArray): + if out.ndim or mask: + out.__setmask__(mask) return out def any(self, axis=None, out=None): - """Returns True if at least one entry along the given axis is - True. + """a.any(axis=None, out=None) - Returns False if all entries are False. - Masked values are considered as True during computation. + Check if any of the elements of `a` are true. 
- Parameter - ---------- - axis : int, optional - Axis along which the operation is performed. - If None, the operation is performed on a flatten array - out : {MaskedArray}, optional - Alternate optional output. If not None, out should be - a valid MaskedArray of the same shape as the output of - self._data.all(axis). + Performs a logical_or over the given axis and returns the result. + Masked values are considered as False during computation. - Returns A masked array, where the mask is True if all data along - ------- - the axis are masked. + Parameters + ---------- + axis : {None, integer} + Axis to perform the operation over. + If None, perform over flattened array and return a scalar. + out : {None, array}, optional + Array into which the result can be placed. Its type is preserved + and it must be of the right shape to hold the output. - Notes - ----- - An exception is raised if ``out`` is not None and not of the - same type as self. + See Also + -------- + any : equivalent function """ + mask = self._mask.all(axis) if out is None: d = self.filled(False).any(axis=axis).view(type(self)) - if d.ndim > 0: - d.__setmask__(self._mask.all(axis)) + if d.ndim: + d.__setmask__(mask) + elif mask: + d = masked return d - elif type(out) is not type(self): - raise TypeError("The external array should have a type %s "\ - "(got %s instead)" %\ - (type(self), type(out))) self.filled(False).any(axis=axis, out=out) - if out.ndim: - out.__setmask__(self._mask.all(axis)) + if isinstance(out, MaskedArray): + if out.ndim or mask: + out.__setmask__(mask) return out @@ -2023,7 +2182,8 @@ """ return narray(self.filled(0), copy=False).nonzero() - #............................................ + + def trace(self, offset=0, axis1=0, axis2=1, dtype=None, out=None): """a.trace(offset=0, axis1=0, axis2=1, dtype=None, out=None) @@ -2031,7 +2191,7 @@ indicated `axis1` and `axis2`. """ - # TODO: What are we doing with `out`? + #!!!: implement out + test! m = self._mask if m is nomask: result = super(MaskedArray, self).trace(offset=offset, axis1=axis1, @@ -2039,117 +2199,270 @@ return result.astype(dtype) else: D = self.diagonal(offset=offset, axis1=axis1, axis2=axis2) - return D.astype(dtype).filled(0).sum(axis=None) - #............................................ - def sum(self, axis=None, dtype=None): - """Sum the array over the given axis. + return D.astype(dtype).filled(0).sum(axis=None, out=out) - Masked elements are set to 0 internally. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : dtype, optional - Datatype for the intermediary computation. If not given, - the current dtype is used instead. + def sum(self, axis=None, dtype=None, out=None): + """a.sum(axis=None, dtype=None, out=None) + Return the sum of the array elements over the given axis. + Masked elements are set to 0 internally. + + Parameters + ---------- + axis : {None, -1, int}, optional + Axis along which the sum is computed. The default + (`axis` = None) is to compute over the flattened array. + dtype : {None, dtype}, optional + Determines the type of the returned array and of the accumulator + where the elements are summed. If dtype has the value None and + the type of a is an integer type of precision less than the default + platform integer, then the default platform integer precision is + used. Otherwise, the dtype is the same as that of a. + out : {None, ndarray}, optional + Alternative output array in which to place the result. 
It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. + + """ - if self._mask is nomask: - mask = nomask - else: - mask = self._mask.all(axis) - if (not mask.ndim) and mask: - return masked - result = self.filled(0).sum(axis, dtype=dtype).view(type(self)) - if result.ndim > 0: - result.__setmask__(mask) - return result + _mask = ndarray.__getattribute__(self, '_mask') + newmask = _mask.all(axis=axis) + # No explicit output + if out is None: + result = self.filled(0).sum(axis, dtype=dtype).view(type(self)) + if result.ndim: + result.__setmask__(newmask) + elif newmask: + result = masked + return result + # Explicit output + result = self.filled(0).sum(axis, dtype=dtype, out=out) + if isinstance(out, MaskedArray): + outmask = getattr(out, '_mask', nomask) + if (outmask is nomask): + outmask = out._mask = make_mask_none(out.shape) + outmask.flat = newmask + return out - def cumsum(self, axis=None, dtype=None): - """Return the cumulative sum of the elements of the array - along the given axis. - Masked values are set to 0 internally. + def cumsum(self, axis=None, dtype=None, out=None): + """a.cumsum(axis=None, dtype=None, out=None) - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. If not - given, the current dtype is used instead. + Return the cumulative sum of the elements along the given axis. + The cumulative sum is calculated over the flattened array by + default, otherwise over the specified axis. + + Masked values are set to 0 internally during the computation. + However, their position is saved, and the result will be masked at + the same locations. + + Parameters + ---------- + axis : {None, -1, int}, optional + Axis along which the sum is computed. The default + (`axis` = None) is to compute over the flattened array. + dtype : {None, dtype}, optional + Determines the type of the returned array and of the accumulator + where the elements are summed. If dtype has the value None and + the type of a is an integer type of precision less than the default + platform integer, then the default platform integer precision is + used. Otherwise, the dtype is the same as that of a. + out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. + WARNING : The mask is lost if out is not a valid MaskedArray ! + + Returns + ------- + cumsum : ndarray. + A new array holding the result is returned unless ``out`` is + specified, in which case a reference to ``out`` is returned. + + Example + ------- + >>> print array(arange(10),mask=[0,0,0,1,1,1,0,0,0,0]).cumsum() + [0 1 3 -- -- -- 9 16 24 33] + + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + """ - result = self.filled(0).cumsum(axis=axis, dtype=dtype).view(type(self)) - result.__setmask__(self.mask) + result = self.filled(0).cumsum(axis=axis, dtype=dtype, out=out) + if out is not None: + if isinstance(out, MaskedArray): + out.__setmask__(self.mask) + return out + result = result.view(type(self)) + result.__setmask__(self._mask) return result - def prod(self, axis=None, dtype=None): - """Return the product of the elements of the array along the - given axis. - Masked elements are set to 1 internally. 
+ def prod(self, axis=None, dtype=None, out=None): + """a.prod(axis=None, dtype=None, out=None) - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. If not - given, the current dtype is used instead. + Return the product of the array elements over the given axis. + Masked elements are set to 1 internally for computation. + Parameters + ---------- + axis : {None, -1, int}, optional + Axis over which the product is taken. If None is used, then the + product is over all the array elements. + dtype : {None, dtype}, optional + Determines the type of the returned array and of the accumulator + where the elements are multiplied. If dtype has the value None and + the type of a is an integer type of precision less than the default + platform integer, then the default platform integer precision is + used. Otherwise, the dtype is the same as that of a. + out : {None, array}, optional + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + + Returns + ------- + product_along_axis : {array, scalar}, see dtype parameter above. + Returns an array whose shape is the same as a with the specified + axis removed. Returns a 0d array when a is 1d or axis=None. + Returns a reference to the specified output array if specified. + + See Also + -------- + prod : equivalent function + + Examples + -------- + >>> prod([1.,2.]) + 2.0 + >>> prod([1.,2.], dtype=int32) + 2 + >>> prod([[1.,2.],[3.,4.]]) + 24.0 + >>> prod([[1.,2.],[3.,4.]], axis=1) + array([ 2., 12.]) + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + """ - if self._mask is nomask: - mask = nomask - else: - mask = self._mask.all(axis) - if (not mask.ndim) and mask: - return masked - result = self.filled(1).prod(axis=axis, dtype=dtype).view(type(self)) - if result.ndim: - result.__setmask__(mask) - return result + _mask = ndarray.__getattribute__(self, '_mask') + newmask = _mask.all(axis=axis) + # No explicit output + if out is None: + result = self.filled(1).prod(axis, dtype=dtype).view(type(self)) + if result.ndim: + result.__setmask__(newmask) + elif newmask: + result = masked + return result + # Explicit output + result = self.filled(1).prod(axis, dtype=dtype, out=out) + if isinstance(out,MaskedArray): + outmask = getattr(out, '_mask', nomask) + if (outmask is nomask): + outmask = out._mask = make_mask_none(out.shape) + outmask.flat = newmask + return out product = prod - def cumprod(self, axis=None, dtype=None): - """Return the cumulative product of the elements of the array - along the given axis. + def cumprod(self, axis=None, dtype=None, out=None): + """ + a.cumprod(axis=None, dtype=None, out=None) - Masked values are set to 1 internally. + Return the cumulative product of the elements along the given axis. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. If not - given, the current dtype is used instead. + The cumulative product is taken over the flattened array by + default, otherwise over the specified axis. 
- """ - result = self.filled(1).cumprod(axis=axis, dtype=dtype).view(type(self)) - result.__setmask__(self.mask) + Masked values are set to 1 internally during the computation. + However, their position is saved, and the result will be masked at + the same locations. + + Parameters + ---------- + axis : {None, -1, int}, optional + Axis along which the product is computed. The default + (`axis` = None) is to compute over the flattened array. + dtype : {None, dtype}, optional + Determines the type of the returned array and of the accumulator + where the elements are multiplied. If dtype has the value None and + the type of a is an integer type of precision less than the default + platform integer, then the default platform integer precision is + used. Otherwise, the dtype is the same as that of a. + out : ndarray, optional + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. + WARNING : The mask is lost if out is not a valid MaskedArray ! + + Returns + ------- + cumprod : ndarray. + A new array holding the result is returned unless out is + specified, in which case a reference to out is returned. + + Notes + ----- + Arithmetic is modular when using integer types, and no error is + raised on overflow. + + """ + result = self.filled(1).cumprod(axis=axis, dtype=dtype, out=out) + if out is not None: + if isinstance(out, MaskedArray): + out.__setmask__(self._mask) + return out + result = result.view(type(self)) + result.__setmask__(self._mask) return result + def mean(self, axis=None, dtype=None, out=None): - """Average the array over the given axis. Equivalent to + """a.mean(axis=None, dtype=None, out=None) -> mean - a.sum(axis, dtype) / a.size(axis). + Returns the average of the array elements. The average is taken over the + flattened array by default, otherwise over the specified axis. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. If not - given, the current dtype is used instead. + Parameters + ---------- + axis : integer + Axis along which the means are computed. The default is + to compute the mean of the flattened array. + dtype : type + Type to use in computing the means. For arrays of + integer type the default is float32, for arrays of float types it + is the same as the array type. + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + Returns + ------- + mean : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. + + See Also + -------- + var : variance + std : standard deviation + + Notes + ----- + The mean is the sum of the elements along the axis divided by the + number of elements. 
+ + """ if self._mask is nomask: result = super(MaskedArray, self).mean(axis=axis, dtype=dtype) @@ -2158,7 +2471,13 @@ cnt = self.count(axis=axis) result = dsum*1./cnt if out is not None: - out.flat = result.ravel() + out.flat = result + if isinstance(out, MaskedArray): + outmask = getattr(out, '_mask', nomask) + if (outmask is nomask): + outmask = out._mask = make_mask_none(out.shape) + outmask.flat = getattr(result, '_mask', nomask) + return out return result def anom(self, axis=None, dtype=None): @@ -2181,87 +2500,149 @@ else: return (self - expand_dims(m,axis)) - def var(self, axis=None, dtype=None, ddof=0): - """Return the variance, a measure of the spread of a distribution. + def var(self, axis=None, dtype=None, out=None, ddof=0): + """a.var(axis=None, dtype=None, out=None, ddof=0) -> variance - The variance is the average of the squared deviations from the - mean, i.e. var = mean(abs(x - x.mean())**2). + Returns the variance of the array elements, a measure of the spread of a + distribution. The variance is computed for the flattened array by default, + otherwise over the specified axis. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. If not - given, the current dtype is used instead. + Parameters + ---------- + axis : integer + Axis along which the variance is computed. The default is to + compute the variance of the flattened array. + dtype : data-type + Type to use in computing the variance. For arrays of integer type + the default is float32, for arrays of float types it is the same as + the array type. + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + ddof : {0, integer}, + Means Delta Degrees of Freedom. The divisor used in calculation is + N - ddof. - Notes - ----- - The value returned is by default a biased estimate of the - true variance, since the mean is computed by dividing by N-ddof. - For the (more standard) unbiased estimate, use ddof=1 or. - Note that for complex numbers the absolute value is taken before - squaring, so that the result is always real and nonnegative. + Returns + ------- + variance : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. + See Also + -------- + std : standard deviation + mean: average + + Notes + ----- + The variance is the average of the squared deviations from the mean, + i.e. var = mean(abs(x - x.mean())**2). The mean is computed by + dividing by N-ddof, where N is the number of elements. The argument + ddof defaults to zero; for an unbiased estimate supply ddof=1. Note + that for complex numbers the absolute value is taken before squaring, + so that the result is always real and nonnegative. + """ + # Easy case: nomask, business as usual if self._mask is nomask: - # TODO: Do we keep super, or var _data and take a view ? - return super(MaskedArray, self).var(axis=axis, dtype=dtype, - ddof=ddof) + return self._data.var(axis=axis, dtype=dtype, out=out, ddof=ddof) + # Some data are masked, yay! 
+ cnt = self.count(axis=axis)-ddof + danom = self.anom(axis=axis, dtype=dtype) + if iscomplexobj(self): + danom = umath.absolute(danom)**2 else: - cnt = self.count(axis=axis)-ddof - danom = self.anom(axis=axis, dtype=dtype) - if iscomplexobj(self): - danom = umath.absolute(danom)**2 - else: - danom *= danom - dvar = narray(danom.sum(axis) / cnt).view(type(self)) - if axis is not None: - dvar._mask = mask_or(self._mask.all(axis), (cnt==1)) + danom *= danom + dvar = divide(danom.sum(axis), cnt).view(type(self)) + # Apply the mask if it's not a scalar + if dvar.ndim: + dvar._mask = mask_or(self._mask.all(axis), (cnt<=ddof)) dvar._update_from(self) - return dvar + elif getattr(dvar,'_mask', False): + # Make sure that masked is returned when the scalar is masked. + dvar = masked + if out is not None: + if isinstance(out, MaskedArray): + out.__setmask__(True) + else: + out.flat = np.nan + return out + # In case with have an explicit output + if out is not None: + # Set the data + out.flat = dvar + # Set the mask if needed + if isinstance(out, MaskedArray): + out.__setmask__(dvar.mask) + return out + return dvar - def std(self, axis=None, dtype=None, ddof=0): - """Return the standard deviation, a measure of the spread of a - distribution. + def std(self, axis=None, dtype=None, out=None, ddof=0): + """a.std(axis=None, dtype=None, out=None, ddof=0) - The standard deviation is the square root of the average of - the squared deviations from the mean, i.e. + Returns the standard deviation of the array elements, a measure of the + spread of a distribution. The standard deviation is computed for the + flattened array by default, otherwise over the specified axis. - std = sqrt(mean(abs(x - x.mean())**2)). + Parameters + ---------- + axis : integer + Axis along which the standard deviation is computed. The default is + to compute the standard deviation of the flattened array. + dtype : type + Type to use in computing the standard deviation. For arrays of + integer type the default is float32, for arrays of float types it + is the same as the array type. + out : ndarray + Alternative output array in which to place the result. It must have + the same shape as the expected output but the type will be cast if + necessary. + ddof : {0, integer} + Means Delta Degrees of Freedom. The divisor used in calculations + is N-ddof. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - dtype : {dtype}, optional - Datatype for the intermediary computation. - If not given, the current dtype is used instead. + Returns + ------- + standard deviation : The return type varies, see above. + A new array holding the result is returned unless out is specified, + in which case a reference to out is returned. - Notes - ----- - The value returned is by default a biased estimate of the - true standard deviation, since the mean is computed by dividing - by N-ddof. For the more standard unbiased estimate, use ddof=1. - Note that for complex numbers the absolute value is taken before - squaring, so that the result is always real and nonnegative. - """ - dvar = self.var(axis,dtype,ddof=ddof) - if axis is not None or dvar is not masked: + See Also + -------- + var : variance + mean : average + + Notes + ----- + The standard deviation is the square root of the average of the squared + deviations from the mean, i.e. var = sqrt(mean(abs(x - x.mean())**2)). 
The + computed standard deviation is computed by dividing by the number of + elements, N-ddof. The option ddof defaults to zero, that is, a biased + estimate. Note that for complex numbers std takes the absolute value before + squaring, so that the result is always real and nonnegative. + + """ + dvar = self.var(axis=axis,dtype=dtype,out=out, ddof=ddof) + if dvar is not masked: dvar = sqrt(dvar) + if out is not None: + out **= 0.5 + return out return dvar #............................................ def round(self, decimals=0, out=None): - result = self._data.round(decimals).view(type(self)) + result = self._data.round(decimals=decimals, out=out).view(type(self)) result._mask = self._mask result._update_from(self) + # No explicit output: we're done if out is None: return result - out[:] = result - return + if isinstance(out, MaskedArray): + out.__setmask__(self._mask) + return out round.__doc__ = ndarray.round.__doc__ #............................................ @@ -2314,49 +2695,71 @@ fill_value = default_fill_value(self) d = self.filled(fill_value).view(ndarray) return d.argsort(axis=axis, kind=kind, order=order) - #........................ - def argmin(self, axis=None, fill_value=None): - """Return an ndarray of indices for the minimum values of a - along the specified axis. - Masked values are treated as if they had the value fill_value. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - fill_value : {var}, optional - Value used to fill in the masked values. If None, the - output of minimum_fill_value(self._data) is used. + def argmin(self, axis=None, fill_value=None, out=None): + """a.argmin(axis=None, out=None) + Return array of indices to the minimum values along the given axis. + + Parameters + ---------- + axis : {None, integer} + If None, the index is into the flattened array, otherwise along + the specified axis + fill_value : {var}, optional + Value used to fill in the masked values. If None, the output of + minimum_fill_value(self._data) is used instead. + out : {None, array}, optional + Array into which the result can be placed. Its type is preserved + and it must be of the right shape to hold the output. + """ if fill_value is None: fill_value = minimum_fill_value(self) d = self.filled(fill_value).view(ndarray) - return d.argmin(axis) - #........................ - def argmax(self, axis=None, fill_value=None): - """Returns the array of indices for the maximum values of `a` - along the specified axis. + return d.argmin(axis, out=out) - Masked values are treated as if they had the value fill_value. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - fill_value : {var}, optional - Value used to fill in the masked values. If None, the - output of maximum_fill_value(self._data) is used. + def argmax(self, axis=None, fill_value=None, out=None): + """a.argmax(axis=None, out=None) + Returns array of indices of the maximum values along the given axis. + Masked values are treated as if they had the value fill_value. + + Parameters + ---------- + axis : {None, integer} + If None, the index is into the flattened array, otherwise along + the specified axis + fill_value : {var}, optional + Value used to fill in the masked values. If None, the output of + maximum_fill_value(self._data) is used instead. + out : {None, array}, optional + Array into which the result can be placed. 
Its type is preserved + and it must be of the right shape to hold the output. + + Returns + ------- + index_array : {integer_array} + + Examples + -------- + >>> a = arange(6).reshape(2,3) + >>> a.argmax() + 5 + >>> a.argmax(0) + array([1, 1, 1]) + >>> a.argmax(1) + array([2, 2]) + """ if fill_value is None: fill_value = maximum_fill_value(self._data) d = self.filled(fill_value).view(ndarray) - return d.argmax(axis) + return d.argmax(axis, out=out) + def sort(self, axis=-1, kind='quicksort', order=None, endwith=True, fill_value=None): """Sort along the given axis. @@ -2418,53 +2821,66 @@ filler = maximum_fill_value(self) else: filler = fill_value - idx = numpy.indices(self.shape) + idx = np.indices(self.shape) idx[axis] = self.filled(filler).argsort(axis=axis,kind=kind,order=order) idx_l = idx.tolist() tmp_mask = self._mask[idx_l].flat tmp_data = self._data[idx_l].flat - self.flat = tmp_data + self._data.flat = tmp_data self._mask.flat = tmp_mask return #............................................ - def min(self, axis=None, fill_value=None): - """Return the minimum of a along the given axis. + def min(self, axis=None, out=None, fill_value=None): + """a.min(axis=None, out=None, fill_value=None) - Masked values are filled with fill_value. + Return the minimum along a given axis. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - fill_value : {var}, optional - Value used to fill in the masked values. - If None, use the the output of minimum_fill_value(). + Parameters + ---------- + axis : {None, int}, optional + Axis along which to operate. By default, ``axis`` is None and the + flattened input is used. + out : array_like, optional + Alternative output array in which to place the result. Must + be of the same shape and buffer length as the expected output. + fill_value : {var}, optional + Value used to fill in the masked values. + If None, use the output of minimum_fill_value(). - """ - mask = self._mask - # Check all/nothing case ...... - if mask is nomask: - return super(MaskedArray, self).min(axis=axis) - elif (not mask.ndim) and mask: - return masked - # Get the mask ................ - if axis is None: - mask = umath.logical_and.reduce(mask.flat) - else: - mask = umath.logical_and.reduce(mask, axis=axis) - # Skip if all masked .......... - if not mask.ndim and mask: - return masked - # Get the fill value ........... + Returns + ------- + amin : array_like + New array holding the result. + If ``out`` was specified, ``out`` is returned. + + """ + _mask = ndarray.__getattribute__(self, '_mask') + newmask = _mask.all(axis=axis) if fill_value is None: fill_value = minimum_fill_value(self) - # Get the data ................ 
- result = self.filled(fill_value).min(axis=axis).view(type(self)) - if result.ndim > 0: - result._mask = mask - return result + # No explicit output + if out is None: + result = self.filled(fill_value).min(axis=axis, out=out).view(type(self)) + if result.ndim: + # Set the mask + result.__setmask__(newmask) + # Get rid of Infs + if newmask.ndim: + np.putmask(result, newmask, result.fill_value) + elif newmask: + result = masked + return result + # Explicit output + result = self.filled(fill_value).min(axis=axis, out=out) + if isinstance(out, MaskedArray): + outmask = getattr(out, '_mask', nomask) + if (outmask is nomask): + outmask = out._mask = make_mask_none(out.shape) + outmask.flat = newmask + else: + np.putmask(out, newmask, np.nan) + return out def mini(self, axis=None): if axis is None: @@ -2473,59 +2889,92 @@ return minimum.reduce(self, axis) #........................ - def max(self, axis=None, fill_value=None): - """Return the maximum/a along the given axis. + def max(self, axis=None, out=None, fill_value=None): + """a.max(axis=None, out=None, fill_value=None) - Masked values are filled with fill_value. + Return the maximum along a given axis. - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - fill_value : {var}, optional - Value used to fill in the masked values. - If None, use the the output of maximum_fill_value(). + Parameters + ---------- + axis : {None, int}, optional + Axis along which to operate. By default, ``axis`` is None and the + flattened input is used. + out : array_like, optional + Alternative output array in which to place the result. Must + be of the same shape and buffer length as the expected output. + fill_value : {var}, optional + Value used to fill in the masked values. + If None, use the output of maximum_fill_value(). + + Returns + ------- + amax : array_like + New array holding the result. + If ``out`` was specified, ``out`` is returned. + """ - mask = self._mask - # Check all/nothing case ...... - if mask is nomask: - return super(MaskedArray, self).max(axis=axis) - elif (not mask.ndim) and mask: - return masked - # Check the mask .............. - if axis is None: - mask = umath.logical_and.reduce(mask.flat) - else: - mask = umath.logical_and.reduce(mask, axis=axis) - # Skip if all masked .......... - if not mask.ndim and mask: - return masked - # Get the fill value .......... + _mask = ndarray.__getattribute__(self, '_mask') + newmask = _mask.all(axis=axis) if fill_value is None: fill_value = maximum_fill_value(self) - # Get the data ................ - result = self.filled(fill_value).max(axis=axis).view(type(self)) - if result.ndim > 0: - result._mask = mask - return result - #........................ - def ptp(self, axis=None, fill_value=None): - """Return the visible data range (max-min) along the given axis. 
+ # No explicit output + if out is None: + result = self.filled(fill_value).max(axis=axis, out=out).view(type(self)) + if result.ndim: + # Set the mask + result.__setmask__(newmask) + # Get rid of Infs + if newmask.ndim: + np.putmask(result, newmask, result.fill_value) + elif newmask: + result = masked + return result + # Explicit output + result = self.filled(fill_value).max(axis=axis, out=out) + if isinstance(out, MaskedArray): + outmask = getattr(out, '_mask', nomask) + if (outmask is nomask): + outmask = out._mask = make_mask_none(out.shape) + outmask.flat = newmask + else: + np.putmask(out, newmask, np.nan) + return out - Parameters - ---------- - axis : int, optional - Axis along which to perform the operation. - If None, applies to a flattened version of the array. - fill_value : {var}, optional - Value used to fill in the masked values. If None, the - maximum uses the maximum default, the minimum uses the - minimum default. + def ptp(self, axis=None, out=None, fill_value=None): + """a.ptp(axis=None, out=None) + Return (maximum - minimum) along the the given dimension + (i.e. peak-to-peak value). + + Parameters + ---------- + axis : {None, int}, optional + Axis along which to find the peaks. If None (default) the + flattened array is used. + out : array_like + Alternative output array in which to place the result. It must + have the same shape and buffer length as the expected output + but the type will be cast if necessary. + fill_value : {var}, optional + Value used to fill in the masked values. + + Returns + ------- + ptp : ndarray. + A new array holding the result, unless ``out`` was + specified, in which case a reference to ``out`` is returned. + + """ - return self.max(axis, fill_value) - self.min(axis, fill_value) + if out is None: + result = self.max(axis=axis, fill_value=fill_value) + result -= self.min(axis=axis, fill_value=fill_value) + return result + out.flat = self.max(axis=axis, out=out, fill_value=fill_value) + out -= self.min(axis=axis, fill_value=fill_value) + return out + # Array methods --------------------------------------- copy = _arraymethod('copy') diagonal = _arraymethod('diagonal') @@ -2658,7 +3107,7 @@ isMA = isMaskedArray #backward compatibility # We define the masked singleton as a float for higher precedence... # Note that it can be tricky sometimes w/ type comparison -masked_singleton = MaskedArray(0, dtype=float_, mask=True) +masked_singleton = MaskedArray(0, dtype=np.float_, mask=True) masked = masked_singleton masked_array = MaskedArray @@ -2675,7 +3124,7 @@ for convenience. And backwards compatibility... """ - #TODO: we should try to put 'order' somwehere + #!!!: we should try to put 'order' somwehere return MaskedArray(data, mask=mask, dtype=dtype, copy=copy, subok=subok, keep_mask=keep_mask, hard_mask=hard_mask, fill_value=fill_value, ndmin=ndmin, shrink=shrink) @@ -2764,35 +3213,32 @@ self.fill_value_func = maximum_fill_value #.......................................................... -def min(array, axis=None, out=None): - """Return the minima along the given axis. +def min(obj, axis=None, out=None, fill_value=None): + try: + return obj.min(axis=axis, fill_value=fill_value, out=out) + except (AttributeError, TypeError): + # If obj doesn't have a max method, + # ...or if the method doesn't accept a fill_value argument + return asanyarray(obj).min(axis=axis, fill_value=fill_value, out=out) +min.__doc__ = MaskedArray.min.__doc__ - If `axis` is None, applies to the flattened array. 
+def max(obj, axis=None, out=None, fill_value=None): + try: + return obj.max(axis=axis, fill_value=fill_value, out=out) + except (AttributeError, TypeError): + # If obj doesn't have a max method, + # ...or if the method doesn't accept a fill_value argument + return asanyarray(obj).max(axis=axis, fill_value=fill_value, out=out) +max.__doc__ = MaskedArray.max.__doc__ - """ - if out is not None: - raise TypeError("Output arrays Unsupported for masked arrays") - if axis is None: - return minimum(array) - else: - return minimum.reduce(array, axis) -min.__doc__ = MaskedArray.min.__doc__ -#............................ -def max(obj, axis=None, out=None): - if out is not None: - raise TypeError("Output arrays Unsupported for masked arrays") - if axis is None: - return maximum(obj) - else: - return maximum.reduce(obj, axis) -max.__doc__ = MaskedArray.max.__doc__ -#............................. -def ptp(obj, axis=None): +def ptp(obj, axis=None, out=None, fill_value=None): """a.ptp(axis=None) = a.max(axis)-a.min(axis)""" try: - return obj.max(axis)-obj.min(axis) - except AttributeError: - return max(obj, axis=axis) - min(obj, axis=axis) + return obj.ptp(axis, out=out, fill_value=fill_value) + except (AttributeError, TypeError): + # If obj doesn't have a max method, + # ...or if the method doesn't accept a fill_value argument + return asanyarray(obj).ptp(axis=axis, fill_value=fill_value, out=out) ptp.__doc__ = MaskedArray.ptp.__doc__ @@ -2816,7 +3262,7 @@ try: return getattr(MaskedArray, self._methodname).__doc__ except: - return getattr(numpy, self._methodname).__doc__ + return getattr(np, self._methodname).__doc__ def __call__(self, a, *args, **params): if isinstance(a, MaskedArray): return getattr(a, self._methodname).__call__(*args, **params) @@ -2830,7 +3276,7 @@ try: return method(*args, **params) except SystemError: - return getattr(numpy,self._methodname).__call__(a, *args, **params) + return getattr(np,self._methodname).__call__(a, *args, **params) all = _frommethod('all') anomalies = anom = _frommethod('anom') @@ -2877,13 +3323,13 @@ # Get the result and view it as a (subclass of) MaskedArray result = umath.power(fa,fb).view(basetype) # Find where we're in trouble w/ NaNs and Infs - invalid = numpy.logical_not(numpy.isfinite(result.view(ndarray))) + invalid = np.logical_not(np.isfinite(result.view(ndarray))) # Retrieve some extra attributes if needed if isinstance(result,MaskedArray): result._update_from(a) # Add the initial mask if m is not nomask: - if numpy.isscalar(result): + if np.isscalar(result): return masked result._mask = m # Fix the invalid parts @@ -2893,18 +3339,18 @@ result[invalid] = masked result._data[invalid] = result.fill_value return result - + # if fb.dtype.char in typecodes["Integer"]: # return masked_array(umath.power(fa, fb), m) -# m = mask_or(m, (fa < 0) & (fb != fb.astype(int))) +# m = mask_or(m, (fa < 0) & (fb != fb.astype(int))) # if m is nomask: # return masked_array(umath.power(fa, fb)) # else: # fa = fa.copy() # if m.all(): # fa.flat = 1 -# else: -# numpy.putmask(fa,m,1) +# else: +# np.putmask(fa,m,1) # return masked_array(umath.power(fa, fb), m) #.............................................................................. 
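The rewritten module-level wrappers above simply defer to the corresponding method, falling back to asanyarray() when the argument lacks the method or does not accept fill_value; a rough usage sketch (not part of the patch):

    import numpy.ma as ma

    a = ma.array([3, 1, 4, 1, 5], mask=[0, 0, 1, 0, 0])

    ma.max(a)           # 5 -- the masked entry is ignored
    ma.min(a)           # 1
    ma.ptp(a)           # 4, i.e. max - min over the unmasked data

    # Plain sequences go through the asanyarray() fallback
    ma.min([3, 1, 2])   # 1
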
@@ -2952,7 +3398,7 @@ else: filler = fill_value # return - indx = numpy.indices(a.shape).tolist() + indx = np.indices(a.shape).tolist() indx[axis] = filled(a,filler).argsort(axis=axis,kind=kind,order=order) return a[indx] sort.__doc__ = MaskedArray.sort.__doc__ @@ -2961,13 +3407,13 @@ def compressed(x): """Return a 1-D array of all the non-masked data.""" if getmask(x) is nomask: - return numpy.asanyarray(x) + return np.asanyarray(x) else: return x.compressed() def concatenate(arrays, axis=0): "Concatenate the arrays along the given axis." - d = numpy.concatenate([getdata(a) for a in arrays], axis) + d = np.concatenate([getdata(a) for a in arrays], axis) rcls = get_masked_subclass(*arrays) data = d.view(rcls) # Check whether one of the arrays has a non-empty mask... @@ -2977,7 +3423,7 @@ else: return data # OK, so we have to concatenate the masks - dm = numpy.concatenate([getmaskarray(a) for a in arrays], axis) + dm = np.concatenate([getmaskarray(a) for a in arrays], axis) # If we decide to keep a '_shrinkmask' option, we want to check that ... # ... all of them are True, and then check for dm.any() # shrink = numpy.logical_or.reduce([getattr(a,'_shrinkmask',True) for a in arrays]) @@ -3059,21 +3505,21 @@ if getmask(a) is nomask: if valmask is not nomask: a._sharedmask = True - a.mask = numpy.zeros(a.shape, dtype=bool_) - numpy.putmask(a._mask, mask, valmask) + a._mask = make_mask_none(a.shape, a.dtype.names) + np.putmask(a._mask, mask, valmask) elif a._hardmask: if valmask is not nomask: m = a._mask.copy() - numpy.putmask(m, mask, valmask) + np.putmask(m, mask, valmask) a.mask |= m else: if valmask is nomask: valmask = getmaskarray(values) - numpy.putmask(a._mask, mask, valmask) - numpy.putmask(a._data, mask, valdata) + np.putmask(a._mask, mask, valmask) + np.putmask(a._data, mask, valdata) return -def transpose(a,axes=None): +def transpose(a, axes=None): """Return a view of the array with dimensions permuted according to axes, as a masked array. @@ -3107,8 +3553,8 @@ # We can't use _frommethods here, as N.resize is notoriously whiny. m = getmask(x) if m is not nomask: - m = numpy.resize(m, new_shape) - result = numpy.resize(x, new_shape).view(get_masked_subclass(x)) + m = np.resize(m, new_shape) + result = np.resize(x, new_shape).view(get_masked_subclass(x)) if result.ndim: result._mask = m return result @@ -3117,18 +3563,18 @@ #................................................ def rank(obj): "maskedarray version of the numpy function." - return fromnumeric.rank(getdata(obj)) -rank.__doc__ = numpy.rank.__doc__ + return np.rank(getdata(obj)) +rank.__doc__ = np.rank.__doc__ # def shape(obj): "maskedarray version of the numpy function." - return fromnumeric.shape(getdata(obj)) -shape.__doc__ = numpy.shape.__doc__ + return np.shape(getdata(obj)) +shape.__doc__ = np.shape.__doc__ # def size(obj, axis=None): "maskedarray version of the numpy function." - return fromnumeric.size(getdata(obj), axis) -size.__doc__ = numpy.size.__doc__ + return np.size(getdata(obj), axis) +size.__doc__ = np.size.__doc__ #................................................ #####-------------------------------------------------------------------------- @@ -3158,55 +3604,97 @@ elif x is None or y is None: raise ValueError, "Either both or neither x and y should be given." # Get the condition ............... - fc = filled(condition, 0).astype(bool_) - notfc = numpy.logical_not(fc) + fc = filled(condition, 0).astype(MaskType) + notfc = np.logical_not(fc) # Get the data ...................................... 
xv = getdata(x) yv = getdata(y) if x is masked: ndtype = yv.dtype - xm = numpy.ones(fc.shape, dtype=MaskType) elif y is masked: ndtype = xv.dtype - ym = numpy.ones(fc.shape, dtype=MaskType) else: - ndtype = numpy.max([xv.dtype, yv.dtype]) - xm = getmask(x) - d = numpy.empty(fc.shape, dtype=ndtype).view(MaskedArray) - numpy.putmask(d._data, fc, xv.astype(ndtype)) - numpy.putmask(d._data, notfc, yv.astype(ndtype)) - d._mask = numpy.zeros(fc.shape, dtype=MaskType) - numpy.putmask(d._mask, fc, getmask(x)) - numpy.putmask(d._mask, notfc, getmask(y)) - d._mask |= getmaskarray(condition) - if not d._mask.any(): + ndtype = np.max([xv.dtype, yv.dtype]) + # Construct an empty array and fill it + d = np.empty(fc.shape, dtype=ndtype).view(MaskedArray) + _data = d._data + np.putmask(_data, fc, xv.astype(ndtype)) + np.putmask(_data, notfc, yv.astype(ndtype)) + # Create an empty mask and fill it + _mask = d._mask = np.zeros(fc.shape, dtype=MaskType) + np.putmask(_mask, fc, getmask(x)) + np.putmask(_mask, notfc, getmask(y)) + _mask |= getmaskarray(condition) + if not _mask.any(): d._mask = nomask return d -def choose (indices, t, out=None, mode='raise'): - "Return array shaped like indices with elements chosen from t" - #TODO: implement options `out` and `mode`, if possible. +def choose (indices, choices, out=None, mode='raise'): + """ + choose(a, choices, out=None, mode='raise') + + Use an index array to construct a new array from a set of choices. + + Given an array of integers and a set of n choice arrays, this method + will create a new array that merges each of the choice arrays. Where a + value in `a` is i, the new array will have the value that choices[i] + contains in the same place. + + Parameters + ---------- + a : int array + This array must contain integers in [0, n-1], where n is the number + of choices. + choices : sequence of arrays + Choice arrays. The index array and all of the choices should be + broadcastable to the same shape. + out : array, optional + If provided, the result will be inserted into this array. It should + be of the appropriate shape and dtype + mode : {'raise', 'wrap', 'clip'}, optional + Specifies how out-of-bounds indices will behave. + 'raise' : raise an error + 'wrap' : wrap around + 'clip' : clip to the range + + Returns + ------- + merged_array : array + + See Also + -------- + choose : equivalent function + + """ def fmask (x): "Returns the filled array, or True if masked." if x is masked: - return 1 + return True return filled(x) def nmask (x): "Returns the mask, True if ``masked``, False if ``nomask``." if x is masked: - return 1 - m = getmask(x) - if m is nomask: - return 0 - return m + return True + return getmask(x) + # Get the indices...... c = filled(indices, 0) - masks = [nmask(x) for x in t] - a = [fmask(x) for x in t] - d = numpy.choose(c, a) - m = numpy.choose(c, masks) - m = make_mask(mask_or(m, getmask(indices)), copy=0, shrink=True) - return masked_array(d, mask=m) + # Get the masks........ + masks = [nmask(x) for x in choices] + data = [fmask(x) for x in choices] + # Construct the mask + outputmask = np.choose(c, masks, mode=mode) + outputmask = make_mask(mask_or(outputmask, getmask(indices)), + copy=0, shrink=True) + # Get the choices...... + d = np.choose(c, data, mode=mode, out=out).view(MaskedArray) + if out is not None: + if isinstance(out, MaskedArray): + out.__setmask__(outputmask) + return out + d.__setmask__(outputmask) + return d + def round_(a, decimals=0, out=None): """Return a copy of a, rounded to 'decimals' places. 
@@ -3231,17 +3719,13 @@ """ if out is None: - return numpy.round_(a, decimals, out) + return np.round_(a, decimals, out) else: - numpy.round_(getdata(a), decimals, out) + np.round_(getdata(a), decimals, out) if hasattr(out, '_mask'): out._mask = getmask(a) return out -def arange(stop, start=None, step=1, dtype=None): - "maskedarray version of the numpy function." - return numpy.arange(stop, start, step, dtype).view(MaskedArray) -arange.__doc__ = numpy.arange.__doc__ def inner(a, b): "maskedarray version of the numpy function." @@ -3251,8 +3735,8 @@ fa.shape = (1,) if len(fb.shape) == 0: fb.shape = (1,) - return numpy.inner(fa, fb).view(MaskedArray) -inner.__doc__ = numpy.inner.__doc__ + return np.inner(fa, fb).view(MaskedArray) +inner.__doc__ = np.inner.__doc__ inner.__doc__ += doc_note("Masked values are replaced by 0.") innerproduct = inner @@ -3260,16 +3744,16 @@ "maskedarray version of the numpy function." fa = filled(a, 0).ravel() fb = filled(b, 0).ravel() - d = numeric.outer(fa, fb) + d = np.outer(fa, fb) ma = getmask(a) mb = getmask(b) if ma is nomask and mb is nomask: return masked_array(d) ma = getmaskarray(a) mb = getmaskarray(b) - m = make_mask(1-numeric.outer(1-ma, 1-mb), copy=0) + m = make_mask(1-np.outer(1-ma, 1-mb), copy=0) return masked_array(d, mask=m) -outer.__doc__ = numpy.outer.__doc__ +outer.__doc__ = np.outer.__doc__ outer.__doc__ += doc_note("Masked values are replaced by 0.") outerproduct = outer @@ -3310,7 +3794,7 @@ x = filled(array(d1, copy=0, mask=m), fill_value).astype(float) y = filled(array(d2, copy=0, mask=m), 1).astype(float) d = umath.less_equal(umath.absolute(x-y), atol + rtol * umath.absolute(y)) - return fromnumeric.alltrue(fromnumeric.ravel(d)) + return np.alltrue(np.ravel(d)) #.............................................................................. def asarray(a, dtype=None): @@ -3336,26 +3820,6 @@ return masked_array(a, dtype=dtype, copy=False, keep_mask=True, subok=True) -def empty(new_shape, dtype=float): - "maskedarray version of the numpy function." - return numpy.empty(new_shape, dtype).view(MaskedArray) -empty.__doc__ = numpy.empty.__doc__ - -def empty_like(a): - "maskedarray version of the numpy function." - return numpy.empty_like(a).view(MaskedArray) -empty_like.__doc__ = numpy.empty_like.__doc__ - -def ones(new_shape, dtype=float): - "maskedarray version of the numpy function." - return numpy.ones(new_shape, dtype).view(MaskedArray) -ones.__doc__ = numpy.ones.__doc__ - -def zeros(new_shape, dtype=float): - "maskedarray version of the numpy function." - return numpy.zeros(new_shape, dtype).view(MaskedArray) -zeros.__doc__ = numpy.zeros.__doc__ - #####-------------------------------------------------------------------------- #---- --- Pickling --- #####-------------------------------------------------------------------------- @@ -3405,7 +3869,7 @@ """ __doc__ = None def __init__(self, funcname): - self._func = getattr(numpy, funcname) + self._func = getattr(np, funcname) self.__doc__ = self.getdoc() def getdoc(self): "Return the doc of the function (from the doc of the method)." 
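Taken together, the hunks above provide masked counterparts of where, choose, round_, inner and outer: where and choose rebuild both the data and the mask from their arguments, while inner and outer fill masked values with 0 (and outer re-masks any entry built from a masked operand). A brief sketch of the intended behaviour, assuming the usual numpy.ma import; the results in the comments follow the construction shown in the code above:

    import numpy.ma as ma

    x = ma.array([1., 2., 3.], mask=[0, 1, 0])
    y = ma.array([-1., -2., -3.], mask=[0, 0, 1])

    # where: data picked from x or y by the condition, masks carried along
    w = ma.where([True, True, False], x, y)
    # w.data -> [ 1.,  2., -3.],  w.mask -> [False,  True,  True]

    # choose: the index array selects among the choice arrays; masks are merged
    c = ma.choose([0, 1, 0], (x, y))
    # c.data -> [ 1., -2.,  3.]; no selected entry is masked, so the mask
    # shrinks to nomask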
@@ -3413,11 +3877,15 @@ def __call__(self, a, *args, **params): return self._func.__call__(a, *args, **params).view(MaskedArray) +arange = _convert2ma('arange') +clip = np.clip +empty = _convert2ma('empty') +empty_like = _convert2ma('empty_like') frombuffer = _convert2ma('frombuffer') fromfunction = _convert2ma('fromfunction') identity = _convert2ma('identity') -indices = numpy.indices -clip = numpy.clip +indices = np.indices +ones = _convert2ma('ones') +zeros = _convert2ma('zeros') ############################################################################### - Modified: branches/cdavid/numpy/ma/extras.py =================================================================== --- branches/cdavid/numpy/ma/extras.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/ma/extras.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -567,7 +567,7 @@ The first argument is not conjugated. """ - #TODO: Works only with 2D arrays. There should be a way to get it to run with higher dimension + #!!!: Works only with 2D arrays. There should be a way to get it to run with higher dimension if strict and (a.ndim == 2) and (b.ndim == 2): a = mask_rows(a) b = mask_cols(b) @@ -842,7 +842,7 @@ def polyfit(x, y, deg, rcond=None, full=False): """%s - + Notes ----- Any masked values in x is propagated in y, and vice-versa. @@ -876,7 +876,7 @@ x = x / scale # solve least squares equation for powers of x v = vander(x, order) - c, resids, rank, s = _lstsq(v, y.filled(0), rcond) + c, resids, rank, s = _lstsq(v, y.filled(0), rcond) # warn on rank reduction, which indicates an ill conditioned matrix if rank != order and not full: warnings.warn("Polyfit may be poorly conditioned", np.RankWarning) @@ -890,7 +890,7 @@ return c, resids, rank, s, rcond else : return c - + _g = globals() for nfunc in ('vander', 'polyfit'): _g[nfunc].func_doc = _g[nfunc].func_doc % getattr(np,nfunc).__doc__ Modified: branches/cdavid/numpy/ma/mrecords.py =================================================================== --- branches/cdavid/numpy/ma/mrecords.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/ma/mrecords.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -6,11 +6,11 @@ :author: Pierre Gerard-Marchant """ -#TODO: We should make sure that no field is called '_mask','mask','_fieldmask', -#TODO: ...or whatever restricted keywords. -#TODO: An idea would be to no bother in the first place, and then rename the -#TODO: invalid fields with a trailing underscore... -#TODO: Maybe we could just overload the parser function ? +#!!!: * We should make sure that no field is called '_mask','mask','_fieldmask', +#!!!: or whatever restricted keywords. +#!!!: An idea would be to no bother in the first place, and then rename the +#!!!: invalid fields with a trailing underscore... +#!!!: Maybe we could just overload the parser function ? __author__ = "Pierre GF Gerard-Marchant" @@ -51,9 +51,6 @@ formats = '' for obj in data: obj = np.asarray(obj) -# if not isinstance(obj, ndarray): -## if not isinstance(obj, ndarray): -# raise ValueError, "item in the array list must be an ndarray." 
formats += _typestr[obj.dtype.type] if issubclass(obj.dtype.type, ntypes.flexible): formats += `obj.itemsize` @@ -75,7 +72,7 @@ elif isinstance(names, str): new_names = names.split(',') else: - raise NameError, "illegal input names %s" % `names` + raise NameError("illegal input names %s" % `names`) nnames = len(new_names) if nnames < ndescr: new_names += default_names[nnames:] @@ -88,7 +85,7 @@ ndescr.append(t) else: ndescr.append((n,t[1])) - return numeric.dtype(ndescr) + return np.dtype(ndescr) def _get_fieldmask(self): @@ -124,12 +121,11 @@ self = recarray.__new__(cls, shape, dtype=dtype, buf=buf, offset=offset, strides=strides, formats=formats, byteorder=byteorder, aligned=aligned,) -# self = self.view(cls) # mdtype = [(k,'|b1') for (k,_) in self.dtype.descr] if mask is nomask or not np.size(mask): if not keep_mask: - self._fieldmask = tuple([False]*len(mdtype)) + self._mask = tuple([False]*len(mdtype)) else: mask = np.array(mask, copy=copy) if mask.shape != self.shape: @@ -148,102 +144,40 @@ self._sharedmask = True else: if mask.dtype == mdtype: - _fieldmask = mask + _mask = mask else: - _fieldmask = np.array([tuple([m]*len(mdtype)) for m in mask], - dtype=mdtype) - self._fieldmask = _fieldmask + _mask = np.array([tuple([m]*len(mdtype)) for m in mask], + dtype=mdtype) + self._mask = _mask return self #...................................................... def __array_finalize__(self,obj): + MaskedArray._update_from(self,obj) # Make sure we have a _fieldmask by default .. _fieldmask = getattr(obj, '_fieldmask', None) if _fieldmask is None: mdescr = [(n,'|b1') for (n,_) in self.dtype.descr] - _mask = getattr(obj, '_mask', nomask) - if _mask is nomask: - _fieldmask = np.empty(self.shape, dtype=mdescr).view(recarray) - _fieldmask.flat = tuple([False]*len(mdescr)) + objmask = getattr(obj, '_mask', nomask) + if objmask is nomask: + _mask = np.empty(self.shape, dtype=mdescr).view(recarray) + _mask.flat = tuple([False]*len(mdescr)) else: - _fieldmask = narray([tuple([m]*len(mdescr)) for m in _mask], - dtype=mdescr).view(recarray) - # Update some of the attributes - if obj is not None: - _baseclass = getattr(obj,'_baseclass',type(obj)) + _mask = narray([tuple([m]*len(mdescr)) for m in objmask], + dtype=mdescr).view(recarray) else: - _baseclass = recarray - attrdict = dict(_fieldmask=_fieldmask, - _hardmask=getattr(obj,'_hardmask',False), - _fill_value=getattr(obj,'_fill_value',None), - _sharedmask=getattr(obj,'_sharedmask',False), - _baseclass=_baseclass) - self.__dict__.update(attrdict) - # Finalize as a regular maskedarray ..... - # Update special attributes ... - self._basedict = getattr(obj, '_basedict', getattr(obj,'__dict__',{})) - self.__dict__.update(self._basedict) + _mask = _fieldmask + # Update some of the attributes + _locdict = self.__dict__ + if _locdict['_baseclass'] == ndarray: + _locdict['_baseclass'] = recarray + _locdict.update(_mask=_mask, _fieldmask=_mask) return - #...................................................... + def _getdata(self): "Returns the data as a recarray." return ndarray.view(self,recarray) _data = property(fget=_getdata) - #...................................................... - def __setmask__(self, mask): - "Sets the mask and update the fieldmask." 
- names = self.dtype.names - fmask = self.__dict__['_fieldmask'] - # - if isinstance(mask,ndarray) and mask.dtype.names == names: - for n in names: - fmask[n] = mask[n].astype(bool) -# self.__dict__['_fieldmask'] = fmask.view(recarray) - return - newmask = make_mask(mask, copy=False) - if names is not None: - if self._hardmask: - for n in names: - fmask[n].__ior__(newmask) - else: - for n in names: - fmask[n].flat = newmask - return - _setmask = __setmask__ - # - def _getmask(self): - """Return the mask of the mrecord. - A record is masked when all the fields are masked. - """ - if self.size > 1: - return self._fieldmask.view((bool_, len(self.dtype))).all(1) - else: - return self._fieldmask.view((bool_, len(self.dtype))).all() - mask = _mask = property(fget=_getmask, fset=_setmask) - #...................................................... - def get_fill_value(self): - """Return the filling value. - - """ - if self._fill_value is None: - ddtype = self.dtype - fillval = _check_fill_value(None, ddtype) - self._fill_value = np.array(tuple(fillval), dtype=ddtype) - return self._fill_value - - def set_fill_value(self, value=None): - """Set the filling value to value. - - If value is None, use a default based on the data type. - - """ - ddtype = self.dtype - fillval = _check_fill_value(value, ddtype) - self._fill_value = np.array(tuple(fillval), dtype=ddtype) - - fill_value = property(fget=get_fill_value, fset=set_fill_value, - doc="Filling value.") - #...................................................... def __len__(self): "Returns the length" # We have more than one record @@ -251,138 +185,134 @@ return len(self._data) # We have only one record: return the nb of fields return len(self.dtype) - #...................................................... + def __getattribute__(self, attr): - "Returns the given attribute." try: - # Returns a generic attribute - return object.__getattribute__(self,attr) - except AttributeError: - # OK, so attr must be a field name + return object.__getattribute__(self, attr) + except AttributeError: # attr must be a fieldname pass - # Get the list of fields ...... - _names = self.dtype.names - if attr in _names: - _data = self._data - _mask = self._fieldmask -# obj = masked_array(_data.__getattribute__(attr), copy=False, -# mask=_mask.__getattribute__(attr)) - # Use a view in order to avoid the copy of the mask in MaskedArray.__new__ - obj = narray(_data.__getattribute__(attr), copy=False).view(MaskedArray) - obj._mask = _mask.__getattribute__(attr) - if not obj.ndim and obj._mask: - return masked - return obj - raise AttributeError,"No attribute '%s' !" % attr + fielddict = ndarray.__getattribute__(self,'dtype').fields + try: + res = fielddict[attr][:2] + except (TypeError, KeyError): + raise AttributeError, "record array has no attribute %s" % attr + # So far, so good... 
+ _localdict = ndarray.__getattribute__(self,'__dict__') + _data = ndarray.view(self, _localdict['_baseclass']) + obj = _data.getfield(*res) + if obj.dtype.fields: + raise NotImplementedError("MaskedRecords is currently limited to"\ + "simple records...") + obj = obj.view(MaskedArray) + obj._baseclass = ndarray + obj._isfield = True + # Get some special attributes + _fill_value = _localdict.get('_fill_value', None) + _mask = _localdict.get('_mask', None) + # Reset the object's mask + if _mask is not None: + try: + obj._mask = _mask[attr] + except IndexError: + # Couldn't find a mask: use the default (nomask) + pass + # Reset the field values + if _fill_value is not None: + try: + obj._fill_value = _fill_value[attr] + except ValueError: + obj._fill_value = None + return obj + def __setattr__(self, attr, val): "Sets the attribute attr to the value val." -# newattr = attr not in self.__dict__ + # Should we call __setmask__ first ? + if attr in ['_mask','mask','_fieldmask','fieldmask']: + self.__setmask__(val) + return + # Create a shortcut (so that we don't have to call getattr all the time) + _localdict = self.__dict__ + # Check whether we're creating a new field + newattr = attr not in _localdict try: # Is attr a generic attribute ? ret = object.__setattr__(self, attr, val) except: # Not a generic attribute: exit if it's not a valid field - fielddict = self.dtype.names or {} + fielddict = ndarray.__getattribute__(self,'dtype').fields or {} if attr not in fielddict: exctype, value = sys.exc_info()[:2] raise exctype, value else: - if attr in ['_mask','fieldmask']: - self.__setmask__(val) - return # Get the list of names ...... - _names = self.dtype.names - if _names is None: - _names = [] - else: - _names = list(_names) + fielddict = ndarray.__getattribute__(self,'dtype').fields or {} # Check the attribute - self_dict = self.__dict__ - if attr not in _names+list(self_dict): +##### _localdict = self.__dict__ + if attr not in fielddict: return ret - if attr not in self_dict: # We just added this one + if newattr: # We just added this one try: # or this setattr worked on an internal # attribute. object.__delattr__(self, attr) except: return ret - # Case #1.: Basic field ............ - base_fmask = self._fieldmask - _names = self.dtype.names or [] - if attr in _names: - if val is masked: - fval = self.fill_value[attr] - mval = True + # Let's try to set the field + try: + res = fielddict[attr][:2] + except (TypeError,KeyError): + raise AttributeError, "record array has no attribute %s" % attr + # + if val is masked: + _fill_value = _localdict['_fill_value'] + if _fill_value is not None: + dval = _localdict['_fill_value'][attr] else: - fval = filled(val) - mval = getmaskarray(val) - if self._hardmask: - mval = mask_or(mval, base_fmask.__getattr__(attr)) - self._data.__setattr__(attr, fval) - base_fmask.__setattr__(attr, mval) - return - #............................................ + dval = val + mval = True + else: + dval = filled(val) + mval = getmaskarray(val) + obj = ndarray.__getattribute__(self,'_data').setfield(dval, *res) + _localdict['_mask'].__setitem__(attr, mval) + return obj + + def __getitem__(self, indx): """Returns all the fields sharing the same fieldname base. The fieldname base is either `_data` or `_mask`.""" _localdict = self.__dict__ - _fieldmask = _localdict['_fieldmask'] + _mask = _localdict['_fieldmask'] _data = self._data # We want a field ........ 
if isinstance(indx, basestring): + #!!!: Make sure _sharedmask is True to propagate back to _fieldmask + #!!!: Don't use _set_mask, there are some copies being made... + #!!!: ...that break propagation + #!!!: Don't force the mask to nomask, that wrecks easy masking obj = _data[indx].view(MaskedArray) - obj._set_mask(_fieldmask[indx]) - # Force to nomask if the mask is empty - if not obj._mask.any(): - obj._mask = nomask + obj._mask = _mask[indx] + obj._sharedmask = True + fval = _localdict['_fill_value'] + if fval is not None: + obj._fill_value = fval[indx] # Force to masked if the mask is True if not obj.ndim and obj._mask: return masked return obj # We want some elements .. # First, the data ........ - obj = narray(_data[indx], copy=False).view(mrecarray) - obj._fieldmask = narray(_fieldmask[indx], copy=False).view(recarray) + obj = np.array(_data[indx], copy=False).view(mrecarray) + obj._mask = np.array(_mask[indx], copy=False).view(recarray) return obj #.... def __setitem__(self, indx, value): "Sets the given record to value." MaskedArray.__setitem__(self, indx, value) if isinstance(indx, basestring): - self._fieldmask[indx] = ma.getmaskarray(value) - - #............................................ - def __setslice__(self, i, j, value): - "Sets the slice described by [i,j] to `value`." - _localdict = self.__dict__ - d = self._data - m = _localdict['_fieldmask'] - names = self.dtype.names - if value is masked: - for n in names: - m[i:j][n] = True - elif not self._hardmask: - fval = filled(value) - mval = getmaskarray(value) - for n in names: - d[n][i:j] = fval - m[n][i:j] = mval - else: - mindx = getmaskarray(self)[i:j] - dval = np.asarray(value) - valmask = getmask(value) - if valmask is nomask: - for n in names: - mval = mask_or(m[n][i:j], valmask) - d[n][i:j][~mval] = value - elif valmask.size > 1: - for n in names: - mval = mask_or(m[n][i:j], valmask) - d[n][i:j][~mval] = dval[~mval] - m[n][i:j] = mask_or(m[n][i:j], mval) - self._fieldmask = m - #...................................................... + self._mask[indx] = ma.getmaskarray(value) + + def __str__(self): "Calculates the string representation." if self.size > 1: @@ -411,54 +341,25 @@ return ndarray.view(self, obj) except TypeError: pass - dtype = np.dtype(obj) - if dtype.fields is None: - return self.__array__().view(dtype) + dtype_ = np.dtype(obj) + if dtype_.fields is None: + return self.__array__().view(dtype_) return ndarray.view(self, obj) - #...................................................... - def filled(self, fill_value=None): - """Returns an array of the same class as the _data part, where masked - values are filled with fill_value. - If fill_value is None, self.fill_value is used instead. - Subclassing is preserved. - - """ - _localdict = self.__dict__ - d = self._data - fm = _localdict['_fieldmask'] - if not np.asarray(fm, dtype=bool_).any(): - return d - # - if fill_value is None: - value = _check_fill_value(_localdict['_fill_value'],self.dtype) - else: - value = fill_value - if np.size(value) == 1: - value = [value,] * len(self.dtype) - # - if self is masked: - result = np.asanyarray(value) - else: - result = d.copy() - for (n, v) in zip(d.dtype.names, value): - np.putmask(np.asarray(result[n]), np.asarray(fm[n]), v) - return result - #...................................................... def harden_mask(self): "Forces the mask to hard" self._hardmask = True def soften_mask(self): "Forces the mask to soft" self._hardmask = False - #...................................................... 
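With the new __getattribute__ and __getitem__ above, fetching a field of a MaskedRecords returns a MaskedArray view whose mask is the matching field of the record mask (and whose fill_value, when one is set, is the matching field of the record fill_value). A rough sketch of that access pattern, using the fromarrays constructor from this same module and illustrative field names:

    import numpy.ma as ma
    from numpy.ma.mrecords import fromarrays

    a = ma.array([1, 2, 3], mask=[0, 1, 0])
    b = ma.array([1., 2., 3.], mask=[0, 0, 1])
    rec = fromarrays([a, b], names='a,b')

    # Attribute and item access both return MaskedArray views of one field,
    # carrying that field's mask along.
    rec.a.mask.tolist()      # -> [False, True, False]
    rec['b'].mask.tolist()   # -> [False, False, True]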
+ def copy(self): """Returns a copy of the masked record.""" _localdict = self.__dict__ copied = self._data.copy().view(type(self)) copied._fieldmask = self._fieldmask.copy() return copied - #...................................................... + def tolist(self, fill_value=None): """Copy the data portion of the array to a hierarchical python list and returns that list. @@ -654,21 +555,21 @@ # Start the conversion loop ....... for f in arr: try: - val = int(f) + int(f) except ValueError: try: - val = float(f) + float(f) except ValueError: try: val = complex(f) except ValueError: vartypes.append(arr.dtype) else: - vartypes.append(complex) + vartypes.append(np.dtype(complex)) else: - vartypes.append(float) + vartypes.append(np.dtype(float)) else: - vartypes.append(int) + vartypes.append(np.dtype(int)) return vartypes def openfile(fname): @@ -738,11 +639,12 @@ vartypes = _guessvartypes(_variables[0]) # Construct the descriptor .................. mdescr = [(n,f) for (n,f) in zip(varnames, vartypes)] + mfillv = [ma.default_fill_value(f) for f in vartypes] # Get the data and the mask ................. # We just need a list of masked_arrays. It's easier to create it like that: _mask = (_variables.T == missingchar) - _datalist = [masked_array(a,mask=m,dtype=t) - for (a,m,t) in zip(_variables.T, _mask, vartypes)] + _datalist = [masked_array(a,mask=m,dtype=t,fill_value=f) + for (a,m,t,f) in zip(_variables.T, _mask, vartypes, mfillv)] return fromarrays(_datalist, dtype=mdescr) #.................................................................... @@ -779,5 +681,3 @@ newdata._fieldmask = newmask return newdata -############################################################################### - Modified: branches/cdavid/numpy/ma/tests/test_core.py =================================================================== --- branches/cdavid/numpy/ma/tests/test_core.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/ma/tests/test_core.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,4 +1,4 @@ -# pylint: disable-msg=W0611, W0612, W0511,R0201 +# pylint: disable-msg=W0401,W0511,W0611,W0612,W0614,R0201,E1102 """Tests suite for MaskedArray & subclassing. :author: Pierre Gerard-Marchant @@ -9,47 +9,60 @@ import types import warnings -import numpy +import numpy as np import numpy.core.fromnumeric as fromnumeric -from numpy.testing import NumpyTest, NumpyTestCase -from numpy.testing import set_local_path, restore_path -from numpy.testing.utils import build_err_msg -from numpy import array as narray - -import numpy.ma.testutils +from numpy import ndarray from numpy.ma.testutils import * -import numpy.ma.core as coremodule +import numpy.ma.core from numpy.ma.core import * -pi = numpy.pi +pi = np.pi -set_local_path() -from test_old_ma import * -restore_path() - #.............................................................................. -class TestMA(NumpyTestCase): +class TestMaskedArray(TestCase): "Base test class for MaskedArrays." - def __init__(self, *args, **kwds): - NumpyTestCase.__init__(self, *args, **kwds) - self.setUp() def setUp (self): "Base data definition." - x = narray([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) - y = narray([5.,0.,3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) + x = np.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) + y = np.array([5.,0.,3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) a10 = 10. 
m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 ,0, 1] xm = masked_array(x, mask=m1) ym = masked_array(y, mask=m2) - z = narray([-.5, 0., .5, .8]) + z = np.array([-.5, 0., .5, .8]) zm = masked_array(z, mask=[0,1,0,0]) - xf = numpy.where(m1, 1.e+20, x) + xf = np.where(m1, 1.e+20, x) xm.set_fill_value(1.e+20) self.d = (x, y, a10, m1, m2, xm, ym, z, zm, xf) - #........................ + + + def test_basicattributes(self): + "Tests some basic array attributes." + a = array([1,3,2]) + b = array([1,3,2], mask=[1,0,1]) + assert_equal(a.ndim, 1) + assert_equal(b.ndim, 1) + assert_equal(a.size, 3) + assert_equal(b.size, 3) + assert_equal(a.shape, (3,)) + assert_equal(b.shape, (3,)) + + + def test_basic0d(self): + "Checks masking a scalar" + x = masked_array(0) + assert_equal(str(x), '0') + x = masked_array(0,mask=True) + assert_equal(str(x), str(masked_print_option)) + x = masked_array(0, mask=False) + assert_equal(str(x), '0') + x = array(0, mask=1) + assert(x.filled().dtype is x.data.dtype) + + def test_basic1d(self): "Test of basic array creation and properties in 1 dimension." (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d @@ -58,7 +71,7 @@ assert((xm-ym).filled(0).any()) fail_if_equal(xm.mask.astype(int_), ym.mask.astype(int_)) s = x.shape - assert_equal(numpy.shape(xm), s) + assert_equal(np.shape(xm), s) assert_equal(xm.shape, s) assert_equal(xm.dtype, x.dtype) assert_equal(zm.dtype, z.dtype) @@ -67,7 +80,8 @@ assert_array_equal(xm, xf) assert_array_equal(filled(xm, 1.e20), xf) assert_array_equal(x, xm) - #........................ + + def test_basic2d(self): "Test of basic array creation and properties in 2 dimensions." (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d @@ -77,7 +91,7 @@ xm.shape = s ym.shape = s xf.shape = s - + # assert(not isMaskedArray(x)) assert(isMaskedArray(xm)) assert_equal(shape(xm), s) @@ -87,303 +101,27 @@ assert_equal(xm, xf) assert_equal(filled(xm, 1.e20), xf) assert_equal(x, xm) - #........................ - def test_basic_arithmetic (self): - "Test of basic arithmetic." - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - a2d = array([[1,2],[0,4]]) - a2dm = masked_array(a2d, [[0,0],[1,0]]) - assert_equal(a2d * a2d, a2d * a2dm) - assert_equal(a2d + a2d, a2d + a2dm) - assert_equal(a2d - a2d, a2d - a2dm) - for s in [(12,), (4,3), (2,6)]: - x = x.reshape(s) - y = y.reshape(s) - xm = xm.reshape(s) - ym = ym.reshape(s) - xf = xf.reshape(s) - assert_equal(-x, -xm) - assert_equal(x + y, xm + ym) - assert_equal(x - y, xm - ym) - assert_equal(x * y, xm * ym) - assert_equal(x / y, xm / ym) - assert_equal(a10 + y, a10 + ym) - assert_equal(a10 - y, a10 - ym) - assert_equal(a10 * y, a10 * ym) - assert_equal(a10 / y, a10 / ym) - assert_equal(x + a10, xm + a10) - assert_equal(x - a10, xm - a10) - assert_equal(x * a10, xm * a10) - assert_equal(x / a10, xm / a10) - assert_equal(x**2, xm**2) - assert_equal(abs(x)**2.5, abs(xm) **2.5) - assert_equal(x**y, xm**ym) - assert_equal(numpy.add(x,y), add(xm, ym)) - assert_equal(numpy.subtract(x,y), subtract(xm, ym)) - assert_equal(numpy.multiply(x,y), multiply(xm, ym)) - assert_equal(numpy.divide(x,y), divide(xm, ym)) - #........................ - def test_mixed_arithmetic(self): - "Tests mixed arithmetics." - na = narray([1]) - ma = array([1]) - self.failUnless(isinstance(na + ma, MaskedArray)) - self.failUnless(isinstance(ma + na, MaskedArray)) - #........................ 
- def test_inplace_arithmetic(self): - """Test of inplace operations and rich comparisons""" - # addition - x = arange(10) - y = arange(10) - xm = arange(10) - xm[2] = masked - x += 1 - assert_equal(x, y+1) - xm += 1 - assert_equal(xm, y+1) - # subtraction - x = arange(10) - xm = arange(10) - xm[2] = masked - x -= 1 - assert_equal(x, y-1) - xm -= 1 - assert_equal(xm, y-1) - # multiplication - x = arange(10)*1.0 - xm = arange(10)*1.0 - xm[2] = masked - x *= 2.0 - assert_equal(x, y*2) - xm *= 2.0 - assert_equal(xm, y*2) - # division - x = arange(10)*2 - xm = arange(10)*2 - xm[2] = masked - x /= 2 - assert_equal(x, y) - xm /= 2 - assert_equal(xm, y) - # division, pt 2 - x = arange(10)*1.0 - xm = arange(10)*1.0 - xm[2] = masked - x /= 2.0 - assert_equal(x, y/2.0) - xm /= arange(10) - assert_equal(xm, ones((10,))) - warnings.simplefilter('ignore', DeprecationWarning) - x = arange(10).astype(float_) - xm = arange(10) - xm[2] = masked - id1 = x.raw_data().ctypes.data - x += 1. - assert (id1 == x.raw_data().ctypes.data) - assert_equal(x, y+1.) - warnings.simplefilter('default', DeprecationWarning) - - # addition w/ array - x = arange(10, dtype=float_) - xm = arange(10, dtype=float_) - xm[2] = masked - m = xm.mask - a = arange(10, dtype=float_) - a[-1] = masked - x += a - xm += a - assert_equal(x,y+a) - assert_equal(xm,y+a) - assert_equal(xm.mask, mask_or(m,a.mask)) - # subtraction w/ array - x = arange(10, dtype=float_) - xm = arange(10, dtype=float_) - xm[2] = masked - m = xm.mask - a = arange(10, dtype=float_) - a[-1] = masked - x -= a - xm -= a - assert_equal(x,y-a) - assert_equal(xm,y-a) - assert_equal(xm.mask, mask_or(m,a.mask)) - # multiplication w/ array - x = arange(10, dtype=float_) - xm = arange(10, dtype=float_) - xm[2] = masked - m = xm.mask - a = arange(10, dtype=float_) - a[-1] = masked - x *= a - xm *= a - assert_equal(x,y*a) - assert_equal(xm,y*a) - assert_equal(xm.mask, mask_or(m,a.mask)) - # division w/ array - x = arange(10, dtype=float_) - xm = arange(10, dtype=float_) - xm[2] = masked - m = xm.mask - a = arange(10, dtype=float_) - a[-1] = masked - x /= a - xm /= a - assert_equal(x,y/a) - assert_equal(xm,y/a) - assert_equal(xm.mask, mask_or(mask_or(m,a.mask), (a==0))) - # + def test_concatenate_basic(self): + "Tests concatenations." (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - z = xm/ym - assert_equal(z._mask, [1,1,1,0,0,1,1,0,0,0,1,1]) - assert_equal(z._data, [0.2,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.]) - xm = xm.copy() - xm /= ym - assert_equal(xm._mask, [1,1,1,0,0,1,1,0,0,0,1,1]) - assert_equal(xm._data, [1/5.,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.]) + # basic concatenation + assert_equal(np.concatenate((x,y)), concatenate((xm,ym))) + assert_equal(np.concatenate((x,y)), concatenate((x,y))) + assert_equal(np.concatenate((x,y)), concatenate((xm,y))) + assert_equal(np.concatenate((x,y,x)), concatenate((x,ym,x))) - - #.......................... - def test_scalararithmetic(self): - "Tests some scalar arithmetics on MaskedArrays." - xm = array(0, mask=1) - assert((1/array(0)).mask) - assert((1 + xm).mask) - assert((-xm).mask) - assert((-xm).mask) - assert(maximum(xm, xm).mask) - assert(minimum(xm, xm).mask) - assert(xm.filled().dtype is xm.data.dtype) - x = array(0, mask=0) - assert_equal(x.filled().ctypes.data, x.ctypes.data) - assert_equal(str(xm), str(masked_print_option)) - # Make sure we don't lose the shape in some circumstances - xm = array((0,0))/0. - assert_equal(xm.shape,(2,)) - assert_equal(xm.mask,[1,1]) - #......................... 
- def test_basic_ufuncs (self): - "Test various functions such as sin, cos." - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - assert_equal(numpy.cos(x), cos(xm)) - assert_equal(numpy.cosh(x), cosh(xm)) - assert_equal(numpy.sin(x), sin(xm)) - assert_equal(numpy.sinh(x), sinh(xm)) - assert_equal(numpy.tan(x), tan(xm)) - assert_equal(numpy.tanh(x), tanh(xm)) - assert_equal(numpy.sqrt(abs(x)), sqrt(xm)) - assert_equal(numpy.log(abs(x)), log(xm)) - assert_equal(numpy.log10(abs(x)), log10(xm)) - assert_equal(numpy.exp(x), exp(xm)) - assert_equal(numpy.arcsin(z), arcsin(zm)) - assert_equal(numpy.arccos(z), arccos(zm)) - assert_equal(numpy.arctan(z), arctan(zm)) - assert_equal(numpy.arctan2(x, y), arctan2(xm, ym)) - assert_equal(numpy.absolute(x), absolute(xm)) - assert_equal(numpy.equal(x,y), equal(xm, ym)) - assert_equal(numpy.not_equal(x,y), not_equal(xm, ym)) - assert_equal(numpy.less(x,y), less(xm, ym)) - assert_equal(numpy.greater(x,y), greater(xm, ym)) - assert_equal(numpy.less_equal(x,y), less_equal(xm, ym)) - assert_equal(numpy.greater_equal(x,y), greater_equal(xm, ym)) - assert_equal(numpy.conjugate(x), conjugate(xm)) - #........................ - def test_count_func (self): - "Tests count" - ott = array([0.,1.,2.,3.], mask=[1,0,0,0]) - assert( isinstance(count(ott), int)) - assert_equal(3, count(ott)) - assert_equal(1, count(1)) - assert_equal(0, array(1,mask=[1])) - ott = ott.reshape((2,2)) - assert isinstance(count(ott,0), ndarray) - assert isinstance(count(ott), types.IntType) - assert_equal(3, count(ott)) - assert getmask(count(ott,0)) is nomask - assert_equal([1,2],count(ott,0)) - #........................ - def test_minmax_func (self): - "Tests minimum and maximum." - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - xr = numpy.ravel(x) #max doesn't work if shaped - xmr = ravel(xm) - assert_equal(max(xr), maximum(xmr)) #true because of careful selection of data - assert_equal(min(xr), minimum(xmr)) #true because of careful selection of data - # - assert_equal(minimum([1,2,3],[4,0,9]), [1,0,3]) - assert_equal(maximum([1,2,3],[4,0,9]), [4,2,9]) - x = arange(5) - y = arange(5) - 2 - x[3] = masked - y[0] = masked - assert_equal(minimum(x,y), where(less(x,y), x, y)) - assert_equal(maximum(x,y), where(greater(x,y), x, y)) - assert minimum(x) == 0 - assert maximum(x) == 4 - # - x = arange(4).reshape(2,2) - x[-1,-1] = masked - assert_equal(maximum(x), 2) - - def test_minmax_methods(self): - "Additional tests on max/min" - (_, _, _, _, _, xm, _, _, _, _) = self.d - xm.shape = (xm.size,) - assert_equal(xm.max(), 10) - assert(xm[0].max() is masked) - assert(xm[0].max(0) is masked) - assert(xm[0].max(-1) is masked) - assert_equal(xm.min(), -10.) - assert(xm[0].min() is masked) - assert(xm[0].min(0) is masked) - assert(xm[0].min(-1) is masked) - assert_equal(xm.ptp(), 20.) - assert(xm[0].ptp() is masked) - assert(xm[0].ptp(0) is masked) - assert(xm[0].ptp(-1) is masked) - # - x = array([1,2,3], mask=True) - assert(x.min() is masked) - assert(x.max() is masked) - assert(x.ptp() is masked) - #........................ - def test_addsumprod (self): - "Tests add, sum, product." 
- (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - assert_equal(numpy.add.reduce(x), add.reduce(x)) - assert_equal(numpy.add.accumulate(x), add.accumulate(x)) - assert_equal(4, sum(array(4),axis=0)) - assert_equal(4, sum(array(4), axis=0)) - assert_equal(numpy.sum(x,axis=0), sum(x,axis=0)) - assert_equal(numpy.sum(filled(xm,0),axis=0), sum(xm,axis=0)) - assert_equal(numpy.sum(x,0), sum(x,0)) - assert_equal(numpy.product(x,axis=0), product(x,axis=0)) - assert_equal(numpy.product(x,0), product(x,0)) - assert_equal(numpy.product(filled(xm,1),axis=0), product(xm,axis=0)) - s = (3,4) - x.shape = y.shape = xm.shape = ym.shape = s - if len(s) > 1: - assert_equal(numpy.concatenate((x,y),1), concatenate((xm,ym),1)) - assert_equal(numpy.add.reduce(x,1), add.reduce(x,1)) - assert_equal(numpy.sum(x,1), sum(x,1)) - assert_equal(numpy.product(x,1), product(x,1)) - #......................... - def test_concat(self): + def test_concatenate_alongaxis(self): "Tests concatenations." (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - # basic concatenation - assert_equal(numpy.concatenate((x,y)), concatenate((xm,ym))) - assert_equal(numpy.concatenate((x,y)), concatenate((x,y))) - assert_equal(numpy.concatenate((x,y)), concatenate((xm,y))) - assert_equal(numpy.concatenate((x,y,x)), concatenate((x,ym,x))) # Concatenation along an axis s = (3,4) x.shape = y.shape = xm.shape = ym.shape = s - assert_equal(xm.mask, numpy.reshape(m1, s)) - assert_equal(ym.mask, numpy.reshape(m2, s)) + assert_equal(xm.mask, np.reshape(m1, s)) + assert_equal(ym.mask, np.reshape(m2, s)) xmym = concatenate((xm,ym),1) - assert_equal(numpy.concatenate((x,y),1), xmym) - assert_equal(numpy.concatenate((xm.mask,ym.mask),1), xmym._mask) + assert_equal(np.concatenate((x,y),1), xmym) + assert_equal(np.concatenate((xm.mask,ym.mask),1), xmym._mask) # x=zeros(2) y=array(ones(2),mask=[False,True]) @@ -394,16 +132,87 @@ assert_array_equal(z,[1,1,0,0]) assert_array_equal(z.mask,[False,True,False,False]) - #........................ + def test_creation_ndmin(self): + "Check the use of ndmin" + x = array([1,2,3],mask=[1,0,0], ndmin=2) + assert_equal(x.shape,(1,3)) + assert_equal(x._data,[[1,2,3]]) + assert_equal(x._mask,[[1,0,0]]) + + def test_creation_maskcreation(self): + "Tests how masks are initialized at the creation of Maskedarrays." + data = arange(24, dtype=float_) + data[[3,6,15]] = masked + dma_1 = MaskedArray(data) + assert_equal(dma_1.mask, data.mask) + dma_2 = MaskedArray(dma_1) + assert_equal(dma_2.mask, dma_1.mask) + dma_3 = MaskedArray(dma_1, mask=[1,0,0,0]*6) + fail_if_equal(dma_3.mask, dma_1.mask) + + def test_creation_with_list_of_maskedarrays(self): + "Tests creaating a masked array from alist of masked arrays." + x = array(np.arange(5), mask=[1,0,0,0,0]) + data = array((x,x[::-1])) + assert_equal(data, [[0,1,2,3,4],[4,3,2,1,0]]) + assert_equal(data._mask, [[1,0,0,0,0],[0,0,0,0,1]]) + # + x.mask = nomask + data = array((x,x[::-1])) + assert_equal(data, [[0,1,2,3,4],[4,3,2,1,0]]) + assert(data.mask is nomask) + + def test_asarray(self): + (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d + xm.fill_value = -9999 + xmm = asarray(xm) + assert_equal(xmm._data, xm._data) + assert_equal(xmm._mask, xm._mask) + assert_equal(xmm.fill_value, xm.fill_value) + + def test_fix_invalid(self): + "Checks fix_invalid." 
+ data = masked_array(np.sqrt([-1., 0., 1.]), mask=[0,0,1]) + data_fixed = fix_invalid(data) + assert_equal(data_fixed._data, [data.fill_value, 0., 1.]) + assert_equal(data_fixed._mask, [1., 0., 1.]) + + def test_maskedelement(self): + "Test of masked element" + x = arange(6) + x[1] = masked + assert(str(masked) == '--') + assert(x[1] is masked) + assert_equal(filled(x[1], 0), 0) + # don't know why these should raise an exception... + #self.failUnlessRaises(Exception, lambda x,y: x+y, masked, masked) + #self.failUnlessRaises(Exception, lambda x,y: x+y, masked, 2) + #self.failUnlessRaises(Exception, lambda x,y: x+y, masked, xx) + #self.failUnlessRaises(Exception, lambda x,y: x+y, xx, masked) + + def test_set_element_as_object(self): + """Tests setting elements with object""" + a = empty(1,dtype=object) + x = (1,2,3,4,5) + a[0] = x + assert_equal(a[0], x) + assert(a[0] is x) + # + import datetime + dt = datetime.datetime.now() + a[0] = dt + assert(a[0] is dt) + + def test_indexing(self): "Tests conversions and indexing" - x1 = numpy.array([1,2,4,3]) + x1 = np.array([1,2,4,3]) x2 = array(x1, mask=[1,0,0,0]) x3 = array(x1, mask=[0,1,0,1]) x4 = array(x1) # test conversion to strings junk, garbage = str(x2), repr(x2) - assert_equal(numpy.sort(x1),sort(x2,endwith=False)) + assert_equal(np.sort(x1),sort(x2,endwith=False)) # tests of indexing assert type(x2[1]) is type(x1[1]) assert x1[1] == x2[1] @@ -430,21 +239,22 @@ x4[:] = masked_array([1,2,3,4],[0,1,1,0]) assert allequal(getmask(x4), array([0,1,1,0])) assert allequal(x4, array([1,2,3,4])) - x1 = numpy.arange(5)*1.0 + x1 = np.arange(5)*1.0 x2 = masked_values(x1, 3.0) assert_equal(x1,x2) assert allequal(array([0,0,0,1,0],MaskType), x2.mask) #FIXME: Well, eh, fill_value is now a property assert_equal(3.0, x2.fill_value()) assert_equal(3.0, x2.fill_value) x1 = array([1,'hello',2,3],object) - x2 = numpy.array([1,'hello',2,3],object) + x2 = np.array([1,'hello',2,3],object) s1 = x1[1] s2 = x2[1] assert_equal(type(s2), str) assert_equal(type(s1), str) assert_equal(s1, s2) assert x1[1:1].shape == (0,) - #........................ + + def test_copy(self): "Tests of some subtle points of copying and sizing." n = [0,0,1,0,0] @@ -455,7 +265,7 @@ assert(m is not m3) warnings.simplefilter('ignore', DeprecationWarning) - x1 = numpy.arange(5) + x1 = np.arange(5) y1 = array(x1, mask=m) #assert( y1._data is x1) assert_equal(y1._data.__array_interface__, x1.__array_interface__) @@ -510,48 +320,56 @@ y = masked_array(x, copy=True) assert_not_equal(y._data.ctypes.data, x._data.ctypes.data) assert_not_equal(y._mask.ctypes.data, x._mask.ctypes.data) - #........................ 
- def test_where(self): - "Test the where function" - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - d = where(xm>2,xm,-9) - assert_equal(d, [-9.,-9.,-9.,-9., -9., 4., -9., -9., 10., -9., -9., 3.]) - assert_equal(d._mask, xm._mask) - d = where(xm>2,-9,ym) - assert_equal(d, [5.,0.,3., 2., -1.,-9.,-9., -10., -9., 1., 0., -9.]) - assert_equal(d._mask, [1,0,1,0,0,0,1,0,0,0,0,0]) - d = where(xm>2, xm, masked) - assert_equal(d, [-9.,-9.,-9.,-9., -9., 4., -9., -9., 10., -9., -9., 3.]) - tmp = xm._mask.copy() - tmp[(xm<=2).filled(True)] = True - assert_equal(d._mask, tmp) + + + def test_pickling(self): + "Tests pickling" + import cPickle + a = arange(10) + a[::3] = masked + a.fill_value = 999 + a_pickled = cPickle.loads(a.dumps()) + assert_equal(a_pickled._mask, a._mask) + assert_equal(a_pickled._data, a._data) + assert_equal(a_pickled.fill_value, 999) # - ixm = xm.astype(int_) - d = where(ixm>2, ixm, masked) - assert_equal(d, [-9,-9,-9,-9, -9, 4, -9, -9, 10, -9, -9, 3]) - assert_equal(d.dtype, ixm.dtype) + a = array(np.matrix(range(10)), mask=[1,0,1,0,0]*2) + a_pickled = cPickle.loads(a.dumps()) + assert_equal(a_pickled._mask, a._mask) + assert_equal(a_pickled, a) + assert(isinstance(a_pickled._data,np.matrix)) + + + def test_single_element_subscript(self): + "Tests single element subscripts of Maskedarrays." + a = array([1,3,2]) + b = array([1,3,2], mask=[1,0,1]) + assert_equal(a[0].shape, ()) + assert_equal(b[0].shape, ()) + assert_equal(b[1].shape, ()) + + + def test_topython(self): + "Tests some communication issues with Python." + assert_equal(1, int(array(1))) + assert_equal(1.0, float(array(1))) + assert_equal(1, int(array([[[1]]]))) + assert_equal(1.0, float(array([[1]]))) + self.assertRaises(TypeError, float, array([1,1])) # - x = arange(10) - x[3] = masked - c = x >= 8 - z = where(c , x, masked) - assert z.dtype is x.dtype - assert z[3] is masked - assert z[4] is masked - assert z[7] is masked - assert z[8] is not masked - assert z[9] is not masked - assert_equal(x,z) + warnings.simplefilter('ignore',UserWarning) + assert np.isnan(float(array([1],mask=[1]))) + warnings.simplefilter('default',UserWarning) # - z = where(c , masked, x) - assert z.dtype is x.dtype - assert z[3] is masked - assert z[4] is not masked - assert z[7] is not masked - assert z[8] is masked - assert z[9] is masked + a = array([1,2,3],mask=[1,0,0]) + self.assertRaises(TypeError, lambda:float(a)) + assert_equal(float(a[-1]), 3.) + assert(np.isnan(float(a[0]))) + self.assertRaises(TypeError, int, a) + assert_equal(int(a[-1]), 3) + self.assertRaises(MAError, lambda:int(a[0])) - #........................ + def test_oddfeatures_1(self): "Test of other odd features" x = arange(20) @@ -563,7 +381,7 @@ assert_equal(z.imag, 10*x) assert_equal((z*conjugate(z)).real, 101*x*x) z.imag[...] = 0.0 - + # x = arange(10) x[3] = masked assert str(x[3]) == str(masked) @@ -579,8 +397,8 @@ assert z[8] is masked assert z[9] is masked assert_equal(x,z) - # - #........................ + + def test_oddfeatures_2(self): "Tests some more features." 
x = array([1.,2.,3.,4.,5.]) @@ -595,22 +413,8 @@ assert z[1] is not masked assert z[2] is masked # - x = arange(6) - x[5] = masked - y = arange(6)*10 - y[2] = masked - c = array([1,1,1,0,0,0], mask=[1,0,0,0,0,0]) - cm = c.filled(1) - z = where(c,x,y) - zm = where(cm,x,y) - assert_equal(z, zm) - assert getmask(zm) is nomask - assert_equal(zm, [0,1,2,30,40,50]) - z = where(c, masked, 1) - assert_equal(z, [99,99,99,1,1,1]) - z = where(c, 1, masked) - assert_equal(z, [99, 1, 1, 99, 99, 99]) - #........................ + + def test_oddfeatures_3(self): """Tests some generic features.""" atest = array([10], mask=True) @@ -619,44 +423,251 @@ atest[idx] = btest[idx] assert_equal(atest,[20]) #........................ - def test_oddfeatures_4(self): - """Tests some generic features.""" - atest = ones((10,10,10), dtype=float_) - btest = zeros(atest.shape, MaskType) - ctest = masked_where(btest,atest) - assert_equal(atest,ctest) + +#------------------------------------------------------------------------------ + +class TestMaskedArrayArithmetic(TestCase): + "Base test class for MaskedArrays." + + def setUp (self): + "Base data definition." + x = np.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) + y = np.array([5.,0.,3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) + a10 = 10. + m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] + m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 ,0, 1] + xm = masked_array(x, mask=m1) + ym = masked_array(y, mask=m2) + z = np.array([-.5, 0., .5, .8]) + zm = masked_array(z, mask=[0,1,0,0]) + xf = np.where(m1, 1.e+20, x) + xm.set_fill_value(1.e+20) + self.d = (x, y, a10, m1, m2, xm, ym, z, zm, xf) + + + def test_basic_arithmetic (self): + "Test of basic arithmetic." + (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d + a2d = array([[1,2],[0,4]]) + a2dm = masked_array(a2d, [[0,0],[1,0]]) + assert_equal(a2d * a2d, a2d * a2dm) + assert_equal(a2d + a2d, a2d + a2dm) + assert_equal(a2d - a2d, a2d - a2dm) + for s in [(12,), (4,3), (2,6)]: + x = x.reshape(s) + y = y.reshape(s) + xm = xm.reshape(s) + ym = ym.reshape(s) + xf = xf.reshape(s) + assert_equal(-x, -xm) + assert_equal(x + y, xm + ym) + assert_equal(x - y, xm - ym) + assert_equal(x * y, xm * ym) + assert_equal(x / y, xm / ym) + assert_equal(a10 + y, a10 + ym) + assert_equal(a10 - y, a10 - ym) + assert_equal(a10 * y, a10 * ym) + assert_equal(a10 / y, a10 / ym) + assert_equal(x + a10, xm + a10) + assert_equal(x - a10, xm - a10) + assert_equal(x * a10, xm * a10) + assert_equal(x / a10, xm / a10) + assert_equal(x**2, xm**2) + assert_equal(abs(x)**2.5, abs(xm) **2.5) + assert_equal(x**y, xm**ym) + assert_equal(np.add(x,y), add(xm, ym)) + assert_equal(np.subtract(x,y), subtract(xm, ym)) + assert_equal(np.multiply(x,y), multiply(xm, ym)) + assert_equal(np.divide(x,y), divide(xm, ym)) + + def test_mixed_arithmetic(self): + "Tests mixed arithmetics." + na = np.array([1]) + ma = array([1]) + self.failUnless(isinstance(na + ma, MaskedArray)) + self.failUnless(isinstance(ma + na, MaskedArray)) + + + def test_limits_arithmetic(self): + tiny = np.finfo(float).tiny + a = array([tiny, 1./tiny, 0.]) + assert_equal(getmaskarray(a/2), [0,0,0]) + assert_equal(getmaskarray(2/a), [1,0,1]) + + def test_masked_singleton_arithmetic(self): + "Tests some scalar arithmetics on MaskedArrays." 
+ # Masked singleton should remain masked no matter what + xm = array(0, mask=1) + assert((1/array(0)).mask) + assert((1 + xm).mask) + assert((-xm).mask) + assert(maximum(xm, xm).mask) + assert(minimum(xm, xm).mask) + + def test_arithmetic_with_masked_singleton(self): + "Checks that there's no collapsing to masked" + x = masked_array([1,2]) + y = x * masked + assert_equal(y.shape, x.shape) + assert_equal(y._mask, [True, True]) + y = x[0] * masked + assert y is masked + y = x + masked + assert_equal(y.shape, x.shape) + assert_equal(y._mask, [True, True]) + + + + def test_scalar_arithmetic(self): + x = array(0, mask=0) + assert_equal(x.filled().ctypes.data, x.ctypes.data) + # Make sure we don't lose the shape in some circumstances + xm = array((0,0))/0. + assert_equal(xm.shape,(2,)) + assert_equal(xm.mask,[1,1]) + + def test_basic_ufuncs (self): + "Test various functions such as sin, cos." + (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d + assert_equal(np.cos(x), cos(xm)) + assert_equal(np.cosh(x), cosh(xm)) + assert_equal(np.sin(x), sin(xm)) + assert_equal(np.sinh(x), sinh(xm)) + assert_equal(np.tan(x), tan(xm)) + assert_equal(np.tanh(x), tanh(xm)) + assert_equal(np.sqrt(abs(x)), sqrt(xm)) + assert_equal(np.log(abs(x)), log(xm)) + assert_equal(np.log10(abs(x)), log10(xm)) + assert_equal(np.exp(x), exp(xm)) + assert_equal(np.arcsin(z), arcsin(zm)) + assert_equal(np.arccos(z), arccos(zm)) + assert_equal(np.arctan(z), arctan(zm)) + assert_equal(np.arctan2(x, y), arctan2(xm, ym)) + assert_equal(np.absolute(x), absolute(xm)) + assert_equal(np.equal(x,y), equal(xm, ym)) + assert_equal(np.not_equal(x,y), not_equal(xm, ym)) + assert_equal(np.less(x,y), less(xm, ym)) + assert_equal(np.greater(x,y), greater(xm, ym)) + assert_equal(np.less_equal(x,y), less_equal(xm, ym)) + assert_equal(np.greater_equal(x,y), greater_equal(xm, ym)) + assert_equal(np.conjugate(x), conjugate(xm)) + + + def test_count_func (self): + "Tests count" + ott = array([0.,1.,2.,3.], mask=[1,0,0,0]) + assert( isinstance(count(ott), int)) + assert_equal(3, count(ott)) + assert_equal(1, count(1)) + assert_equal(0, array(1,mask=[1])) + ott = ott.reshape((2,2)) + assert isinstance(count(ott,0), ndarray) + assert isinstance(count(ott), types.IntType) + assert_equal(3, count(ott)) + assert getmask(count(ott,0)) is nomask + assert_equal([1,2],count(ott,0)) + + def test_minmax_func (self): + "Tests minimum and maximum." 
+ (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d + xr = np.ravel(x) #max doesn't work if shaped + xmr = ravel(xm) + assert_equal(max(xr), maximum(xmr)) #true because of careful selection of data + assert_equal(min(xr), minimum(xmr)) #true because of careful selection of data + # + assert_equal(minimum([1,2,3],[4,0,9]), [1,0,3]) + assert_equal(maximum([1,2,3],[4,0,9]), [4,2,9]) + x = arange(5) + y = arange(5) - 2 + x[3] = masked + y[0] = masked + assert_equal(minimum(x,y), where(less(x,y), x, y)) + assert_equal(maximum(x,y), where(greater(x,y), x, y)) + assert minimum(x) == 0 + assert maximum(x) == 4 + # + x = arange(4).reshape(2,2) + x[-1,-1] = masked + assert_equal(maximum(x), 2) + + + def test_minmax_funcs_with_output(self): + "Tests the min/max functions with explicit outputs" + mask = np.random.rand(12).round() + xm = array(np.random.uniform(0,10,12),mask=mask) + xm.shape = (3,4) + for funcname in ('min', 'max'): + # Initialize + npfunc = getattr(np, funcname) + mafunc = getattr(numpy.ma.core, funcname) + # Use the np version + nout = np.empty((4,), dtype=int) + result = npfunc(xm,axis=0,out=nout) + assert(result is nout) + # Use the ma version + nout.fill(-999) + result = mafunc(xm,axis=0,out=nout) + assert(result is nout) + + + def test_minmax_methods(self): + "Additional tests on max/min" + (_, _, _, _, _, xm, _, _, _, _) = self.d + xm.shape = (xm.size,) + assert_equal(xm.max(), 10) + assert(xm[0].max() is masked) + assert(xm[0].max(0) is masked) + assert(xm[0].max(-1) is masked) + assert_equal(xm.min(), -10.) + assert(xm[0].min() is masked) + assert(xm[0].min(0) is masked) + assert(xm[0].min(-1) is masked) + assert_equal(xm.ptp(), 20.) + assert(xm[0].ptp() is masked) + assert(xm[0].ptp(0) is masked) + assert(xm[0].ptp(-1) is masked) + # + x = array([1,2,3], mask=True) + assert(x.min() is masked) + assert(x.max() is masked) + assert(x.ptp() is masked) #........................ - def test_maskingfunctions(self): - "Tests masking functions." - x = array([1.,2.,3.,4.,5.]) - x[2] = masked - assert_equal(masked_where(greater(x, 2), x), masked_greater(x,2)) - assert_equal(masked_where(greater_equal(x, 2), x), masked_greater_equal(x,2)) - assert_equal(masked_where(less(x, 2), x), masked_less(x,2)) - assert_equal(masked_where(less_equal(x, 2), x), masked_less_equal(x,2)) - assert_equal(masked_where(not_equal(x, 2), x), masked_not_equal(x,2)) - assert_equal(masked_where(equal(x, 2), x), masked_equal(x,2)) - assert_equal(masked_where(not_equal(x,2), x), masked_not_equal(x,2)) - assert_equal(masked_inside(range(5), 1, 3), [0, 199, 199, 199, 4]) - assert_equal(masked_outside(range(5), 1, 3),[199,1,2,3,199]) - assert_equal(masked_inside(array(range(5), mask=[1,0,0,0,0]), 1, 3).mask, [1,1,1,1,0]) - assert_equal(masked_outside(array(range(5), mask=[0,1,0,0,0]), 1, 3).mask, [1,1,0,0,1]) - assert_equal(masked_equal(array(range(5), mask=[1,0,0,0,0]), 2).mask, [1,0,1,0,0]) - assert_equal(masked_not_equal(array([2,2,1,2,1], mask=[1,0,0,0,0]), 2).mask, [1,0,1,0,1]) - assert_equal(masked_where([1,1,0,0,0], [1,2,3,4,5]), [99,99,3,4,5]) - #........................ + def test_addsumprod (self): + "Tests add, sum, product." 
+ (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d + assert_equal(np.add.reduce(x), add.reduce(x)) + assert_equal(np.add.accumulate(x), add.accumulate(x)) + assert_equal(4, sum(array(4),axis=0)) + assert_equal(4, sum(array(4), axis=0)) + assert_equal(np.sum(x,axis=0), sum(x,axis=0)) + assert_equal(np.sum(filled(xm,0),axis=0), sum(xm,axis=0)) + assert_equal(np.sum(x,0), sum(x,0)) + assert_equal(np.product(x,axis=0), product(x,axis=0)) + assert_equal(np.product(x,0), product(x,0)) + assert_equal(np.product(filled(xm,1),axis=0), product(xm,axis=0)) + s = (3,4) + x.shape = y.shape = xm.shape = ym.shape = s + if len(s) > 1: + assert_equal(np.concatenate((x,y),1), concatenate((xm,ym),1)) + assert_equal(np.add.reduce(x,1), add.reduce(x,1)) + assert_equal(np.sum(x,1), sum(x,1)) + assert_equal(np.product(x,1), product(x,1)) + + + + def test_TakeTransposeInnerOuter(self): "Test of take, transpose, inner, outer products" x = arange(24) - y = numpy.arange(24) + y = np.arange(24) x[5:6] = masked x = x.reshape(2,3,4) y = y.reshape(2,3,4) - assert_equal(numpy.transpose(y,(2,0,1)), transpose(x,(2,0,1))) - assert_equal(numpy.take(y, (2,0,1), 1), take(x, (2,0,1), 1)) - assert_equal(numpy.inner(filled(x,0),filled(y,0)), + assert_equal(np.transpose(y,(2,0,1)), transpose(x,(2,0,1))) + assert_equal(np.take(y, (2,0,1), 1), take(x, (2,0,1), 1)) + assert_equal(np.inner(filled(x,0),filled(y,0)), inner(x, y)) - assert_equal(numpy.outer(filled(x,0),filled(y,0)), + assert_equal(np.outer(filled(x,0),filled(y,0)), outer(x, y)) y = array(['abc', 1, 'def', 2, 3], object) y[2] = masked @@ -664,211 +675,256 @@ assert t[0] == 'abc' assert t[1] == 2 assert t[2] == 3 - #....................... - def test_maskedelement(self): - "Test of masked element" - x = arange(6) - x[1] = masked - assert(str(masked) == '--') - assert(x[1] is masked) - assert_equal(filled(x[1], 0), 0) - # don't know why these should raise an exception... - #self.failUnlessRaises(Exception, lambda x,y: x+y, masked, masked) - #self.failUnlessRaises(Exception, lambda x,y: x+y, masked, 2) - #self.failUnlessRaises(Exception, lambda x,y: x+y, masked, xx) - #self.failUnlessRaises(Exception, lambda x,y: x+y, xx, masked) - #........................ - def test_scalar(self): - "Checks masking a scalar" - x = masked_array(0) - assert_equal(str(x), '0') - x = masked_array(0,mask=True) - assert_equal(str(x), str(masked_print_option)) - x = masked_array(0, mask=False) - assert_equal(str(x), '0') - #........................ - def test_usingmasked(self): - "Checks that there's no collapsing to masked" - x = masked_array([1,2]) - y = x * masked - assert_equal(y.shape, x.shape) - assert_equal(y._mask, [True, True]) - y = x[0] * masked - assert y is masked - y = x + masked - assert_equal(y.shape, x.shape) - assert_equal(y._mask, [True, True]) - #........................ - def test_topython(self): - "Tests some communication issues with Python." 
- assert_equal(1, int(array(1))) - assert_equal(1.0, float(array(1))) - assert_equal(1, int(array([[[1]]]))) - assert_equal(1.0, float(array([[1]]))) - self.assertRaises(TypeError, float, array([1,1])) - warnings.simplefilter('ignore',UserWarning) - assert numpy.isnan(float(array([1],mask=[1]))) - warnings.simplefilter('default',UserWarning) + def test_imag_real(self): + "Check complex" + xx = array([1+10j,20+2j], mask=[1,0]) + assert_equal(xx.imag,[10,2]) + assert_equal(xx.imag.filled(), [1e+20,2]) + assert_equal(xx.imag.dtype, xx._data.imag.dtype) + assert_equal(xx.real,[1,20]) + assert_equal(xx.real.filled(), [1e+20,20]) + assert_equal(xx.real.dtype, xx._data.real.dtype) + + + def test_methods_with_output(self): + xm = array(np.random.uniform(0,10,12)).reshape(3,4) + xm[:,0] = xm[0] = xm[-1,-1] = masked # - a = array([1,2,3],mask=[1,0,0]) - self.assertRaises(TypeError, lambda:float(a)) - assert_equal(float(a[-1]), 3.) - assert(numpy.isnan(float(a[0]))) - self.assertRaises(TypeError, int, a) - assert_equal(int(a[-1]), 3) - self.assertRaises(MAError, lambda:int(a[0])) - #........................ - def test_arraymethods(self): - "Tests some MaskedArray methods." - a = array([1,3,2]) - b = array([1,3,2], mask=[1,0,1]) - assert_equal(a.any(), a.data.any()) - assert_equal(a.all(), a.data.all()) - assert_equal(a.argmax(), a.data.argmax()) - assert_equal(a.argmin(), a.data.argmin()) - assert_equal(a.choose(0,1,2,3,4), a.data.choose(0,1,2,3,4)) - assert_equal(a.compress([1,0,1]), a.data.compress([1,0,1])) - assert_equal(a.conj(), a.data.conj()) - assert_equal(a.conjugate(), a.data.conjugate()) + funclist = ('sum','prod','var','std', 'max', 'min', 'ptp', 'mean',) # - m = array([[1,2],[3,4]]) - assert_equal(m.diagonal(), m.data.diagonal()) - assert_equal(a.sum(), a.data.sum()) - assert_equal(a.take([1,2]), a.data.take([1,2])) - assert_equal(m.transpose(), m.data.transpose()) - #........................ - def test_basicattributes(self): - "Tests some basic array attributes." - a = array([1,3,2]) - b = array([1,3,2], mask=[1,0,1]) - assert_equal(a.ndim, 1) - assert_equal(b.ndim, 1) - assert_equal(a.size, 3) - assert_equal(b.size, 3) - assert_equal(a.shape, (3,)) - assert_equal(b.shape, (3,)) - #........................ - def test_single_element_subscript(self): - "Tests single element subscripts of Maskedarrays." - a = array([1,3,2]) - b = array([1,3,2], mask=[1,0,1]) - assert_equal(a[0].shape, ()) - assert_equal(b[0].shape, ()) - assert_equal(b[1].shape, ()) - #........................ - def test_maskcreation(self): - "Tests how masks are initialized at the creation of Maskedarrays." - data = arange(24, dtype=float_) - data[[3,6,15]] = masked - dma_1 = MaskedArray(data) - assert_equal(dma_1.mask, data.mask) - dma_2 = MaskedArray(dma_1) - assert_equal(dma_2.mask, dma_1.mask) - dma_3 = MaskedArray(dma_1, mask=[1,0,0,0]*6) - fail_if_equal(dma_3.mask, dma_1.mask) + for funcname in funclist: + npfunc = getattr(np, funcname) + xmmeth = getattr(xm, funcname) + + # A ndarray as explicit input + output = np.empty(4, dtype=float) + output.fill(-9999) + result = npfunc(xm, axis=0,out=output) + # ... 
the result should be the given output + assert(result is output) + assert_equal(result, xmmeth(axis=0, out=output)) + # + output = empty(4, dtype=int) + result = xmmeth(axis=0, out=output) + assert(result is output) + assert(output[0] is masked) - def test_pickling(self): - "Tests pickling" - import cPickle +#------------------------------------------------------------------------------ + +class TestMaskedArrayAttributes(TestCase): + + + def test_keepmask(self): + "Tests the keep mask flag" + x = masked_array([1,2,3], mask=[1,0,0]) + mx = masked_array(x) + assert_equal(mx.mask, x.mask) + mx = masked_array(x, mask=[0,1,0], keep_mask=False) + assert_equal(mx.mask, [0,1,0]) + mx = masked_array(x, mask=[0,1,0], keep_mask=True) + assert_equal(mx.mask, [1,1,0]) + # We default to true + mx = masked_array(x, mask=[0,1,0]) + assert_equal(mx.mask, [1,1,0]) + + def test_hardmask(self): + "Test hard_mask" + d = arange(5) + n = [0,0,0,1,1] + m = make_mask(n) + xh = array(d, mask = m, hard_mask=True) + # We need to copy, to avoid updating d in xh! + xs = array(d, mask = m, hard_mask=False, copy=True) + xh[[1,4]] = [10,40] + xs[[1,4]] = [10,40] + assert_equal(xh._data, [0,10,2,3,4]) + assert_equal(xs._data, [0,10,2,3,40]) + #assert_equal(xh.mask.ctypes.data, m.ctypes.data) + assert_equal(xs.mask, [0,0,0,1,0]) + assert(xh._hardmask) + assert(not xs._hardmask) + xh[1:4] = [10,20,30] + xs[1:4] = [10,20,30] + assert_equal(xh._data, [0,10,20,3,4]) + assert_equal(xs._data, [0,10,20,30,40]) + #assert_equal(xh.mask.ctypes.data, m.ctypes.data) + assert_equal(xs.mask, nomask) + xh[0] = masked + xs[0] = masked + assert_equal(xh.mask, [1,0,0,1,1]) + assert_equal(xs.mask, [1,0,0,0,0]) + xh[:] = 1 + xs[:] = 1 + assert_equal(xh._data, [0,1,1,3,4]) + assert_equal(xs._data, [1,1,1,1,1]) + assert_equal(xh.mask, [1,0,0,1,1]) + assert_equal(xs.mask, nomask) + # Switch to soft mask + xh.soften_mask() + xh[:] = arange(5) + assert_equal(xh._data, [0,1,2,3,4]) + assert_equal(xh.mask, nomask) + # Switch back to hard mask + xh.harden_mask() + xh[xh<3] = masked + assert_equal(xh._data, [0,1,2,3,4]) + assert_equal(xh._mask, [1,1,1,0,0]) + xh[filled(xh>1,False)] = 5 + assert_equal(xh._data, [0,1,2,5,5]) + assert_equal(xh._mask, [1,1,1,0,0]) + # + xh = array([[1,2],[3,4]], mask = [[1,0],[0,0]], hard_mask=True) + xh[0] = 0 + assert_equal(xh._data, [[1,0],[3,4]]) + assert_equal(xh._mask, [[1,0],[0,0]]) + xh[-1,-1] = 5 + assert_equal(xh._data, [[1,0],[3,5]]) + assert_equal(xh._mask, [[1,0],[0,0]]) + xh[filled(xh<5,False)] = 2 + assert_equal(xh._data, [[1,2],[2,5]]) + assert_equal(xh._mask, [[1,0],[0,0]]) + # + "Another test of hardmask" + d = arange(5) + n = [0,0,0,1,1] + m = make_mask(n) + xh = array(d, mask = m, hard_mask=True) + xh[4:5] = 999 + #assert_equal(xh.mask.ctypes.data, m.ctypes.data) + xh[0:1] = 999 + assert_equal(xh._data,[999,1,2,3,4]) + + def test_smallmask(self): + "Checks the behaviour of _smallmask" a = arange(10) - a[::3] = masked - a.fill_value = 999 - a_pickled = cPickle.loads(a.dumps()) - assert_equal(a_pickled._mask, a._mask) - assert_equal(a_pickled._data, a._data) - assert_equal(a_pickled.fill_value, 999) + a[1] = masked + a[1] = 1 + assert_equal(a._mask, nomask) + a = arange(10) + a._smallmask = False + a[1] = masked + a[1] = 1 + assert_equal(a._mask, zeros(10)) + + +#------------------------------------------------------------------------------ + +class TestFillingValues(TestCase): + # + def test_check_on_scalar(self): + "Test _check_fill_value" + _check_fill_value = np.ma.core._check_fill_value # - a = 
array(numpy.matrix(range(10)), mask=[1,0,1,0,0]*2) - a_pickled = cPickle.loads(a.dumps()) - assert_equal(a_pickled._mask, a._mask) - assert_equal(a_pickled, a) - assert(isinstance(a_pickled._data,numpy.matrix)) - # + fval = _check_fill_value(0,int) + assert_equal(fval, 0) + fval = _check_fill_value(None,int) + assert_equal(fval, default_fill_value(0)) + # + fval = _check_fill_value(0,"|S3") + assert_equal(fval, "0") + fval = _check_fill_value(None,"|S3") + assert_equal(fval, default_fill_value("|S3")) + # + fval = _check_fill_value(1e+20,int) + assert_equal(fval, default_fill_value(0)) + + + def test_check_on_fields(self): + "Tests _check_fill_value with records" + _check_fill_value = np.ma.core._check_fill_value + ndtype = [('a',int),('b',float),('c',"|S3")] + # A check on a list should return a single record + fval = _check_fill_value([-999,-999.9,"???"], ndtype) + assert(isinstance(fval,ndarray)) + assert_equal(fval.item(), [-999,-999.9,"???"]) + # A check on Non should output the defaults + fval = _check_fill_value(None, ndtype) + assert(isinstance(fval,ndarray)) + assert_equal(fval.item(), [default_fill_value(0), + default_fill_value(0.), + default_fill_value("0")]) + #.....Using a flexi-ndarray as fill_value should work + fill_val = np.array((-999,-999.9,"???"),dtype=ndtype) + fval = _check_fill_value(fill_val, ndtype) + assert(isinstance(fval,ndarray)) + assert_equal(fval.item(), [-999,-999.9,"???"]) + #.....Using a flexi-ndarray w/ a different type shouldn't matter + fill_val = np.array((-999,-999.9,"???"), + dtype=[("A",int),("B",float),("C","|S3")]) + fval = _check_fill_value(fill_val, ndtype) + assert(isinstance(fval,ndarray)) + assert_equal(fval.item(), [-999,-999.9,"???"]) + #.....Using an object-array shouldn't matter either + fill_value = np.array((-999,-999.9,"???"), dtype=object) + fval = _check_fill_value(fill_val, ndtype) + assert(isinstance(fval,ndarray)) + assert_equal(fval.item(), [-999,-999.9,"???"]) + # + fill_value = np.array((-999,-999.9,"???")) + fval = _check_fill_value(fill_val, ndtype) + assert(isinstance(fval,ndarray)) + assert_equal(fval.item(), [-999,-999.9,"???"]) + #.....One-field-only flexi-ndarray should work as well + ndtype = [("a",int)] + fval = _check_fill_value(-999, ndtype) + assert(isinstance(fval,ndarray)) + assert_equal(fval.item(), (-999,)) + + + def test_fillvalue_conversion(self): + "Tests the behavior of fill_value during conversion" + # We had a tailored comment to make sure special attributes are properly + # dealt with + a = array(['3', '4', '5']) + a._basedict.update({'comment':"updated!"}) + # + b = array(a, dtype=int) + assert_equal(b._data, [3,4,5]) + assert_equal(b.fill_value, default_fill_value(0)) + # + b = array(a, dtype=float) + assert_equal(b._data, [3,4,5]) + assert_equal(b.fill_value, default_fill_value(0.)) + # + b = a.astype(int) + assert_equal(b._data, [3,4,5]) + assert_equal(b.fill_value, default_fill_value(0)) + assert_equal(b._basedict['comment'], "updated!") + # + b = a.astype([('a','|S3')]) + assert_equal(b['a']._data, a._data) + assert_equal(b['a'].fill_value, a.fill_value) + + def test_fillvalue(self): - "Having fun with the fill_value" + "Yet more fun with the fill_value" data = masked_array([1,2,3],fill_value=-999) series = data[[0,2,1]] assert_equal(series._fill_value, data._fill_value) # mtype = [('f',float_),('s','|S3')] - x = array([(1,'a'),(2,'b'),(numpy.pi,'pi')], dtype=mtype) + x = array([(1,'a'),(2,'b'),(pi,'pi')], dtype=mtype) x.fill_value=999 - assert_equal(x.fill_value,[999.,'999']) + 
assert_equal(x.fill_value.item(),[999.,'999']) assert_equal(x['f'].fill_value, 999) assert_equal(x['s'].fill_value, '999') # x.fill_value=(9,'???') - assert_equal(x.fill_value, (9,'???')) + assert_equal(x.fill_value.item(), (9,'???')) assert_equal(x['f'].fill_value, 9) assert_equal(x['s'].fill_value, '???') # x = array([1,2,3.1]) x.fill_value = 999 - assert_equal(numpy.asarray(x.fill_value).dtype, float_) + assert_equal(np.asarray(x.fill_value).dtype, float_) assert_equal(x.fill_value, 999.) - # - def test_asarray(self): - (x, y, a10, m1, m2, xm, ym, z, zm, xf) = self.d - xmm = asarray(xm) - assert_equal(xmm._data, xm._data) - assert_equal(xmm._mask, xm._mask) - # - def test_fix_invalid(self): - "Checks fix_invalid." - data = masked_array(numpy.sqrt([-1., 0., 1.]), mask=[0,0,1]) - data_fixed = fix_invalid(data) - assert_equal(data_fixed._data, [data.fill_value, 0., 1.]) - assert_equal(data_fixed._mask, [1., 0., 1.]) - # - def test_imag_real(self): - "Check complex" - xx = array([1+10j,20+2j], mask=[1,0]) - assert_equal(xx.imag,[10,2]) - assert_equal(xx.imag.filled(), [1e+20,2]) - assert_equal(xx.imag.dtype, xx._data.imag.dtype) - assert_equal(xx.real,[1,20]) - assert_equal(xx.real.filled(), [1e+20,20]) - assert_equal(xx.real.dtype, xx._data.real.dtype) - # - def test_ndmin(self): - "Check the use of ndmin" - x = array([1,2,3],mask=[1,0,0], ndmin=2) - assert_equal(x.shape,(1,3)) - assert_equal(x._data,[[1,2,3]]) - assert_equal(x._mask,[[1,0,0]]) - # - def test_record(self): - "Check record access" - mtype = [('f',float_),('s','|S3')] - x = array([(1,'a'),(2,'b'),(numpy.pi,'pi')], dtype=mtype) - x[1] = masked - # - (xf, xs) = (x['f'], x['s']) - assert_equal(xf.data, [1,2,numpy.pi]) - assert_equal(xf.mask, [0,1,0]) - assert_equal(xf.dtype, float_) - assert_equal(xs.data, ['a', 'b', 'pi']) - assert_equal(xs.mask, [0,1,0]) - assert_equal(xs.dtype, '|S3') - # - def test_set_records(self): - "Check setting an element of a record)" - mtype = [('f',float_),('s','|S3')] - x = array([(1,'a'),(2,'b'),(numpy.pi,'pi')], dtype=mtype) - x[0] = (10,'A') - (xf, xs) = (x['f'], x['s']) - assert_equal(xf.data, [10,2,numpy.pi]) - assert_equal(xf.dtype, float_) - assert_equal(xs.data, ['A', 'b', 'pi']) - assert_equal(xs.dtype, '|S3') +#------------------------------------------------------------------------------ -#............................................................................... - -class TestUfuncs(NumpyTestCase): +class TestUfuncs(TestCase): "Test class for the application of ufuncs on MaskedArrays." def setUp(self): "Base data definition." @@ -896,12 +952,11 @@ 'less', 'greater', 'logical_and', 'logical_or', 'logical_xor', ]: - #print f try: uf = getattr(umath, f) except AttributeError: uf = getattr(fromnumeric, f) - mf = getattr(coremodule, f) + mf = getattr(numpy.ma.core, f) args = self.d[:uf.nin] ur = uf(*args) mr = mf(*args) @@ -928,13 +983,146 @@ assert(amask.max(1)[0].mask) assert(amask.min(1)[0].mask) -#............................................................................... 
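These assertions read the fill_value of a record-dtype masked array back through ``.item()``, since it now behaves as a single void record rather than a tuple, while the per-field views still expose plain scalar fill values. A hedged sketch reusing the dtype from the tests above, assuming ``import numpy.ma as ma``::

    import numpy.ma as ma

    mtype = [('f', float), ('s', '|S3')]
    x = ma.array([(1, 'a'), (2, 'b')], dtype=mtype)
    x.fill_value = (9, '???')
    x.fill_value.item()   # (9.0, '???'): one void record, not a tuple
    x['f'].fill_value     # 9.0
    x['s'].fill_value     # '???'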
-class TestArrayMethods(NumpyTestCase): +#------------------------------------------------------------------------------ + +class TestMaskedArrayInPlaceArithmetics(TestCase): + "Test MaskedArray Arithmetics" + + def setUp(self): + x = arange(10) + y = arange(10) + xm = arange(10) + xm[2] = masked + self.intdata = (x, y, xm) + self.floatdata = (x.astype(float), y.astype(float), xm.astype(float)) + + def test_inplace_addition_scalar(self): + """Test of inplace additions""" + (x, y, xm) = self.intdata + xm[2] = masked + x += 1 + assert_equal(x, y+1) + xm += 1 + assert_equal(xm, y+1) + # + warnings.simplefilter('ignore', DeprecationWarning) + (x, _, xm) = self.floatdata + id1 = x.raw_data().ctypes.data + x += 1. + assert (id1 == x.raw_data().ctypes.data) + assert_equal(x, y+1.) + warnings.simplefilter('default', DeprecationWarning) + + def test_inplace_addition_array(self): + """Test of inplace additions""" + (x, y, xm) = self.intdata + m = xm.mask + a = arange(10, dtype=float) + a[-1] = masked + x += a + xm += a + assert_equal(x,y+a) + assert_equal(xm,y+a) + assert_equal(xm.mask, mask_or(m,a.mask)) + + def test_inplace_subtraction_scalar(self): + """Test of inplace subtractions""" + (x, y, xm) = self.intdata + x -= 1 + assert_equal(x, y-1) + xm -= 1 + assert_equal(xm, y-1) + + def test_inplace_subtraction_array(self): + """Test of inplace subtractions""" + (x, y, xm) = self.floatdata + m = xm.mask + a = arange(10, dtype=float_) + a[-1] = masked + x -= a + xm -= a + assert_equal(x,y-a) + assert_equal(xm,y-a) + assert_equal(xm.mask, mask_or(m,a.mask)) + + def test_inplace_multiplication_scalar(self): + """Test of inplace multiplication""" + (x, y, xm) = self.floatdata + x *= 2.0 + assert_equal(x, y*2) + xm *= 2.0 + assert_equal(xm, y*2) + + def test_inplace_multiplication_array(self): + """Test of inplace multiplication""" + (x, y, xm) = self.floatdata + m = xm.mask + a = arange(10, dtype=float_) + a[-1] = masked + x *= a + xm *= a + assert_equal(x,y*a) + assert_equal(xm,y*a) + assert_equal(xm.mask, mask_or(m,a.mask)) + + def test_inplace_division_scalar_int(self): + """Test of inplace division""" + (x, y, xm) = self.intdata + x = arange(10)*2 + xm = arange(10)*2 + xm[2] = masked + x /= 2 + assert_equal(x, y) + xm /= 2 + assert_equal(xm, y) + + def test_inplace_division_scalar_float(self): + """Test of inplace division""" + (x, y, xm) = self.floatdata + x /= 2.0 + assert_equal(x, y/2.0) + xm /= arange(10) + assert_equal(xm, ones((10,))) + + def test_inplace_division_array_float(self): + """Test of inplace division""" + (x, y, xm) = self.floatdata + m = xm.mask + a = arange(10, dtype=float_) + a[-1] = masked + x /= a + xm /= a + assert_equal(x,y/a) + assert_equal(xm,y/a) + assert_equal(xm.mask, mask_or(mask_or(m,a.mask), (a==0))) + + def test_inplace_division_misc(self): + # + x = np.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) + y = np.array([5.,0.,3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) + m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] + m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 ,0, 1] + xm = masked_array(x, mask=m1) + ym = masked_array(y, mask=m2) + # + z = xm/ym + assert_equal(z._mask, [1,1,1,0,0,1,1,0,0,0,1,1]) + assert_equal(z._data, [0.2,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.]) + # + xm = xm.copy() + xm /= ym + assert_equal(xm._mask, [1,1,1,0,0,1,1,0,0,0,1,1]) + assert_equal(xm._data, [1/5.,1.,1./3.,-1.,-pi/2.,-1.,5.,1.,1.,1.,2.,1.]) + + +#------------------------------------------------------------------------------ + +class TestMaskedArrayMethods(TestCase): "Test class for 
miscellaneous MaskedArrays methods." def setUp(self): "Base data definition." - x = numpy.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, + x = np.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, 8.43 , 7.78 , 9.865, 5.878, 8.979, 4.732, 3.012, 6.022, 5.095, 3.116, 5.238, 3.957, 6.04 , 9.63 , 7.712, 3.382, 4.489, 6.479, @@ -943,7 +1131,7 @@ X = x.reshape(6,6) XX = x.reshape(3,2,2,3) - m = numpy.array([0, 1, 0, 1, 0, 0, + m = np.array([0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, @@ -953,7 +1141,7 @@ mX = array(data=X,mask=m.reshape(X.shape)) mXX = array(data=XX,mask=m.reshape(XX.shape)) - m2 = numpy.array([1, 1, 0, 1, 0, 0, + m2 = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, @@ -964,78 +1152,103 @@ m2XX = array(data=XX,mask=m2.reshape(XX.shape)) self.d = (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) - #------------------------------------------------------ - def test_trace(self): - "Tests trace on MaskedArrays." - (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d - mXdiag = mX.diagonal() - assert_equal(mX.trace(), mX.diagonal().compressed().sum()) - assert_almost_equal(mX.trace(), - X.trace() - sum(mXdiag.mask*X.diagonal(),axis=0)) + def test_generic_methods(self): + "Tests some MaskedArray methods." + a = array([1,3,2]) + b = array([1,3,2], mask=[1,0,1]) + assert_equal(a.any(), a.data.any()) + assert_equal(a.all(), a.data.all()) + assert_equal(a.argmax(), a.data.argmax()) + assert_equal(a.argmin(), a.data.argmin()) + assert_equal(a.choose(0,1,2,3,4), a.data.choose(0,1,2,3,4)) + assert_equal(a.compress([1,0,1]), a.data.compress([1,0,1])) + assert_equal(a.conj(), a.data.conj()) + assert_equal(a.conjugate(), a.data.conjugate()) + # + m = array([[1,2],[3,4]]) + assert_equal(m.diagonal(), m.data.diagonal()) + assert_equal(a.sum(), a.data.sum()) + assert_equal(a.take([1,2]), a.data.take([1,2])) + assert_equal(m.transpose(), m.data.transpose()) - def test_clip(self): - "Tests clip on MaskedArrays." - (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d - clipped = mx.clip(2,8) - assert_equal(clipped.mask,mx.mask) - assert_equal(clipped.data,x.clip(2,8)) - assert_equal(clipped.data,mx.data.clip(2,8)) - def test_ptp(self): - "Tests ptp on MaskedArrays." 
- (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d - (n,m) = X.shape - assert_equal(mx.ptp(),mx.compressed().ptp()) - rows = numpy.zeros(n,numpy.float_) - cols = numpy.zeros(m,numpy.float_) - for k in range(m): - cols[k] = mX[:,k].compressed().ptp() - for k in range(n): - rows[k] = mX[k].compressed().ptp() - assert_equal(mX.ptp(0),cols) - assert_equal(mX.ptp(1),rows) + def test_allany(self): + """Checks the any/all methods/functions.""" + x = np.array([[ 0.13, 0.26, 0.90], + [ 0.28, 0.33, 0.63], + [ 0.31, 0.87, 0.70]]) + m = np.array([[ True, False, False], + [False, False, False], + [True, True, False]], dtype=np.bool_) + mx = masked_array(x, mask=m) + xbig = np.array([[False, False, True], + [False, False, True], + [False, True, True]], dtype=np.bool_) + mxbig = (mx > 0.5) + mxsmall = (mx < 0.5) + # + assert (mxbig.all()==False) + assert (mxbig.any()==True) + assert_equal(mxbig.all(0),[False, False, True]) + assert_equal(mxbig.all(1), [False, False, True]) + assert_equal(mxbig.any(0),[False, False, True]) + assert_equal(mxbig.any(1), [True, True, True]) + # + assert (mxsmall.all()==False) + assert (mxsmall.any()==True) + assert_equal(mxsmall.all(0), [True, True, False]) + assert_equal(mxsmall.all(1), [False, False, False]) + assert_equal(mxsmall.any(0), [True, True, False]) + assert_equal(mxsmall.any(1), [True, True, False]) - def test_swapaxes(self): - "Tests swapaxes on MaskedArrays." - (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d - mXswapped = mX.swapaxes(0,1) - assert_equal(mXswapped[-1],mX[:,-1]) - mXXswapped = mXX.swapaxes(0,2) - assert_equal(mXXswapped.shape,(2,2,3,3)) - def test_cumsumprod(self): - "Tests cumsum & cumprod on MaskedArrays." - (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d - mXcp = mX.cumsum(0) - assert_equal(mXcp.data,mX.filled(0).cumsum(0)) - mXcp = mX.cumsum(1) - assert_equal(mXcp.data,mX.filled(0).cumsum(1)) + def test_allany_onmatrices(self): + x = np.array([[ 0.13, 0.26, 0.90], + [ 0.28, 0.33, 0.63], + [ 0.31, 0.87, 0.70]]) + X = np.matrix(x) + m = np.array([[ True, False, False], + [False, False, False], + [True, True, False]], dtype=np.bool_) + mX = masked_array(X, mask=m) + mXbig = (mX > 0.5) + mXsmall = (mX < 0.5) # - mXcp = mX.cumprod(0) - assert_equal(mXcp.data,mX.filled(1).cumprod(0)) - mXcp = mX.cumprod(1) - assert_equal(mXcp.data,mX.filled(1).cumprod(1)) + assert (mXbig.all()==False) + assert (mXbig.any()==True) + assert_equal(mXbig.all(0), np.matrix([False, False, True])) + assert_equal(mXbig.all(1), np.matrix([False, False, True]).T) + assert_equal(mXbig.any(0), np.matrix([False, False, True])) + assert_equal(mXbig.any(1), np.matrix([ True, True, True]).T) + # + assert (mXsmall.all()==False) + assert (mXsmall.any()==True) + assert_equal(mXsmall.all(0), np.matrix([True, True, False])) + assert_equal(mXsmall.all(1), np.matrix([False, False, False]).T) + assert_equal(mXsmall.any(0), np.matrix([True, True, False])) + assert_equal(mXsmall.any(1), np.matrix([True, True, False]).T) - def test_varstd(self): - "Tests var & std on MaskedArrays." 
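The any/all checks above depend on masked entries being ignored by the logical reductions, so a row or column whose only failing value is masked still reports ``all() == True``, while a fully masked array reduces to the ``masked`` singleton. A small sketch (values illustrative), assuming ``import numpy.ma as ma``::

    import numpy.ma as ma

    mx = ma.array([0.9, 0.2, 0.8], mask=[0, 1, 0])
    (mx > 0.5).all()    # True: the masked 0.2 does not count against it
    (mx > 0.5).any()    # True

    ma.array([1, 2, 3], mask=True).all() is ma.masked   # fully masked -> `masked`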
- (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d - assert_almost_equal(mX.var(axis=None),mX.compressed().var()) - assert_almost_equal(mX.std(axis=None),mX.compressed().std()) - assert_almost_equal(mX.std(axis=None,ddof=1), - mX.compressed().std(ddof=1)) - assert_almost_equal(mX.var(axis=None,ddof=1), - mX.compressed().var(ddof=1)) - assert_equal(mXX.var(axis=3).shape,XX.var(axis=3).shape) - assert_equal(mX.var().shape,X.var().shape) - (mXvar0,mXvar1) = (mX.var(axis=0), mX.var(axis=1)) - assert_almost_equal(mX.var(axis=None,ddof=2),mX.compressed().var(ddof=2)) - assert_almost_equal(mX.std(axis=None,ddof=2),mX.compressed().std(ddof=2)) - for k in range(6): - assert_almost_equal(mXvar1[k],mX[k].compressed().var()) - assert_almost_equal(mXvar0[k],mX[:,k].compressed().var()) - assert_almost_equal(numpy.sqrt(mXvar0[k]), mX[:,k].compressed().std()) - def test_argmin(self): + def test_allany_oddities(self): + "Some fun with all and any" + store = empty(1, dtype=bool) + full = array([1,2,3], mask=True) + # + assert(full.all() is masked) + full.all(out=store) + assert(store) + assert(store._mask, True) + assert(store is not masked) + # + store = empty(1, dtype=bool) + assert(full.any() is masked) + full.any(out=store) + assert(not store) + assert(store._mask, True) + assert(store is not masked) + + + def test_argmax_argmin(self): "Tests argmin & argmax on MaskedArrays." (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d # @@ -1058,6 +1271,90 @@ assert_equal(mX.argmax(1), [2,4,1,1,4,1]) assert_equal(m2X.argmax(1), [2,4,1,1,1,1]) + + def test_clip(self): + "Tests clip on MaskedArrays." + x = np.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, + 8.43 , 7.78 , 9.865, 5.878, 8.979, 4.732, + 3.012, 6.022, 5.095, 3.116, 5.238, 3.957, + 6.04 , 9.63 , 7.712, 3.382, 4.489, 6.479, + 7.189, 9.645, 5.395, 4.961, 9.894, 2.893, + 7.357, 9.828, 6.272, 3.758, 6.693, 0.993]) + m = np.array([0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, + 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, + 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0]) + mx = array(x,mask=m) + clipped = mx.clip(2,8) + assert_equal(clipped.mask,mx.mask) + assert_equal(clipped.data,x.clip(2,8)) + assert_equal(clipped.data,mx.data.clip(2,8)) + + + def test_compress(self): + "test compress" + a = masked_array([1., 2., 3., 4., 5.], fill_value=9999) + condition = (a > 1.5) & (a < 3.5) + assert_equal(a.compress(condition),[2.,3.]) + # + a[[2,3]] = masked + b = a.compress(condition) + assert_equal(b._data,[2.,3.]) + assert_equal(b._mask,[0,1]) + assert_equal(b.fill_value,9999) + assert_equal(b,a[condition]) + # + condition = (a<4.) 
+ b = a.compress(condition) + assert_equal(b._data,[1.,2.,3.]) + assert_equal(b._mask,[0,0,1]) + assert_equal(b.fill_value,9999) + assert_equal(b,a[condition]) + # + a = masked_array([[10,20,30],[40,50,60]], mask=[[0,0,1],[1,0,0]]) + b = a.compress(a.ravel() >= 22) + assert_equal(b._data, [30, 40, 50, 60]) + assert_equal(b._mask, [1,1,0,0]) + # + x = np.array([3,1,2]) + b = a.compress(x >= 2, axis=1) + assert_equal(b._data, [[10,30],[40,60]]) + assert_equal(b._mask, [[0,1],[1,0]]) + + + def test_compressed(self): + "Tests compressed" + a = array([1,2,3,4],mask=[0,0,0,0]) + b = a.compressed() + assert_equal(b, a) + a[0] = masked + b = a.compressed() + assert_equal(b, [2,3,4]) + # + a = array(np.matrix([1,2,3,4]), mask=[0,0,0,0]) + b = a.compressed() + assert_equal(b,a) + assert(isinstance(b,np.matrix)) + a[0,0] = masked + b = a.compressed() + assert_equal(b, [[2,3,4]]) + + + def test_empty(self): + "Tests empty/like" + datatype = [('a',int_),('b',float_),('c','|S8')] + a = masked_array([(1,1.1,'1.1'),(2,2.2,'2.2'),(3,3.3,'3.3')], + dtype=datatype) + assert_equal(len(a.fill_value.item()), len(datatype)) + # + b = empty_like(a) + assert_equal(b.shape, a.shape) + assert_equal(b.fill_value, a.fill_value) + # + b = empty(len(a), dtype=datatype) + assert_equal(b.shape, a.shape) + assert_equal(b.fill_value, a.fill_value) + + def test_put(self): "Tests put." d = arange(5) @@ -1089,6 +1386,7 @@ assert_array_equal(x, [0,1,2,3,4,5,6,7,8,9,]) assert_equal(x.mask, [1,0,0,0,1,1,0,0,0,0]) + def test_put_hardmask(self): "Tests put on hardmask" d = arange(5) @@ -1098,164 +1396,76 @@ xh.put([4,2,0,1,3],[1,2,3,4,5]) assert_equal(xh._data, [3,4,2,4,5]) - def test_take(self): - "Tests take" - x = masked_array([10,20,30,40],[0,1,0,1]) - assert_equal(x.take([0,0,3]), masked_array([10, 10, 40], [0,0,1]) ) - assert_equal(x.take([0,0,3]), x[[0,0,3]]) - assert_equal(x.take([[0,1],[0,1]]), - masked_array([[10,20],[10,20]], [[0,1],[0,1]]) ) - # - x = array([[10,20,30],[40,50,60]], mask=[[0,0,1],[1,0,0,]]) - assert_equal(x.take([0,2], axis=1), - array([[10,30],[40,60]], mask=[[0,1],[1,0]])) - assert_equal(take(x, [0,2], axis=1), - array([[10,30],[40,60]], mask=[[0,1],[1,0]])) - #........................ 
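The put tests above distinguish soft and hard masks: with a hard mask, assignments that target masked slots are silently dropped. A sketch mirroring the ``test_put_hardmask`` assertion (the array construction is assumed here), with ``import numpy.ma as ma``::

    import numpy.ma as ma

    xh = ma.array([1, 2, 3, 4, 5], mask=[0, 0, 0, 1, 1], hard_mask=True)
    xh.put([4, 2, 0, 1, 3], [1, 2, 3, 4, 5])
    xh.data   # [3, 4, 2, 4, 5]: the two masked slots keep their old values
    xh.mask   # [False, False, False, True, True]: and they stay masked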
- def test_anyall(self): - """Checks the any/all methods/functions.""" - x = numpy.array([[ 0.13, 0.26, 0.90], - [ 0.28, 0.33, 0.63], - [ 0.31, 0.87, 0.70]]) - m = numpy.array([[ True, False, False], - [False, False, False], - [True, True, False]], dtype=numpy.bool_) - mx = masked_array(x, mask=m) - xbig = numpy.array([[False, False, True], - [False, False, True], - [False, True, True]], dtype=numpy.bool_) - mxbig = (mx > 0.5) - mxsmall = (mx < 0.5) - # - assert (mxbig.all()==False) - assert (mxbig.any()==True) - assert_equal(mxbig.all(0),[False, False, True]) - assert_equal(mxbig.all(1), [False, False, True]) - assert_equal(mxbig.any(0),[False, False, True]) - assert_equal(mxbig.any(1), [True, True, True]) - # - assert (mxsmall.all()==False) - assert (mxsmall.any()==True) - assert_equal(mxsmall.all(0), [True, True, False]) - assert_equal(mxsmall.all(1), [False, False, False]) - assert_equal(mxsmall.any(0), [True, True, False]) - assert_equal(mxsmall.any(1), [True, True, False]) - # - X = numpy.matrix(x) - mX = masked_array(X, mask=m) - mXbig = (mX > 0.5) - mXsmall = (mX < 0.5) - # - assert (mXbig.all()==False) - assert (mXbig.any()==True) - assert_equal(mXbig.all(0), numpy.matrix([False, False, True])) - assert_equal(mXbig.all(1), numpy.matrix([False, False, True]).T) - assert_equal(mXbig.any(0), numpy.matrix([False, False, True])) - assert_equal(mXbig.any(1), numpy.matrix([ True, True, True]).T) - # - assert (mXsmall.all()==False) - assert (mXsmall.any()==True) - assert_equal(mXsmall.all(0), numpy.matrix([True, True, False])) - assert_equal(mXsmall.all(1), numpy.matrix([False, False, False]).T) - assert_equal(mXsmall.any(0), numpy.matrix([True, True, False])) - assert_equal(mXsmall.any(1), numpy.matrix([True, True, False]).T) - def test_keepmask(self): - "Tests the keep mask flag" - x = masked_array([1,2,3], mask=[1,0,0]) - mx = masked_array(x) - assert_equal(mx.mask, x.mask) - mx = masked_array(x, mask=[0,1,0], keep_mask=False) - assert_equal(mx.mask, [0,1,0]) - mx = masked_array(x, mask=[0,1,0], keep_mask=True) - assert_equal(mx.mask, [1,1,0]) - # We default to true - mx = masked_array(x, mask=[0,1,0]) - assert_equal(mx.mask, [1,1,0]) + def test_putmask(self): + x = arange(6)+1 + mx = array(x, mask=[0,0,0,1,1,1]) + mask = [0,0,1,0,0,1] + # w/o mask, w/o masked values + xx = x.copy() + putmask(xx, mask, 99) + assert_equal(xx, [1,2,99,4,5,99]) + # w/ mask, w/o masked values + mxx = mx.copy() + putmask(mxx, mask, 99) + assert_equal(mxx._data, [1,2,99,4,5,99]) + assert_equal(mxx._mask, [0,0,0,1,1,0]) + # w/o mask, w/ masked values + values = array([10,20,30,40,50,60],mask=[1,1,1,0,0,0]) + xx = x.copy() + putmask(xx, mask, values) + assert_equal(xx._data, [1,2,30,4,5,60]) + assert_equal(xx._mask, [0,0,1,0,0,0]) + # w/ mask, w/ masked values + mxx = mx.copy() + putmask(mxx, mask, values) + assert_equal(mxx._data, [1,2,30,4,5,60]) + assert_equal(mxx._mask, [0,0,1,1,1,0]) + # w/ mask, w/ masked values + hardmask + mxx = mx.copy() + mxx.harden_mask() + putmask(mxx, mask, values) + assert_equal(mxx, [1,2,30,4,5,60]) - def test_hardmask(self): - "Test hard_mask" - d = arange(5) - n = [0,0,0,1,1] - m = make_mask(n) - xh = array(d, mask = m, hard_mask=True) - # We need to copy, to avoid updating d in xh! 
- xs = array(d, mask = m, hard_mask=False, copy=True) - xh[[1,4]] = [10,40] - xs[[1,4]] = [10,40] - assert_equal(xh._data, [0,10,2,3,4]) - assert_equal(xs._data, [0,10,2,3,40]) - #assert_equal(xh.mask.ctypes.data, m.ctypes.data) - assert_equal(xs.mask, [0,0,0,1,0]) - assert(xh._hardmask) - assert(not xs._hardmask) - xh[1:4] = [10,20,30] - xs[1:4] = [10,20,30] - assert_equal(xh._data, [0,10,20,3,4]) - assert_equal(xs._data, [0,10,20,30,40]) - #assert_equal(xh.mask.ctypes.data, m.ctypes.data) - assert_equal(xs.mask, nomask) - xh[0] = masked - xs[0] = masked - assert_equal(xh.mask, [1,0,0,1,1]) - assert_equal(xs.mask, [1,0,0,0,0]) - xh[:] = 1 - xs[:] = 1 - assert_equal(xh._data, [0,1,1,3,4]) - assert_equal(xs._data, [1,1,1,1,1]) - assert_equal(xh.mask, [1,0,0,1,1]) - assert_equal(xs.mask, nomask) - # Switch to soft mask - xh.soften_mask() - xh[:] = arange(5) - assert_equal(xh._data, [0,1,2,3,4]) - assert_equal(xh.mask, nomask) - # Switch back to hard mask - xh.harden_mask() - xh[xh<3] = masked - assert_equal(xh._data, [0,1,2,3,4]) - assert_equal(xh._mask, [1,1,1,0,0]) - xh[filled(xh>1,False)] = 5 - assert_equal(xh._data, [0,1,2,5,5]) - assert_equal(xh._mask, [1,1,1,0,0]) - # - xh = array([[1,2],[3,4]], mask = [[1,0],[0,0]], hard_mask=True) - xh[0] = 0 - assert_equal(xh._data, [[1,0],[3,4]]) - assert_equal(xh._mask, [[1,0],[0,0]]) - xh[-1,-1] = 5 - assert_equal(xh._data, [[1,0],[3,5]]) - assert_equal(xh._mask, [[1,0],[0,0]]) - xh[filled(xh<5,False)] = 2 - assert_equal(xh._data, [[1,2],[2,5]]) - assert_equal(xh._mask, [[1,0],[0,0]]) - # - "Another test of hardmask" - d = arange(5) - n = [0,0,0,1,1] - m = make_mask(n) - xh = array(d, mask = m, hard_mask=True) - xh[4:5] = 999 - #assert_equal(xh.mask.ctypes.data, m.ctypes.data) - xh[0:1] = 999 - assert_equal(xh._data,[999,1,2,3,4]) - def test_smallmask(self): - "Checks the behaviour of _smallmask" - a = arange(10) - a[1] = masked - a[1] = 1 - assert_equal(a._mask, nomask) - a = arange(10) - a._smallmask = False - a[1] = masked - a[1] = 1 - assert_equal(a._mask, zeros(10)) + def test_ravel(self): + "Tests ravel" + a = array([[1,2,3,4,5]], mask=[[0,1,0,0,0]]) + aravel = a.ravel() + assert_equal(a._mask.shape, a.shape) + a = array([0,0], mask=[1,1]) + aravel = a.ravel() + assert_equal(a._mask.shape, a.shape) + a = array(np.matrix([1,2,3,4,5]), mask=[[0,1,0,0,0]]) + aravel = a.ravel() + assert_equal(a.shape,(1,5)) + assert_equal(a._mask.shape, a.shape) + # Checs that small_mask is preserved + a = array([1,2,3,4],mask=[0,0,0,0],shrink=False) + assert_equal(a.ravel()._mask, [0,0,0,0]) + # Test that the fill_value is preserved + a.fill_value = -99 + a.shape = (2,2) + ar = a.ravel() + assert_equal(ar._mask, [0,0,0,0]) + assert_equal(ar._data, [1,2,3,4]) + assert_equal(ar.fill_value, -99) + def test_reshape(self): + "Tests reshape" + x = arange(4) + x[0] = masked + y = x.reshape(2,2) + assert_equal(y.shape, (2,2,)) + assert_equal(y._mask.shape, (2,2,)) + assert_equal(x.shape, (4,)) + assert_equal(x._mask.shape, (4,)) + + def test_sort(self): "Test sort" - x = array([1,4,2,3],mask=[0,1,0,0],dtype=numpy.uint8) + x = array([1,4,2,3],mask=[0,1,0,0],dtype=np.uint8) # sortedx = sort(x) assert_equal(sortedx._data,[1,2,3,4]) @@ -1269,7 +1479,7 @@ assert_equal(x._data,[1,2,3,4]) assert_equal(x._mask,[0,0,0,1]) # - x = array([1,4,2,3],mask=[0,1,0,0],dtype=numpy.uint8) + x = array([1,4,2,3],mask=[0,1,0,0],dtype=np.uint8) x.sort(endwith=False) assert_equal(x._data, [4,1,2,3]) assert_equal(x._mask, [1,0,0,0]) @@ -1278,14 +1488,15 @@ sortedx = sort(x) assert(not 
isinstance(sorted, MaskedArray)) # - x = array([0,1,-1,-2,2], mask=nomask, dtype=numpy.int8) + x = array([0,1,-1,-2,2], mask=nomask, dtype=np.int8) sortedx = sort(x, endwith=False) assert_equal(sortedx._data, [-2,-1,0,1,2]) - x = array([0,1,-1,-2,2], mask=[0,1,0,0,1], dtype=numpy.int8) + x = array([0,1,-1,-2,2], mask=[0,1,0,0,1], dtype=np.int8) sortedx = sort(x, endwith=False) assert_equal(sortedx._data, [1,2,-2,-1,0]) assert_equal(sortedx._mask, [1,1,0,0,0]) + def test_sort_2d(self): "Check sort of 2D array." # 2D array w/o mask @@ -1327,60 +1538,59 @@ assert_equal(am, an) - def test_ravel(self): - "Tests ravel" - a = array([[1,2,3,4,5]], mask=[[0,1,0,0,0]]) - aravel = a.ravel() - assert_equal(a._mask.shape, a.shape) - a = array([0,0], mask=[1,1]) - aravel = a.ravel() - assert_equal(a._mask.shape, a.shape) - a = array(numpy.matrix([1,2,3,4,5]), mask=[[0,1,0,0,0]]) - aravel = a.ravel() - assert_equal(a.shape,(1,5)) - assert_equal(a._mask.shape, a.shape) - # Checs that small_mask is preserved - a = array([1,2,3,4],mask=[0,0,0,0],shrink=False) - assert_equal(a.ravel()._mask, [0,0,0,0]) - # Test that the fill_value is preserved - a.fill_value = -99 - a.shape = (2,2) - ar = a.ravel() - assert_equal(ar._mask, [0,0,0,0]) - assert_equal(ar._data, [1,2,3,4]) - assert_equal(ar.fill_value, -99) + def test_squeeze(self): + "Check squeeze" + data = masked_array([[1,2,3]]) + assert_equal(data.squeeze(), [1,2,3]) + data = masked_array([[1,2,3]], mask=[[1,1,1]]) + assert_equal(data.squeeze(), [1,2,3]) + assert_equal(data.squeeze()._mask, [1,1,1]) + data = masked_array([[1]], mask=True) + assert(data.squeeze() is masked) - def test_reshape(self): - "Tests reshape" - x = arange(4) - x[0] = masked - y = x.reshape(2,2) - assert_equal(y.shape, (2,2,)) - assert_equal(y._mask.shape, (2,2,)) - assert_equal(x.shape, (4,)) - assert_equal(x._mask.shape, (4,)) - def test_compressed(self): - "Tests compressed" - a = array([1,2,3,4],mask=[0,0,0,0]) - b = a.compressed() - assert_equal(b, a) - a[0] = masked - b = a.compressed() - assert_equal(b, [2,3,4]) + def test_swapaxes(self): + "Tests swapaxes on MaskedArrays." 
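The sort tests above check where masked entries end up: by default they are pushed to the end, while ``endwith=False`` moves them to the front. A sketch with the same data as the tests, assuming ``import numpy.ma as ma``::

    import numpy.ma as ma

    x = ma.array([1, 4, 2, 3], mask=[0, 1, 0, 0])
    ma.sort(x)                  # data [1, 2, 3, 4], mask [0, 0, 0, 1]
    ma.sort(x, endwith=False)   # data [4, 1, 2, 3], mask [1, 0, 0, 0]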
+ x = np.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, + 8.43 , 7.78 , 9.865, 5.878, 8.979, 4.732, + 3.012, 6.022, 5.095, 3.116, 5.238, 3.957, + 6.04 , 9.63 , 7.712, 3.382, 4.489, 6.479, + 7.189, 9.645, 5.395, 4.961, 9.894, 2.893, + 7.357, 9.828, 6.272, 3.758, 6.693, 0.993]) + m = np.array([0, 1, 0, 1, 0, 0, + 1, 0, 1, 1, 0, 1, + 0, 0, 0, 1, 0, 1, + 0, 0, 0, 1, 1, 1, + 1, 0, 0, 1, 0, 0, + 0, 0, 1, 0, 1, 0]) + mX = array(x,mask=m).reshape(6,6) + mXX = mX.reshape(3,2,2,3) # - a = array(numpy.matrix([1,2,3,4]), mask=[0,0,0,0]) - b = a.compressed() - assert_equal(b,a) - assert(isinstance(b,numpy.matrix)) - a[0,0] = masked - b = a.compressed() - assert_equal(b, [[2,3,4]]) + mXswapped = mX.swapaxes(0,1) + assert_equal(mXswapped[-1],mX[:,-1]) + mXXswapped = mXX.swapaxes(0,2) + assert_equal(mXXswapped.shape,(2,2,3,3)) + + def test_take(self): + "Tests take" + x = masked_array([10,20,30,40],[0,1,0,1]) + assert_equal(x.take([0,0,3]), masked_array([10, 10, 40], [0,0,1]) ) + assert_equal(x.take([0,0,3]), x[[0,0,3]]) + assert_equal(x.take([[0,1],[0,1]]), + masked_array([[10,20],[10,20]], [[0,1],[0,1]]) ) + # + x = array([[10,20,30],[40,50,60]], mask=[[0,0,1],[1,0,0,]]) + assert_equal(x.take([0,2], axis=1), + array([[10,30],[40,60]], mask=[[0,1],[1,0]])) + assert_equal(take(x, [0,2], axis=1), + array([[10,30],[40,60]], mask=[[0,1],[1,0]])) + + def test_tolist(self): "Tests to list" - x = array(numpy.arange(12)) + x = array(np.arange(12)) x[[1,-2]] = masked xlist = x.tolist() assert(xlist[1] is None) @@ -1401,96 +1611,165 @@ assert_equal(x.tolist(), [(1,1.1,'one'),(2,2.2,'two'),(None,None,None)]) - def test_squeeze(self): - "Check squeeze" - data = masked_array([[1,2,3]]) - assert_equal(data.squeeze(), [1,2,3]) - data = masked_array([[1,2,3]], mask=[[1,1,1]]) - assert_equal(data.squeeze(), [1,2,3]) - assert_equal(data.squeeze()._mask, [1,1,1]) - data = masked_array([[1]], mask=True) - assert(data.squeeze() is masked) +#------------------------------------------------------------------------------ - def test_putmask(self): - x = arange(6)+1 - mx = array(x, mask=[0,0,0,1,1,1]) - mask = [0,0,1,0,0,1] - # w/o mask, w/o masked values - xx = x.copy() - putmask(xx, mask, 99) - assert_equal(xx, [1,2,99,4,5,99]) - # w/ mask, w/o masked values - mxx = mx.copy() - putmask(mxx, mask, 99) - assert_equal(mxx._data, [1,2,99,4,5,99]) - assert_equal(mxx._mask, [0,0,0,1,1,0]) - # w/o mask, w/ masked values - values = array([10,20,30,40,50,60],mask=[1,1,1,0,0,0]) - xx = x.copy() - putmask(xx, mask, values) - assert_equal(xx._data, [1,2,30,4,5,60]) - assert_equal(xx._mask, [0,0,1,0,0,0]) - # w/ mask, w/ masked values - mxx = mx.copy() - putmask(mxx, mask, values) - assert_equal(mxx._data, [1,2,30,4,5,60]) - assert_equal(mxx._mask, [0,0,1,1,1,0]) - # w/ mask, w/ masked values + hardmask - mxx = mx.copy() - mxx.harden_mask() - putmask(mxx, mask, values) - assert_equal(mxx, [1,2,30,4,5,60]) - def test_compress(self): - "test compress" - a = masked_array([1., 2., 3., 4., 5.], fill_value=9999) - condition = (a > 1.5) & (a < 3.5) - assert_equal(a.compress(condition),[2.,3.]) +class TestMaskArrayMathMethod(TestCase): + + def setUp(self): + "Base data definition." 
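``test_take`` above checks that fancy selection carries the mask along, whether done through the method or the module-level function. A short sketch (values from the test), assuming ``import numpy.ma as ma``::

    import numpy.ma as ma

    x = ma.masked_array([10, 20, 30, 40], mask=[0, 1, 0, 1])
    x.take([0, 0, 3])    # [10, 10, --]: the mask travels with the picked indices
    ma.take(x, [0, 2])   # [10, 30]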
+ x = np.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, + 8.43 , 7.78 , 9.865, 5.878, 8.979, 4.732, + 3.012, 6.022, 5.095, 3.116, 5.238, 3.957, + 6.04 , 9.63 , 7.712, 3.382, 4.489, 6.479, + 7.189, 9.645, 5.395, 4.961, 9.894, 2.893, + 7.357, 9.828, 6.272, 3.758, 6.693, 0.993]) + X = x.reshape(6,6) + XX = x.reshape(3,2,2,3) + + m = np.array([0, 1, 0, 1, 0, 0, + 1, 0, 1, 1, 0, 1, + 0, 0, 0, 1, 0, 1, + 0, 0, 0, 1, 1, 1, + 1, 0, 0, 1, 0, 0, + 0, 0, 1, 0, 1, 0]) + mx = array(data=x,mask=m) + mX = array(data=X,mask=m.reshape(X.shape)) + mXX = array(data=XX,mask=m.reshape(XX.shape)) + + m2 = np.array([1, 1, 0, 1, 0, 0, + 1, 1, 1, 1, 0, 1, + 0, 0, 1, 1, 0, 1, + 0, 0, 0, 1, 1, 1, + 1, 0, 0, 1, 1, 0, + 0, 0, 1, 0, 1, 1]) + m2x = array(data=x,mask=m2) + m2X = array(data=X,mask=m2.reshape(X.shape)) + m2XX = array(data=XX,mask=m2.reshape(XX.shape)) + self.d = (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) + + + def test_cumsumprod(self): + "Tests cumsum & cumprod on MaskedArrays." + (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d + mXcp = mX.cumsum(0) + assert_equal(mXcp.data,mX.filled(0).cumsum(0)) + mXcp = mX.cumsum(1) + assert_equal(mXcp.data,mX.filled(0).cumsum(1)) # - a[[2,3]] = masked - b = a.compress(condition) - assert_equal(b._data,[2.,3.]) - assert_equal(b._mask,[0,1]) - assert_equal(b.fill_value,9999) - assert_equal(b,a[condition]) + mXcp = mX.cumprod(0) + assert_equal(mXcp.data,mX.filled(1).cumprod(0)) + mXcp = mX.cumprod(1) + assert_equal(mXcp.data,mX.filled(1).cumprod(1)) + + + def test_cumsumprod_with_output(self): + "Tests cumsum/cumprod w/ output" + xm = array(np.random.uniform(0,10,12)).reshape(3,4) + xm[:,0] = xm[0] = xm[-1,-1] = masked # - condition = (a<4.) - b = a.compress(condition) - assert_equal(b._data,[1.,2.,3.]) - assert_equal(b._mask,[0,0,1]) - assert_equal(b.fill_value,9999) - assert_equal(b,a[condition]) + for funcname in ('cumsum','cumprod'): + npfunc = getattr(np, funcname) + xmmeth = getattr(xm, funcname) + + # A ndarray as explicit input + output = np.empty((3,4), dtype=float) + output.fill(-9999) + result = npfunc(xm, axis=0,out=output) + # ... the result should be the given output + assert(result is output) + assert_equal(result, xmmeth(axis=0, out=output)) + # + output = empty((3,4), dtype=int) + result = xmmeth(axis=0, out=output) + assert(result is output) + + + def test_ptp(self): + "Tests ptp on MaskedArrays." + (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d + (n,m) = X.shape + assert_equal(mx.ptp(),mx.compressed().ptp()) + rows = np.zeros(n,np.float_) + cols = np.zeros(m,np.float_) + for k in range(m): + cols[k] = mX[:,k].compressed().ptp() + for k in range(n): + rows[k] = mX[k].compressed().ptp() + assert_equal(mX.ptp(0),cols) + assert_equal(mX.ptp(1),rows) + + + def test_trace(self): + "Tests trace on MaskedArrays." + (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d + mXdiag = mX.diagonal() + assert_equal(mX.trace(), mX.diagonal().compressed().sum()) + assert_almost_equal(mX.trace(), + X.trace() - sum(mXdiag.mask*X.diagonal(),axis=0)) + + + def test_varstd(self): + "Tests var & std on MaskedArrays." 
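The cumulative tests above compare against the filled data, since a running sum treats masked entries as 0 and a running product treats them as 1. A brief sketch (illustrative values), assuming ``import numpy.ma as ma``::

    import numpy.ma as ma

    mx = ma.array([1., 2., 3., 4.], mask=[0, 1, 0, 0])
    mx.cumsum().data    # [1., 1., 4., 8.],  i.e. mx.filled(0).cumsum()
    mx.cumprod().data   # [1., 1., 3., 12.], i.e. mx.filled(1).cumprod()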
+ (x,X,XX,m,mx,mX,mXX,m2x,m2X,m2XX) = self.d + assert_almost_equal(mX.var(axis=None),mX.compressed().var()) + assert_almost_equal(mX.std(axis=None),mX.compressed().std()) + assert_almost_equal(mX.std(axis=None,ddof=1), + mX.compressed().std(ddof=1)) + assert_almost_equal(mX.var(axis=None,ddof=1), + mX.compressed().var(ddof=1)) + assert_equal(mXX.var(axis=3).shape,XX.var(axis=3).shape) + assert_equal(mX.var().shape,X.var().shape) + (mXvar0,mXvar1) = (mX.var(axis=0), mX.var(axis=1)) + assert_almost_equal(mX.var(axis=None,ddof=2),mX.compressed().var(ddof=2)) + assert_almost_equal(mX.std(axis=None,ddof=2),mX.compressed().std(ddof=2)) + for k in range(6): + assert_almost_equal(mXvar1[k],mX[k].compressed().var()) + assert_almost_equal(mXvar0[k],mX[:,k].compressed().var()) + assert_almost_equal(np.sqrt(mXvar0[k]), mX[:,k].compressed().std()) + + + def test_varstd_specialcases(self): + "Test a special case for var" + nout = np.empty(1, dtype=float) + mout = empty(1, dtype=float) # - a = masked_array([[10,20,30],[40,50,60]], mask=[[0,0,1],[1,0,0]]) - b = a.compress(a.ravel() >= 22) - assert_equal(b._data, [30, 40, 50, 60]) - assert_equal(b._mask, [1,1,0,0]) + x = array(arange(10), mask=True) + for methodname in ('var', 'std'): + method = getattr(x,methodname) + assert(method() is masked) + assert(method(0) is masked) + assert(method(-1) is masked) + # Using a masked array as explicit output + _ = method(out=mout) + assert(mout is not masked) + assert_equal(mout.mask, True) + # Using a ndarray as explicit output + _ = method(out=nout) + assert(np.isnan(nout)) # - x = numpy.array([3,1,2]) - b = a.compress(x >= 2, axis=1) - assert_equal(b._data, [[10,30],[40,60]]) - assert_equal(b._mask, [[0,1],[1,0]]) - # - def test_empty(self): - "Tests empty/like" - datatype = [('a',int_),('b',float_),('c','|S8')] - a = masked_array([(1,1.1,'1.1'),(2,2.2,'2.2'),(3,3.3,'3.3')], - dtype=datatype) - assert_equal(len(a.fill_value), len(datatype)) - # - b = empty_like(a) - assert_equal(b.shape, a.shape) - assert_equal(b.fill_value, a.fill_value) - # - b = empty(len(a), dtype=datatype) - assert_equal(b.shape, a.shape) - assert_equal(b.fill_value, a.fill_value) + x = array(arange(10), mask=True) + x[-1] = 9 + for methodname in ('var', 'std'): + method = getattr(x,methodname) + assert(method(ddof=1) is masked) + assert(method(0, ddof=1) is masked) + assert(method(-1, ddof=1) is masked) + # Using a masked array as explicit output + _ = method(out=mout, ddof=1) + assert(mout is not masked) + assert_equal(mout.mask, True) + # Using a ndarray as explicit output + _ = method(out=nout, ddof=1) + assert(np.isnan(nout)) -class TestArrayMethodsComplex(NumpyTestCase): +#------------------------------------------------------------------------------ + +class TestMaskedArrayMathMethodsComplex(TestCase): "Test class for miscellaneous MaskedArrays methods." def setUp(self): "Base data definition." 
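``test_varstd_specialcases`` above pins down the degenerate cases: reducing a fully masked array with ``var`` or ``std`` gives back the ``masked`` singleton, and routing the result into a plain ndarray output produces NaN instead. A sketch, assuming ``import numpy as np`` and ``import numpy.ma as ma``::

    import numpy as np
    import numpy.ma as ma

    x = ma.array(np.arange(10), mask=True)
    x.var() is ma.masked   # True
    x.std() is ma.masked   # True

    nout = np.empty(1, dtype=float)
    x.var(out=nout)        # nout ends up holding nan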
- x = numpy.array([ 8.375j, 7.545j, 8.828j, 8.5j , 1.757j, 5.928, + x = np.array([ 8.375j, 7.545j, 8.828j, 8.5j , 1.757j, 5.928, 8.43 , 7.78 , 9.865, 5.878, 8.979, 4.732, 3.012, 6.022, 5.095, 3.116, 5.238, 3.957, 6.04 , 9.63 , 7.712, 3.382, 4.489, 6.479j, @@ -1499,7 +1778,7 @@ X = x.reshape(6,6) XX = x.reshape(3,2,2,3) - m = numpy.array([0, 1, 0, 1, 0, 0, + m = np.array([0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, @@ -1509,7 +1788,7 @@ mX = array(data=X,mask=m.reshape(X.shape)) mXX = array(data=XX,mask=m.reshape(XX.shape)) - m2 = numpy.array([1, 1, 0, 1, 0, 0, + m2 = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, @@ -1534,19 +1813,65 @@ for k in range(6): assert_almost_equal(mXvar1[k],mX[k].compressed().var()) assert_almost_equal(mXvar0[k],mX[:,k].compressed().var()) - assert_almost_equal(numpy.sqrt(mXvar0[k]), mX[:,k].compressed().std()) + assert_almost_equal(np.sqrt(mXvar0[k]), mX[:,k].compressed().std()) -#.............................................................................. -class TestMiscFunctions(NumpyTestCase): +#------------------------------------------------------------------------------ + +class TestMaskedArrayFunctions(TestCase): "Test class for miscellaneous functions." # - def test_masked_where(self): + def setUp(self): + x = np.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) + y = np.array([5.,0.,3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) + a10 = 10. + m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] + m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 ,0, 1] + xm = masked_array(x, mask=m1) + ym = masked_array(y, mask=m2) + z = np.array([-.5, 0., .5, .8]) + zm = masked_array(z, mask=[0,1,0,0]) + xf = np.where(m1, 1.e+20, x) + xm.set_fill_value(1.e+20) + self.info = (xm, ym) + + # + def test_masked_where_bool(self): x = [1,2] y = masked_where(False,x) assert_equal(y,[1,2]) assert_equal(y[1],2) - # + + def test_masked_where_condition(self): + "Tests masking functions." 
+ x = array([1.,2.,3.,4.,5.]) + x[2] = masked + assert_equal(masked_where(greater(x, 2), x), masked_greater(x,2)) + assert_equal(masked_where(greater_equal(x, 2), x), masked_greater_equal(x,2)) + assert_equal(masked_where(less(x, 2), x), masked_less(x,2)) + assert_equal(masked_where(less_equal(x, 2), x), masked_less_equal(x,2)) + assert_equal(masked_where(not_equal(x, 2), x), masked_not_equal(x,2)) + assert_equal(masked_where(equal(x, 2), x), masked_equal(x,2)) + assert_equal(masked_where(not_equal(x,2), x), masked_not_equal(x,2)) + assert_equal(masked_where([1,1,0,0,0], [1,2,3,4,5]), [99,99,3,4,5]) + + def test_masked_where_oddities(self): + """Tests some generic features.""" + atest = ones((10,10,10), dtype=float_) + btest = zeros(atest.shape, MaskType) + ctest = masked_where(btest,atest) + assert_equal(atest,ctest) + + + def test_masked_otherfunctions(self): + assert_equal(masked_inside(range(5), 1, 3), [0, 199, 199, 199, 4]) + assert_equal(masked_outside(range(5), 1, 3),[199,1,2,3,199]) + assert_equal(masked_inside(array(range(5), mask=[1,0,0,0,0]), 1, 3).mask, [1,1,1,1,0]) + assert_equal(masked_outside(array(range(5), mask=[0,1,0,0,0]), 1, 3).mask, [1,1,0,0,1]) + assert_equal(masked_equal(array(range(5), mask=[1,0,0,0,0]), 2).mask, [1,0,1,0,0]) + assert_equal(masked_not_equal(array([2,2,1,2,1], mask=[1,0,0,0,0]), 2).mask, [1,0,1,0,1]) + + def test_round(self): a = array([1.23456, 2.34567, 3.45678, 4.56789, 5.67890], mask=[0,1,0,0,0]) @@ -1556,12 +1881,45 @@ b = empty_like(a) a.round(out=b) assert_equal(b, [1., 2., 3., 5., 6.]) - # + + x = array([1.,2.,3.,4.,5.]) + c = array([1,1,1,0,0]) + x[2] = masked + z = where(c, x, -x) + assert_equal(z, [1.,2.,0., -4., -5]) + c[0] = masked + z = where(c, x, -x) + assert_equal(z, [1.,2.,0., -4., -5]) + assert z[0] is masked + assert z[1] is not masked + assert z[2] is masked + + + def test_round_with_output(self): + "Testing round with an explicit output" + + xm = array(np.random.uniform(0,10,12)).reshape(3,4) + xm[:,0] = xm[0] = xm[-1,-1] = masked + + # A ndarray as explicit input + output = np.empty((3,4), dtype=float) + output.fill(-9999) + result = np.round(xm, decimals=2,out=output) + # ... the result should be the given output + assert(result is output) + assert_equal(result, xm.round(decimals=2, out=output)) + # + output = empty((3,4), dtype=float) + result = xm.round(decimals=2, out=output) + assert(result is output) + + def test_identity(self): a = identity(5) assert(isinstance(a, MaskedArray)) - assert_equal(a, numpy.identity(5)) - # + assert_equal(a, np.identity(5)) + + def test_power(self): x = -1.1 assert_almost_equal(power(x,2.), 1.21) @@ -1582,10 +1940,211 @@ assert_equal(x._mask, y._mask) assert_almost_equal(x,y) assert_almost_equal(x._data,y._data) - + def test_where(self): + "Test the where function" + x = np.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) + y = np.array([5.,0.,3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) + a10 = 10. 
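``test_where`` and its variants below build on ``where`` accepting the ``masked`` constant as one of the choices, so entries failing the condition come out masked rather than filled with a value. A quick sketch, assuming ``import numpy.ma as ma``::

    import numpy.ma as ma

    x = ma.arange(10)
    x[3] = ma.masked
    z = ma.where(x >= 8, x, ma.masked)
    # z[4] is masked (the condition failed there); z[8] is not (taken from x)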
+ m1 = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0] + m2 = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0 ,0, 1] + xm = masked_array(x, mask=m1) + ym = masked_array(y, mask=m2) + z = np.array([-.5, 0., .5, .8]) + zm = masked_array(z, mask=[0,1,0,0]) + xf = np.where(m1, 1.e+20, x) + xm.set_fill_value(1.e+20) + + d = where(xm>2,xm,-9) + assert_equal(d, [-9.,-9.,-9.,-9., -9., 4., -9., -9., 10., -9., -9., 3.]) + assert_equal(d._mask, xm._mask) + d = where(xm>2,-9,ym) + assert_equal(d, [5.,0.,3., 2., -1.,-9.,-9., -10., -9., 1., 0., -9.]) + assert_equal(d._mask, [1,0,1,0,0,0,1,0,0,0,0,0]) + d = where(xm>2, xm, masked) + assert_equal(d, [-9.,-9.,-9.,-9., -9., 4., -9., -9., 10., -9., -9., 3.]) + tmp = xm._mask.copy() + tmp[(xm<=2).filled(True)] = True + assert_equal(d._mask, tmp) + # + ixm = xm.astype(int_) + d = where(ixm>2, ixm, masked) + assert_equal(d, [-9,-9,-9,-9, -9, 4, -9, -9, 10, -9, -9, 3]) + assert_equal(d.dtype, ixm.dtype) + + def test_where_with_masked_choice(self): + x = arange(10) + x[3] = masked + c = x >= 8 + # Set False to masked + z = where(c , x, masked) + assert z.dtype is x.dtype + assert z[3] is masked + assert z[4] is masked + assert z[7] is masked + assert z[8] is not masked + assert z[9] is not masked + assert_equal(x,z) + # Set True to masked + z = where(c , masked, x) + assert z.dtype is x.dtype + assert z[3] is masked + assert z[4] is not masked + assert z[7] is not masked + assert z[8] is masked + assert z[9] is masked + + def test_where_with_masked_condition(self): + x = array([1.,2.,3.,4.,5.]) + c = array([1,1,1,0,0]) + x[2] = masked + z = where(c, x, -x) + assert_equal(z, [1.,2.,0., -4., -5]) + c[0] = masked + z = where(c, x, -x) + assert_equal(z, [1.,2.,0., -4., -5]) + assert z[0] is masked + assert z[1] is not masked + assert z[2] is masked + # + x = arange(1,6) + x[-1] = masked + y = arange(1,6)*10 + y[2] = masked + c = array([1,1,1,0,0], mask=[1,0,0,0,0]) + cm = c.filled(1) + z = where(c,x,y) + zm = where(cm,x,y) + assert_equal(z, zm) + assert getmask(zm) is nomask + assert_equal(zm, [1,2,3,40,50]) + z = where(c, masked, 1) + assert_equal(z, [99,99,99,1,1]) + z = where(c, 1, masked) + assert_equal(z, [99, 1, 1, 99, 99]) + + + def test_choose(self): + "Test choose" + choices = [[0, 1, 2, 3], [10, 11, 12, 13], + [20, 21, 22, 23], [30, 31, 32, 33]] + chosen = choose([2, 3, 1, 0], choices) + assert_equal(chosen, array([20, 31, 12, 3])) + chosen = choose([2, 4, 1, 0], choices, mode='clip') + assert_equal(chosen, array([20, 31, 12, 3])) + chosen = choose([2, 4, 1, 0], choices, mode='wrap') + assert_equal(chosen, array([20, 1, 12, 3])) + # Check with some masked indices + indices_ = array([2, 4, 1, 0], mask=[1,0,0,1]) + chosen = choose(indices_, choices, mode='wrap') + assert_equal(chosen, array([99, 1, 12, 99])) + assert_equal(chosen.mask, [1,0,0,1]) + # Check with some masked choices + choices = array(choices, mask=[[0, 0, 0, 1], [1, 1, 0, 1], + [1, 0, 0, 0], [0, 0, 0, 0]]) + indices_ = [2, 3, 1, 0] + chosen = choose(indices_, choices, mode='wrap') + assert_equal(chosen, array([20, 31, 12, 3])) + assert_equal(chosen.mask, [1,0,0,1]) + + + def test_choose_with_out(self): + "Test choose with an explicit out keyword" + choices = [[0, 1, 2, 3], [10, 11, 12, 13], + [20, 21, 22, 23], [30, 31, 32, 33]] + store = empty(4, dtype=int) + chosen = choose([2, 3, 1, 0], choices, out=store) + assert_equal(store, array([20, 31, 12, 3])) + assert(store is chosen) + # Check with some masked indices + out + store = empty(4, dtype=int) + indices_ = array([2, 3, 1, 0], mask=[1,0,0,1]) + chosen = 
choose(indices_, choices, mode='wrap', out=store) + assert_equal(store, array([99, 31, 12, 99])) + assert_equal(store.mask, [1,0,0,1]) + # Check with some masked choices + out ina ndarray ! + choices = array(choices, mask=[[0, 0, 0, 1], [1, 1, 0, 1], + [1, 0, 0, 0], [0, 0, 0, 0]]) + indices_ = [2, 3, 1, 0] + store = empty(4, dtype=int).view(ndarray) + chosen = choose(indices_, choices, mode='wrap', out=store) + assert_equal(store, array([999999, 31, 12, 999999])) + + +#------------------------------------------------------------------------------ + +class TestMaskedFields(TestCase): + # + def setUp(self): + ilist = [1,2,3,4,5] + flist = [1.1,2.2,3.3,4.4,5.5] + slist = ['one','two','three','four','five'] + ddtype = [('a',int),('b',float),('c','|S8')] + mdtype = [('a',bool),('b',bool),('c',bool)] + mask = [0,1,0,0,1] + base = array(zip(ilist,flist,slist), mask=mask, dtype=ddtype) + self.data = dict(base=base, mask=mask, ddtype=ddtype, mdtype=mdtype) + + def test_set_records_masks(self): + base = self.data['base'] + mdtype = self.data['mdtype'] + # Set w/ nomask or masked + base.mask = nomask + assert_equal_records(base._mask, np.zeros(base.shape, dtype=mdtype)) + base.mask = masked + assert_equal_records(base._mask, np.ones(base.shape, dtype=mdtype)) + # Set w/ simple boolean + base.mask = False + assert_equal_records(base._mask, np.zeros(base.shape, dtype=mdtype)) + base.mask = True + assert_equal_records(base._mask, np.ones(base.shape, dtype=mdtype)) + # Set w/ list + base.mask = [0,0,0,1,1] + assert_equal_records(base._mask, + np.array([(x,x,x) for x in [0,0,0,1,1]], + dtype=mdtype)) + + def test_set_record_element(self): + "Check setting an element of a record)" + base = self.data['base'] + (base_a, base_b, base_c) = (base['a'], base['b'], base['c']) + base[0] = (pi, pi, 'pi') + + assert_equal(base_a.dtype, int) + assert_equal(base_a.data, [3,2,3,4,5]) + + assert_equal(base_b.dtype, float) + assert_equal(base_b.data, [pi, 2.2, 3.3, 4.4, 5.5]) + + assert_equal(base_c.dtype, '|S8') + assert_equal(base_c.data, ['pi','two','three','four','five']) + + def test_set_record_slice(self): + base = self.data['base'] + (base_a, base_b, base_c) = (base['a'], base['b'], base['c']) + base[:3] = (pi, pi, 'pi') + + assert_equal(base_a.dtype, int) + assert_equal(base_a.data, [3,3,3,4,5]) + + assert_equal(base_b.dtype, float) + assert_equal(base_b.data, [pi, pi, pi, 4.4, 5.5]) + + assert_equal(base_c.dtype, '|S8') + assert_equal(base_c.data, ['pi','pi','pi','four','five']) + + def test_mask_element(self): + "Check record access" + base = self.data['base'] + (base_a, base_b, base_c) = (base['a'], base['b'], base['c']) + base[0] = masked + # + for n in ('a','b','c'): + assert_equal(base[n].mask, [1,1,0,0,1]) + assert_equal(base[n].data, base.data[n]) + ############################################################################### #------------------------------------------------------------------------------ if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/ma/tests/test_extras.py =================================================================== --- branches/cdavid/numpy/ma/tests/test_extras.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/ma/tests/test_extras.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -11,21 +11,15 @@ __revision__ = "$Revision: 3473 $" __date__ = '$Date: 2007-10-29 17:18:13 +0200 (Mon, 29 Oct 2007) $' -import numpy as N -from numpy.testing import NumpyTest, NumpyTestCase -from numpy.testing.utils import 
build_err_msg - -import numpy.ma.testutils +import numpy +from numpy.testing import * from numpy.ma.testutils import * - -import numpy.ma.core from numpy.ma.core import * -import numpy.ma.extras from numpy.ma.extras import * -class TestAverage(NumpyTestCase): +class TestAverage(TestCase): "Several tests of average. Why so many ? Good point..." - def check_testAverage1(self): + def test_testAverage1(self): "Test of average." ott = array([0.,1.,2.,3.], mask=[1,0,0,0]) assert_equal(2.0, average(ott,axis=0)) @@ -44,7 +38,7 @@ result, wts = average(ott, axis=0, returned=1) assert_equal(wts, [1., 0.]) - def check_testAverage2(self): + def test_testAverage2(self): "More tests of average." w1 = [0,1,1,1,1,0] w2 = [[0,1,1,1,1,0],[1,0,0,0,0,1]] @@ -52,12 +46,15 @@ assert_equal(average(x, axis=0), 2.5) assert_equal(average(x, axis=0, weights=w1), 2.5) y = array([arange(6, dtype=float_), 2.0*arange(6)]) - assert_equal(average(y, None), N.add.reduce(N.arange(6))*3./12.) - assert_equal(average(y, axis=0), N.arange(6) * 3./2.) - assert_equal(average(y, axis=1), [average(x,axis=0), average(x,axis=0) * 2.0]) + assert_equal(average(y, None), np.add.reduce(np.arange(6))*3./12.) + assert_equal(average(y, axis=0), np.arange(6) * 3./2.) + assert_equal(average(y, axis=1), + [average(x,axis=0), average(x,axis=0) * 2.0]) assert_equal(average(y, None, weights=w2), 20./6.) - assert_equal(average(y, axis=0, weights=w2), [0.,1.,2.,3.,4.,10.]) - assert_equal(average(y, axis=1), [average(x,axis=0), average(x,axis=0) * 2.0]) + assert_equal(average(y, axis=0, weights=w2), + [0.,1.,2.,3.,4.,10.]) + assert_equal(average(y, axis=1), + [average(x,axis=0), average(x,axis=0) * 2.0]) m1 = zeros(6) m2 = [0,0,1,1,0,0] m3 = [[0,0,1,1,0,0],[0,1,1,1,1,0]] @@ -74,7 +71,7 @@ assert_equal(average(z, axis=1), [2.5, 5.0]) assert_equal(average(z,axis=0, weights=w2), [0.,1., 99., 99., 4.0, 10.0]) - def check_testAverage3(self): + def test_testAverage3(self): "Yet more tests of average!" a = arange(6) b = arange(6) * 3 @@ -98,9 +95,9 @@ a2dma = average(a2dm, axis=1) assert_equal(a2dma, [1.5, 4.0]) -class TestConcatenator(NumpyTestCase): +class TestConcatenator(TestCase): "Tests for mr_, the equivalent of r_ for masked arrays." - def check_1d(self): + def test_1d(self): "Tests mr_ on 1D arrays." assert_array_equal(mr_[1,2,3,4,5,6],array([1,2,3,4,5,6])) b = ones(5) @@ -111,30 +108,30 @@ assert_array_equal(c,[1,1,1,1,1,0,0,1,1,1,1,1]) assert_array_equal(c.mask, mr_[m,0,0,m]) - def check_2d(self): + def test_2d(self): "Tests mr_ on 2D arrays." a_1 = rand(5,5) a_2 = rand(5,5) - m_1 = N.round_(rand(5,5),0) - m_2 = N.round_(rand(5,5),0) + m_1 = np.round_(rand(5,5),0) + m_2 = np.round_(rand(5,5),0) b_1 = masked_array(a_1,mask=m_1) b_2 = masked_array(a_2,mask=m_2) d = mr_['1',b_1,b_2] # append columns assert(d.shape == (5,10)) assert_array_equal(d[:,:5],b_1) assert_array_equal(d[:,5:],b_2) - assert_array_equal(d.mask, N.r_['1',m_1,m_2]) + assert_array_equal(d.mask, np.r_['1',m_1,m_2]) d = mr_[b_1,b_2] assert(d.shape == (10,5)) assert_array_equal(d[:5,:],b_1) assert_array_equal(d[5:,:],b_2) - assert_array_equal(d.mask, N.r_[m_1,m_2]) + assert_array_equal(d.mask, np.r_[m_1,m_2]) -class TestNotMasked(NumpyTestCase): +class TestNotMasked(TestCase): "Tests notmasked_edges and notmasked_contiguous." 
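# --- Illustrative aside (not part of the patch above): a minimal sketch of the
# numpy.ma.average behaviour these converted TestAverage cases exercise.  The
# array values are hypothetical; masked entries (and their weights) are simply
# left out of the mean.
import numpy as np
ott = np.ma.array([0., 1., 2., 3.], mask=[1, 0, 0, 0])
assert np.ma.average(ott, axis=0) == 2.0            # mean of the unmasked 1., 2., 3.
avg = np.ma.average(ott, axis=0, weights=[1., 1., 2., 1.])
assert avg == 2.0                                    # (1*1 + 2*2 + 3*1) / (1 + 2 + 1)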
- def check_edges(self): + def test_edges(self): "Tests unmasked_edges" - a = masked_array(N.arange(24).reshape(3,8), + a = masked_array(np.arange(24).reshape(3,8), mask=[[0,0,0,0,1,1,1,0], [1,1,1,1,1,1,1,1], [0,0,0,0,0,0,1,0],]) @@ -149,9 +146,9 @@ assert_equal(tmp[0], (array([0,2,]), array([0,0]))) assert_equal(tmp[1], (array([0,2,]), array([7,7]))) - def check_contiguous(self): + def test_contiguous(self): "Tests notmasked_contiguous" - a = masked_array(N.arange(24).reshape(3,8), + a = masked_array(np.arange(24).reshape(3,8), mask=[[0,0,0,0,1,1,1,1], [1,1,1,1,1,1,1,1], [0,0,0,0,0,0,1,0],]) @@ -172,11 +169,11 @@ assert_equal(tmp[2][-1], slice(7,7,None)) assert_equal(tmp[2][-2], slice(0,5,None)) -class Test2DFunctions(NumpyTestCase): +class Test2DFunctions(TestCase): "Tests 2D functions" - def check_compress2d(self): + def test_compress2d(self): "Tests compress2d" - x = array(N.arange(9).reshape(3,3), mask=[[1,0,0],[0,0,0],[0,0,0]]) + x = array(np.arange(9).reshape(3,3), mask=[[1,0,0],[0,0,0],[0,0,0]]) assert_equal(compress_rowcols(x), [[4,5],[7,8]] ) assert_equal(compress_rowcols(x,0), [[3,4,5],[6,7,8]] ) assert_equal(compress_rowcols(x,1), [[1,2],[4,5],[7,8]] ) @@ -193,9 +190,9 @@ assert_equal(compress_rowcols(x,0).size, 0 ) assert_equal(compress_rowcols(x,1).size, 0 ) # - def check_mask_rowcols(self): + def test_mask_rowcols(self): "Tests mask_rowcols." - x = array(N.arange(9).reshape(3,3), mask=[[1,0,0],[0,0,0],[0,0,0]]) + x = array(np.arange(9).reshape(3,3), mask=[[1,0,0],[0,0,0],[0,0,0]]) assert_equal(mask_rowcols(x).mask, [[1,1,1],[1,0,0],[1,0,0]] ) assert_equal(mask_rowcols(x,0).mask, [[1,1,1],[0,0,0],[0,0,0]] ) assert_equal(mask_rowcols(x,1).mask, [[1,0,0],[1,0,0],[1,0,0]] ) @@ -208,13 +205,16 @@ assert_equal(mask_rowcols(x,0).mask, [[1,1,1],[1,1,1],[0,0,0]] ) assert_equal(mask_rowcols(x,1,).mask, [[1,1,0],[1,1,0],[1,1,0]] ) x = array(x._data, mask=[[1,0,0],[0,1,0],[0,0,1]]) - assert(mask_rowcols(x).all()) - assert(mask_rowcols(x,0).all()) - assert(mask_rowcols(x,1).all()) + assert(mask_rowcols(x).all() is masked) + assert(mask_rowcols(x,0).all() is masked) + assert(mask_rowcols(x,1).all() is masked) + assert(mask_rowcols(x).mask.all()) + assert(mask_rowcols(x,0).mask.all()) + assert(mask_rowcols(x,1).mask.all()) # def test_dot(self): "Tests dot product" - n = N.arange(1,7) + n = np.arange(1,7) # m = [1,0,0,0,0,0] a = masked_array(n, mask=m).reshape(2,3) @@ -224,9 +224,9 @@ c = dot(b,a,True) assert_equal(c.mask, [[1,1,1],[1,0,0],[1,0,0]]) c = dot(a,b,False) - assert_equal(c, N.dot(a.filled(0), b.filled(0))) + assert_equal(c, np.dot(a.filled(0), b.filled(0))) c = dot(b,a,False) - assert_equal(c, N.dot(b.filled(0), a.filled(0))) + assert_equal(c, np.dot(b.filled(0), a.filled(0))) # m = [0,0,0,0,0,1] a = masked_array(n, mask=m).reshape(2,3) @@ -236,10 +236,10 @@ c = dot(b,a,True) assert_equal(c.mask, [[0,0,1],[0,0,1],[1,1,1]]) c = dot(a,b,False) - assert_equal(c, N.dot(a.filled(0), b.filled(0))) + assert_equal(c, np.dot(a.filled(0), b.filled(0))) assert_equal(c, dot(a,b)) c = dot(b,a,False) - assert_equal(c, N.dot(b.filled(0), a.filled(0))) + assert_equal(c, np.dot(b.filled(0), a.filled(0))) # m = [0,0,0,0,0,0] a = masked_array(n, mask=m).reshape(2,3) @@ -254,37 +254,37 @@ c = dot(a,b,True) assert_equal(c.mask,[[1,1],[0,0]]) c = dot(a,b,False) - assert_equal(c, N.dot(a.filled(0),b.filled(0))) + assert_equal(c, np.dot(a.filled(0),b.filled(0))) c = dot(b,a,True) assert_equal(c.mask,[[1,0,0],[1,0,0],[1,0,0]]) c = dot(b,a,False) - assert_equal(c, N.dot(b.filled(0),a.filled(0))) + 
assert_equal(c, np.dot(b.filled(0),a.filled(0))) # a = masked_array(n, mask=[0,0,0,0,0,1]).reshape(2,3) b = masked_array(n, mask=[0,0,0,0,0,0]).reshape(3,2) c = dot(a,b,True) assert_equal(c.mask,[[0,0],[1,1]]) c = dot(a,b) - assert_equal(c, N.dot(a.filled(0),b.filled(0))) + assert_equal(c, np.dot(a.filled(0),b.filled(0))) c = dot(b,a,True) assert_equal(c.mask,[[0,0,1],[0,0,1],[0,0,1]]) c = dot(b,a,False) - assert_equal(c, N.dot(b.filled(0), a.filled(0))) + assert_equal(c, np.dot(b.filled(0), a.filled(0))) # a = masked_array(n, mask=[0,0,0,0,0,1]).reshape(2,3) b = masked_array(n, mask=[0,0,1,0,0,0]).reshape(3,2) c = dot(a,b,True) assert_equal(c.mask,[[1,0],[1,1]]) c = dot(a,b,False) - assert_equal(c, N.dot(a.filled(0),b.filled(0))) + assert_equal(c, np.dot(a.filled(0),b.filled(0))) c = dot(b,a,True) assert_equal(c.mask,[[0,0,1],[1,1,1],[0,0,1]]) c = dot(b,a,False) - assert_equal(c, N.dot(b.filled(0),a.filled(0))) + assert_equal(c, np.dot(b.filled(0),a.filled(0))) def test_ediff1d(self): "Tests mediff1d" - x = masked_array(N.arange(5), mask=[1,0,0,0,1]) + x = masked_array(np.arange(5), mask=[1,0,0,0,1]) difx_d = (x._data[1:]-x._data[:-1]) difx_m = (x._mask[1:]-x._mask[:-1]) dx = ediff1d(x) @@ -292,43 +292,40 @@ assert_equal(dx._mask, difx_m) # dx = ediff1d(x, to_begin=masked) - assert_equal(dx._data, N.r_[0,difx_d]) - assert_equal(dx._mask, N.r_[1,difx_m]) + assert_equal(dx._data, np.r_[0,difx_d]) + assert_equal(dx._mask, np.r_[1,difx_m]) dx = ediff1d(x, to_begin=[1,2,3]) - assert_equal(dx._data, N.r_[[1,2,3],difx_d]) - assert_equal(dx._mask, N.r_[[0,0,0],difx_m]) + assert_equal(dx._data, np.r_[[1,2,3],difx_d]) + assert_equal(dx._mask, np.r_[[0,0,0],difx_m]) # dx = ediff1d(x, to_end=masked) - assert_equal(dx._data, N.r_[difx_d,0]) - assert_equal(dx._mask, N.r_[difx_m,1]) + assert_equal(dx._data, np.r_[difx_d,0]) + assert_equal(dx._mask, np.r_[difx_m,1]) dx = ediff1d(x, to_end=[1,2,3]) - assert_equal(dx._data, N.r_[difx_d,[1,2,3]]) - assert_equal(dx._mask, N.r_[difx_m,[0,0,0]]) + assert_equal(dx._data, np.r_[difx_d,[1,2,3]]) + assert_equal(dx._mask, np.r_[difx_m,[0,0,0]]) # dx = ediff1d(x, to_end=masked, to_begin=masked) - assert_equal(dx._data, N.r_[0,difx_d,0]) - assert_equal(dx._mask, N.r_[1,difx_m,1]) + assert_equal(dx._data, np.r_[0,difx_d,0]) + assert_equal(dx._mask, np.r_[1,difx_m,1]) dx = ediff1d(x, to_end=[1,2,3], to_begin=masked) - assert_equal(dx._data, N.r_[0,difx_d,[1,2,3]]) - assert_equal(dx._mask, N.r_[1,difx_m,[0,0,0]]) + assert_equal(dx._data, np.r_[0,difx_d,[1,2,3]]) + assert_equal(dx._mask, np.r_[1,difx_m,[0,0,0]]) # dx = ediff1d(x._data, to_end=masked, to_begin=masked) - assert_equal(dx._data, N.r_[0,difx_d,0]) - assert_equal(dx._mask, N.r_[1,0,0,0,0,1]) + assert_equal(dx._data, np.r_[0,difx_d,0]) + assert_equal(dx._mask, np.r_[1,0,0,0,0,1]) -class TestApplyAlongAxis(NumpyTestCase): +class TestApplyAlongAxis(TestCase): "Tests 2D functions" - def check_3d(self): + def test_3d(self): a = arange(12.).reshape(2,2,3) def myfunc(b): return b[1] xa = apply_along_axis(myfunc,2,a) assert_equal(xa,[[1,4],[7,10]]) -class TestMedian(NumpyTestCase): - def __init__(self, *args, **kwds): - NumpyTestCase.__init__(self, *args, **kwds) - # +class TestMedian(TestCase): def test_2d(self): "Tests median w/ 2D" (n,p) = (101,30) @@ -355,7 +352,7 @@ assert_equal(median(x,0), [[12,10],[8,9],[16,17]]) -class TestPolynomial(NumpyTestCase): +class TestPolynomial(TestCase): # def test_polyfit(self): "Tests polyfit" @@ -388,4 +385,4 @@ 
############################################################################### #------------------------------------------------------------------------------ if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/ma/tests/test_mrecords.py =================================================================== --- branches/cdavid/numpy/ma/tests/test_mrecords.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/ma/tests/test_mrecords.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -27,10 +27,10 @@ fromarrays, fromtextfile, fromrecords, addfield #.............................................................................. -class TestMRecords(NumpyTestCase): +class TestMRecords(TestCase): "Base test class for MaskedArrays." def __init__(self, *args, **kwds): - NumpyTestCase.__init__(self, *args, **kwds) + TestCase.__init__(self, *args, **kwds) self.setup() def setup(self): @@ -46,7 +46,8 @@ "Test creation by view" base = self.base mbase = base.view(mrecarray) - assert_equal(mbase._mask, base._mask) + assert_equal(mbase.recordmask, base.recordmask) + assert_equal_records(mbase._mask, base._mask) assert isinstance(mbase._data, recarray) assert_equal_records(mbase._data, base._data.view(recarray)) for field in ('a','b','c'): @@ -66,14 +67,18 @@ assert isinstance(mbase_first, mrecarray) assert_equal(mbase_first.dtype, mbase.dtype) assert_equal(mbase_first.tolist(), (1,1.1,'one')) - assert_equal(mbase_first.mask, nomask) + # Used to be mask, now it's recordmask + assert_equal(mbase_first.recordmask, nomask) + # _fieldmask and _mask should be the same thing assert_equal(mbase_first._fieldmask.item(), (False, False, False)) + assert_equal(mbase_first._mask.item(), (False, False, False)) assert_equal(mbase_first['a'], mbase['a'][0]) mbase_last = mbase[-1] assert isinstance(mbase_last, mrecarray) assert_equal(mbase_last.dtype, mbase.dtype) assert_equal(mbase_last.tolist(), (None,None,None)) - assert_equal(mbase_last.mask, True) + # Used to be mask, now it's recordmask + assert_equal(mbase_last.recordmask, True) assert_equal(mbase_last._fieldmask.item(), (True, True, True)) assert_equal(mbase_last['a'], mbase['a'][-1]) assert (mbase_last['a'] is masked) @@ -81,7 +86,11 @@ mbase_sl = mbase[:2] assert isinstance(mbase_sl, mrecarray) assert_equal(mbase_sl.dtype, mbase.dtype) - assert_equal(mbase_sl._mask, [0,1]) + # Used to be mask, now it's recordmask + assert_equal(mbase_sl.recordmask, [0,1]) + assert_equal_records(mbase_sl.mask, + np.array([(False,False,False),(True,True,True)], + dtype=mbase._mask.dtype)) assert_equal_records(mbase_sl, base[:2].view(mrecarray)) for field in ('a','b','c'): assert_equal(getattr(mbase_sl,field), base[:2][field]) @@ -100,13 +109,16 @@ mbase.a = 1 assert_equal(mbase['a']._data, [1]*5) assert_equal(ma.getmaskarray(mbase['a']), [0]*5) - assert_equal(mbase._mask, [False]*5) + # Use to be _mask, now it's recordmask + assert_equal(mbase.recordmask, [False]*5) assert_equal(mbase._fieldmask.tolist(), np.array([(0,0,0),(0,1,1),(0,0,0),(0,0,0),(0,1,1)], dtype=bool)) # Set a field to mask ........................ mbase.c = masked + # Use to be mask, and now it's still mask ! 
assert_equal(mbase.c.mask, [1]*5) + assert_equal(mbase.c.recordmask, [1]*5) assert_equal(ma.getmaskarray(mbase['c']), [1]*5) assert_equal(ma.getdata(mbase['c']), ['N/A']*5) assert_equal(mbase._fieldmask.tolist(), @@ -126,10 +138,26 @@ rdata = data.view(MaskedRecords) val = ma.array([10,20,30], mask=[1,0,0]) # + import warnings + warnings.simplefilter("ignore") rdata['num'] = val assert_equal(rdata.num, val) assert_equal(rdata.num.mask, [1,0,0]) - + + def test_set_fields_mask(self): + "Tests setting the mask of a field." + base = self.base.copy() + # This one has already a mask.... + mbase = base.view(mrecarray) + mbase['a'][-2] = masked + assert_equal(mbase.a, [1,2,3,4,5]) + assert_equal(mbase.a._mask, [0,1,0,1,1]) + # This one has not yet + mbase = fromarrays([np.arange(5), np.random.rand(5)], + dtype=[('a',int),('b',float)]) + mbase['a'][-2] = masked + assert_equal(mbase.a, [0,1,2,3,4]) + assert_equal(mbase.a._mask, [0,0,0,1,0]) # def test_set_mask(self): base = self.base.copy() @@ -185,10 +213,11 @@ assert_equal(mbase._fieldmask.tolist(), np.array([(0,0,0),(1,1,1),(0,0,0),(1,1,1),(1,1,1)], dtype=bool)) - assert_equal(mbase._mask, [0,1,0,1,1]) + # Used to be mask, now it's recordmask! + assert_equal(mbase.recordmask, [0,1,0,1,1]) # Set slices ................................. mbase = base.view(mrecarray).copy() - mbase[:2] = 5 + mbase[:2] = (5,5,5) assert_equal(mbase.a._data, [5,5,3,4,5]) assert_equal(mbase.a._mask, [0,0,0,0,1]) assert_equal(mbase.b._data, [5.,5.,3.3,4.4,5.5]) @@ -210,13 +239,29 @@ base = self.base.copy() mbase = base.view(mrecarray) mbase.harden_mask() - mbase[-2:] = 5 - assert_equal(mbase.a._data, [1,2,3,5,5]) - assert_equal(mbase.b._data, [1.1,2.2,3.3,5,5.5]) - assert_equal(mbase.c._data, ['one','two','three','5','five']) - assert_equal(mbase.a._mask, [0,1,0,0,1]) - assert_equal(mbase.b._mask, mbase.a._mask) - assert_equal(mbase.b._mask, mbase.c._mask) + try: + mbase[-2:] = (5,5,5) + assert_equal(mbase.a._data, [1,2,3,5,5]) + assert_equal(mbase.b._data, [1.1,2.2,3.3,5,5.5]) + assert_equal(mbase.c._data, ['one','two','three','5','five']) + assert_equal(mbase.a._mask, [0,1,0,0,1]) + assert_equal(mbase.b._mask, mbase.a._mask) + assert_equal(mbase.b._mask, mbase.c._mask) + except NotImplementedError: + # OK, not implemented yet... + pass + except AssertionError: + raise + else: + raise Exception("Flexible hard masks should be supported !") + # Not using a tuple should crash + try: + mbase[-2:] = 3 + except (NotImplementedError, TypeError): + pass + else: + raise TypeError("Should have expected a readable buffer object!") + def test_hardmask(self): "Test hardmask" @@ -225,11 +270,14 @@ mbase.harden_mask() assert(mbase._hardmask) mbase._mask = nomask - assert_equal(mbase._mask, [0,1,0,0,1]) + assert_equal_records(mbase._mask, base._mask) mbase.soften_mask() assert(not mbase._hardmask) mbase._mask = nomask - assert(mbase['b']._mask is nomask) + # So, the mask of a field is no longer set to nomask... + assert_equal_records(mbase._mask, + ma.make_mask_none(base.shape,base.dtype.names)) + assert(ma.make_mask(mbase['b']._mask) is nomask) assert_equal(mbase['a']._mask,mbase['b']._mask) # def test_pickling(self): @@ -270,10 +318,10 @@ [(1,1.1,None),(2,2.2,'two'),(None,None,'three')]) ################################################################################ -class TestMRecordsImport(NumpyTestCase): +class TestMRecordsImport(TestCase): "Base test class for MaskedArrays." 
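# --- Illustrative aside (not part of the patch above): a small, hypothetical
# example of the mask/recordmask split that the TestMRecords changes check.
# For a structured masked array, .mask is per-field, while .recordmask flags
# the records whose fields are *all* masked.
import numpy as np
rec = np.ma.array([(1, 1.1), (2, 2.2)], mask=[0, 1],
                  dtype=[('a', int), ('b', float)])
assert rec.mask.dtype.names == ('a', 'b')            # field-by-field mask
assert rec.recordmask.tolist() == [False, True]      # only record 1 is fully masked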
def __init__(self, *args, **kwds): - NumpyTestCase.__init__(self, *args, **kwds) + TestCase.__init__(self, *args, **kwds) self.setup() def setup(self): @@ -297,9 +345,9 @@ # One record only _x = ma.array([1,1.1,'one'], mask=[1,0,0],) assert_equal_records(fromarrays(_x, dtype=mrec.dtype), mrec[0]) - + def test_fromrecords(self): "Test construction from records." (mrec, nrec, ddtype) = self.data @@ -358,12 +406,12 @@ """ import os from datetime import datetime - fname = 'tmp%s' % datetime.now().strftime("%y%m%d%H%M%S%s") - f = open(fname, 'w') - f.write(fcontent) - f.close() - mrectxt = fromtextfile(fname,delimitor=',',varnames='ABCDEFG') - os.unlink(fname) + import tempfile + (tmp_fd,tmp_fl) = tempfile.mkstemp() + os.write(tmp_fd, fcontent) + os.close(tmp_fd) + mrectxt = fromtextfile(tmp_fl, delimitor=',',varnames='ABCDEFG') + os.remove(tmp_fl) # assert(isinstance(mrectxt, MaskedRecords)) assert_equal(mrectxt.F, [1,1,1,1]) @@ -381,4 +429,4 @@ ############################################################################### #------------------------------------------------------------------------------ if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/ma/tests/test_old_ma.py =================================================================== --- branches/cdavid/numpy/ma/tests/test_old_ma.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/ma/tests/test_old_ma.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -2,7 +2,9 @@ import types, time from numpy.ma import * from numpy.core.numerictypes import float32 -from numpy.testing import NumpyTestCase, NumpyTest +from numpy.ma.core import umath +from numpy.testing import * + pi = numpy.pi def eq(v,w, msg=''): result = allclose(v,w) @@ -13,11 +15,7 @@ %s"""% (msg, str(v), str(w)) return result -class TestMa(NumpyTestCase): - def __init__(self, *args, **kwds): - NumpyTestCase.__init__(self, *args, **kwds) - self.setUp() - +class TestMa(TestCase): def setUp (self): x=numpy.array([1.,1.,1.,-2., pi/2.0, 4., 5., -10., 10., 1., 2., 3.]) y=numpy.array([5.,0.,3., 2., -1., -4., 0., -10., 10., 1., 0., 3.]) @@ -33,7 +31,7 @@ xm.set_fill_value(1.e+20) self.d = (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) - def check_testBasic1d(self): + def test_testBasic1d(self): "Test of basic array creation and properties in 1 dimension." (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d self.failIf(isMaskedArray(x)) @@ -47,7 +45,7 @@ self.failUnless(eq(filled(xm, 1.e20), xf)) self.failUnless(eq(x, xm)) - def check_testBasic2d(self): + def test_testBasic2d(self): "Test of basic array creation and properties in 2 dimensions." for s in [(4,3), (6,2)]: (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d @@ -68,7 +66,7 @@ self.failUnless(eq(x, xm)) self.setUp() - def check_testArithmetic (self): + def test_testArithmetic (self): "Test of basic arithmetic." (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d a2d = array([[1,2],[0,4]]) @@ -110,13 +108,13 @@ numpy.seterr(**olderr) - def check_testMixedArithmetic(self): + def test_testMixedArithmetic(self): na = numpy.array([1]) ma = array([1]) self.failUnless(isinstance(na + ma, MaskedArray)) self.failUnless(isinstance(ma + na, MaskedArray)) - def check_testUfuncs1 (self): + def test_testUfuncs1 (self): "Test various functions such as sin, cos." 
(x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d self.failUnless (eq(numpy.cos(x), cos(xm))) @@ -148,7 +146,7 @@ self.failUnless (eq(numpy.concatenate((x,y)), concatenate((xm,y)))) self.failUnless (eq(numpy.concatenate((x,y,x)), concatenate((x,ym,x)))) - def check_xtestCount (self): + def test_xtestCount (self): "Test count" ott = array([0.,1.,2.,3.], mask=[1,0,0,0]) self.failUnless( isinstance(count(ott), types.IntType)) @@ -162,15 +160,19 @@ assert getmask(count(ott,0)) is nomask self.failUnless (eq([1,2],count(ott,0))) - def check_testMinMax (self): + def test_testMinMax (self): "Test minimum and maximum." (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d xr = numpy.ravel(x) #max doesn't work if shaped xmr = ravel(xm) - self.failUnless (eq(max(xr), maximum(xmr))) #true because of careful selection of data - self.failUnless (eq(min(xr), minimum(xmr))) #true because of careful selection of data - def check_testAddSumProd (self): + #true because of careful selection of data + self.failUnless(eq(max(xr), maximum(xmr))) + + #true because of careful selection of data + self.failUnless(eq(min(xr), minimum(xmr))) + + def test_testAddSumProd (self): "Test add, sum, product." (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d self.failUnless (eq(numpy.add.reduce(x), add.reduce(x))) @@ -182,15 +184,17 @@ self.failUnless (eq(numpy.sum(x,0), sum(x,0))) self.failUnless (eq(numpy.product(x,axis=0), product(x,axis=0))) self.failUnless (eq(numpy.product(x,0), product(x,0))) - self.failUnless (eq(numpy.product(filled(xm,1),axis=0), product(xm,axis=0))) + self.failUnless (eq(numpy.product(filled(xm,1),axis=0), + product(xm,axis=0))) if len(s) > 1: - self.failUnless (eq(numpy.concatenate((x,y),1), concatenate((xm,ym),1))) + self.failUnless (eq(numpy.concatenate((x,y),1), + concatenate((xm,ym),1))) self.failUnless (eq(numpy.add.reduce(x,1), add.reduce(x,1))) self.failUnless (eq(numpy.sum(x,1), sum(x,1))) self.failUnless (eq(numpy.product(x,1), product(x,1))) - def check_testCI(self): + def test_testCI(self): "Test of conversions and indexing" x1 = numpy.array([1,2,4,3]) x2 = array(x1, mask = [1,0,0,0]) @@ -239,7 +243,7 @@ self.assertEqual(s1, s2) assert x1[1:1].shape == (0,) - def check_testCopySize(self): + def test_testCopySize(self): "Tests of some subtle points of copying and sizing." 
n = [0,0,1,0,0] m = make_mask(n) @@ -278,7 +282,7 @@ y6 = repeat(x4, 2, axis=0) self.failUnless( eq(y5, y6)) - def check_testPut(self): + def test_testPut(self): "Test of put" d = arange(5) n = [0,0,0,1,1] @@ -298,14 +302,14 @@ self.failUnless( x[3] is masked) self.failUnless( x[4] is masked) - def check_testMaPut(self): + def test_testMaPut(self): (x, y, a10, m1, m2, xm, ym, z, zm, xf, s) = self.d m = [1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1] i = numpy.nonzero(m)[0] put(ym, i, zm) assert all(take(ym, i, axis=0) == zm) - def check_testOddFeatures(self): + def test_testOddFeatures(self): "Test of other odd features" x = arange(20); x=x.reshape(4,5) x.flat[5] = 12 @@ -357,7 +361,8 @@ assert z[1] is not masked assert z[2] is masked assert eq(masked_where(greater(x, 2), x), masked_greater(x,2)) - assert eq(masked_where(greater_equal(x, 2), x), masked_greater_equal(x,2)) + assert eq(masked_where(greater_equal(x, 2), x), + masked_greater_equal(x,2)) assert eq(masked_where(less(x, 2), x), masked_less(x,2)) assert eq(masked_where(less_equal(x, 2), x), masked_less_equal(x,2)) assert eq(masked_where(not_equal(x, 2), x), masked_not_equal(x,2)) @@ -365,10 +370,14 @@ assert eq(masked_where(not_equal(x,2), x), masked_not_equal(x,2)) assert eq(masked_inside(range(5), 1, 3), [0, 199, 199, 199, 4]) assert eq(masked_outside(range(5), 1, 3),[199,1,2,3,199]) - assert eq(masked_inside(array(range(5), mask=[1,0,0,0,0]), 1, 3).mask, [1,1,1,1,0]) - assert eq(masked_outside(array(range(5), mask=[0,1,0,0,0]), 1, 3).mask, [1,1,0,0,1]) - assert eq(masked_equal(array(range(5), mask=[1,0,0,0,0]), 2).mask, [1,0,1,0,0]) - assert eq(masked_not_equal(array([2,2,1,2,1], mask=[1,0,0,0,0]), 2).mask, [1,0,1,0,1]) + assert eq(masked_inside(array(range(5), mask=[1,0,0,0,0]), 1, 3).mask, + [1,1,1,1,0]) + assert eq(masked_outside(array(range(5), mask=[0,1,0,0,0]), 1, 3).mask, + [1,1,0,0,1]) + assert eq(masked_equal(array(range(5), mask=[1,0,0,0,0]), 2).mask, + [1,0,1,0,0]) + assert eq(masked_not_equal(array([2,2,1,2,1], mask=[1,0,0,0,0]), 2).mask, + [1,0,1,0,1]) assert eq(masked_where([1,1,0,0,0], [1,2,3,4,5]), [99,99,3,4,5]) atest = ones((10,10,10), dtype=float32) btest = zeros(atest.shape, MaskType) @@ -395,7 +404,7 @@ z = where(c, 1, masked) assert eq(z, [99, 1, 1, 99, 99, 99]) - def check_testMinMax(self): + def test_testMinMax(self): "Test of minumum, maximum." assert eq(minimum([1,2,3],[4,0,9]), [1,0,3]) assert eq(maximum([1,2,3],[4,0,9]), [4,2,9]) @@ -408,7 +417,7 @@ assert minimum(x) == 0 assert maximum(x) == 4 - def check_testTakeTransposeInnerOuter(self): + def test_testTakeTransposeInnerOuter(self): "Test of take, transpose, inner, outer products" x = arange(24) y = numpy.arange(24) @@ -428,7 +437,7 @@ assert t[1] == 2 assert t[2] == 3 - def check_testInplace(self): + def test_testInplace(self): """Test of inplace operations and rich comparisons""" y = arange(10) @@ -478,7 +487,7 @@ x += 1. assert eq(x, y+1.) - def check_testPickle(self): + def test_testPickle(self): "Test of pickling" import pickle x = arange(12) @@ -488,7 +497,7 @@ y = pickle.loads(s) assert eq(x,y) - def check_testMasked(self): + def test_testMasked(self): "Test of masked element" xx=arange(6) xx[1] = masked @@ -501,7 +510,7 @@ #self.failUnlessRaises(Exception, lambda x,y: x+y, masked, xx) #self.failUnlessRaises(Exception, lambda x,y: x+y, xx, masked) - def check_testAverage1(self): + def test_testAverage1(self): "Test of average." 
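# --- Illustrative aside (not part of the patch above): a minimal sketch of the
# masked_inside/masked_outside helpers exercised in test_testOddFeatures above,
# with hypothetical data.
import numpy as np
x = np.ma.masked_inside([0, 1, 2, 3, 4], 1, 3)
assert x.mask.tolist() == [False, True, True, True, False]   # values in [1, 3] masked
y = np.ma.masked_outside([0, 1, 2, 3, 4], 1, 3)
assert y.mask.tolist() == [True, False, False, False, True]  # values outside [1, 3] masked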
ott = array([0.,1.,2.,3.], mask=[1,0,0,0]) self.failUnless(eq(2.0, average(ott,axis=0))) @@ -520,7 +529,7 @@ result, wts = average(ott, axis=0, returned=1) self.failUnless(eq(wts, [1., 0.])) - def check_testAverage2(self): + def test_testAverage2(self): "More tests of average." w1 = [0,1,1,1,1,0] w2 = [[0,1,1,1,1,0],[1,0,0,0,0,1]] @@ -528,12 +537,16 @@ self.failUnless(allclose(average(x, axis=0), 2.5)) self.failUnless(allclose(average(x, axis=0, weights=w1), 2.5)) y=array([arange(6), 2.0*arange(6)]) - self.failUnless(allclose(average(y, None), numpy.add.reduce(numpy.arange(6))*3./12.)) + self.failUnless(allclose(average(y, None), + numpy.add.reduce(numpy.arange(6))*3./12.)) self.failUnless(allclose(average(y, axis=0), numpy.arange(6) * 3./2.)) - self.failUnless(allclose(average(y, axis=1), [average(x,axis=0), average(x,axis=0) * 2.0])) + self.failUnless(allclose(average(y, axis=1), + [average(x,axis=0), average(x,axis=0) * 2.0])) self.failUnless(allclose(average(y, None, weights=w2), 20./6.)) - self.failUnless(allclose(average(y, axis=0, weights=w2), [0.,1.,2.,3.,4.,10.])) - self.failUnless(allclose(average(y, axis=1), [average(x,axis=0), average(x,axis=0) * 2.0])) + self.failUnless(allclose(average(y, axis=0, weights=w2), + [0.,1.,2.,3.,4.,10.])) + self.failUnless(allclose(average(y, axis=1), + [average(x,axis=0), average(x,axis=0) * 2.0])) m1 = zeros(6) m2 = [0,0,1,1,0,0] m3 = [[0,0,1,1,0,0],[0,1,1,1,1,0]] @@ -548,7 +561,8 @@ self.failUnless(allclose(average(z, None), 20./6.)) self.failUnless(allclose(average(z, axis=0), [0.,1.,99.,99.,4.0, 7.5])) self.failUnless(allclose(average(z, axis=1), [2.5, 5.0])) - self.failUnless(allclose( average(z,axis=0, weights=w2), [0.,1., 99., 99., 4.0, 10.0])) + self.failUnless(allclose( average(z,axis=0, weights=w2), + [0.,1., 99., 99., 4.0, 10.0])) a = arange(6) b = arange(6) * 3 @@ -572,7 +586,7 @@ a2dma = average(a2dm, axis=1) self.failUnless(eq(a2dma, [1.5, 4.0])) - def check_testToPython(self): + def test_testToPython(self): self.assertEqual(1, int(array(1))) self.assertEqual(1.0, float(array(1))) self.assertEqual(1, int(array([[[1]]]))) @@ -581,7 +595,7 @@ self.failUnlessRaises(ValueError, bool, array([0,1])) self.failUnlessRaises(ValueError, bool, array([0,0],mask=[0,1])) - def check_testScalarArithmetic(self): + def test_testScalarArithmetic(self): xm = array(0, mask=1) self.failUnless((1/array(0)).mask) self.failUnless((1 + xm).mask) @@ -594,7 +608,7 @@ self.failUnless(x.filled() == x.data) self.failUnlessEqual(str(xm), str(masked_print_option)) - def check_testArrayMethods(self): + def test_testArrayMethods(self): a = array([1,3,2]) b = array([1,3,2], mask=[1,0,1]) self.failUnless(eq(a.any(), a.data.any())) @@ -611,29 +625,29 @@ self.failUnless(eq(a.take([1,2]), a.data.take([1,2]))) self.failUnless(eq(m.transpose(), m.data.transpose())) - def check_testArrayAttributes(self): + def test_testArrayAttributes(self): a = array([1,3,2]) b = array([1,3,2], mask=[1,0,1]) self.failUnlessEqual(a.ndim, 1) - def check_testAPI(self): + def test_testAPI(self): self.failIf([m for m in dir(numpy.ndarray) if m not in dir(MaskedArray) and not m.startswith('_')]) - def check_testSingleElementSubscript(self): + def test_testSingleElementSubscript(self): a = array([1,3,2]) b = array([1,3,2], mask=[1,0,1]) self.failUnlessEqual(a[0].shape, ()) self.failUnlessEqual(b[0].shape, ()) self.failUnlessEqual(b[1].shape, ()) -class TestUfuncs(NumpyTestCase): +class TestUfuncs(TestCase): def setUp(self): self.d = (array([1.0, 0, -1, pi/2]*2, mask=[0,1]+[0]*6), array([1.0, 0, 
-1, pi/2]*2, mask=[1,0]+[0]*6),) - def check_testUfuncRegression(self): + def test_testUfuncRegression(self): for f in ['sqrt', 'log', 'log10', 'exp', 'conjugate', 'sin', 'cos', 'tan', 'arcsin', 'arccos', 'arctan', @@ -660,8 +674,11 @@ mf = getattr(numpy.ma, f) args = self.d[:uf.nin] olderr = numpy.geterr() - if f in ['sqrt', 'arctanh', 'arcsin', 'arccos', 'arccosh', 'arctanh', 'log', - 'log10','divide','true_divide', 'floor_divide', 'remainder', 'fmod']: + f_invalid_ignore = ['sqrt', 'arctanh', 'arcsin', 'arccos', + 'arccosh', 'arctanh', 'log', 'log10','divide', + 'true_divide', 'floor_divide', 'remainder', + 'fmod'] + if f in f_invalid_ignore: numpy.seterr(invalid='ignore') if f in ['arctanh', 'log', 'log10']: numpy.seterr(divide='ignore') @@ -694,7 +711,7 @@ self.failUnless(eq(nonzero(x), [0])) -class TestArrayMethods(NumpyTestCase): +class TestArrayMethods(TestCase): def setUp(self): x = numpy.array([ 8.375, 7.545, 8.828, 8.5 , 1.757, 5.928, @@ -798,56 +815,55 @@ return m1 is nomask return (m1 == m2).all() -def timingTest(): - for f in [testf, testinplace]: - for n in [1000,10000,50000]: - t = testta(n, f) - t1 = testtb(n, f) - t2 = testtc(n, f) - print f.test_name - print """\ -n = %7d -numpy time (ms) %6.1f -MA maskless ratio %6.1f -MA masked ratio %6.1f -""" % (n, t*1000.0, t1/t, t2/t) +#def timingTest(): +# for f in [testf, testinplace]: +# for n in [1000,10000,50000]: +# t = testta(n, f) +# t1 = testtb(n, f) +# t2 = testtc(n, f) +# print f.test_name +# print """\ +#n = %7d +#numpy time (ms) %6.1f +#MA maskless ratio %6.1f +#MA masked ratio %6.1f +#""" % (n, t*1000.0, t1/t, t2/t) -def testta(n, f): - x=numpy.arange(n) + 1.0 - tn0 = time.time() - z = f(x) - return time.time() - tn0 +#def testta(n, f): +# x=numpy.arange(n) + 1.0 +# tn0 = time.time() +# z = f(x) +# return time.time() - tn0 -def testtb(n, f): - x=arange(n) + 1.0 - tn0 = time.time() - z = f(x) - return time.time() - tn0 +#def testtb(n, f): +# x=arange(n) + 1.0 +# tn0 = time.time() +# z = f(x) +# return time.time() - tn0 -def testtc(n, f): - x=arange(n) + 1.0 - x[0] = masked - tn0 = time.time() - z = f(x) - return time.time() - tn0 +#def testtc(n, f): +# x=arange(n) + 1.0 +# x[0] = masked +# tn0 = time.time() +# z = f(x) +# return time.time() - tn0 -def testf(x): - for i in range(25): - y = x **2 + 2.0 * x - 1.0 - w = x **2 + 1.0 - z = (y / w) ** 2 - return z -testf.test_name = 'Simple arithmetic' +#def testf(x): +# for i in range(25): +# y = x **2 + 2.0 * x - 1.0 +# w = x **2 + 1.0 +# z = (y / w) ** 2 +# return z +#testf.test_name = 'Simple arithmetic' -def testinplace(x): - for i in range(25): - y = x**2 - y += 2.0*x - y -= 1.0 - y /= x - return y -testinplace.test_name = 'Inplace operations' +#def testinplace(x): +# for i in range(25): +# y = x**2 +# y += 2.0*x +# y -= 1.0 +# y /= x +# return y +#testinplace.test_name = 'Inplace operations' if __name__ == "__main__": - NumpyTest('numpy.ma').run() - #timingTest() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/ma/tests/test_subclassing.py =================================================================== --- branches/cdavid/numpy/ma/tests/test_subclassing.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/ma/tests/test_subclassing.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -13,15 +13,10 @@ import numpy as N import numpy.core.numeric as numeric -from numpy.testing import NumpyTest, NumpyTestCase - -import numpy.ma.testutils +from numpy.testing import * from numpy.ma.testutils import * - -import numpy.ma.core as coremodule from numpy.ma.core 
import * - class SubArray(N.ndarray): """Defines a generic N.ndarray subclass, that stores some metadata in the dictionary `info`.""" @@ -36,6 +31,7 @@ result = N.ndarray.__add__(self, other) result.info.update({'added':result.info.pop('added',0)+1}) return result + subarray = SubArray class MSubArray(SubArray,MaskedArray): @@ -53,6 +49,7 @@ _view._sharedmask = False return _view _series = property(fget=_get_series) + msubarray = MSubArray class MMatrix(MaskedArray, N.matrix,): @@ -69,14 +66,13 @@ _view._sharedmask = False return _view _series = property(fget=_get_series) + mmatrix = MMatrix - - -class TestSubclassing(NumpyTestCase): +class TestSubclassing(TestCase): """Test suite for masked subclasses of ndarray.""" - def check_data_subclassing(self): + def test_data_subclassing(self): "Tests whether the subclass is kept." x = N.arange(5) m = [0,0,1,0,0] @@ -86,7 +82,7 @@ assert_equal(xmsub._data, xsub) assert isinstance(xmsub._data, SubArray) - def check_maskedarray_subclassing(self): + def test_maskedarray_subclassing(self): "Tests subclassing MaskedArray" x = N.arange(5) mx = mmatrix(x,mask=[0,1,0,0,0]) @@ -101,7 +97,7 @@ assert isinstance(hypot(mx,mx), mmatrix) assert isinstance(hypot(mx,x), mmatrix) - def check_attributepropagation(self): + def test_attributepropagation(self): x = array(arange(5), mask=[0]+[1]*4) my = masked_array(subarray(x)) ym = msubarray(x) @@ -128,7 +124,7 @@ assert hasattr(mxsub, 'info') assert_equal(mxsub.info, xsub.info) - def check_subclasspreservation(self): + def test_subclasspreservation(self): "Checks that masked_array(...,subok=True) preserves the class." x = N.arange(5) m = [0,0,1,0,0] @@ -158,8 +154,8 @@ ################################################################################ if __name__ == '__main__': - NumpyTest().run() - # + nose.run(argv=['', __file__]) + if 0: x = array(arange(5), mask=[0]+[1]*4) my = masked_array(subarray(x)) Modified: branches/cdavid/numpy/ma/testutils.py =================================================================== --- branches/cdavid/numpy/ma/testutils.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/ma/testutils.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -10,16 +10,18 @@ __date__ = "$Date: 2007-11-13 10:01:14 +0200 (Tue, 13 Nov 2007) $" -import numpy as N -from numpy.core import ndarray -from numpy.core.numerictypes import float_ +import operator + +import numpy as np +from numpy import ndarray, float_ import numpy.core.umath as umath -from numpy.testing import NumpyTest, NumpyTestCase +from numpy.testing import * from numpy.testing.utils import build_err_msg, rand +import numpy.testing.utils as utils import core from core import mask_or, getmask, getmaskarray, masked_array, nomask, masked -from core import filled, equal, less +from core import fix_invalid, filled, equal, less #------------------------------------------------------------------------------ def approx (a, b, fill_value=True, rtol=1.e-5, atol=1.e-8): @@ -35,12 +37,13 @@ d1 = filled(a) d2 = filled(b) if d1.dtype.char == "O" or d2.dtype.char == "O": - return N.equal(d1,d2).ravel() + return np.equal(d1,d2).ravel() x = filled(masked_array(d1, copy=False, mask=m), fill_value).astype(float_) y = filled(masked_array(d2, copy=False, mask=m), 1).astype(float_) - d = N.less_equal(umath.absolute(x-y), atol + rtol * umath.absolute(y)) + d = np.less_equal(umath.absolute(x-y), atol + rtol * umath.absolute(y)) return d.ravel() + def almost(a, b, decimal=6, fill_value=True): """Returns True if a and b are equal up to decimal places. 
If fill_value is True, masked values considered equal. Otherwise, masked values @@ -50,13 +53,13 @@ d1 = filled(a) d2 = filled(b) if d1.dtype.char == "O" or d2.dtype.char == "O": - return N.equal(d1,d2).ravel() + return np.equal(d1,d2).ravel() x = filled(masked_array(d1, copy=False, mask=m), fill_value).astype(float_) y = filled(masked_array(d2, copy=False, mask=m), 1).astype(float_) - d = N.around(N.abs(x-y),decimal) <= 10.0**(-decimal) + d = np.around(np.abs(x-y),decimal) <= 10.0**(-decimal) return d.ravel() - + #................................................ def _assert_equal_on_sequences(actual, desired, err_msg=''): "Asserts the equality of two non-array sequences." @@ -69,11 +72,12 @@ """Asserts that two records are equal. Pretty crude for now.""" assert_equal(a.dtype, b.dtype) for f in a.dtype.names: - (af, bf) = (getattr(a,f), getattr(b,f)) + (af, bf) = (operator.getitem(a,f), operator.getitem(b,f)) if not (af is masked) and not (bf is masked): - assert_equal(getattr(a,f), getattr(b,f)) + assert_equal(operator.getitem(a,f), operator.getitem(b,f)) return + def assert_equal(actual,desired,err_msg=''): """Asserts that two items are equal. """ @@ -95,16 +99,18 @@ # Case #4. arrays or equivalent if ((actual is masked) and not (desired is masked)) or \ ((desired is masked) and not (actual is masked)): - msg = build_err_msg([actual, desired], err_msg, header='', names=('x', 'y')) + msg = build_err_msg([actual, desired], + err_msg, header='', names=('x', 'y')) raise ValueError(msg) - actual = N.array(actual, copy=False, subok=True) - desired = N.array(desired, copy=False, subok=True) - if actual.dtype.char in "OS" and desired.dtype.char in "OS": + actual = np.array(actual, copy=False, subok=True) + desired = np.array(desired, copy=False, subok=True) + if actual.dtype.char in "OSV" and desired.dtype.char in "OSV": return _assert_equal_on_sequences(actual.tolist(), desired.tolist(), err_msg='') return assert_array_equal(actual, desired, err_msg) -#............................. + + def fail_if_equal(actual,desired,err_msg='',): """Raises an assertion error if two items are equal. """ @@ -120,119 +126,91 @@ for k in range(len(desired)): fail_if_equal(actual[k], desired[k], 'item=%r\n%s' % (k,err_msg)) return - if isinstance(actual, N.ndarray) or isinstance(desired, N.ndarray): + if isinstance(actual, np.ndarray) or isinstance(desired, np.ndarray): return fail_if_array_equal(actual, desired, err_msg) msg = build_err_msg([actual, desired], err_msg) assert desired != actual, msg assert_not_equal = fail_if_equal -#............................ -def assert_almost_equal(actual,desired,decimal=7,err_msg=''): + + +def assert_almost_equal(actual, desired, decimal=7, err_msg='', verbose=True): """Asserts that two items are almost equal. The test is equivalent to abs(desired-actual) < 0.5 * 10**(-decimal) """ - if isinstance(actual, N.ndarray) or isinstance(desired, N.ndarray): - return assert_array_almost_equal(actual, desired, decimal, err_msg) - msg = build_err_msg([actual, desired], err_msg) + if isinstance(actual, np.ndarray) or isinstance(desired, np.ndarray): + return assert_array_almost_equal(actual, desired, decimal=decimal, + err_msg=err_msg, verbose=verbose) + msg = build_err_msg([actual, desired], + err_msg=err_msg, verbose=verbose) assert round(abs(desired - actual),decimal) == 0, msg -#............................ 
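# --- Illustrative aside (not part of the patch above): a rough sketch of the
# comparison strategy used by approx() above -- combine the masks, fill both
# operands with a common value, then do an atol/rtol comparison on the filled
# data.  Names and sample values here are hypothetical.
import numpy as np
def approx_sketch(a, b, fill_value=True, rtol=1.e-5, atol=1.e-8):
    m = np.ma.mask_or(np.ma.getmask(a), np.ma.getmask(b))
    x = np.ma.masked_array(np.ma.filled(a), mask=m).filled(fill_value)
    y = np.ma.masked_array(np.ma.filled(b), mask=m).filled(fill_value)
    return np.less_equal(np.abs(x - y), atol + rtol * np.abs(y)).ravel()

a = np.ma.array([1.0, 2.0, 999.0], mask=[0, 0, 1])
b = np.ma.array([1.0, 2.0, -1.0],  mask=[0, 0, 1])
assert approx_sketch(a, b).all()    # masked slots compare equal by construction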
-def assert_array_compare(comparison, x, y, err_msg='', header='', + + +assert_close = assert_almost_equal + + +def assert_array_compare(comparison, x, y, err_msg='', verbose=True, header='', fill_value=True): """Asserts that a comparison relation between two masked arrays is satisfied elementwise.""" + # Fill the data first xf = filled(x) yf = filled(y) + # Allocate a common mask and refill m = mask_or(getmask(x), getmask(y)) - - x = masked_array(xf, copy=False, subok=False, mask=m).filled(fill_value) - y = masked_array(yf, copy=False, subok=False, mask=m).filled(fill_value) - + x = masked_array(xf, copy=False, mask=m) + y = masked_array(yf, copy=False, mask=m) if ((x is masked) and not (y is masked)) or \ ((y is masked) and not (x is masked)): - msg = build_err_msg([x, y], err_msg, header=header, names=('x', 'y')) + msg = build_err_msg([x, y], err_msg=err_msg, verbose=verbose, + header=header, names=('x', 'y')) raise ValueError(msg) + # OK, now run the basic tests on filled versions + return utils.assert_array_compare(comparison, + x.filled(fill_value), y.filled(fill_value), + err_msg=err_msg, + verbose=verbose, header=header) - if (x.dtype.char != "O") and (x.dtype.char != "S"): - x = x.astype(float_) - if isinstance(x, N.ndarray) and x.size > 1: - x[N.isnan(x)] = 0 - elif N.isnan(x): - x = 0 - if (y.dtype.char != "O") and (y.dtype.char != "S"): - y = y.astype(float_) - if isinstance(y, N.ndarray) and y.size > 1: - y[N.isnan(y)] = 0 - elif N.isnan(y): - y = 0 - try: - cond = (x.shape==() or y.shape==()) or x.shape == y.shape - if not cond: - msg = build_err_msg([x, y], - err_msg - + '\n(shapes %s, %s mismatch)' % (x.shape, - y.shape), - header=header, - names=('x', 'y')) - assert cond, msg - val = comparison(x,y) - if m is not nomask and fill_value: - val = masked_array(val, mask=m, copy=False) - if isinstance(val, bool): - cond = val - reduced = [0] - else: - reduced = val.ravel() - cond = reduced.all() - reduced = reduced.tolist() - if not cond: - match = 100-100.0*reduced.count(1)/len(reduced) - msg = build_err_msg([x, y], - err_msg - + '\n(mismatch %s%%)' % (match,), - header=header, - names=('x', 'y')) - assert cond, msg - except ValueError: - msg = build_err_msg([x, y], err_msg, header=header, names=('x', 'y')) - raise ValueError(msg) -#............................ -def assert_array_equal(x, y, err_msg=''): + +def assert_array_equal(x, y, err_msg='', verbose=True): """Checks the elementwise equality of two masked arrays.""" - assert_array_compare(equal, x, y, err_msg=err_msg, + assert_array_compare(equal, x, y, err_msg=err_msg, verbose=verbose, header='Arrays are not equal') -##............................ -def fail_if_array_equal(x, y, err_msg=''): + + +def fail_if_array_equal(x, y, err_msg='', verbose=True): "Raises an assertion error if two masked arrays are not equal (elementwise)." def compare(x,y): - - return (not N.alltrue(approx(x, y))) - assert_array_compare(compare, x, y, err_msg=err_msg, + return (not np.alltrue(approx(x, y))) + assert_array_compare(compare, x, y, err_msg=err_msg, verbose=verbose, header='Arrays are not equal') -#............................ -def assert_array_approx_equal(x, y, decimal=6, err_msg=''): + + +def assert_array_approx_equal(x, y, decimal=6, err_msg='', verbose=True): """Checks the elementwise equality of two masked arrays, up to a given number of decimals.""" def compare(x, y): "Returns the result of the loose comparison between x and y)." 
return approx(x,y, rtol=10.**-decimal) - assert_array_compare(compare, x, y, err_msg=err_msg, + assert_array_compare(compare, x, y, err_msg=err_msg, verbose=verbose, header='Arrays are not almost equal') -#............................ -def assert_array_almost_equal(x, y, decimal=6, err_msg=''): + + +def assert_array_almost_equal(x, y, decimal=6, err_msg='', verbose=True): """Checks the elementwise equality of two masked arrays, up to a given number of decimals.""" def compare(x, y): "Returns the result of the loose comparison between x and y)." return almost(x,y,decimal) - assert_array_compare(compare, x, y, err_msg=err_msg, + assert_array_compare(compare, x, y, err_msg=err_msg, verbose=verbose, header='Arrays are not almost equal') -#............................ -def assert_array_less(x, y, err_msg=''): + + +def assert_array_less(x, y, err_msg='', verbose=True): "Checks that x is smaller than y elementwise." - assert_array_compare(less, x, y, err_msg=err_msg, + assert_array_compare(less, x, y, err_msg=err_msg, verbose=verbose, header='Arrays are not less-ordered') -#............................ -assert_close = assert_almost_equal -#............................ + + def assert_mask_equal(m1, m2): """Asserts the equality of two masks.""" if m1 is nomask: @@ -240,6 +218,3 @@ if m2 is nomask: assert(m1 is nomask) assert_array_equal(m1, m2) - -if __name__ == '__main__': - pass Copied: branches/cdavid/numpy/numarray/SConscript (from rev 5301, trunk/numpy/numarray/SConscript) Deleted: branches/cdavid/numpy/numarray/SConstruct =================================================================== --- branches/cdavid/numpy/numarray/SConstruct 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/numarray/SConstruct 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,9 +0,0 @@ -# Last Change: Tue May 20 05:00 PM 2008 J -# vim:syntax=python -from numscons import GetNumpyEnvironment, scons_get_paths - -env = GetNumpyEnvironment(ARGUMENTS) -env.Append(CPPPATH = scons_get_paths(env['include_bootstrap'])) -env.Append(CPPPATH = env['src_dir']) - -_capi = env.NumpyPythonExtension('_capi', source = ['_capi.c']) Copied: branches/cdavid/numpy/numarray/SConstruct (from rev 5301, trunk/numpy/numarray/SConstruct) Modified: branches/cdavid/numpy/numarray/__init__.py =================================================================== --- branches/cdavid/numpy/numarray/__init__.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/numarray/__init__.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -24,3 +24,7 @@ del functions del ufuncs del compat + +from numpy.testing.pkgtester import Tester +test = Tester().test +bench = Tester().bench Modified: branches/cdavid/numpy/oldnumeric/__init__.py =================================================================== --- branches/cdavid/numpy/oldnumeric/__init__.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/oldnumeric/__init__.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -39,3 +39,7 @@ del precision del ufuncs del misc + +from numpy.testing.pkgtester import Tester +test = Tester().test +bench = Tester().bench Modified: branches/cdavid/numpy/oldnumeric/tests/test_oldnumeric.py =================================================================== --- branches/cdavid/numpy/oldnumeric/tests/test_oldnumeric.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/oldnumeric/tests/test_oldnumeric.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -6,7 +6,7 @@ int16, int32, int64, uint, uint8, uint16, uint32, uint64 class test_oldtypes(NumPyTestCase): - def check_oldtypes(self, 
level=1): + def test_oldtypes(self, level=1): a1 = array([0,1,0], Float) a2 = array([0,1,0], float) assert_array_equal(a1, a2) @@ -83,4 +83,4 @@ if __name__ == "__main__": - NumPyTest().run() + nose.run(argv=['', __file__]) Copied: branches/cdavid/numpy/random/SConscript (from rev 5301, trunk/numpy/random/SConscript) Deleted: branches/cdavid/numpy/random/SConstruct =================================================================== --- branches/cdavid/numpy/random/SConstruct 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/random/SConstruct 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,46 +0,0 @@ -# Last Change: Tue May 20 05:00 PM 2008 J -# vim:syntax=python -import os - -from numscons import GetNumpyEnvironment, scons_get_paths, \ - scons_get_mathlib - -def CheckWincrypt(context): - from copy import deepcopy - src = """\ -/* check to see if _WIN32 is defined */ -int main(int argc, char *argv[]) -{ -#ifdef _WIN32 - return 0; -#else - return 1; -#endif -} -""" - - context.Message("Checking if using wincrypt ... ") - st = context.env.TryRun(src, '.C') - if st[0] == 0: - context.Result('No') - else: - context.Result('Yes') - return st[0] - -env = GetNumpyEnvironment(ARGUMENTS) -env.Append(CPPPATH = scons_get_paths(env['include_bootstrap'])) - -mlib = scons_get_mathlib(env) -env.AppendUnique(LIBS = mlib) - -# On windows, see if we should use Advapi32 -if os.name == 'nt': - config = env.NumpyConfigure(custom_tests = {'CheckWincrypt' : CheckWincrypt}) - if config.CheckWincrypt: - config.env.AppendUnique(LIBS = 'Advapi32') - -sources = [os.path.join('mtrand', x) for x in - ['mtrand.c', 'randomkit.c', 'initarray.c', 'distributions.c']] - -# XXX: Pyrex dependency -mtrand = env.NumpyPythonExtension('mtrand', source = sources) Copied: branches/cdavid/numpy/random/SConstruct (from rev 5301, trunk/numpy/random/SConstruct) Modified: branches/cdavid/numpy/random/__init__.py =================================================================== --- branches/cdavid/numpy/random/__init__.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/random/__init__.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -13,6 +13,6 @@ """ return RandomState() -def test(level=1, verbosity=1): - from numpy.testing import NumpyTest - return NumpyTest().test(level, verbosity) +from numpy.testing.pkgtester import Tester +test = Tester().test +bench = Tester().bench Modified: branches/cdavid/numpy/random/tests/test_random.py =================================================================== --- branches/cdavid/numpy/random/tests/test_random.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/random/tests/test_random.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -2,7 +2,7 @@ from numpy import random import numpy as np -class TestMultinomial(NumpyTestCase): +class TestMultinomial(TestCase): def test_basic(self): random.multinomial(100, [0.2, 0.8]) @@ -16,7 +16,7 @@ assert np.all(x < -1) -class TestSetState(NumpyTestCase): +class TestSetState(TestCase): def setUp(self): self.seed = 1234567890 self.prng = random.RandomState(self.seed) @@ -62,4 +62,4 @@ if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/testing/__init__.py =================================================================== --- branches/cdavid/numpy/testing/__init__.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/testing/__init__.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,5 +1,15 @@ +"""Common test support for all numpy test scripts. 
-from info import __doc__ +This single module should provide all the common functionality for numpy tests +in a single location, so that test scripts can just import it and work right +away. +""" + +#import unittest +from unittest import TestCase + +import decorators as dec +from utils import * from numpytest import * -from utils import * -from parametric import ParametricTestCase +from pkgtester import Tester +test = Tester().test Copied: branches/cdavid/numpy/testing/decorators.py (from rev 5301, trunk/numpy/testing/decorators.py) Deleted: branches/cdavid/numpy/testing/info.py =================================================================== --- branches/cdavid/numpy/testing/info.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/testing/info.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,30 +0,0 @@ -""" -Numpy testing tools -=================== - -Numpy-style unit-testing ------------------------- - - NumpyTest -- Numpy tests site manager - NumpyTestCase -- unittest.TestCase with measure method - IgnoreException -- raise when checking disabled feature, it'll be ignored - set_package_path -- prepend package build directory to path - set_local_path -- prepend local directory (to tests files) to path - restore_path -- restore path after set_package_path - -Utility functions ------------------ - - jiffies -- return 1/100ths of a second that the current process has used - memusage -- virtual memory size in bytes of the running python [linux] - rand -- array of random numbers from given shape - assert_equal -- assert equality - assert_almost_equal -- assert equality with decimal tolerance - assert_approx_equal -- assert equality with significant digits tolerance - assert_array_equal -- assert arrays equality - assert_array_almost_equal -- assert arrays equality with decimal tolerance - assert_array_less -- assert arrays less-ordering - -""" - -global_symbols = ['ScipyTest','NumpyTest'] Copied: branches/cdavid/numpy/testing/nosetester.py (from rev 5301, trunk/numpy/testing/nosetester.py) Copied: branches/cdavid/numpy/testing/nulltester.py (from rev 5301, trunk/numpy/testing/nulltester.py) Modified: branches/cdavid/numpy/testing/numpytest.py =================================================================== --- branches/cdavid/numpy/testing/numpytest.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/testing/numpytest.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -10,10 +10,7 @@ import warnings __all__ = ['set_package_path', 'set_local_path', 'restore_path', - 'IgnoreException', 'NumpyTestCase', 'NumpyTest', - 'ScipyTestCase', 'ScipyTest', # for backward compatibility - 'importall', - ] + 'IgnoreException', 'importall',] DEBUG=0 from numpy.testing.utils import jiffies @@ -109,98 +106,11 @@ self.data.append(message) def writeln(self,message): self.write(message+'\n') + def flush(self): + self.stream.flush() -class NumpyTestCase (unittest.TestCase): - def measure(self,code_str,times=1): - """ Return elapsed time for executing code_str in the - namespace of the caller for given times. - """ - frame = get_frame(1) - locs,globs = frame.f_locals,frame.f_globals - code = compile(code_str, - 'NumpyTestCase runner for '+self.__class__.__name__, - 'exec') - i = 0 - elapsed = jiffies() - while i>sys.stderr,yellow_text('Warning: %s' % (message)) - sys.stderr.flush() - def info(self, message): - print>>sys.stdout, message - sys.stdout.flush() - - def rundocs(self, filename=None): - """ Run doc string tests found in filename. 
- """ - import doctest - if filename is None: - f = get_frame(1) - filename = f.f_globals['__file__'] - name = os.path.splitext(os.path.basename(filename))[0] - path = [os.path.dirname(filename)] - file, pathname, description = imp.find_module(name, path) - try: - m = imp.load_module(name, file, pathname, description) - finally: - file.close() - if sys.version[:3]<'2.4': - doctest.testmod(m, verbose=False) - else: - tests = doctest.DocTestFinder().find(m) - runner = doctest.DocTestRunner(verbose=False) - for test in tests: - runner.run(test) - return - -class ScipyTestCase(NumpyTestCase): - def __init__(self, package=None): - warnings.warn("ScipyTestCase is now called NumpyTestCase; please update your code", - DeprecationWarning, stacklevel=2) - NumpyTestCase.__init__(self, package) - - def _get_all_method_names(cls): names = dir(cls) if sys.version[:3]<='2.1': @@ -212,461 +122,7 @@ # for debug build--check for memory leaks during the test. -class _NumPyTextTestResult(unittest._TextTestResult): - def startTest(self, test): - unittest._TextTestResult.startTest(self, test) - if self.showAll: - N = len(sys.getobjects(0)) - self._totnumobj = N - self._totrefcnt = sys.gettotalrefcount() - return - def stopTest(self, test): - if self.showAll: - N = len(sys.getobjects(0)) - self.stream.write("objects: %d ===> %d; " % (self._totnumobj, N)) - self.stream.write("refcnts: %d ===> %d\n" % (self._totrefcnt, - sys.gettotalrefcount())) - return - -class NumPyTextTestRunner(unittest.TextTestRunner): - def _makeResult(self): - return _NumPyTextTestResult(self.stream, self.descriptions, self.verbosity) - - -class NumpyTest: - """ Numpy tests site manager. - - Usage: NumpyTest().test(level=1,verbosity=1) - - is package name or its module object. - - Package is supposed to contain a directory tests/ with test_*.py - files where * refers to the names of submodules. See .rename() - method to redefine name mapping between test_*.py files and names of - submodules. Pattern test_*.py can be overwritten by redefining - .get_testfile() method. - - test_*.py files are supposed to define a classes, derived from - NumpyTestCase or unittest.TestCase, with methods having names - starting with test or bench or check. The names of TestCase classes - must have a prefix test. This can be overwritten by redefining - .check_testcase_name() method. - - And that is it! No need to implement test or test_suite functions - in each .py file. - - Old-style test_suite(level=1) hooks are also supported. - """ - _check_testcase_name = re.compile(r'test.*|Test.*').match - def check_testcase_name(self, name): - """ Return True if name matches TestCase class. - """ - return not not self._check_testcase_name(name) - - testfile_patterns = ['test_%(modulename)s.py'] - def get_testfile(self, module, verbosity = 0): - """ Return path to module test file. 
- """ - mstr = self._module_str - short_module_name = self._get_short_module_name(module) - d = os.path.split(module.__file__)[0] - test_dir = os.path.join(d,'tests') - local_test_dir = os.path.join(os.getcwd(),'tests') - if os.path.basename(os.path.dirname(local_test_dir)) \ - == os.path.basename(os.path.dirname(test_dir)): - test_dir = local_test_dir - for pat in self.testfile_patterns: - fn = os.path.join(test_dir, pat % {'modulename':short_module_name}) - if os.path.isfile(fn): - return fn - if verbosity>1: - self.warn('No test file found in %s for module %s' \ - % (test_dir, mstr(module))) - return - - def __init__(self, package=None): - if package is None: - from numpy.distutils.misc_util import get_frame - f = get_frame(1) - package = f.f_locals.get('__name__',f.f_globals.get('__name__',None)) - assert package is not None - self.package = package - self._rename_map = {} - - def rename(self, **kws): - """Apply renaming submodule test file test_.py to - test_.py. - - Usage: self.rename(name='newname') before calling the - self.test() method. - - If 'newname' is None, then no tests will be executed for a given - module. - """ - for k,v in kws.items(): - self._rename_map[k] = v - return - - def _module_str(self, module): - filename = module.__file__[-30:] - if filename!=module.__file__: - filename = '...'+filename - return '' % (module.__name__, filename) - - def _get_method_names(self,clsobj,level): - names = [] - for mthname in _get_all_method_names(clsobj): - if mthname[:5] not in ['bench','check'] \ - and mthname[:4] not in ['test']: - continue - mth = getattr(clsobj, mthname) - if type(mth) is not types.MethodType: - continue - d = mth.im_func.func_defaults - if d is not None: - mthlevel = d[0] - else: - mthlevel = 1 - if level>=mthlevel: - if mthname not in names: - names.append(mthname) - for base in clsobj.__bases__: - for n in self._get_method_names(base,level): - if n not in names: - names.append(n) - return names - - def _get_short_module_name(self, module): - d,f = os.path.split(module.__file__) - short_module_name = os.path.splitext(os.path.basename(f))[0] - if short_module_name=='__init__': - short_module_name = module.__name__.split('.')[-1] - short_module_name = self._rename_map.get(short_module_name,short_module_name) - return short_module_name - - def _get_module_tests(self, module, level, verbosity): - mstr = self._module_str - - short_module_name = self._get_short_module_name(module) - if short_module_name is None: - return [] - - test_file = self.get_testfile(module, verbosity) - - if test_file is None: - return [] - - if not os.path.isfile(test_file): - if short_module_name[:5]=='info_' \ - and short_module_name[5:]==module.__name__.split('.')[-2]: - return [] - if short_module_name in ['__cvs_version__','__svn_version__']: - return [] - if short_module_name[-8:]=='_version' \ - and short_module_name[:-8]==module.__name__.split('.')[-2]: - return [] - if verbosity>1: - self.warn(test_file) - self.warn(' !! 
No test file %r found for %s' \ - % (os.path.basename(test_file), mstr(module))) - return [] - - if test_file in self.test_files: - return [] - - parent_module_name = '.'.join(module.__name__.split('.')[:-1]) - test_module_name,ext = os.path.splitext(os.path.basename(test_file)) - test_dir_module = parent_module_name+'.tests' - test_module_name = test_dir_module+'.'+test_module_name - - if test_dir_module not in sys.modules: - sys.modules[test_dir_module] = imp.new_module(test_dir_module) - - old_sys_path = sys.path[:] - try: - f = open(test_file,'r') - test_module = imp.load_module(test_module_name, f, - test_file, ('.py', 'r', 1)) - f.close() - except: - sys.path[:] = old_sys_path - self.warn('FAILURE importing tests for %s' % (mstr(module))) - output_exception(sys.stderr) - return [] - sys.path[:] = old_sys_path - - self.test_files.append(test_file) - - return self._get_suite_list(test_module, level, module.__name__) - - def _get_suite_list(self, test_module, level, module_name='__main__', - verbosity=1): - suite_list = [] - if hasattr(test_module, 'test_suite'): - suite_list.extend(test_module.test_suite(level)._tests) - for name in dir(test_module): - obj = getattr(test_module, name) - if type(obj) is not type(unittest.TestCase) \ - or not issubclass(obj, unittest.TestCase) \ - or not self.check_testcase_name(obj.__name__): - continue - for mthname in self._get_method_names(obj,level): - suite = obj(mthname) - if getattr(suite,'isrunnable',lambda mthname:1)(mthname): - suite_list.append(suite) - matched_suite_list = [suite for suite in suite_list \ - if self.testcase_match(suite.id()\ - .replace('__main__.',''))] - if verbosity>=0: - self.info(' Found %s/%s tests for %s' \ - % (len(matched_suite_list), len(suite_list), module_name)) - return matched_suite_list - - def _test_suite_from_modules(self, this_package, level, verbosity): - package_name = this_package.__name__ - modules = [] - for name, module in sys.modules.items(): - if not name.startswith(package_name) or module is None: - continue - if not hasattr(module,'__file__'): - continue - if os.path.basename(os.path.dirname(module.__file__))=='tests': - continue - modules.append((name, module)) - - modules.sort() - modules = [m[1] for m in modules] - - self.test_files = [] - suites = [] - for module in modules: - suites.extend(self._get_module_tests(module, abs(level), verbosity)) - - suites.extend(self._get_suite_list(sys.modules[package_name], - abs(level), verbosity=verbosity)) - return unittest.TestSuite(suites) - - def _test_suite_from_all_tests(self, this_package, level, verbosity): - importall(this_package) - package_name = this_package.__name__ - - # Find all tests/ directories under the package - test_dirs_names = {} - for name, module in sys.modules.items(): - if not name.startswith(package_name) or module is None: - continue - if not hasattr(module, '__file__'): - continue - d = os.path.dirname(module.__file__) - if os.path.basename(d)=='tests': - continue - d = os.path.join(d, 'tests') - if not os.path.isdir(d): - continue - if d in test_dirs_names: - continue - test_dir_module = '.'.join(name.split('.')[:-1]+['tests']) - test_dirs_names[d] = test_dir_module - - test_dirs = test_dirs_names.keys() - test_dirs.sort() - - # For each file in each tests/ directory with a test case in it, - # import the file, and add the test cases to our list - suite_list = [] - testcase_match = re.compile(r'\s*class\s+\w+\s*\(.*TestCase').match - for test_dir in test_dirs: - test_dir_module = test_dirs_names[test_dir] - - if 
test_dir_module not in sys.modules: - sys.modules[test_dir_module] = imp.new_module(test_dir_module) - - for fn in os.listdir(test_dir): - base, ext = os.path.splitext(fn) - if ext != '.py': - continue - f = os.path.join(test_dir, fn) - - # check that file contains TestCase class definitions: - fid = open(f, 'r') - skip = True - for line in fid: - if testcase_match(line): - skip = False - break - fid.close() - if skip: - continue - - # import the test file - n = test_dir_module + '.' + base - # in case test files import local modules - sys.path.insert(0, test_dir) - fo = None - try: - try: - fo = open(f) - test_module = imp.load_module(n, fo, f, - ('.py', 'U', 1)) - except Exception, msg: - print 'Failed importing %s: %s' % (f,msg) - continue - finally: - if fo: - fo.close() - del sys.path[0] - - suites = self._get_suite_list(test_module, level, - module_name=n, - verbosity=verbosity) - suite_list.extend(suites) - - all_tests = unittest.TestSuite(suite_list) - return all_tests - - def test(self, level=1, verbosity=1, all=False, sys_argv=[], - testcase_pattern='.*'): - """Run Numpy module test suite with level and verbosity. - - level: - None --- do nothing, return None - < 0 --- scan for tests of level=abs(level), - don't run them, return TestSuite-list - > 0 --- scan for tests of level, run them, - return TestRunner - > 10 --- run all tests (same as specifying all=True). - (backward compatibility). - - verbosity: - >= 0 --- show information messages - > 1 --- show warnings on missing tests - - all: - True --- run all test files (like self.testall()) - False (default) --- only run test files associated with a module - - sys_argv --- replacement of sys.argv[1:] during running - tests. - - testcase_pattern --- run only tests that match given pattern. - - It is assumed (when all=False) that package tests suite follows - the following convention: for each package module, there exists - file /tests/test_.py that defines - TestCase classes (with names having prefix 'test_') with methods - (with names having prefixes 'check_' or 'bench_'); each of these - methods are called when running unit tests. - """ - if level is None: # Do nothing. - return - - if isinstance(self.package, str): - exec 'import %s as this_package' % (self.package) - else: - this_package = self.package - - self.testcase_match = re.compile(testcase_pattern).match - - if all: - all_tests = self._test_suite_from_all_tests(this_package, - level, verbosity) - else: - all_tests = self._test_suite_from_modules(this_package, - level, verbosity) - - if level < 0: - return all_tests - - runner = unittest.TextTestRunner(verbosity=verbosity) - old_sys_argv = sys.argv[1:] - sys.argv[1:] = sys_argv - # Use the builtin displayhook. If the tests are being run - # under IPython (for instance), any doctest test suites will - # fail otherwise. - old_displayhook = sys.displayhook - sys.displayhook = sys.__displayhook__ - try: - r = runner.run(all_tests) - finally: - sys.displayhook = old_displayhook - sys.argv[1:] = old_sys_argv - return r - - def testall(self, level=1,verbosity=1): - """ Run Numpy module test suite with level and verbosity. - - level: - None --- do nothing, return None - < 0 --- scan for tests of level=abs(level), - don't run them, return TestSuite-list - > 0 --- scan for tests of level, run them, - return TestRunner - - verbosity: - >= 0 --- show information messages - > 1 --- show warnings on missing tests - - Different from .test(..) 
method, this method looks for - TestCase classes from all files in /tests/ - directory and no assumptions are made for naming the - TestCase classes or their methods. - """ - return self.test(level=level, verbosity=verbosity, all=True) - - def run(self): - """ Run Numpy module test suite with level and verbosity - taken from sys.argv. Requires optparse module. - """ - try: - from optparse import OptionParser - except ImportError: - self.warn('Failed to import optparse module, ignoring.') - return self.test() - usage = r'usage: %prog [-v ] [-l ]'\ - r' [-s ""]'\ - r' [-t ""]' - parser = OptionParser(usage) - parser.add_option("-v", "--verbosity", - action="store", - dest="verbosity", - default=1, - type='int') - parser.add_option("-l", "--level", - action="store", - dest="level", - default=1, - type='int') - parser.add_option("-s", "--sys-argv", - action="store", - dest="sys_argv", - default='', - type='string') - parser.add_option("-t", "--testcase-pattern", - action="store", - dest="testcase_pattern", - default=r'.*', - type='string') - (options, args) = parser.parse_args() - return self.test(options.level,options.verbosity, - sys_argv=shlex.split(options.sys_argv or ''), - testcase_pattern=options.testcase_pattern) - - def warn(self, message): - from numpy.distutils.misc_util import yellow_text - print>>sys.stderr,yellow_text('Warning: %s' % (message)) - sys.stderr.flush() - def info(self, message): - print>>sys.stdout, message - sys.stdout.flush() - -class ScipyTest(NumpyTest): - def __init__(self, package=None): - warnings.warn("ScipyTest is now called NumpyTest; please update your code", - DeprecationWarning, stacklevel=2) - NumpyTest.__init__(self, package) - - def importall(package): """ Try recursively to import all subpackages under package. Deleted: branches/cdavid/numpy/testing/parametric.py =================================================================== --- branches/cdavid/numpy/testing/parametric.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/testing/parametric.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,300 +0,0 @@ -"""Support for parametric tests in unittest. - -:Author: Fernando Perez - -Purpose -======= - -Briefly, the main class in this module allows you to easily and cleanly -(without the gross name-mangling hacks that are normally needed) to write -unittest TestCase classes that have parametrized tests. That is, tests which -consist of multiple sub-tests that scan for example a parameter range, but -where you want each sub-test to: - -* count as a separate test in the statistics. - -* be run even if others in the group error out or fail. - - -The class offers a simple name-based convention to create such tests (see -simple example at the end), in one of two ways: - -* Each sub-test in a group can be run fully independently, with the - setUp/tearDown methods being called each time. - -* The whole group can be run with setUp/tearDown being called only once for the - group. This lets you conveniently reuse state that may be very expensive to - compute for multiple tests. Be careful not to corrupt it!!! - - -Caveats -======= - -This code relies on implementation details of the unittest module (some key -methods are heavily modified versions of those, after copying them in). So it -may well break either if you make sophisticated use of the unittest APIs, or if -unittest itself changes in the future. I have only tested this with Python -2.5. 
- -""" -__docformat__ = "restructuredtext en" - -import unittest - -class ParametricTestCase(unittest.TestCase): - """TestCase subclass with support for parametric tests. - - Subclasses of this class can implement test methods that return a list of - tests and arguments to call those with, to do parametric testing (often - also called 'data driven' testing.""" - - #: Prefix for tests with independent state. These methods will be run with - #: a separate setUp/tearDown call for each test in the group. - _indepParTestPrefix = 'testip' - - #: Prefix for tests with shared state. These methods will be run with - #: a single setUp/tearDown call for the whole group. This is useful when - #: writing a group of tests for which the setup is expensive and one wants - #: to actually share that state. Use with care (especially be careful not - #: to mutate the state you are using, which will alter later tests). - _shareParTestPrefix = 'testsp' - - def exec_test(self,test,args,result): - """Execute a single test. Returns a success boolean""" - - ok = False - try: - test(*args) - ok = True - except self.failureException: - result.addFailure(self, self._exc_info()) - except KeyboardInterrupt: - raise - except: - result.addError(self, self._exc_info()) - - return ok - - def set_testMethodDoc(self,doc): - self._testMethodDoc = doc - self._TestCase__testMethodDoc = doc - - def get_testMethodDoc(self): - return self._testMethodDoc - - testMethodDoc = property(fset=set_testMethodDoc, fget=get_testMethodDoc) - - def get_testMethodName(self): - try: - return getattr(self,"_testMethodName") - except: - return getattr(self,"_TestCase__testMethodName") - - testMethodName = property(fget=get_testMethodName) - - def run_test(self, testInfo,result): - """Run one test with arguments""" - - test,args = testInfo[0],testInfo[1:] - - # Reset the doc attribute to be the docstring of this particular test, - # so that in error messages it prints the actual test's docstring and - # not that of the test factory. - self.testMethodDoc = test.__doc__ - result.startTest(self) - try: - try: - self.setUp() - except KeyboardInterrupt: - raise - except: - result.addError(self, self._exc_info()) - return - - ok = self.exec_test(test,args,result) - - try: - self.tearDown() - except KeyboardInterrupt: - raise - except: - result.addError(self, self._exc_info()) - ok = False - if ok: result.addSuccess(self) - finally: - result.stopTest(self) - - def run_tests(self, tests,result): - """Run many tests with a common setUp/tearDown. - - The entire set of tests is run with a single setUp/tearDown call.""" - - try: - self.setUp() - except KeyboardInterrupt: - raise - except: - result.testsRun += 1 - result.addError(self, self._exc_info()) - return - - saved_doc = self.testMethodDoc - - try: - # Run all the tests specified - for testInfo in tests: - test,args = testInfo[0],testInfo[1:] - - # Set the doc argument for this test. Note that even if we do - # this, the fail/error tracebacks still print the docstring for - # the parent factory, because they only generate the message at - # the end of the run, AFTER we've restored it. There is no way - # to tell the unittest system (without overriding a lot of - # stuff) to extract this information right away, the logic is - # hardcoded to pull it later, since unittest assumes it doesn't - # change. - self.testMethodDoc = test.__doc__ - result.startTest(self) - ok = self.exec_test(test,args,result) - if ok: result.addSuccess(self) - - finally: - # Restore docstring info and run tearDown once only. 
- self.testMethodDoc = saved_doc - try: - self.tearDown() - except KeyboardInterrupt: - raise - except: - result.addError(self, self._exc_info()) - - def run(self, result=None): - """Test runner.""" - - #print - #print '*** run for method:',self._testMethodName # dbg - #print '*** doc:',self._testMethodDoc # dbg - - if result is None: result = self.defaultTestResult() - - # Independent tests: each gets its own setup/teardown - if self.testMethodName.startswith(self._indepParTestPrefix): - for t in getattr(self,self.testMethodName)(): - self.run_test(t,result) - # Shared-state test: single setup/teardown for all - elif self.testMethodName.startswith(self._shareParTestPrefix): - tests = getattr(self,self.testMethodName,'runTest')() - self.run_tests(tests,result) - # Normal unittest Test methods - else: - unittest.TestCase.run(self,result) - -############################################################################# -# Quick and dirty interactive example/test -if __name__ == '__main__': - - class ExampleTestCase(ParametricTestCase): - - #------------------------------------------------------------------- - # An instrumented setUp method so we can see when it gets called and - # how many times per instance - counter = 0 - - def setUp(self): - self.counter += 1 - print 'setUp count: %2s for: %s' % (self.counter, - self.testMethodDoc) - - #------------------------------------------------------------------- - # A standard test method, just like in the unittest docs. - def test_foo(self): - """Normal test for feature foo.""" - pass - - #------------------------------------------------------------------- - # Testing methods that need parameters. These can NOT be named test*, - # since they would be picked up by unittest and called without - # arguments. Instead, call them anything else (I use tst*) and then - # load them via the factories below. - def tstX(self,i): - "Test feature X with parameters." - print 'tstX, i=',i - if i==1 or i==3: - # Test fails - self.fail('i is bad, bad: %s' % i) - - def tstY(self,i): - "Test feature Y with parameters." - print 'tstY, i=',i - if i==1: - # Force an error - 1/0 - - def tstXX(self,i,j): - "Test feature XX with parameters." - print 'tstXX, i=',i,'j=',j - if i==1: - # Test fails - self.fail('i is bad, bad: %s' % i) - - def tstYY(self,i): - "Test feature YY with parameters." - print 'tstYY, i=',i - if i==2: - # Force an error - 1/0 - - def tstZZ(self): - """Test feature ZZ without parameters, needs multiple runs. - - This could be a random test that you want to run multiple times.""" - pass - - #------------------------------------------------------------------- - # Parametric test factories that create the test groups to call the - # above tst* methods with their required arguments. - def testip(self): - """Independent parametric test factory. - - A separate setUp() call is made for each test returned by this - method. - - You must return an iterable (list or generator is fine) containing - tuples with the actual method to be called as the first argument, - and the arguments for that call later.""" - return [(self.tstX,i) for i in range(5)] - - def testip2(self): - """Another independent parametric test factory""" - return [(self.tstY,i) for i in range(5)] - - def testip3(self): - """Test factory combining different subtests. 
- - This one shows how to assemble calls to different tests.""" - return [(self.tstX,3),(self.tstX,9),(self.tstXX,4,10), - (self.tstZZ,),(self.tstZZ,)] - - def testsp(self): - """Shared parametric test factory - - A single setUp() call is made for all the tests returned by this - method. - """ - return [(self.tstXX,i,i+1) for i in range(5)] - - def testsp2(self): - """Another shared parametric test factory""" - return [(self.tstYY,i) for i in range(5)] - - def testsp3(self): - """Another shared parametric test factory. - - This one simply calls the same test multiple times, without any - arguments. Note that you must still return tuples, even if there - are no arguments.""" - return [(self.tstZZ,) for i in range(10)] - - - # This test class runs normally under unittest's default runner - unittest.main() Copied: branches/cdavid/numpy/testing/pkgtester.py (from rev 5301, trunk/numpy/testing/pkgtester.py) Modified: branches/cdavid/numpy/testing/tests/test_utils.py =================================================================== --- branches/cdavid/numpy/testing/tests/test_utils.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/testing/tests/test_utils.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,9 +1,7 @@ import numpy as N -from numpy.testing.utils import * - +from numpy.testing import * import unittest - class _GenericTest(object): def _test_equal(self, a, b): self._assert_func(a, b) @@ -163,5 +161,6 @@ else: raise AssertionError("should have raised an AssertionError") + if __name__ == '__main__': - unittest.main() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/testing/utils.py =================================================================== --- branches/cdavid/numpy/testing/utils.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/testing/utils.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -10,8 +10,8 @@ __all__ = ['assert_equal', 'assert_almost_equal','assert_approx_equal', 'assert_array_equal', 'assert_array_less', 'assert_string_equal', - 'assert_array_almost_equal', 'jiffies', 'memusage', 'rand', - 'runstring', 'raises'] + 'assert_array_almost_equal', 'build_err_msg', 'jiffies', 'memusage', + 'raises', 'rand', 'rundocs', 'runstring'] def rand(*args): """Returns an array of random numbers with the given shape. @@ -140,7 +140,7 @@ return from numpy.core import ndarray if isinstance(actual, ndarray) or isinstance(desired, ndarray): - return assert_array_equal(actual, desired, err_msg) + return assert_array_equal(actual, desired, err_msg, verbose) msg = build_err_msg([actual, desired], err_msg, verbose=verbose) assert desired == actual, msg @@ -295,6 +295,30 @@ assert actual==desired, msg +def rundocs(filename=None): + """ Run doc string tests found in filename. + """ + import doctest, imp + if filename is None: + f = sys._getframe(1) + filename = f.f_globals['__file__'] + name = os.path.splitext(os.path.basename(filename))[0] + path = [os.path.dirname(filename)] + file, pathname, description = imp.find_module(name, path) + try: + m = imp.load_module(name, file, pathname, description) + finally: + file.close() + if sys.version[:3]<'2.4': + doctest.testmod(m, verbose=False) + else: + tests = doctest.DocTestFinder().find(m) + runner = doctest.DocTestRunner(verbose=False) + for test in tests: + runner.run(test) + return + + def raises(*exceptions): """ Assert that a test function raises one of the specified exceptions to pass. 
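The helpers now collected in ``numpy.testing.utils`` above (``rundocs``, ``raises`` and the ``assert_*`` functions) are re-exported from ``numpy.testing``, so under the nose-based layout a test module reduces to plain ``test_*`` functions. The sketch below is illustrative only; the module contents and the behaviour being tested are hypothetical and not part of this changeset::

    import numpy as np
    from numpy.testing import assert_array_almost_equal, raises

    def test_add_scalar():
        # Discovered by nose through its ``test_`` prefix; no TestCase
        # subclass or test_suite() hook is needed.
        assert_array_almost_equal(np.add([1.0, 2.0], 1), [2.0, 3.0])

    @raises(ValueError)
    def test_shape_mismatch():
        # Passes only if the body raises one of the listed exceptions.
        np.zeros((2, 2)) + np.zeros((3,))

    if __name__ == "__main__":
        import nose
        nose.run(argv=['', __file__])

As in the converted test files elsewhere in this revision, the ``nose.run(argv=['', __file__])`` guard keeps the file runnable as a standalone script.
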
Modified: branches/cdavid/numpy/tests/test_ctypeslib.py =================================================================== --- branches/cdavid/numpy/tests/test_ctypeslib.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/tests/test_ctypeslib.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -2,8 +2,8 @@ from numpy.ctypeslib import ndpointer, load_library from numpy.testing import * -class TestLoadLibrary(NumpyTestCase): - def check_basic(self): +class TestLoadLibrary(TestCase): + def test_basic(self): try: cdll = load_library('multiarray', np.core.multiarray.__file__) @@ -12,8 +12,24 @@ " (import error was: %s)" % str(e) print msg -class TestNdpointer(NumpyTestCase): - def check_dtype(self): + def test_basic2(self): + """Regression for #801: load_library with a full library name + (including extension) does not work.""" + try: + try: + from distutils import sysconfig + so = sysconfig.get_config_var('SO') + cdll = load_library('multiarray%s' % so, + np.core.multiarray.__file__) + except ImportError: + print "No distutils available, skipping test." + except ImportError, e: + msg = "ctypes is not available on this python: skipping the test" \ + " (import error was: %s)" % str(e) + print msg + +class TestNdpointer(TestCase): + def test_dtype(self): dt = np.intc p = ndpointer(dtype=dt) self.assert_(p.from_param(np.array([1], dt))) @@ -40,7 +56,7 @@ else: self.assert_(p.from_param(np.zeros((10,), dt2))) - def check_ndim(self): + def test_ndim(self): p = ndpointer(ndim=0) self.assert_(p.from_param(np.array(1))) self.assertRaises(TypeError, p.from_param, np.array([1])) @@ -50,14 +66,14 @@ p = ndpointer(ndim=2) self.assert_(p.from_param(np.array([[1]]))) - def check_shape(self): + def test_shape(self): p = ndpointer(shape=(1,2)) self.assert_(p.from_param(np.array([[1,2]]))) self.assertRaises(TypeError, p.from_param, np.array([[1],[2]])) p = ndpointer(shape=()) self.assert_(p.from_param(np.array(1))) - def check_flags(self): + def test_flags(self): x = np.array([[1,2,3]], order='F') p = ndpointer(flags='FORTRAN') self.assert_(p.from_param(x)) @@ -67,5 +83,6 @@ self.assert_(p.from_param(x)) self.assertRaises(TypeError, p.from_param, np.array([[1,2,3]])) + if __name__ == "__main__": - NumpyTest().run() + nose.run(argv=['', __file__]) Modified: branches/cdavid/numpy/version.py =================================================================== --- branches/cdavid/numpy/version.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/numpy/version.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,4 +1,4 @@ -version='1.1.0' +version='1.2.0' release=False if not release: Modified: branches/cdavid/setup.py =================================================================== --- branches/cdavid/setup.py 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/setup.py 2008-06-20 05:59:26 UTC (rev 5302) @@ -20,7 +20,7 @@ import sys CLASSIFIERS = """\ -Development Status :: 4 - Beta +Development Status :: 5 - Production/Stable Intended Audience :: Science/Research Intended Audience :: Developers License :: OSI Approved @@ -79,7 +79,7 @@ maintainer_email = "numpy-discussion at lists.sourceforge.net", description = DOCLINES[0], long_description = "\n".join(DOCLINES[2:]), - url = "http://numeric.scipy.org", + url = "http://numpy.scipy.org", download_url = "http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=175103", license = 'BSD', classifiers=filter(None, CLASSIFIERS.split('\n')), Copied: branches/cdavid/tools/win32build/README.txt (from rev 5301, trunk/tools/win32build/README.txt) Copied: 
branches/cdavid/tools/win32build/cpuid (from rev 5301, trunk/tools/win32build/cpuid) Deleted: branches/cdavid/tools/win32build/cpuid/SConstruct =================================================================== --- trunk/tools/win32build/cpuid/SConstruct 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/tools/win32build/cpuid/SConstruct 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,5 +0,0 @@ -env = Environment(tools = ['mingw']) - -#libcpuid = env.SharedLibrary('cpuid', source = ['cpuid.c']) -#test = env.Program('test', source = ['test.c'], LIBS = libcpuid, RPATH = ['.']) -test = env.Program('test', source = ['test.c', 'cpuid.c']) Copied: branches/cdavid/tools/win32build/cpuid/SConstruct (from rev 5301, trunk/tools/win32build/cpuid/SConstruct) Deleted: branches/cdavid/tools/win32build/cpuid/cpuid.c =================================================================== --- trunk/tools/win32build/cpuid/cpuid.c 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/tools/win32build/cpuid/cpuid.c 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,169 +0,0 @@ -/* - * TODO: - * - test for cpuid availability - * - test for OS support (tricky) - */ - -#include -#include -#include - -#include "cpuid.h" - -#ifndef __GNUC__ -#error "Sorry, this code can only be compiled with gcc for now" -#endif - -/* - * SIMD: SSE 1, 2 and 3, MMX - */ -#define CPUID_FLAG_MMX 1 << 23 /* in edx */ -#define CPUID_FLAG_SSE 1 << 25 /* in edx */ -#define CPUID_FLAG_SSE2 1 << 26 /* in edx */ -#define CPUID_FLAG_SSE3 1 << 0 /* in ecx */ - -/* - * long mode (AMD64 instruction set) - */ -#define CPUID_FLAGS_LONG_MODE 1 << 29 /* in edx */ - -/* - * struct reprensenting the cpuid flags as put in the register - */ -typedef struct { - uint32_t eax; - uint32_t ebx; - uint32_t ecx; - uint32_t edx; -} cpuid_t; - -/* - * Union to read bytes in 32 (intel) bits registers - */ -union _le_reg { - uint8_t ccnt[4]; - uint32_t reg; -} __attribute__ ((packed)); -typedef union _le_reg le_reg_t ; - -/* - * can_cpuid and read_cpuid are the two only functions using asm - */ -static int can_cpuid(void) -{ - int has_cpuid = 0 ; - - /* - * See intel doc on cpuid (pdf) - */ - asm volatile ( - "pushfl \n\t" - "popl %%eax \n\t" - "movl %%eax, %%ecx \n\t" - "xorl $0x200000, %%eax \n\t" - "pushl %%eax \n\t" - "popfl \n\t" - "pushfl \n\t" - "popl %%eax \n\t" - "xorl %%ecx, %%eax \n\t" - "andl $0x200000, %%eax \n\t" - "movl %%eax,%0 \n\t" - :"=m" (has_cpuid) - : /*no input*/ - : "eax","ecx","cc"); - - return (has_cpuid != 0) ; -} - -/* - * func is the "level" of cpuid. 
See for cpuid.txt - */ -static cpuid_t read_cpuid(unsigned int func) -{ - cpuid_t res; - - /* we save ebx because it is used when compiled by -fPIC */ - asm volatile( - "pushl %%ebx \n\t" /* save %ebx */ - "cpuid \n\t" - "movl %%ebx, %1 \n\t" /* save what cpuid just put in %ebx */ - "popl %%ebx \n\t" /* restore the old %ebx */ - : "=a"(res.eax), "=r"(res.ebx), - "=c"(res.ecx), "=d"(res.edx) - : "a"(func) - : "cc"); - - return res; -} - -static uint32_t get_max_func() -{ - cpuid_t cpuid; - - cpuid = read_cpuid(0); - return cpuid.eax; -} - -/* - * vendor should have at least CPUID_VENDOR_STRING_LEN characters - */ -static int get_vendor_string(cpuid_t cpuid, char vendor[]) -{ - int i; - le_reg_t treg; - - treg.reg = cpuid.ebx; - for (i = 0; i < 4; ++i) { - vendor[i] = treg.ccnt[i]; - } - - treg.reg = cpuid.edx; - for (i = 0; i < 4; ++i) { - vendor[i+4] = treg.ccnt[i]; - } - - treg.reg = cpuid.ecx; - for (i = 0; i < 4; ++i) { - vendor[i+8] = treg.ccnt[i]; - } - vendor[12] = '\0'; - return 0; -} - -int cpuid_get_caps(cpu_caps_t *cpu) -{ - cpuid_t cpuid; - int max; - - memset(cpu, 0, sizeof(*cpu)); - - if (!can_cpuid()) { - return 0; - } - - max = get_max_func(); - - /* Read vendor string */ - cpuid = read_cpuid(0); - get_vendor_string(cpuid, cpu->vendor); - - if (max < 0x00000001) { - return 0; - } - cpuid = read_cpuid(0x00000001); - - /* We can read mmx, sse 1 2 and 3 when cpuid level >= 0x00000001 */ - if (cpuid.edx & CPUID_FLAG_MMX) { - cpu->has_mmx = 1; - } - if (cpuid.edx & CPUID_FLAG_SSE) { - cpu->has_sse = 1; - } - if (cpuid.edx & CPUID_FLAG_SSE2) { - cpu->has_sse2 = 1; - } - if (cpuid.ecx & CPUID_FLAG_SSE3) { - cpu->has_sse3 = 1; - } - return 0; -} Copied: branches/cdavid/tools/win32build/cpuid/cpuid.c (from rev 5301, trunk/tools/win32build/cpuid/cpuid.c) Deleted: branches/cdavid/tools/win32build/cpuid/cpuid.h =================================================================== --- trunk/tools/win32build/cpuid/cpuid.h 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/tools/win32build/cpuid/cpuid.h 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,20 +0,0 @@ -#ifndef _GABOU_CPUID_H -#define _GABOU_CPUID_H - -#include - -#define CPUID_VENDOR_STRING_LEN 12 - -struct _cpu_caps { - int has_cpuid; - int has_mmx; - int has_sse; - int has_sse2; - int has_sse3; - char vendor[CPUID_VENDOR_STRING_LEN+1]; -}; -typedef struct _cpu_caps cpu_caps_t; - -int cpuid_get_caps(cpu_caps_t *cpuinfo); - -#endif Copied: branches/cdavid/tools/win32build/cpuid/cpuid.h (from rev 5301, trunk/tools/win32build/cpuid/cpuid.h) Deleted: branches/cdavid/tools/win32build/cpuid/test.c =================================================================== --- trunk/tools/win32build/cpuid/test.c 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/tools/win32build/cpuid/test.c 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,44 +0,0 @@ -#include - -#include "cpuid.h" - -int main() -{ - cpu_caps_t *cpuinfo; - - cpuinfo = malloc(sizeof(*cpuinfo)); - - if (cpuinfo == NULL) { - fprintf(stderr, "Error allocating\n"); - } - - cpuid_get_caps(cpuinfo); - printf("This cpu string is %s\n", cpuinfo->vendor); - - if (cpuinfo->has_mmx) { - printf("This cpu has mmx instruction set\n"); - } else { - printf("This cpu does NOT have mmx instruction set\n"); - } - - if (cpuinfo->has_sse) { - printf("This cpu has sse instruction set\n"); - } else { - printf("This cpu does NOT have sse instruction set\n"); - } - - if (cpuinfo->has_sse2) { - printf("This cpu has sse2 instruction set\n"); - } else { - printf("This cpu does NOT have sse2 instruction set\n"); - 
} - - if (cpuinfo->has_sse3) { - printf("This cpu has sse3 instruction set\n"); - } else { - printf("This cpu does NOT have sse3 instruction set\n"); - } - - free(cpuinfo); - return 0; -} Copied: branches/cdavid/tools/win32build/cpuid/test.c (from rev 5301, trunk/tools/win32build/cpuid/test.c) Copied: branches/cdavid/tools/win32build/nsis_scripts (from rev 5301, trunk/tools/win32build/nsis_scripts) Deleted: branches/cdavid/tools/win32build/nsis_scripts/numpy-superinstaller-2.4.nsi =================================================================== --- trunk/tools/win32build/nsis_scripts/numpy-superinstaller-2.4.nsi 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/tools/win32build/nsis_scripts/numpy-superinstaller-2.4.nsi 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,120 +0,0 @@ -;-------------------------------- -;Include Modern UI - -!include "MUI2.nsh" - -;SetCompress off ; Useful to disable compression under development - -;-------------------------------- -;General - -;Name and file -Name "Numpy super installer" -OutFile "numpy-1.1.0-win32-superpack-python2.4.exe" - -;Default installation folder -InstallDir "$TEMP" - -;-------------------------------- -;Interface Settings - -!define MUI_ABORTWARNING - -;-------------------------------- -;Pages - -;!insertmacro MUI_PAGE_LICENSE "${NSISDIR}\Docs\Modern UI\License.txt" -;!insertmacro MUI_PAGE_COMPONENTS -;!insertmacro MUI_PAGE_DIRECTORY -;!insertmacro MUI_PAGE_INSTFILES - -;!insertmacro MUI_UNPAGE_CONFIRM -;!insertmacro MUI_UNPAGE_INSTFILES - -;-------------------------------- -;Languages - -!insertmacro MUI_LANGUAGE "English" - -;-------------------------------- -;Component Sections - -!include 'Sections.nsh' -!include LogicLib.nsh - -Var HasSSE2 -Var HasSSE3 -Var CPUSSE - -Section "Core" SecCore - - ;SectionIn RO - SetOutPath "$INSTDIR" - - ;Create uninstaller - ;WriteUninstaller "$INSTDIR\Uninstall.exe" - - DetailPrint "Install dir for actual installers is $INSTDIR" - - StrCpy $CPUSSE "0" - CpuCaps::hasSSE2 - Pop $0 - StrCpy $HasSSE2 $0 - - CpuCaps::hasSSE3 - Pop $0 - StrCpy $HasSSE3 $0 - - ; Debug - StrCmp $HasSSE2 "Y" include_sse2 no_include_sse2 - include_sse2: - DetailPrint '"Target CPU handles SSE2"' - StrCpy $CPUSSE "2" - goto done_sse2 - no_include_sse2: - DetailPrint '"Target CPU does NOT handle SSE2"' - goto done_sse2 - done_sse2: - - StrCmp $HasSSE3 "Y" include_sse3 no_include_sse3 - include_sse3: - DetailPrint '"Target CPU handles SSE3"' - StrCpy $CPUSSE "3" - goto done_sse3 - no_include_sse3: - DetailPrint '"Target CPU does NOT handle SSE3"' - goto done_sse3 - done_sse3: - - ClearErrors - - ; Install files conditionaly on detected cpu - ${Switch} $CPUSSE - ${Case} "3" - DetailPrint '"Install SSE 3"' - File "numpy-1.1.0-sse3.exe" - ExecWait '"$INSTDIR\numpy-1.1.0-sse3.exe"' - ${Break} - ${Case} "2" - DetailPrint '"Install SSE 2"' - File "numpy-1.1.0-sse2.exe" - ExecWait '"$INSTDIR\numpy-1.1.0-sse2.exe"' - ${Break} - ${Default} - DetailPrint '"Install NO SSE"' - File "numpy-1.1.0-nosse.exe" - ExecWait '"$INSTDIR\numpy-1.1.0-nosse.exe"' - ${Break} - ${EndSwitch} - - ; Handle errors when executing installers - IfErrors error no_error - - error: - messageBox MB_OK "Executing numpy installer failed" - goto done - no_error: - goto done - done: - -SectionEnd Copied: branches/cdavid/tools/win32build/nsis_scripts/numpy-superinstaller-2.4.nsi (from rev 5301, trunk/tools/win32build/nsis_scripts/numpy-superinstaller-2.4.nsi) Deleted: branches/cdavid/tools/win32build/nsis_scripts/numpy-superinstaller-2.5.nsi 
=================================================================== --- trunk/tools/win32build/nsis_scripts/numpy-superinstaller-2.5.nsi 2008-06-20 04:17:54 UTC (rev 5301) +++ branches/cdavid/tools/win32build/nsis_scripts/numpy-superinstaller-2.5.nsi 2008-06-20 05:59:26 UTC (rev 5302) @@ -1,120 +0,0 @@ -;-------------------------------- -;Include Modern UI - -!include "MUI2.nsh" - -;SetCompress off ; Useful to disable compression under development - -;-------------------------------- -;General - -;Name and file -Name "Numpy super installer" -OutFile "numpy-1.1.0-win32-superpack-python2.5.exe" - -;Default installation folder -InstallDir "$TEMP" - -;-------------------------------- -;Interface Settings - -!define MUI_ABORTWARNING - -;-------------------------------- -;Pages - -;!insertmacro MUI_PAGE_LICENSE "${NSISDIR}\Docs\Modern UI\License.txt" -;!insertmacro MUI_PAGE_COMPONENTS -;!insertmacro MUI_PAGE_DIRECTORY -;!insertmacro MUI_PAGE_INSTFILES - -;!insertmacro MUI_UNPAGE_CONFIRM -;!insertmacro MUI_UNPAGE_INSTFILES - -;-------------------------------- -;Languages - -!insertmacro MUI_LANGUAGE "English" - -;-------------------------------- -;Component Sections - -!include 'Sections.nsh' -!include LogicLib.nsh - -Var HasSSE2 -Var HasSSE3 -Var CPUSSE - -Section "Core" SecCore - - ;SectionIn RO - SetOutPath "$INSTDIR" - - ;Create uninstaller - ;WriteUninstaller "$INSTDIR\Uninstall.exe" - - DetailPrint "Install dir for actual installers is $INSTDIR" - - StrCpy $CPUSSE "0" - CpuCaps::hasSSE2 - Pop $0 - StrCpy $HasSSE2 $0 - - CpuCaps::hasSSE3 - Pop $0 - StrCpy $HasSSE3 $0 - - ; Debug - StrCmp $HasSSE2 "Y" include_sse2 no_include_sse2 - include_sse2: - DetailPrint '"Target CPU handles SSE2"' - StrCpy $CPUSSE "2" - goto done_sse2 - no_include_sse2: - DetailPrint '"Target CPU does NOT handle SSE2"' - goto done_sse2 - done_sse2: - - StrCmp $HasSSE3 "Y" include_sse3 no_include_sse3 - include_sse3: - DetailPrint '"Target CPU handles SSE3"' - StrCpy $CPUSSE "3" - goto done_sse3 - no_include_sse3: - DetailPrint '"Target CPU does NOT handle SSE3"' - goto done_sse3 - done_sse3: - - ClearErrors - - ; Install files conditionaly on detected cpu - ${Switch} $CPUSSE - ${Case} "3" - DetailPrint '"Install SSE 3"' - File "numpy-1.1.0-sse3.exe" - ExecWait '"$INSTDIR\numpy-1.1.0-sse3.exe"' - ${Break} - ${Case} "2" - DetailPrint '"Install SSE 2"' - File "numpy-1.1.0-sse2.exe" - ExecWait '"$INSTDIR\numpy-1.1.0-sse2.exe"' - ${Break} - ${Default} - DetailPrint '"Install NO SSE"' - File "numpy-1.1.0-nosse.exe" - ExecWait '"$INSTDIR\numpy-1.1.0-nosse.exe"' - ${Break} - ${EndSwitch} - - ; Handle errors when executing installers - IfErrors error no_error - - error: - messageBox MB_OK "Executing numpy installer failed" - goto done - no_error: - goto done - done: - -SectionEnd Copied: branches/cdavid/tools/win32build/nsis_scripts/numpy-superinstaller-2.5.nsi (from rev 5301, trunk/tools/win32build/nsis_scripts/numpy-superinstaller-2.5.nsi) From numpy-svn at scipy.org Fri Jun 20 14:24:13 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 20 Jun 2008 13:24:13 -0500 (CDT) Subject: [Numpy-svn] r5303 - trunk/numpy/doc/cython Message-ID: <20080620182413.6634039C501@scipy.org> Author: fperez Date: 2008-06-20 13:24:10 -0500 (Fri, 20 Jun 2008) New Revision: 5303 Modified: trunk/numpy/doc/cython/c_numpy.pxd trunk/numpy/doc/cython/numpyx.pyx Log: Put import_array() back into the .pyx file. M. 
Brett noticed that if it's only in the .pxd file, it does NOT get included in the auto-generated C code, and will thus not be called at module initialization time. Modified: trunk/numpy/doc/cython/c_numpy.pxd =================================================================== --- trunk/numpy/doc/cython/c_numpy.pxd 2008-06-20 05:59:26 UTC (rev 5302) +++ trunk/numpy/doc/cython/c_numpy.pxd 2008-06-20 18:24:10 UTC (rev 5303) @@ -134,11 +134,3 @@ void PyArray_ITER_NEXT(flatiter it) void import_array() - -######################################################################## -# Other code (mostly initialization) - -# NumPy must be initialized before any user code is called in the extension -# module. By doing so here, we ensure the users don't have to explicitly -# remember this themselves, and provide a cleaner Cython API. -import_array() Modified: trunk/numpy/doc/cython/numpyx.pyx =================================================================== --- trunk/numpy/doc/cython/numpyx.pyx 2008-06-20 05:59:26 UTC (rev 5302) +++ trunk/numpy/doc/cython/numpyx.pyx 2008-06-20 18:24:10 UTC (rev 5303) @@ -2,18 +2,28 @@ """Cython access to Numpy arrays - simple example. """ -# Load the pieces of the Python C API we need to use (from c_python.pxd). Note -# that a 'cimport' is similart to a Python 'import' statement, but it provides -# access to the C part of a library instead of its Python-visible API. Please -# consult the Pyrex/Cython documentation for further details. +############################################################################# +# Load C APIs declared in .pxd files via cimport +# +# A 'cimport' is similar to a Python 'import' statement, but it provides access +# to the C part of a library instead of its Python-visible API. See the +# Pyrex/Cython documentation for details. + cimport c_python as py -# (C)Import the NumPy C API (from c_numpy.pxd) cimport c_numpy as cnp -# Import the NumPy module for access to its usual Python API +# NOTE: numpy MUST be initialized before any other code is executed. +cnp.import_array() + +############################################################################# +# Load Python modules via normal import statements + import numpy as np +############################################################################# +# Regular code section begins + # A 'def' function is visible in the Python-imported module def print_array_info(cnp.ndarray arr): """Simple information printer about an array. From numpy-svn at scipy.org Sat Jun 21 07:08:46 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 21 Jun 2008 06:08:46 -0500 (CDT) Subject: [Numpy-svn] r5304 - trunk/numpy/core Message-ID: <20080621110846.ACE42C7C01F@scipy.org> Author: cdavid Date: 2008-06-21 06:08:37 -0500 (Sat, 21 Jun 2008) New Revision: 5304 Modified: trunk/numpy/core/SConscript Log: Temporary workaround for a numscons bug. 
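Returning briefly to r5303 above: ``import_array()`` has to sit in the ``.pyx`` file because code placed only in a ``.pxd`` is not emitted into the auto-generated C source, so the call would never run at module-initialization time and the NumPy C API would remain uninitialized. A minimal, hypothetical sketch (file and function names are invented; only the placement of the call mirrors ``numpyx.pyx``)::

    # example.pyx -- hypothetical module; assumes the c_numpy.pxd
    # declarations shipped in numpy/doc/cython are available.
    cimport c_numpy as cnp

    # The call belongs here, in the .pyx: leaving it only in c_numpy.pxd
    # means it never reaches the generated C code.
    cnp.import_array()

    def first_element(cnp.ndarray arr):
        # Declaring ``arr`` with the C-level ndarray type relies on
        # import_array() having run during module initialization.
        return arr[0]
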
Modified: trunk/numpy/core/SConscript =================================================================== --- trunk/numpy/core/SConscript 2008-06-20 18:24:10 UTC (rev 5303) +++ trunk/numpy/core/SConscript 2008-06-21 11:08:37 UTC (rev 5304) @@ -263,5 +263,12 @@ #---------------------- if build_blasdot: dotblas_src = [pjoin('blasdot', i) for i in ['_dotblas.c']] - dotblas = env.DistutilsPythonExtension('_dotblas', source = dotblas_src) - env.Depends(dotblas, pjoin("blasdot", "cblas.h")) + # because _dotblas does #include CBLAS_HEADER instead of #include + # "cblas.h", scons does not detect the dependency + # XXX: PythonExtension builder does not take the Depends on extension into + # account for some reason, so we first build the object, with forced + # dependency, and then builds the extension. This is more likely a bug in + # our PythonExtension builder, but I cannot see how to solve it. + dotblas_o = env.PythonObject('_dotblas', source = dotblas_src) + env.Depends(dotblas_o, pjoin("blasdot", "cblas.h")) + dotblas = env.DistutilsPythonExtension('_dotblas', dotblas_o) From numpy-svn at scipy.org Sat Jun 21 11:51:07 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 21 Jun 2008 10:51:07 -0500 (CDT) Subject: [Numpy-svn] r5305 - in trunk/numpy: . core core/tests distutils distutils/tests distutils/tests/f2py_ext/tests distutils/tests/f2py_f90_ext/tests distutils/tests/gen_ext/tests distutils/tests/pyrex_ext/tests distutils/tests/swig_ext/tests doc f2py/lib/parser f2py/lib/tests f2py/tests/array_from_pyobj/tests fft fft/tests lib lib/tests linalg linalg/tests ma ma/tests numarray oldnumeric oldnumeric/tests random random/tests testing testing/tests tests Message-ID: <20080621155107.3E49A39C6A8@scipy.org> Author: alan.mcintyre Date: 2008-06-21 10:50:17 -0500 (Sat, 21 Jun 2008) New Revision: 5305 Removed: trunk/numpy/testing/pkgtester.py Modified: trunk/numpy/__init__.py trunk/numpy/core/__init__.py trunk/numpy/core/tests/test_defmatrix.py trunk/numpy/core/tests/test_errstate.py trunk/numpy/core/tests/test_memmap.py trunk/numpy/core/tests/test_multiarray.py trunk/numpy/core/tests/test_numeric.py trunk/numpy/core/tests/test_numerictypes.py trunk/numpy/core/tests/test_records.py trunk/numpy/core/tests/test_regression.py trunk/numpy/core/tests/test_scalarmath.py trunk/numpy/core/tests/test_ufunc.py trunk/numpy/core/tests/test_umath.py trunk/numpy/core/tests/test_unicode.py trunk/numpy/distutils/__init__.py trunk/numpy/distutils/tests/f2py_ext/tests/test_fib2.py trunk/numpy/distutils/tests/f2py_f90_ext/tests/test_foo.py trunk/numpy/distutils/tests/gen_ext/tests/test_fib3.py trunk/numpy/distutils/tests/pyrex_ext/tests/test_primes.py trunk/numpy/distutils/tests/swig_ext/tests/test_example.py trunk/numpy/distutils/tests/swig_ext/tests/test_example2.py trunk/numpy/distutils/tests/test_fcompiler_gnu.py trunk/numpy/distutils/tests/test_misc_util.py trunk/numpy/doc/DISTUTILS.txt trunk/numpy/f2py/lib/parser/test_Fortran2003.py trunk/numpy/f2py/lib/parser/test_parser.py trunk/numpy/f2py/lib/tests/test_derived_scalar.py trunk/numpy/f2py/lib/tests/test_module_module.py trunk/numpy/f2py/lib/tests/test_module_scalar.py trunk/numpy/f2py/lib/tests/test_scalar_function_in.py trunk/numpy/f2py/lib/tests/test_scalar_in_out.py trunk/numpy/f2py/tests/array_from_pyobj/tests/test_array_from_pyobj.py trunk/numpy/fft/__init__.py trunk/numpy/fft/tests/test_fftpack.py trunk/numpy/fft/tests/test_helper.py trunk/numpy/lib/__init__.py trunk/numpy/lib/tests/test__datasource.py 
trunk/numpy/lib/tests/test_arraysetops.py trunk/numpy/lib/tests/test_financial.py trunk/numpy/lib/tests/test_format.py trunk/numpy/lib/tests/test_function_base.py trunk/numpy/lib/tests/test_getlimits.py trunk/numpy/lib/tests/test_index_tricks.py trunk/numpy/lib/tests/test_io.py trunk/numpy/lib/tests/test_machar.py trunk/numpy/lib/tests/test_polynomial.py trunk/numpy/lib/tests/test_regression.py trunk/numpy/lib/tests/test_shape_base.py trunk/numpy/lib/tests/test_twodim_base.py trunk/numpy/lib/tests/test_type_check.py trunk/numpy/lib/tests/test_ufunclike.py trunk/numpy/linalg/__init__.py trunk/numpy/linalg/tests/test_linalg.py trunk/numpy/linalg/tests/test_regression.py trunk/numpy/ma/__init__.py trunk/numpy/ma/tests/test_core.py trunk/numpy/ma/tests/test_extras.py trunk/numpy/ma/tests/test_mrecords.py trunk/numpy/ma/tests/test_old_ma.py trunk/numpy/ma/tests/test_subclassing.py trunk/numpy/numarray/__init__.py trunk/numpy/oldnumeric/__init__.py trunk/numpy/oldnumeric/tests/test_oldnumeric.py trunk/numpy/random/__init__.py trunk/numpy/random/tests/test_random.py trunk/numpy/testing/__init__.py trunk/numpy/testing/nosetester.py trunk/numpy/testing/numpytest.py trunk/numpy/testing/tests/test_utils.py trunk/numpy/tests/test_ctypeslib.py Log: Restore old test framework classes. Added numpy.testing.run_module_suite to simplify "if __name__ == '__main__'" boilerplate code in test modules. Removed numpy/testing/pkgtester.py since it just consisted of an import statement after porting SciPy r4424. Allow numpy.*.test() to accept the old keyword arguments (but issue a deprecation warning when old arguments are seen). numpy.*.test() returns a test result object as before. Fixed typo in distutils doc. Modified: trunk/numpy/__init__.py =================================================================== --- trunk/numpy/__init__.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/__init__.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -95,7 +95,7 @@ pkgload.__doc__ = PackageLoader.__call__.__doc__ - from testing.pkgtester import Tester + from testing import Tester test = Tester().test bench = Tester().bench Modified: trunk/numpy/core/__init__.py =================================================================== --- trunk/numpy/core/__init__.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/core/__init__.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -31,6 +31,6 @@ __all__ += char.__all__ -from numpy.testing.pkgtester import Tester +from numpy.testing import Tester test = Tester().test bench = Tester().bench Modified: trunk/numpy/core/tests/test_defmatrix.py =================================================================== --- trunk/numpy/core/tests/test_defmatrix.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/core/tests/test_defmatrix.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -260,4 +260,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/core/tests/test_errstate.py =================================================================== --- trunk/numpy/core/tests/test_errstate.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/core/tests/test_errstate.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -54,4 +54,4 @@ """ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/core/tests/test_memmap.py =================================================================== --- trunk/numpy/core/tests/test_memmap.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/core/tests/test_memmap.py 2008-06-21 15:50:17 UTC (rev 
5305) @@ -47,4 +47,5 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() + Modified: trunk/numpy/core/tests/test_multiarray.py =================================================================== --- trunk/numpy/core/tests/test_multiarray.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/core/tests/test_multiarray.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -893,4 +893,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/core/tests/test_numeric.py =================================================================== --- trunk/numpy/core/tests/test_numeric.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/core/tests/test_numeric.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -773,4 +773,5 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() + Modified: trunk/numpy/core/tests/test_numerictypes.py =================================================================== --- trunk/numpy/core/tests/test_numerictypes.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/core/tests/test_numerictypes.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -356,4 +356,5 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() + Modified: trunk/numpy/core/tests/test_records.py =================================================================== --- trunk/numpy/core/tests/test_records.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/core/tests/test_records.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -116,4 +116,5 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() + Modified: trunk/numpy/core/tests/test_regression.py =================================================================== --- trunk/numpy/core/tests/test_regression.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/core/tests/test_regression.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -1154,4 +1154,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/core/tests/test_scalarmath.py =================================================================== --- trunk/numpy/core/tests/test_scalarmath.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/core/tests/test_scalarmath.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -113,4 +113,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/core/tests/test_ufunc.py =================================================================== --- trunk/numpy/core/tests/test_ufunc.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/core/tests/test_ufunc.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -234,4 +234,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/core/tests/test_umath.py =================================================================== --- trunk/numpy/core/tests/test_umath.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/core/tests/test_umath.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -222,4 +222,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/core/tests/test_unicode.py =================================================================== --- trunk/numpy/core/tests/test_unicode.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/core/tests/test_unicode.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -320,5 +320,5 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/distutils/__init__.py 
=================================================================== --- trunk/numpy/distutils/__init__.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/distutils/__init__.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -15,6 +15,6 @@ _INSTALLED = False if _INSTALLED: - from numpy.testing.pkgtester import Tester + from numpy.testing import Tester test = Tester().test bench = Tester().bench Modified: trunk/numpy/distutils/tests/f2py_ext/tests/test_fib2.py =================================================================== --- trunk/numpy/distutils/tests/f2py_ext/tests/test_fib2.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/distutils/tests/f2py_ext/tests/test_fib2.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -10,4 +10,4 @@ assert_array_equal(fib2.fib(6),[0,1,1,2,3,5]) if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/distutils/tests/f2py_f90_ext/tests/test_foo.py =================================================================== --- trunk/numpy/distutils/tests/f2py_f90_ext/tests/test_foo.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/distutils/tests/f2py_f90_ext/tests/test_foo.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -10,4 +10,4 @@ assert_equal(foo.foo_free.bar13(),13) if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/distutils/tests/gen_ext/tests/test_fib3.py =================================================================== --- trunk/numpy/distutils/tests/gen_ext/tests/test_fib3.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/distutils/tests/gen_ext/tests/test_fib3.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -10,4 +10,4 @@ assert_array_equal(fib3.fib(6),[0,1,1,2,3,5]) if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/distutils/tests/pyrex_ext/tests/test_primes.py =================================================================== --- trunk/numpy/distutils/tests/pyrex_ext/tests/test_primes.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/distutils/tests/pyrex_ext/tests/test_primes.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -10,4 +10,4 @@ l = primes(10) assert_equal(l, [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]) if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/distutils/tests/swig_ext/tests/test_example.py =================================================================== --- trunk/numpy/distutils/tests/swig_ext/tests/test_example.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/distutils/tests/swig_ext/tests/test_example.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -15,4 +15,4 @@ assert_equal(example.cvar.My_variable,5.0) if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/distutils/tests/swig_ext/tests/test_example2.py =================================================================== --- trunk/numpy/distutils/tests/swig_ext/tests/test_example2.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/distutils/tests/swig_ext/tests/test_example2.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -14,4 +14,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/distutils/tests/test_fcompiler_gnu.py =================================================================== --- trunk/numpy/distutils/tests/test_fcompiler_gnu.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/distutils/tests/test_fcompiler_gnu.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -49,4 +49,4 @@ if __name__ == '__main__': - 
nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/distutils/tests/test_misc_util.py =================================================================== --- trunk/numpy/distutils/tests/test_misc_util.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/distutils/tests/test_misc_util.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -58,4 +58,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/doc/DISTUTILS.txt =================================================================== --- trunk/numpy/doc/DISTUTILS.txt 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/doc/DISTUTILS.txt 2008-06-21 15:50:17 UTC (rev 5305) @@ -471,7 +471,7 @@ automatically picked up by the test machinery. A minimal example of a ``test_yyy.py`` file that implements tests for -a Scipy package module ``numpy.xxx.yyy`` containing a function +a NumPy package module ``numpy.xxx.yyy`` containing a function ``zzz()``, is shown below:: import sys @@ -493,7 +493,7 @@ #... if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_tests(file) Note that all classes that are inherited from ``TestCase`` class, are automatically picked up by the test runner. Modified: trunk/numpy/f2py/lib/parser/test_Fortran2003.py =================================================================== --- trunk/numpy/f2py/lib/parser/test_Fortran2003.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/f2py/lib/parser/test_Fortran2003.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -2098,4 +2098,4 @@ print '-----' if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/f2py/lib/parser/test_parser.py =================================================================== --- trunk/numpy/f2py/lib/parser/test_parser.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/f2py/lib/parser/test_parser.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -494,4 +494,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/f2py/lib/tests/test_derived_scalar.py =================================================================== --- trunk/numpy/f2py/lib/tests/test_derived_scalar.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/f2py/lib/tests/test_derived_scalar.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -71,4 +71,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/f2py/lib/tests/test_module_module.py =================================================================== --- trunk/numpy/f2py/lib/tests/test_module_module.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/f2py/lib/tests/test_module_module.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -58,4 +58,4 @@ foo() if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/f2py/lib/tests/test_module_scalar.py =================================================================== --- trunk/numpy/f2py/lib/tests/test_module_scalar.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/f2py/lib/tests/test_module_scalar.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -55,4 +55,4 @@ assert_equal(r,4) if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/f2py/lib/tests/test_scalar_function_in.py =================================================================== --- trunk/numpy/f2py/lib/tests/test_scalar_function_in.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/f2py/lib/tests/test_scalar_function_in.py 2008-06-21 15:50:17 UTC 
(rev 5305) @@ -530,4 +530,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/f2py/lib/tests/test_scalar_in_out.py =================================================================== --- trunk/numpy/f2py/lib/tests/test_scalar_in_out.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/f2py/lib/tests/test_scalar_in_out.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -527,4 +527,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/f2py/tests/array_from_pyobj/tests/test_array_from_pyobj.py =================================================================== --- trunk/numpy/f2py/tests/array_from_pyobj/tests/test_array_from_pyobj.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/f2py/tests/array_from_pyobj/tests/test_array_from_pyobj.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -513,4 +513,4 @@ ''' % (t,t,t) if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/fft/__init__.py =================================================================== --- trunk/numpy/fft/__init__.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/fft/__init__.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -4,6 +4,6 @@ from fftpack import * from helper import * -from numpy.testing.pkgtester import Tester +from numpy.testing import Tester test = Tester().test bench = Tester().bench Modified: trunk/numpy/fft/tests/test_fftpack.py =================================================================== --- trunk/numpy/fft/tests/test_fftpack.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/fft/tests/test_fftpack.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -23,4 +23,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/fft/tests/test_helper.py =================================================================== --- trunk/numpy/fft/tests/test_helper.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/fft/tests/test_helper.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -42,4 +42,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/lib/__init__.py =================================================================== --- trunk/numpy/lib/__init__.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/lib/__init__.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -34,7 +34,7 @@ __all__ += io.__all__ __all__ += financial.__all__ -from numpy.testing.pkgtester import Tester +from numpy.testing import Tester test = Tester().test bench = Tester().bench Modified: trunk/numpy/lib/tests/test__datasource.py =================================================================== --- trunk/numpy/lib/tests/test__datasource.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/lib/tests/test__datasource.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -304,4 +304,5 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() + Modified: trunk/numpy/lib/tests/test_arraysetops.py =================================================================== --- trunk/numpy/lib/tests/test_arraysetops.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/lib/tests/test_arraysetops.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -172,4 +172,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/lib/tests/test_financial.py =================================================================== --- trunk/numpy/lib/tests/test_financial.py 2008-06-21 11:08:37 UTC (rev 5304) +++ 
trunk/numpy/lib/tests/test_financial.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -37,4 +37,4 @@ doctest.testmod() if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/lib/tests/test_format.py =================================================================== --- trunk/numpy/lib/tests/test_format.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/lib/tests/test_format.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -509,4 +509,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/lib/tests/test_function_base.py =================================================================== --- trunk/numpy/lib/tests/test_function_base.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/lib/tests/test_function_base.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -624,4 +624,4 @@ assert y == 0 if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/lib/tests/test_getlimits.py =================================================================== --- trunk/numpy/lib/tests/test_getlimits.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/lib/tests/test_getlimits.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -52,4 +52,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/lib/tests/test_index_tricks.py =================================================================== --- trunk/numpy/lib/tests/test_index_tricks.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/lib/tests/test_index_tricks.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -57,4 +57,5 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() + Modified: trunk/numpy/lib/tests/test_io.py =================================================================== --- trunk/numpy/lib/tests/test_io.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/lib/tests/test_io.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -198,4 +198,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/lib/tests/test_machar.py =================================================================== --- trunk/numpy/lib/tests/test_machar.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/lib/tests/test_machar.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -28,4 +28,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/lib/tests/test_polynomial.py =================================================================== --- trunk/numpy/lib/tests/test_polynomial.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/lib/tests/test_polynomial.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -112,4 +112,5 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() + Modified: trunk/numpy/lib/tests/test_regression.py =================================================================== --- trunk/numpy/lib/tests/test_regression.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/lib/tests/test_regression.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -30,4 +30,5 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() + Modified: trunk/numpy/lib/tests/test_shape_base.py =================================================================== --- trunk/numpy/lib/tests/test_shape_base.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/lib/tests/test_shape_base.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -446,4 +446,4 @@ if __name__ == "__main__": - nose.run(argv=['', 
__file__]) + run_module_suite() Modified: trunk/numpy/lib/tests/test_twodim_base.py =================================================================== --- trunk/numpy/lib/tests/test_twodim_base.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/lib/tests/test_twodim_base.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -199,4 +199,5 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() + Modified: trunk/numpy/lib/tests/test_type_check.py =================================================================== --- trunk/numpy/lib/tests/test_type_check.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/lib/tests/test_type_check.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -278,4 +278,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/lib/tests/test_ufunclike.py =================================================================== --- trunk/numpy/lib/tests/test_ufunclike.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/lib/tests/test_ufunclike.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -59,10 +59,10 @@ from numpy.testing import * -class TestDocs(TestCase): - def test_doctests(self): - return rundocs() +def test(): + return rundocs() if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() + Modified: trunk/numpy/linalg/__init__.py =================================================================== --- trunk/numpy/linalg/__init__.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/linalg/__init__.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -3,6 +3,6 @@ from linalg import * -from numpy.testing.pkgtester import Tester +from numpy.testing import Tester test = Tester().test bench = Tester().test Modified: trunk/numpy/linalg/tests/test_linalg.py =================================================================== --- trunk/numpy/linalg/tests/test_linalg.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/linalg/tests/test_linalg.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -205,4 +205,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/linalg/tests/test_regression.py =================================================================== --- trunk/numpy/linalg/tests/test_regression.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/linalg/tests/test_regression.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -56,4 +56,4 @@ if __name__ == '__main__': - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/ma/__init__.py =================================================================== --- trunk/numpy/ma/__init__.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/ma/__init__.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -21,6 +21,6 @@ __all__ += core.__all__ __all__ += extras.__all__ -from numpy.testing.pkgtester import Tester +from numpy.testing import Tester test = Tester().test bench = Tester().bench Modified: trunk/numpy/ma/tests/test_core.py =================================================================== --- trunk/numpy/ma/tests/test_core.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/ma/tests/test_core.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -2147,4 +2147,4 @@ ############################################################################### #------------------------------------------------------------------------------ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/ma/tests/test_extras.py =================================================================== --- 
trunk/numpy/ma/tests/test_extras.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/ma/tests/test_extras.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -385,4 +385,4 @@ ############################################################################### #------------------------------------------------------------------------------ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/ma/tests/test_mrecords.py =================================================================== --- trunk/numpy/ma/tests/test_mrecords.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/ma/tests/test_mrecords.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -429,4 +429,4 @@ ############################################################################### #------------------------------------------------------------------------------ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/ma/tests/test_old_ma.py =================================================================== --- trunk/numpy/ma/tests/test_old_ma.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/ma/tests/test_old_ma.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -866,4 +866,4 @@ #testinplace.test_name = 'Inplace operations' if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/ma/tests/test_subclassing.py =================================================================== --- trunk/numpy/ma/tests/test_subclassing.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/ma/tests/test_subclassing.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -154,7 +154,7 @@ ################################################################################ if __name__ == '__main__': - nose.run(argv=['', __file__]) + run_module_suite() if 0: x = array(arange(5), mask=[0]+[1]*4) Modified: trunk/numpy/numarray/__init__.py =================================================================== --- trunk/numpy/numarray/__init__.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/numarray/__init__.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -25,6 +25,6 @@ del ufuncs del compat -from numpy.testing.pkgtester import Tester +from numpy.testing import Tester test = Tester().test bench = Tester().bench Modified: trunk/numpy/oldnumeric/__init__.py =================================================================== --- trunk/numpy/oldnumeric/__init__.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/oldnumeric/__init__.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -40,6 +40,6 @@ del ufuncs del misc -from numpy.testing.pkgtester import Tester +from numpy.testing import Tester test = Tester().test bench = Tester().bench Modified: trunk/numpy/oldnumeric/tests/test_oldnumeric.py =================================================================== --- trunk/numpy/oldnumeric/tests/test_oldnumeric.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/oldnumeric/tests/test_oldnumeric.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -83,4 +83,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/random/__init__.py =================================================================== --- trunk/numpy/random/__init__.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/random/__init__.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -13,6 +13,6 @@ """ return RandomState() -from numpy.testing.pkgtester import Tester +from numpy.testing import Tester test = Tester().test bench = Tester().bench Modified: trunk/numpy/random/tests/test_random.py 
=================================================================== --- trunk/numpy/random/tests/test_random.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/random/tests/test_random.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -62,4 +62,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/testing/__init__.py =================================================================== --- trunk/numpy/testing/__init__.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/testing/__init__.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -10,6 +10,8 @@ import decorators as dec from utils import * +from parametric import ParametricTestCase from numpytest import * -from pkgtester import Tester +from nosetester import NoseTester as Tester +from nosetester import run_module_suite test = Tester().test Modified: trunk/numpy/testing/nosetester.py =================================================================== --- trunk/numpy/testing/nosetester.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/testing/nosetester.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -6,6 +6,7 @@ import os import sys import re +import warnings def import_nose(): """ Import nose only when needed. @@ -27,6 +28,15 @@ return nose +def run_module_suite(file_to_run = None): + if file_to_run is None: + f = sys._getframe(1) + file_to_run = f.f_locals.get('__file__', None) + assert file_to_run is not None + + import_nose().run(argv=['',file_to_run]) + + class NoseTester(object): """ Nose test runner. @@ -39,15 +49,10 @@ >>> test = NoseTester().test - In practice, because nose may not be importable, the __init__ - files actually have: + This class is made available as numpy.testing.Tester: - >>> from scipy.testing.pkgtester import Tester + >>> from scipy.testing import Tester >>> test = Tester().test - - The pkgtester module checks for the presence of nose on the path, - returning this class if nose is present, and a null class - otherwise. 
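``run_module_suite`` as defined above takes the file to run as an optional argument; when it is omitted, the helper walks one frame up with ``sys._getframe(1)`` and reads ``__file__`` from the caller, which is why the bare call works from a test module's ``__main__`` block. Spelled out, the two forms below should behave the same (a small sketch, assuming it is called from inside a test module)::

    run_module_suite()                      # infer the calling module's __file__
    run_module_suite(file_to_run=__file__)  # or pass it explicitly
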
""" def __init__(self, package=None): @@ -69,6 +74,26 @@ package = os.path.dirname(package.__file__) self.package_path = package + # find the package name under test; this name is used to limit coverage + # reporting (if enabled) + pkg_temp = package + pkg_name = [] + while 'site-packages' in pkg_temp: + pkg_temp, p2 = os.path.split(pkg_temp) + if p2 == 'site-packages': + break + pkg_name.append(p2) + + # if package name determination failed, just default to numpy/scipy + if not pkg_name: + if 'scipy' in self.package_path: + self.package_name = 'scipy' + else: + self.package_name = 'numpy' + else: + pkg_name.reverse() + self.package_name = '.'.join(pkg_name) + def _add_doc(testtype): ''' Decorator to add docstring to functions using test labels @@ -123,22 +148,51 @@ return argv @_add_doc('test') - def test(self, label='fast', verbose=1, extra_argv=None, doctests=False, - coverage=False): + def test(self, label='fast', verbose=1, extra_argv=None, doctests=False, + coverage=False, **kwargs): ''' Run tests for module using nose %(test_header)s doctests : boolean If True, run doctests in module, default False + coverage : boolean + If True, report coverage of NumPy code, default False + (Requires the coverage module: + http://nedbatchelder.com/code/modules/coverage.html) ''' - nose = import_nose() + old_args = set(['level', 'verbosity', 'all', 'sys_argv', 'testcase_pattern']) + unexpected_args = set(kwargs.keys()) - old_args + if len(unexpected_args) > 0: + ua = ', '.join(unexpected_args) + raise TypeError("test() got unexpected arguments: %s" % ua) + + # issue a deprecation warning if any of the pre-1.2 arguments to + # test are given + if old_args.intersection(kwargs.keys()): + warnings.warn("This method's signature will change in the next release; the level, verbosity, all, sys_argv, and testcase_pattern keyword arguments will be removed. Please update your code.", + DeprecationWarning, stacklevel=2) + + # Use old arguments if given (where it makes sense) + # For the moment, level and sys_argv are ignored + + # replace verbose with verbosity + if kwargs.get('verbosity') is not None: + verbose = kwargs.get('verbosity') + # cap verbosity at 3 because nose becomes *very* verbose beyond that + verbose = min(verbose, 3) + + # if all evaluates as True, omit attribute filter and run doctests + if kwargs.get('all'): + label = '' + doctests = True + argv = self._test_argv(label, verbose, extra_argv) if doctests: argv+=['--with-doctest','--doctest-tests'] if coverage: - argv+=['--cover-package=numpy','--with-coverage', - '--cover-tests','--cover-inclusive','--cover-erase'] + argv+=['--cover-package=%s' % self.package_name, '--with-coverage', + '--cover-tests', '--cover-inclusive', '--cover-erase'] # bypass these samples under distutils argv += ['--exclude','f2py_ext'] @@ -147,8 +201,29 @@ argv += ['--exclude','pyrex_ext'] argv += ['--exclude','swig_ext'] - nose.run(argv=argv) + nose = import_nose() + # Because nose currently discards the test result object, but we need to + # return it to the user, override TestProgram.runTests to retain the result + class NumpyTestProgram(nose.core.TestProgram): + def runTests(self): + """Run Tests. Returns true on success, false on failure, and sets + self.success to the same value. 
+ """ + if self.testRunner is None: + self.testRunner = nose.core.TextTestRunner(stream=self.config.stream, + verbosity=self.config.verbosity, + config=self.config) + plug_runner = self.config.plugins.prepareTestRunner(self.testRunner) + if plug_runner is not None: + self.testRunner = plug_runner + self.result = self.testRunner.run(self.test) + self.success = self.result.wasSuccessful() + return self.success + + t = NumpyTestProgram(argv=argv, exit=False) + return t.result + @_add_doc('benchmark') def bench(self, label='fast', verbose=1, extra_argv=None): ''' Run benchmarks for module using nose @@ -157,4 +232,4 @@ nose = import_nose() argv = self._test_argv(label, verbose, extra_argv) argv += ['--match', r'(?:^|[\\b_\\.%s-])[Bb]ench' % os.sep] - nose.run(argv=argv) + return nose.run(argv=argv) Modified: trunk/numpy/testing/numpytest.py =================================================================== --- trunk/numpy/testing/numpytest.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/testing/numpytest.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -10,7 +10,7 @@ import warnings __all__ = ['set_package_path', 'set_local_path', 'restore_path', - 'IgnoreException', 'importall',] + 'IgnoreException', 'NumpyTestCase', 'NumpyTest', 'importall',] DEBUG=0 from numpy.testing.utils import jiffies @@ -110,7 +110,94 @@ self.stream.flush() +class NumpyTestCase (unittest.TestCase): + def __init__(self, *args, **kwds): + warnings.warn("NumpyTestCase will be removed in the next release; please update your code to use nose or unittest", + DeprecationWarning, stacklevel=2) + unittest.TestCase.__init__(self, *args, **kwds) + def measure(self,code_str,times=1): + """ Return elapsed time for executing code_str in the + namespace of the caller for given times. + """ + frame = get_frame(1) + locs,globs = frame.f_locals,frame.f_globals + code = compile(code_str, + 'NumpyTestCase runner for '+self.__class__.__name__, + 'exec') + i = 0 + elapsed = jiffies() + while i>sys.stderr,yellow_text('Warning: %s' % (message)) + sys.stderr.flush() + def info(self, message): + print>>sys.stdout, message + sys.stdout.flush() + + def rundocs(self, filename=None): + """ Run doc string tests found in filename. + """ + import doctest + if filename is None: + f = get_frame(1) + filename = f.f_globals['__file__'] + name = os.path.splitext(os.path.basename(filename))[0] + path = [os.path.dirname(filename)] + file, pathname, description = imp.find_module(name, path) + try: + m = imp.load_module(name, file, pathname, description) + finally: + file.close() + if sys.version[:3]<'2.4': + doctest.testmod(m, verbose=False) + else: + tests = doctest.DocTestFinder().find(m) + runner = doctest.DocTestRunner(verbose=False) + for test in tests: + runner.run(test) + return + + def _get_all_method_names(cls): names = dir(cls) if sys.version[:3]<='2.1': @@ -122,7 +209,456 @@ # for debug build--check for memory leaks during the test. 
+class _NumPyTextTestResult(unittest._TextTestResult): + def startTest(self, test): + unittest._TextTestResult.startTest(self, test) + if self.showAll: + N = len(sys.getobjects(0)) + self._totnumobj = N + self._totrefcnt = sys.gettotalrefcount() + return + def stopTest(self, test): + if self.showAll: + N = len(sys.getobjects(0)) + self.stream.write("objects: %d ===> %d; " % (self._totnumobj, N)) + self.stream.write("refcnts: %d ===> %d\n" % (self._totrefcnt, + sys.gettotalrefcount())) + return + +class NumPyTextTestRunner(unittest.TextTestRunner): + def _makeResult(self): + return _NumPyTextTestResult(self.stream, self.descriptions, self.verbosity) + + +class NumpyTest: + """ Numpy tests site manager. + + Usage: NumpyTest().test(level=1,verbosity=1) + + is package name or its module object. + + Package is supposed to contain a directory tests/ with test_*.py + files where * refers to the names of submodules. See .rename() + method to redefine name mapping between test_*.py files and names of + submodules. Pattern test_*.py can be overwritten by redefining + .get_testfile() method. + + test_*.py files are supposed to define a classes, derived from + NumpyTestCase or unittest.TestCase, with methods having names + starting with test or bench or check. The names of TestCase classes + must have a prefix test. This can be overwritten by redefining + .check_testcase_name() method. + + And that is it! No need to implement test or test_suite functions + in each .py file. + + Old-style test_suite(level=1) hooks are also supported. + """ + _check_testcase_name = re.compile(r'test.*|Test.*').match + def check_testcase_name(self, name): + """ Return True if name matches TestCase class. + """ + return not not self._check_testcase_name(name) + + testfile_patterns = ['test_%(modulename)s.py'] + def get_testfile(self, module, verbosity = 0): + """ Return path to module test file. + """ + mstr = self._module_str + short_module_name = self._get_short_module_name(module) + d = os.path.split(module.__file__)[0] + test_dir = os.path.join(d,'tests') + local_test_dir = os.path.join(os.getcwd(),'tests') + if os.path.basename(os.path.dirname(local_test_dir)) \ + == os.path.basename(os.path.dirname(test_dir)): + test_dir = local_test_dir + for pat in self.testfile_patterns: + fn = os.path.join(test_dir, pat % {'modulename':short_module_name}) + if os.path.isfile(fn): + return fn + if verbosity>1: + self.warn('No test file found in %s for module %s' \ + % (test_dir, mstr(module))) + return + + def __init__(self, package=None): + warnings.warn("NumpyTest will be removed in the next release; please update your code to use nose or unittest", + DeprecationWarning, stacklevel=2) + if package is None: + from numpy.distutils.misc_util import get_frame + f = get_frame(1) + package = f.f_locals.get('__name__',f.f_globals.get('__name__',None)) + assert package is not None + self.package = package + self._rename_map = {} + + def rename(self, **kws): + """Apply renaming submodule test file test_.py to + test_.py. + + Usage: self.rename(name='newname') before calling the + self.test() method. + + If 'newname' is None, then no tests will be executed for a given + module. 
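``NumpyTestCase`` and ``NumpyTest`` are kept here only as shims; both emit a ``DeprecationWarning`` when instantiated. For code written against the old level/verbosity interface, the rough correspondence to the nose-based runner is sketched below (the sub-package is an example, and ``level`` has no direct equivalent -- the new ``test`` simply ignores it)::

    import numpy.lib

    # old style, still importable for one more release but warns:
    #   from numpy.testing import NumpyTest
    #   NumpyTest(numpy.lib).test(level=1, verbosity=2)

    # nose-based equivalent:
    numpy.lib.test(label='fast', verbose=2)
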
+ """ + for k,v in kws.items(): + self._rename_map[k] = v + return + + def _module_str(self, module): + filename = module.__file__[-30:] + if filename!=module.__file__: + filename = '...'+filename + return '' % (module.__name__, filename) + + def _get_method_names(self,clsobj,level): + names = [] + for mthname in _get_all_method_names(clsobj): + if mthname[:5] not in ['bench','check'] \ + and mthname[:4] not in ['test']: + continue + mth = getattr(clsobj, mthname) + if type(mth) is not types.MethodType: + continue + d = mth.im_func.func_defaults + if d is not None: + mthlevel = d[0] + else: + mthlevel = 1 + if level>=mthlevel: + if mthname not in names: + names.append(mthname) + for base in clsobj.__bases__: + for n in self._get_method_names(base,level): + if n not in names: + names.append(n) + return names + + def _get_short_module_name(self, module): + d,f = os.path.split(module.__file__) + short_module_name = os.path.splitext(os.path.basename(f))[0] + if short_module_name=='__init__': + short_module_name = module.__name__.split('.')[-1] + short_module_name = self._rename_map.get(short_module_name,short_module_name) + return short_module_name + + def _get_module_tests(self, module, level, verbosity): + mstr = self._module_str + + short_module_name = self._get_short_module_name(module) + if short_module_name is None: + return [] + + test_file = self.get_testfile(module, verbosity) + + if test_file is None: + return [] + + if not os.path.isfile(test_file): + if short_module_name[:5]=='info_' \ + and short_module_name[5:]==module.__name__.split('.')[-2]: + return [] + if short_module_name in ['__cvs_version__','__svn_version__']: + return [] + if short_module_name[-8:]=='_version' \ + and short_module_name[:-8]==module.__name__.split('.')[-2]: + return [] + if verbosity>1: + self.warn(test_file) + self.warn(' !! 
No test file %r found for %s' \ + % (os.path.basename(test_file), mstr(module))) + return [] + + if test_file in self.test_files: + return [] + + parent_module_name = '.'.join(module.__name__.split('.')[:-1]) + test_module_name,ext = os.path.splitext(os.path.basename(test_file)) + test_dir_module = parent_module_name+'.tests' + test_module_name = test_dir_module+'.'+test_module_name + + if test_dir_module not in sys.modules: + sys.modules[test_dir_module] = imp.new_module(test_dir_module) + + old_sys_path = sys.path[:] + try: + f = open(test_file,'r') + test_module = imp.load_module(test_module_name, f, + test_file, ('.py', 'r', 1)) + f.close() + except: + sys.path[:] = old_sys_path + self.warn('FAILURE importing tests for %s' % (mstr(module))) + output_exception(sys.stderr) + return [] + sys.path[:] = old_sys_path + + self.test_files.append(test_file) + + return self._get_suite_list(test_module, level, module.__name__) + + def _get_suite_list(self, test_module, level, module_name='__main__', + verbosity=1): + suite_list = [] + if hasattr(test_module, 'test_suite'): + suite_list.extend(test_module.test_suite(level)._tests) + for name in dir(test_module): + obj = getattr(test_module, name) + if type(obj) is not type(unittest.TestCase) \ + or not issubclass(obj, unittest.TestCase) \ + or not self.check_testcase_name(obj.__name__): + continue + for mthname in self._get_method_names(obj,level): + suite = obj(mthname) + if getattr(suite,'isrunnable',lambda mthname:1)(mthname): + suite_list.append(suite) + matched_suite_list = [suite for suite in suite_list \ + if self.testcase_match(suite.id()\ + .replace('__main__.',''))] + if verbosity>=0: + self.info(' Found %s/%s tests for %s' \ + % (len(matched_suite_list), len(suite_list), module_name)) + return matched_suite_list + + def _test_suite_from_modules(self, this_package, level, verbosity): + package_name = this_package.__name__ + modules = [] + for name, module in sys.modules.items(): + if not name.startswith(package_name) or module is None: + continue + if not hasattr(module,'__file__'): + continue + if os.path.basename(os.path.dirname(module.__file__))=='tests': + continue + modules.append((name, module)) + + modules.sort() + modules = [m[1] for m in modules] + + self.test_files = [] + suites = [] + for module in modules: + suites.extend(self._get_module_tests(module, abs(level), verbosity)) + + suites.extend(self._get_suite_list(sys.modules[package_name], + abs(level), verbosity=verbosity)) + return unittest.TestSuite(suites) + + def _test_suite_from_all_tests(self, this_package, level, verbosity): + importall(this_package) + package_name = this_package.__name__ + + # Find all tests/ directories under the package + test_dirs_names = {} + for name, module in sys.modules.items(): + if not name.startswith(package_name) or module is None: + continue + if not hasattr(module, '__file__'): + continue + d = os.path.dirname(module.__file__) + if os.path.basename(d)=='tests': + continue + d = os.path.join(d, 'tests') + if not os.path.isdir(d): + continue + if d in test_dirs_names: + continue + test_dir_module = '.'.join(name.split('.')[:-1]+['tests']) + test_dirs_names[d] = test_dir_module + + test_dirs = test_dirs_names.keys() + test_dirs.sort() + + # For each file in each tests/ directory with a test case in it, + # import the file, and add the test cases to our list + suite_list = [] + testcase_match = re.compile(r'\s*class\s+\w+\s*\(.*TestCase').match + for test_dir in test_dirs: + test_dir_module = test_dirs_names[test_dir] + + if 
test_dir_module not in sys.modules: + sys.modules[test_dir_module] = imp.new_module(test_dir_module) + + for fn in os.listdir(test_dir): + base, ext = os.path.splitext(fn) + if ext != '.py': + continue + f = os.path.join(test_dir, fn) + + # check that file contains TestCase class definitions: + fid = open(f, 'r') + skip = True + for line in fid: + if testcase_match(line): + skip = False + break + fid.close() + if skip: + continue + + # import the test file + n = test_dir_module + '.' + base + # in case test files import local modules + sys.path.insert(0, test_dir) + fo = None + try: + try: + fo = open(f) + test_module = imp.load_module(n, fo, f, + ('.py', 'U', 1)) + except Exception, msg: + print 'Failed importing %s: %s' % (f,msg) + continue + finally: + if fo: + fo.close() + del sys.path[0] + + suites = self._get_suite_list(test_module, level, + module_name=n, + verbosity=verbosity) + suite_list.extend(suites) + + all_tests = unittest.TestSuite(suite_list) + return all_tests + + def test(self, level=1, verbosity=1, all=True, sys_argv=[], + testcase_pattern='.*'): + """Run Numpy module test suite with level and verbosity. + + level: + None --- do nothing, return None + < 0 --- scan for tests of level=abs(level), + don't run them, return TestSuite-list + > 0 --- scan for tests of level, run them, + return TestRunner + > 10 --- run all tests (same as specifying all=True). + (backward compatibility). + + verbosity: + >= 0 --- show information messages + > 1 --- show warnings on missing tests + + all: + True --- run all test files (like self.testall()) + False (default) --- only run test files associated with a module + + sys_argv --- replacement of sys.argv[1:] during running + tests. + + testcase_pattern --- run only tests that match given pattern. + + It is assumed (when all=False) that package tests suite follows + the following convention: for each package module, there exists + file /tests/test_.py that defines + TestCase classes (with names having prefix 'test_') with methods + (with names having prefixes 'check_' or 'bench_'); each of these + methods are called when running unit tests. + """ + if level is None: # Do nothing. + return + + if isinstance(self.package, str): + exec 'import %s as this_package' % (self.package) + else: + this_package = self.package + + self.testcase_match = re.compile(testcase_pattern).match + + if all: + all_tests = self._test_suite_from_all_tests(this_package, + level, verbosity) + else: + all_tests = self._test_suite_from_modules(this_package, + level, verbosity) + + if level < 0: + return all_tests + + runner = unittest.TextTestRunner(verbosity=verbosity) + old_sys_argv = sys.argv[1:] + sys.argv[1:] = sys_argv + # Use the builtin displayhook. If the tests are being run + # under IPython (for instance), any doctest test suites will + # fail otherwise. + old_displayhook = sys.displayhook + sys.displayhook = sys.__displayhook__ + try: + r = runner.run(all_tests) + finally: + sys.displayhook = old_displayhook + sys.argv[1:] = old_sys_argv + return r + + def testall(self, level=1,verbosity=1): + """ Run Numpy module test suite with level and verbosity. + + level: + None --- do nothing, return None + < 0 --- scan for tests of level=abs(level), + don't run them, return TestSuite-list + > 0 --- scan for tests of level, run them, + return TestRunner + + verbosity: + >= 0 --- show information messages + > 1 --- show warnings on missing tests + + Different from .test(..) 
method, this method looks for + TestCase classes from all files in /tests/ + directory and no assumptions are made for naming the + TestCase classes or their methods. + """ + return self.test(level=level, verbosity=verbosity, all=True) + + def run(self): + """ Run Numpy module test suite with level and verbosity + taken from sys.argv. Requires optparse module. + """ + try: + from optparse import OptionParser + except ImportError: + self.warn('Failed to import optparse module, ignoring.') + return self.test() + usage = r'usage: %prog [-v ] [-l ]'\ + r' [-s ""]'\ + r' [-t ""]' + parser = OptionParser(usage) + parser.add_option("-v", "--verbosity", + action="store", + dest="verbosity", + default=1, + type='int') + parser.add_option("-l", "--level", + action="store", + dest="level", + default=1, + type='int') + parser.add_option("-s", "--sys-argv", + action="store", + dest="sys_argv", + default='', + type='string') + parser.add_option("-t", "--testcase-pattern", + action="store", + dest="testcase_pattern", + default=r'.*', + type='string') + (options, args) = parser.parse_args() + return self.test(options.level,options.verbosity, + sys_argv=shlex.split(options.sys_argv or ''), + testcase_pattern=options.testcase_pattern) + + def warn(self, message): + from numpy.distutils.misc_util import yellow_text + print>>sys.stderr,yellow_text('Warning: %s' % (message)) + sys.stderr.flush() + def info(self, message): + print>>sys.stdout, message + sys.stdout.flush() + def importall(package): """ Try recursively to import all subpackages under package. Deleted: trunk/numpy/testing/pkgtester.py =================================================================== --- trunk/numpy/testing/pkgtester.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/testing/pkgtester.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -1,14 +0,0 @@ -''' Define test function for scipy package - -Module tests for presence of useful version of nose. If present -returns NoseTester, otherwise returns a placeholder test routine -reporting lack of nose and inability to run tests. 
Typical use is in -module __init__: - -from scipy.testing.pkgtester import Tester -test = Tester().test - -See nosetester module for test implementation - -''' -from numpy.testing.nosetester import NoseTester as Tester Modified: trunk/numpy/testing/tests/test_utils.py =================================================================== --- trunk/numpy/testing/tests/test_utils.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/testing/tests/test_utils.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -163,4 +163,4 @@ if __name__ == '__main__': - nose.run(argv=['', __file__]) + run_module_suite() Modified: trunk/numpy/tests/test_ctypeslib.py =================================================================== --- trunk/numpy/tests/test_ctypeslib.py 2008-06-21 11:08:37 UTC (rev 5304) +++ trunk/numpy/tests/test_ctypeslib.py 2008-06-21 15:50:17 UTC (rev 5305) @@ -85,4 +85,4 @@ if __name__ == "__main__": - nose.run(argv=['', __file__]) + run_module_suite() From numpy-svn at scipy.org Sat Jun 21 12:20:57 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 21 Jun 2008 11:20:57 -0500 (CDT) Subject: [Numpy-svn] r5306 - trunk/numpy/testing Message-ID: <20080621162057.21DB939C79E@scipy.org> Author: alan.mcintyre Date: 2008-06-21 11:20:54 -0500 (Sat, 21 Jun 2008) New Revision: 5306 Added: trunk/numpy/testing/parametric.py Log: Restored parametric.py Added: trunk/numpy/testing/parametric.py =================================================================== --- trunk/numpy/testing/parametric.py 2008-06-21 15:50:17 UTC (rev 5305) +++ trunk/numpy/testing/parametric.py 2008-06-21 16:20:54 UTC (rev 5306) @@ -0,0 +1,305 @@ +"""Support for parametric tests in unittest. + +:Author: Fernando Perez + +Purpose +======= + +Briefly, the main class in this module allows you to easily and cleanly +(without the gross name-mangling hacks that are normally needed) to write +unittest TestCase classes that have parametrized tests. That is, tests which +consist of multiple sub-tests that scan for example a parameter range, but +where you want each sub-test to: + +* count as a separate test in the statistics. + +* be run even if others in the group error out or fail. + + +The class offers a simple name-based convention to create such tests (see +simple example at the end), in one of two ways: + +* Each sub-test in a group can be run fully independently, with the + setUp/tearDown methods being called each time. + +* The whole group can be run with setUp/tearDown being called only once for the + group. This lets you conveniently reuse state that may be very expensive to + compute for multiple tests. Be careful not to corrupt it!!! + + +Caveats +======= + +This code relies on implementation details of the unittest module (some key +methods are heavily modified versions of those, after copying them in). So it +may well break either if you make sophisticated use of the unittest APIs, or if +unittest itself changes in the future. I have only tested this with Python +2.5. + +""" +__docformat__ = "restructuredtext en" + +import unittest + +class _ParametricTestCase(unittest.TestCase): + """TestCase subclass with support for parametric tests. + + Subclasses of this class can implement test methods that return a list of + tests and arguments to call those with, to do parametric testing (often + also called 'data driven' testing.""" + + #: Prefix for tests with independent state. These methods will be run with + #: a separate setUp/tearDown call for each test in the group. 
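The name-based convention the docstring describes (factory methods returning lists of ``(callable, args...)`` tuples, with one prefix for independent sub-tests and another for shared-state groups) is easiest to see in a tiny subclass. The sketch below is illustrative only and simply mirrors, in miniature, the ``ExampleTestCase`` shipped at the bottom of this module::

    from numpy.testing import ParametricTestCase

    class TestSquares(ParametricTestCase):
        def tst_square(self, i):
            # helper deliberately not named test*, so it only runs via the factory
            assert i * i >= 0

        def testip_squares(self):
            # independent sub-tests: setUp/tearDown run once per returned tuple
            return [(self.tst_square, i) for i in range(5)]
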
+ _indepParTestPrefix = 'testip' + + #: Prefix for tests with shared state. These methods will be run with + #: a single setUp/tearDown call for the whole group. This is useful when + #: writing a group of tests for which the setup is expensive and one wants + #: to actually share that state. Use with care (especially be careful not + #: to mutate the state you are using, which will alter later tests). + _shareParTestPrefix = 'testsp' + + def exec_test(self,test,args,result): + """Execute a single test. Returns a success boolean""" + + ok = False + try: + test(*args) + ok = True + except self.failureException: + result.addFailure(self, self._exc_info()) + except KeyboardInterrupt: + raise + except: + result.addError(self, self._exc_info()) + + return ok + + def set_testMethodDoc(self,doc): + self._testMethodDoc = doc + self._TestCase__testMethodDoc = doc + + def get_testMethodDoc(self): + return self._testMethodDoc + + testMethodDoc = property(fset=set_testMethodDoc, fget=get_testMethodDoc) + + def get_testMethodName(self): + try: + return getattr(self,"_testMethodName") + except: + return getattr(self,"_TestCase__testMethodName") + + testMethodName = property(fget=get_testMethodName) + + def run_test(self, testInfo,result): + """Run one test with arguments""" + + test,args = testInfo[0],testInfo[1:] + + # Reset the doc attribute to be the docstring of this particular test, + # so that in error messages it prints the actual test's docstring and + # not that of the test factory. + self.testMethodDoc = test.__doc__ + result.startTest(self) + try: + try: + self.setUp() + except KeyboardInterrupt: + raise + except: + result.addError(self, self._exc_info()) + return + + ok = self.exec_test(test,args,result) + + try: + self.tearDown() + except KeyboardInterrupt: + raise + except: + result.addError(self, self._exc_info()) + ok = False + if ok: result.addSuccess(self) + finally: + result.stopTest(self) + + def run_tests(self, tests,result): + """Run many tests with a common setUp/tearDown. + + The entire set of tests is run with a single setUp/tearDown call.""" + + try: + self.setUp() + except KeyboardInterrupt: + raise + except: + result.testsRun += 1 + result.addError(self, self._exc_info()) + return + + saved_doc = self.testMethodDoc + + try: + # Run all the tests specified + for testInfo in tests: + test,args = testInfo[0],testInfo[1:] + + # Set the doc argument for this test. Note that even if we do + # this, the fail/error tracebacks still print the docstring for + # the parent factory, because they only generate the message at + # the end of the run, AFTER we've restored it. There is no way + # to tell the unittest system (without overriding a lot of + # stuff) to extract this information right away, the logic is + # hardcoded to pull it later, since unittest assumes it doesn't + # change. + self.testMethodDoc = test.__doc__ + result.startTest(self) + ok = self.exec_test(test,args,result) + if ok: result.addSuccess(self) + + finally: + # Restore docstring info and run tearDown once only. 
+ self.testMethodDoc = saved_doc + try: + self.tearDown() + except KeyboardInterrupt: + raise + except: + result.addError(self, self._exc_info()) + + def run(self, result=None): + """Test runner.""" + + #print + #print '*** run for method:',self._testMethodName # dbg + #print '*** doc:',self._testMethodDoc # dbg + + if result is None: result = self.defaultTestResult() + + # Independent tests: each gets its own setup/teardown + if self.testMethodName.startswith(self._indepParTestPrefix): + for t in getattr(self,self.testMethodName)(): + self.run_test(t,result) + # Shared-state test: single setup/teardown for all + elif self.testMethodName.startswith(self._shareParTestPrefix): + tests = getattr(self,self.testMethodName,'runTest')() + self.run_tests(tests,result) + # Normal unittest Test methods + else: + unittest.TestCase.run(self,result) + +# The underscore was added to the class name to keep nose from trying +# to run the test class (nose ignores class names that begin with an +# underscore by default). +ParametricTestCase = _ParametricTestCase + +############################################################################# +# Quick and dirty interactive example/test +if __name__ == '__main__': + + class ExampleTestCase(ParametricTestCase): + + #------------------------------------------------------------------- + # An instrumented setUp method so we can see when it gets called and + # how many times per instance + counter = 0 + + def setUp(self): + self.counter += 1 + print 'setUp count: %2s for: %s' % (self.counter, + self.testMethodDoc) + + #------------------------------------------------------------------- + # A standard test method, just like in the unittest docs. + def test_foo(self): + """Normal test for feature foo.""" + pass + + #------------------------------------------------------------------- + # Testing methods that need parameters. These can NOT be named test*, + # since they would be picked up by unittest and called without + # arguments. Instead, call them anything else (I use tst*) and then + # load them via the factories below. + def tstX(self,i): + "Test feature X with parameters." + print 'tstX, i=',i + if i==1 or i==3: + # Test fails + self.fail('i is bad, bad: %s' % i) + + def tstY(self,i): + "Test feature Y with parameters." + print 'tstY, i=',i + if i==1: + # Force an error + 1/0 + + def tstXX(self,i,j): + "Test feature XX with parameters." + print 'tstXX, i=',i,'j=',j + if i==1: + # Test fails + self.fail('i is bad, bad: %s' % i) + + def tstYY(self,i): + "Test feature YY with parameters." + print 'tstYY, i=',i + if i==2: + # Force an error + 1/0 + + def tstZZ(self): + """Test feature ZZ without parameters, needs multiple runs. + + This could be a random test that you want to run multiple times.""" + pass + + #------------------------------------------------------------------- + # Parametric test factories that create the test groups to call the + # above tst* methods with their required arguments. + def testip(self): + """Independent parametric test factory. + + A separate setUp() call is made for each test returned by this + method. + + You must return an iterable (list or generator is fine) containing + tuples with the actual method to be called as the first argument, + and the arguments for that call later.""" + return [(self.tstX,i) for i in range(5)] + + def testip2(self): + """Another independent parametric test factory""" + return [(self.tstY,i) for i in range(5)] + + def testip3(self): + """Test factory combining different subtests. 
+ + This one shows how to assemble calls to different tests.""" + return [(self.tstX,3),(self.tstX,9),(self.tstXX,4,10), + (self.tstZZ,),(self.tstZZ,)] + + def testsp(self): + """Shared parametric test factory + + A single setUp() call is made for all the tests returned by this + method. + """ + return [(self.tstXX,i,i+1) for i in range(5)] + + def testsp2(self): + """Another shared parametric test factory""" + return [(self.tstYY,i) for i in range(5)] + + def testsp3(self): + """Another shared parametric test factory. + + This one simply calls the same test multiple times, without any + arguments. Note that you must still return tuples, even if there + are no arguments.""" + return [(self.tstZZ,) for i in range(10)] + + + # This test class runs normally under unittest's default runner + unittest.main() From numpy-svn at scipy.org Sat Jun 21 21:15:08 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 21 Jun 2008 20:15:08 -0500 (CDT) Subject: [Numpy-svn] r5307 - trunk/numpy/core/tests Message-ID: <20080622011508.D4E3AC7C042@scipy.org> Author: charris Date: 2008-06-21 20:15:05 -0500 (Sat, 21 Jun 2008) New Revision: 5307 Modified: trunk/numpy/core/tests/test_ufunc.py Log: Small cleanup. Modified: trunk/numpy/core/tests/test_ufunc.py =================================================================== --- trunk/numpy/core/tests/test_ufunc.py 2008-06-21 16:20:54 UTC (rev 5306) +++ trunk/numpy/core/tests/test_ufunc.py 2008-06-22 01:15:05 UTC (rev 5307) @@ -6,8 +6,8 @@ L = 6 x = np.arange(L) idx = np.array(zip(np.arange(L-2), np.arange(L-2)+2)).ravel() - assert_array_equal(np.add.reduceat(x,idx)[::2], - [1,3,5,7]) + assert_array_equal(np.add.reduceat(x,idx)[::2], [1,3,5,7]) + def test_generic_loops(self) : """Test generic loops. @@ -152,75 +152,72 @@ The list of ufuncs comes from generate_umath.py and is as follows: - ===== ============= =============== ======================== - done function types notes - ===== ============= =============== ======================== - n add bool + nums + O boolean + is || - n subtract bool + nums + O boolean - is ^ - n multiply bool + nums + O boolean * is & - n divide nums + O - n floor_divide nums + O - n true_divide nums + O bBhH -> f, iIlLqQ -> d - n conjugate nums + O - n fmod nums + M - n square nums + O - n reciprocal nums + O - n ones_like nums + O - n power nums + O - n absolute nums + O complex -> real - n negative nums + O - n sign nums + O -> int - n greater bool + nums + O -> bool - n greater_equal bool + nums + O -> bool - n less bool + nums + O -> bool - n less_equal bool + nums + O -> bool - n equal bool + nums + O -> bool - n not_equal bool + nums + O -> bool - n logical_and bool + nums + M -> bool - n logical_not bool + nums + M -> bool - n logical_or bool + nums + M -> bool - n logical_xor bool + nums + M -> bool - n maximum bool + nums + O - n minimum bool + nums + O - n bitwise_and bool + ints + O flts raise an error - n bitwise_or bool + ints + O flts raise an error - n bitwise_xor bool + ints + O flts raise an error - n invert bool + ints + O flts raise an error - n left_shift ints + O flts raise an error - n right_shift ints + O flts raise an error - n degrees real + M cmplx raise an error - n radians real + M cmplx raise an error - n arccos flts + M - n arccosh flts + M - n arcsin flts + M - n arcsinh flts + M - n arctan flts + M - n arctanh flts + M - n cos flts + M - n sin flts + M - n tan flts + M - n cosh flts + M - n sinh flts + M - n tanh flts + M - n exp flts + M - n expm1 flts + M - n log flts + M - n log10 flts 
+ M - n log1p flts + M - n sqrt flts + M real x < 0 raises error - n ceil real + M - n floor real + M - n fabs real + M - n rint flts + M - n arctan2 real + M - n remainder ints + real + O - n hypot real + M - n isnan flts -> bool - n isinf flts -> bool - n isfinite flts -> bool - n signbit real -> bool - n modf real -> (frac, int) - ===== ============= =============== ======================== + ===== ==== ============= =============== ======================== + done args function types notes + ===== ==== ============= =============== ======================== + n 1 conjugate nums + O + n 1 absolute nums + O complex -> real + n 1 negative nums + O + n 1 sign nums + O -> int + n 1 invert bool + ints + O flts raise an error + n 1 degrees real + M cmplx raise an error + n 1 radians real + M cmplx raise an error + n 1 arccos flts + M + n 1 arccosh flts + M + n 1 arcsin flts + M + n 1 arcsinh flts + M + n 1 arctan flts + M + n 1 arctanh flts + M + n 1 cos flts + M + n 1 sin flts + M + n 1 tan flts + M + n 1 cosh flts + M + n 1 sinh flts + M + n 1 tanh flts + M + n 1 exp flts + M + n 1 expm1 flts + M + n 1 log flts + M + n 1 log10 flts + M + n 1 log1p flts + M + n 1 sqrt flts + M real x < 0 raises error + n 1 ceil real + M + n 1 floor real + M + n 1 fabs real + M + n 1 rint flts + M + n 1 isnan flts -> bool + n 1 isinf flts -> bool + n 1 isfinite flts -> bool + n 1 signbit real -> bool + n 1 modf real -> (frac, int) + n 1 logical_not bool + nums + M -> bool + n 2 left_shift ints + O flts raise an error + n 2 right_shift ints + O flts raise an error + n 2 add bool + nums + O boolean + is || + n 2 subtract bool + nums + O boolean - is ^ + n 2 multiply bool + nums + O boolean * is & + n 2 divide nums + O + n 2 floor_divide nums + O + n 2 true_divide nums + O bBhH -> f, iIlLqQ -> d + n 2 fmod nums + M + n 2 power nums + O + n 2 greater bool + nums + O -> bool + n 2 greater_equal bool + nums + O -> bool + n 2 less bool + nums + O -> bool + n 2 less_equal bool + nums + O -> bool + n 2 equal bool + nums + O -> bool + n 2 not_equal bool + nums + O -> bool + n 2 logical_and bool + nums + M -> bool + n 2 logical_or bool + nums + M -> bool + n 2 logical_xor bool + nums + M -> bool + n 2 maximum bool + nums + O + n 2 minimum bool + nums + O + n 2 bitwise_and bool + ints + O flts raise an error + n 2 bitwise_or bool + ints + O flts raise an error + n 2 bitwise_xor bool + ints + O flts raise an error + n 2 arctan2 real + M + n 2 remainder ints + real + O + n 2 hypot real + M + ===== ==== ============= =============== ======================== Types other than those listed will be accepted, but they are cast to the smallest compatible type for which the function is defined. The From numpy-svn at scipy.org Sat Jun 21 21:23:33 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 21 Jun 2008 20:23:33 -0500 (CDT) Subject: [Numpy-svn] r5308 - trunk/numpy/core/src Message-ID: <20080622012333.8DE6239C61F@scipy.org> Author: charris Date: 2008-06-21 20:23:31 -0500 (Sat, 21 Jun 2008) New Revision: 5308 Modified: trunk/numpy/core/src/ufuncobject.c Log: Small code cleanup. Added commented out alternate TypeError return in ufunc_generic_call. 
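The alternate return mentioned in the log matters because of how Python's binary-operator protocol treats the two outcomes: raising ``TypeError`` aborts the expression immediately, while returning ``NotImplemented`` tells the interpreter to try the other operand's reflected method before giving up. The pure-Python sketch below only illustrates that generic protocol, it is not numpy code; in the diff that follows, ``ufunc_generic_call`` keeps returning ``Py_NotImplemented`` and the ``TypeError`` variant is left commented out::

    class Meters(object):
        def __init__(self, value):
            self.value = value
        def __add__(self, other):
            if not isinstance(other, Meters):
                return NotImplemented      # let the other operand have a try
            return Meters(self.value + other.value)
        def __radd__(self, other):
            return self.__add__(other)

    print (Meters(1) + Meters(2)).value    # prints 3
    # Meters(1) + "x" raises TypeError only after both sides return NotImplemented
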
Modified: trunk/numpy/core/src/ufuncobject.c =================================================================== --- trunk/numpy/core/src/ufuncobject.c 2008-06-22 01:15:05 UTC (rev 5307) +++ trunk/numpy/core/src/ufuncobject.c 2008-06-22 01:23:31 UTC (rev 5308) @@ -1795,7 +1795,7 @@ PyErr_SetString(PyExc_ValueError, "function not supported"); return NULL; } - if ((loop = _pya_malloc(sizeof(PyUFuncLoopObject)))==NULL) { + if ((loop = _pya_malloc(sizeof(PyUFuncLoopObject))) == NULL) { PyErr_NoMemory(); return loop; } @@ -1814,30 +1814,30 @@ name = self->name ? self->name : ""; - /* Extract sig= keyword and - extobj= keyword if present - Raise an error if anything else present in the keyword dictionary - */ + /* + * Extract sig= keyword and extobj= keyword if present. + * Raise an error if anything else is present in the + * keyword dictionary + */ if (kwds != NULL) { PyObject *key, *value; Py_ssize_t pos=0; while (PyDict_Next(kwds, &pos, &key, &value)) { - if (!PyString_Check(key)) { - PyErr_SetString(PyExc_TypeError, - "invalid keyword"); + char *keystring = PyString_AsString(key); + if (keystring == NULL) { + PyErr_Clear(); + PyErr_SetString(PyExc_TypeError, "invalid keyword"); goto fail; } - if (strncmp(PyString_AS_STRING(key),"extobj",6) == 0) { + if (strncmp(keystring,"extobj",6) == 0) { extobj = value; } - else if (strncmp(PyString_AS_STRING(key),"sig",5)==0) { + else if (strncmp(keystring,"sig",3) == 0) { typetup = value; } else { - PyErr_Format(PyExc_TypeError, - "'%s' is an invalid keyword " \ - "to %s", - PyString_AS_STRING(key), name); + char *format = "'%s' is an invalid keyword to %s"; + PyErr_Format(PyExc_TypeError,format,keystring, name); goto fail; } } @@ -3321,6 +3321,10 @@ if (errval == -1) return NULL; else { + /* + * PyErr_SetString(PyExc_TypeError,""); + * return NULL; + */ Py_INCREF(Py_NotImplemented); return Py_NotImplemented; } From numpy-svn at scipy.org Mon Jun 23 03:12:38 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 23 Jun 2008 02:12:38 -0500 (CDT) Subject: [Numpy-svn] r5309 - trunk/numpy/core Message-ID: <20080623071238.DD5A039C089@scipy.org> Author: alan.mcintyre Date: 2008-06-23 02:12:36 -0500 (Mon, 23 Jun 2008) New Revision: 5309 Modified: trunk/numpy/core/records.py Log: Added "import numpy as N", and adjusted whitespace to allow doctests to run correctly. Modified: trunk/numpy/core/records.py =================================================================== --- trunk/numpy/core/records.py 2008-06-22 01:23:31 UTC (rev 5308) +++ trunk/numpy/core/records.py 2008-06-23 07:12:36 UTC (rev 5309) @@ -343,6 +343,7 @@ names=None, titles=None, aligned=False, byteorder=None): """ create a record array from a (flat) list of arrays + >>> import numpy as N >>> x1=N.array([1,2,3,4]) >>> x2=N.array(['a','dd','xyz','12']) >>> x3=N.array([1.1,2,3,4]) @@ -430,7 +431,7 @@ >>> r.col1 array([456, 2]) >>> r.col2 - chararray(['dbe', 'de'], + chararray(['dbe', 'de'], dtype='|S3') >>> import cPickle >>> print cPickle.loads(cPickle.dumps(r)) @@ -510,6 +511,7 @@ to be a file object. 
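This commit and the two that follow chase the same issue: an Examples block only runs under the doctest machinery if it is self-contained, because doctest executes each example in a fresh namespace that knows nothing about ``N`` or ``np``. A self-contained docstring therefore spells out its own import, along the lines of this sketch (the function is hypothetical; only the explicit ``import numpy as N`` line reflects the convention being applied here)::

    def scale_by_two(x):
        """Multiply the input by two.

        Examples
        --------
        >>> import numpy as N
        >>> scale_by_two(N.array([1, 2, 3]))
        array([2, 4, 6])
        """
        return 2 * x

With the import in place, such blocks can be run verbatim, for instance through the ``doctests=True`` switch on the nose-based ``test`` method introduced earlier in this batch.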
>>> from tempfile import TemporaryFile + >>> import numpy as N >>> a = N.empty(10,dtype='f8,i4,a5') >>> a[5] = (0.5,10,'abcde') >>> From numpy-svn at scipy.org Mon Jun 23 04:15:01 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 23 Jun 2008 03:15:01 -0500 (CDT) Subject: [Numpy-svn] r5310 - trunk/numpy/core Message-ID: <20080623081501.ABFCA39C604@scipy.org> Author: alan.mcintyre Date: 2008-06-23 03:14:59 -0500 (Mon, 23 Jun 2008) New Revision: 5310 Modified: trunk/numpy/core/arrayprint.py trunk/numpy/core/numerictypes.py Log: Fixed imports in doctests. Modified: trunk/numpy/core/arrayprint.py =================================================================== --- trunk/numpy/core/arrayprint.py 2008-06-23 07:12:36 UTC (rev 5309) +++ trunk/numpy/core/arrayprint.py 2008-06-23 08:14:59 UTC (rev 5310) @@ -218,6 +218,7 @@ Examples -------- + >>> import numpy as N >>> x = N.array([1e-16,1,2,3]) >>> print array2string(x,precision=2,separator=',',suppress_small=True) [ 0., 1., 2., 3.] Modified: trunk/numpy/core/numerictypes.py =================================================================== --- trunk/numpy/core/numerictypes.py 2008-06-23 07:12:36 UTC (rev 5309) +++ trunk/numpy/core/numerictypes.py 2008-06-23 08:14:59 UTC (rev 5310) @@ -110,7 +110,7 @@ Examples -------- - >>> from numpy.lib.utils import english_lower + >>> from numpy.core.numerictypes import english_lower >>> english_lower('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_') 'abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz0123456789_' >>> english_upper('') @@ -137,7 +137,7 @@ Examples -------- - >>> from numpy.lib.utils import english_upper + >>> from numpy.core.numerictypes import english_upper >>> english_upper('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_') 'ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_' >>> english_upper('') @@ -163,7 +163,7 @@ Examples -------- - >>> from numpy.lib.utils import english_capitalize + >>> from numpy.core.numerictypes import english_capitalize >>> english_capitalize('int8') 'Int8' >>> english_capitalize('Int8') From numpy-svn at scipy.org Mon Jun 23 04:24:52 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 23 Jun 2008 03:24:52 -0500 (CDT) Subject: [Numpy-svn] r5311 - trunk/numpy/core Message-ID: <20080623082452.9A26F39C8B3@scipy.org> Author: alan.mcintyre Date: 2008-06-23 03:24:48 -0500 (Mon, 23 Jun 2008) New Revision: 5311 Modified: trunk/numpy/core/defmatrix.py trunk/numpy/core/fromnumeric.py Log: Fixed imports for doctests. Removed ">>>" from sample code in defmatrix.py:bmat that was intended only as an example, not as a doctest. 
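A side note on why these import fixes matter: the ``>>>`` examples are executed verbatim by the doctest machinery, so a name like ``N`` has to be brought into scope by the example itself (hence the added ``import numpy as N`` lines), and sample code that is not meant to run loses its prompts, as the ``bmat`` change below shows. A minimal sketch of checking one of the touched modules, assuming the 2008-era ``numpy.core`` layout (illustration only, not the test runner NumPy actually uses)::

    import doctest
    import numpy.core.records as records

    # Execute every '>>>' example embedded in the module's docstrings; an
    # example that relies on an undefined name (e.g. 'N' without a preceding
    # 'import numpy as N') shows up here as a failure.
    results = doctest.testmod(records, verbose=False)
    print(results)   # TestResults(failed=..., attempted=...)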
Modified: trunk/numpy/core/defmatrix.py =================================================================== --- trunk/numpy/core/defmatrix.py 2008-06-23 08:14:59 UTC (rev 5310) +++ trunk/numpy/core/defmatrix.py 2008-06-23 08:24:48 UTC (rev 5311) @@ -84,6 +84,7 @@ Examples -------- + >>> from numpy import array >>> matrix_power(array([[0,1],[-1,0]]),10) array([[-1, 0], [ 0, -1]]) @@ -539,9 +540,9 @@ Examples -------- - >>> F = bmat('A, B; C, D') - >>> F = bmat([[A,B],[C,D]]) - >>> F = bmat(r_[c_[A,B],c_[C,D]]) + F = bmat('A, B; C, D') + F = bmat([[A,B],[C,D]]) + F = bmat(r_[c_[A,B],c_[C,D]]) All of these produce the same matrix:: Modified: trunk/numpy/core/fromnumeric.py =================================================================== --- trunk/numpy/core/fromnumeric.py 2008-06-23 08:14:59 UTC (rev 5310) +++ trunk/numpy/core/fromnumeric.py 2008-06-23 08:24:48 UTC (rev 5311) @@ -246,8 +246,9 @@ Examples -------- - >>> x = np.arange(5) - >>> np.put(x,[0,2,4],[-1,-2,-3]) + >>> import numpy + >>> x = numpy.arange(5) + >>> numpy.put(x,[0,2,4],[-1,-2,-3]) >>> print x [-1 1 -2 3 -3] @@ -269,20 +270,21 @@ Examples -------- - >>> x = np.array([[1,2,3]]) - >>> np.swapaxes(x,0,1) + >>> import numpy + >>> x = numpy.array([[1,2,3]]) + >>> numpy.swapaxes(x,0,1) array([[1], [2], [3]]) - >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]]) + >>> x = numpy.array([[[0,1],[2,3]],[[4,5],[6,7]]]) >>> x array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) - >>> np.swapaxes(x,0,2) + >>> numpy.swapaxes(x,0,2) array([[[0, 4], [2, 6]], @@ -310,16 +312,17 @@ Examples -------- - >>> x = np.arange(4).reshape((2,2)) + >>> import numpy + >>> x = numpy.arange(4).reshape((2,2)) >>> x array([[0, 1], [2, 3]]) - >>> np.transpose(x) + >>> numpy.transpose(x) array([[0, 2], [1, 3]]) - >>> np.transpose(x,(0,1)) # no change, axes are kept in current order + >>> numpy.transpose(x,(0,1)) # no change, axes are kept in current order array([[0, 1], [2, 3]]) @@ -470,7 +473,8 @@ index_array : {integer_array} Examples - -------- + -------- + >>> from numpy import * >>> a = arange(6).reshape(2,3) >>> argmax(a) 5 @@ -504,6 +508,7 @@ Examples -------- + >>> from numpy import * >>> a = arange(6).reshape(2,3) >>> argmin(a) 0 @@ -677,6 +682,7 @@ Examples -------- + >>> from numpy import * >>> a = arange(4).reshape(2,2) >>> a array([[0, 1], @@ -742,6 +748,7 @@ Examples -------- + >>> from numpy import * >>> trace(eye(3)) 3.0 >>> a = arange(8).reshape((2,2,2)) @@ -777,10 +784,11 @@ Examples -------- + >>> from numpy import * >>> x = array([[1,2,3],[4,5,6]]) >>> x array([[1, 2, 3], - [4, 5, 6]]) + [4, 5, 6]]) >>> ravel(x) array([1, 2, 3, 4, 5, 6]) @@ -801,6 +809,7 @@ Examples -------- + >>> from numpy import * >>> eye(3)[nonzero(eye(3))] array([ 1., 1., 1.]) >>> nonzero(eye(3)) @@ -835,6 +844,7 @@ Examples -------- + >>> from numpy import * >>> shape(eye(3)) (3, 3) >>> shape([[1,2]]) @@ -872,13 +882,14 @@ Examples -------- - >>> a = np.array([[1, 2], [3, 4]]) - >>> np.compress([0, 1], a, axis=0) + >>> import numpy + >>> a = numpy.array([[1, 2], [3, 4]]) + >>> numpy.compress([0, 1], a, axis=0) array([[3, 4]]) - >>> np.compress([1], a, axis=1) + >>> numpy.compress([1], a, axis=1) array([[1], [3]]) - >>> np.compress([0,1,1], a) + >>> numpy.compress([0,1,1], a) array([2, 3]) """ @@ -912,12 +923,13 @@ Examples -------- - >>> a = np.arange(10) - >>> np.clip(a, 1, 8) + >>> import numpy + >>> a = numpy.arange(10) + >>> numpy.clip(a, 1, 8) array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8]) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) - >>> np.clip(a, 3, 6, out=a) + >>> 
numpy.clip(a, 3, 6, out=a) array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) >>> a array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) @@ -964,9 +976,10 @@ Examples -------- + >>> from numpy import * >>> sum([0.5, 1.5]) 2.0 - >>> sum([0.5, 1.5], dtype=N.int32) + >>> sum([0.5, 1.5], dtype=int32) 1 >>> sum([[0, 1], [0, 5]]) 6 @@ -1029,6 +1042,7 @@ Examples -------- + >>> from numpy import * >>> product([1.,2.]) 2.0 >>> product([1.,2.], dtype=int32) @@ -1074,6 +1088,7 @@ Examples -------- + >>> import numpy >>> b = numpy.array([True, False, True, True]) >>> numpy.sometrue(b) True @@ -1266,13 +1281,14 @@ Examples -------- - >>> x = np.arange(4).reshape((2,2)) + >>> import numpy + >>> x = numpy.arange(4).reshape((2,2)) >>> x array([[0, 1], [2, 3]]) - >>> np.ptp(x,0) + >>> numpy.ptp(x,0) array([2, 2]) - >>> np.ptp(x,1) + >>> numpy.ptp(x,1) array([1, 1]) """ @@ -1304,13 +1320,14 @@ Examples -------- - >>> x = np.arange(4).reshape((2,2)) + >>> import numpy + >>> x = numpy.arange(4).reshape((2,2)) >>> x array([[0, 1], [2, 3]]) - >>> np.amax(x,0) + >>> numpy.amax(x,0) array([2, 3]) - >>> np.amax(x,1) + >>> numpy.amax(x,1) array([1, 3]) """ @@ -1342,13 +1359,14 @@ Examples -------- - >>> x = np.arange(4).reshape((2,2)) + >>> import numpy + >>> x = numpy.arange(4).reshape((2,2)) >>> x array([[0, 1], [2, 3]]) - >>> np.amin(x,0) + >>> numpy.amin(x,0) array([0, 1]) - >>> np.amin(x,1) + >>> numpy.amin(x,1) array([0, 2]) """ @@ -1375,6 +1393,7 @@ Examples -------- + >>> import numpy >>> z = numpy.zeros((7,4,5)) >>> z.shape[0] 7 @@ -1423,6 +1442,7 @@ Examples -------- + >>> from numpy import * >>> prod([1.,2.]) 2.0 >>> prod([1.,2.], dtype=int32) @@ -1483,6 +1503,7 @@ Examples -------- + >>> import numpy >>> a=numpy.array([[1,2,3],[4,5,6]]) >>> a=numpy.array([1,2,3]) >>> numpy.cumprod(a) # intermediate results 1, 1*2 From numpy-svn at scipy.org Mon Jun 23 23:43:56 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 23 Jun 2008 22:43:56 -0500 (CDT) Subject: [Numpy-svn] r5312 - trunk/numpy/core/src Message-ID: <20080624034356.792E739C3D8@scipy.org> Author: charris Date: 2008-06-23 22:43:53 -0500 (Mon, 23 Jun 2008) New Revision: 5312 Modified: trunk/numpy/core/src/ufuncobject.c Log: Code style cleanups. Whitespace. Fix typo rrlshift. Modified: trunk/numpy/core/src/ufuncobject.c =================================================================== --- trunk/numpy/core/src/ufuncobject.c 2008-06-23 08:24:48 UTC (rev 5311) +++ trunk/numpy/core/src/ufuncobject.c 2008-06-24 03:43:53 UTC (rev 5312) @@ -1073,7 +1073,7 @@ int ret=-1; obj = NULL; /* Look through all the registered loops for all the user-defined - types to find a match. + types to find a match. 
*/ while (ret == -1) { if (userdef_ind >= self->nin) break; @@ -1277,26 +1277,26 @@ return 0; } -#define _GETATTR_(str, rstr) if (strcmp(name, #str) == 0) { \ - return PyObject_HasAttrString(op, "__" #rstr "__");} +#define _GETATTR_(str, rstr) do {if (strcmp(name, #str) == 0) \ + return PyObject_HasAttrString(op, "__" #rstr "__");} while (0); static int _has_reflected_op(PyObject *op, char *name) { - _GETATTR_(add, radd) - _GETATTR_(subtract, rsub) - _GETATTR_(multiply, rmul) - _GETATTR_(divide, rdiv) - _GETATTR_(true_divide, rtruediv) - _GETATTR_(floor_divide, rfloordiv) - _GETATTR_(remainder, rmod) - _GETATTR_(power, rpow) - _GETATTR_(left_shift, rrlshift) - _GETATTR_(right_shift, rrshift) - _GETATTR_(bitwise_and, rand) - _GETATTR_(bitwise_xor, rxor) - _GETATTR_(bitwise_or, ror) - return 0; + _GETATTR_(add, radd); + _GETATTR_(subtract, rsub); + _GETATTR_(multiply, rmul); + _GETATTR_(divide, rdiv); + _GETATTR_(true_divide, rtruediv); + _GETATTR_(floor_divide, rfloordiv); + _GETATTR_(remainder, rmod); + _GETATTR_(power, rpow); + _GETATTR_(left_shift, rlshift); + _GETATTR_(right_shift, rrshift); + _GETATTR_(bitwise_and, rand); + _GETATTR_(bitwise_xor, rxor); + _GETATTR_(bitwise_or, ror); + return 0; } #undef _GETATTR_ @@ -1310,13 +1310,13 @@ int arg_types[NPY_MAXARGS]; PyArray_SCALARKIND scalars[NPY_MAXARGS]; PyArray_SCALARKIND maxarrkind, maxsckind, new; - PyUFuncObject *self=loop->ufunc; - Bool allscalars=TRUE; - PyTypeObject *subtype=&PyArray_Type; - PyObject *context=NULL; + PyUFuncObject *self = loop->ufunc; + Bool allscalars = TRUE; + PyTypeObject *subtype = &PyArray_Type; + PyObject *context = NULL; PyObject *obj; - int flexible=0; - int object=0; + int flexible = 0; + int object = 0; /* Check number of arguments */ nargs = PyTuple_Size(args); @@ -1329,15 +1329,19 @@ /* Get each input argument */ maxarrkind = PyArray_NOSCALAR; maxsckind = PyArray_NOSCALAR; - for(i=0; inin; i++) { + for(i = 0; i < self->nin; i++) { obj = PyTuple_GET_ITEM(args,i); if (!PyArray_Check(obj) && !PyArray_IsScalar(obj, Generic)) { context = Py_BuildValue("OOi", self, args, i); } - else context = NULL; + else { + context = NULL; + } mps[i] = (PyArrayObject *)PyArray_FromAny(obj, NULL, 0, 0, 0, context); Py_XDECREF(context); - if (mps[i] == NULL) return -1; + if (mps[i] == NULL) { + return -1; + } arg_types[i] = PyArray_TYPE(mps[i]); if (!flexible && PyTypeNum_ISFLEXIBLE(arg_types[i])) { flexible = 1; @@ -1345,22 +1349,23 @@ if (!object && PyTypeNum_ISOBJECT(arg_types[i])) { object = 1; } + /* debug + * fprintf(stderr, "array %d has reference %d\n", i, + * (mps[i])->ob_refcnt); + */ + /* - fprintf(stderr, "array %d has reference %d\n", i, - (mps[i])->ob_refcnt); - */ + * Scalars are 0-dimensional arrays at this point + */ - /* Scalars are 0-dimensional arrays - at this point - */ + /* + * We need to keep track of whether or not scalars + * are mixed with arrays of different kinds. + */ - /* We need to keep track of whether or not scalars - are mixed with arrays of different kinds. 
- */ - if (mps[i]->nd > 0) { scalars[i] = PyArray_NOSCALAR; - allscalars=FALSE; + allscalars = FALSE; new = PyArray_ScalarKind(arg_types[i], NULL); maxarrkind = NPY_MAX(new, maxarrkind); } @@ -1370,15 +1375,18 @@ } } + /* We don't do strings */ if (flexible && !object) { loop->notimplemented = 1; return nargs; } - /* If everything is a scalar, or scalars mixed with arrays of - different kinds of lesser types then use normal coercion rules */ + /* + * If everything is a scalar, or scalars mixed with arrays of + * different kinds of lesser kinds then use normal coercion rules + */ if (allscalars || (maxsckind > maxarrkind)) { - for(i=0; inin; i++) { + for(i = 0; i < self->nin; i++) { scalars[i] = PyArray_NOSCALAR; } } @@ -1388,11 +1396,12 @@ &(loop->funcdata), scalars, typetup) == -1) return -1; - /* FAIL with NotImplemented if the other object has - the __r__ method and has __array_priority__ as - an attribute (signalling it can handle ndarray's) - and is not already an ndarray - */ + /* + * FAIL with NotImplemented if the other object has + * the __r__ method and has __array_priority__ as + * an attribute (signalling it can handle ndarray's) + * and is not already an ndarray + */ if ((arg_types[1] == PyArray_OBJECT) && \ (loop->ufunc->nin==2) && (loop->ufunc->nout == 1)) { PyObject *_obj = PyTuple_GET_ITEM(args, 1); @@ -1404,24 +1413,31 @@ } } - /* Create copies for some of the arrays if they are small - enough and not already contiguous */ - if (_create_copies(loop, arg_types, mps) < 0) return -1; + /* + * Create copies for some of the arrays if they are small + * enough and not already contiguous + */ + if (_create_copies(loop, arg_types, mps) < 0) { + return -1; + } /* Create Iterators for the Inputs */ - for(i=0; inin; i++) { + for(i = 0; i < self->nin; i++) { loop->iters[i] = (PyArrayIterObject *) \ PyArray_IterNew((PyObject *)mps[i]); - if (loop->iters[i] == NULL) return -1; + if (loop->iters[i] == NULL) { + return -1; + } } /* Broadcast the result */ loop->numiter = self->nin; - if (PyArray_Broadcast((PyArrayMultiIterObject *)loop) < 0) + if (PyArray_Broadcast((PyArrayMultiIterObject *)loop) < 0) { return -1; + } /* Get any return arguments */ - for(i=self->nin; inin; i < nargs; i++) { mps[i] = (PyArrayObject *)PyTuple_GET_ITEM(args, i); if (((PyObject *)mps[i])==Py_None) { mps[i] = NULL; @@ -1464,8 +1480,7 @@ } /* construct any missing return arrays and make output iterators */ - - for(i=self->nin; inargs; i++) { + for(i = self->nin; i < self->nargs; i++) { PyArray_Descr *ntype; if (mps[i] == NULL) { @@ -1475,12 +1490,15 @@ arg_types[i], NULL, NULL, 0, 0, NULL); - if (mps[i] == NULL) return -1; + if (mps[i] == NULL) { + return -1; + } } - /* reset types for outputs that are equivalent - -- no sense casting uselessly - */ + /* + * reset types for outputs that are equivalent + * -- no sense casting uselessly + */ else { if (mps[i]->descr->type_num != arg_types[i]) { PyArray_Descr *atype; @@ -1497,15 +1515,18 @@ !PyArray_ISBEHAVED_RO(mps[i])) { if (loop->size < loop->bufsize) { PyObject *new; - /* Copy the array to a temporary copy - and set the UPDATEIFCOPY flag - */ + /* + * Copy the array to a temporary copy + * and set the UPDATEIFCOPY flag + */ ntype = PyArray_DescrFromType(arg_types[i]); new = PyArray_FromAny((PyObject *)mps[i], ntype, 0, 0, FORCECAST | ALIGNED | UPDATEIFCOPY, NULL); - if (new == NULL) return -1; + if (new == NULL) { + return -1; + } Py_DECREF(mps[i]); mps[i] = (PyArrayObject *)new; } @@ -1514,23 +1535,27 @@ loop->iters[i] = (PyArrayIterObject *) \ 
PyArray_IterNew((PyObject *)mps[i]); - if (loop->iters[i] == NULL) return -1; + if (loop->iters[i] == NULL) { + return -1; + } } - /* If any of different type, or misaligned or swapped - then must use buffers */ - + /* + * If any of different type, or misaligned or swapped + * then must use buffers + */ loop->bufcnt = 0; loop->obj = 0; /* Determine looping method needed */ loop->meth = NO_UFUNCLOOP; - if (loop->size == 0) return nargs; + if (loop->size == 0) { + return nargs; + } - - for(i=0; inargs; i++) { + for(i = 0; i < self->nargs; i++) { loop->needbuffer[i] = 0; if (arg_types[i] != mps[i]->descr->type_num || !PyArray_ISBEHAVED_RO(mps[i])) { @@ -1544,16 +1569,16 @@ } if (loop->meth == NO_UFUNCLOOP) { - loop->meth = ONE_UFUNCLOOP; /* All correct type and BEHAVED */ /* Check for non-uniform stridedness */ - - for(i=0; inargs; i++) { + for(i = 0; i < self->nargs; i++) { if (!(loop->iters[i]->contiguous)) { - /* may still have uniform stride - if (broadcated result) <= 1-d */ + /* + * May still have uniform stride + * if (broadcast result) <= 1-d + */ if (mps[i]->nd != 0 && \ (loop->iters[i]->nd_m1 > 0)) { loop->meth = NOBUFFER_UFUNCLOOP; @@ -1562,7 +1587,7 @@ } } if (loop->meth == ONE_UFUNCLOOP) { - for(i=0; inargs; i++) { + for(i = 0; i < self->nargs; i++) { loop->bufptr[i] = mps[i]->data; } } @@ -1581,29 +1606,30 @@ /* Fix iterators */ - /* Optimize axis the iteration takes place over + /* + * Optimize axis the iteration takes place over + * + * The first thought was to have the loop go + * over the largest dimension to minimize the number of loops + * + * However, on processors with slow memory bus and cache, + * the slowest loops occur when the memory access occurs for + * large strides. + * + * Thus, choose the axis for which strides of the last iterator is + * smallest but non-zero. + */ - The first thought was to have the loop go - over the largest dimension to minimize the number of loops - - However, on processors with slow memory bus and cache, - the slowest loops occur when the memory access occurs for - large strides. - - Thus, choose the axis for which strides of the last iterator is - smallest but non-zero. 
- */ - - for(i=0; ind; i++) { + for(i = 0; i < loop->nd; i++) { stride_sum[i] = 0; - for(j=0; jnumiter; j++) { + for(j = 0; j < loop->numiter; j++) { stride_sum[i] += loop->iters[j]->strides[i]; } } ldim = loop->nd - 1; minsum = stride_sum[loop->nd-1]; - for(i=loop->nd - 2; i>=0; i--) { + for(i = loop->nd - 2; i >= 0; i--) { if (stride_sum[i] < minsum ) { ldim = i; minsum = stride_sum[i]; @@ -1615,35 +1641,38 @@ loop->bufcnt = maxdim; loop->lastdim = ldim; - /* Fix the iterators so the inner loop occurs over the - largest dimensions -- This can be done by - setting the size to 1 in that dimension - (just in the iterators) - */ - - for(i=0; inumiter; i++) { + /* + * Fix the iterators so the inner loop occurs over the + * largest dimensions -- This can be done by + * setting the size to 1 in that dimension + * (just in the iterators) + */ + for(i = 0; i < loop->numiter; i++) { it = loop->iters[i]; it->contiguous = 0; it->size /= (it->dims_m1[ldim]+1); it->dims_m1[ldim] = 0; it->backstrides[ldim] = 0; - /* (won't fix factors because we - don't use PyArray_ITER_GOTO1D - so don't change them) */ - - /* Set the steps to the strides in that dimension */ + /* + * (won't fix factors because we + * don't use PyArray_ITER_GOTO1D + * so don't change them) + * + * Set the steps to the strides in that dimension + */ loop->steps[i] = it->strides[ldim]; } - /* fix up steps where we will be copying data to - buffers and calculate the ninnerloops and leftover - values -- if step size is already zero that is not changed... - */ + /* + * fix up steps where we will be copying data to + * buffers and calculate the ninnerloops and leftover + * values -- if step size is already zero that is not changed... + */ if (loop->meth == BUFFER_UFUNCLOOP) { loop->leftover = maxdim % loop->bufsize; loop->ninnerloops = (maxdim / loop->bufsize) + 1; - for(i=0; inargs; i++) { + for(i = 0; i < self->nargs; i++) { if (loop->needbuffer[i] && loop->steps[i]) { loop->steps[i] = mps[i]->descr->elsize; } @@ -1651,8 +1680,9 @@ } } } - else { /* uniformly-strided case ONE_UFUNCLOOP */ - for(i=0; inargs; i++) { + else { + /* uniformly-strided case ONE_UFUNCLOOP */ + for(i = 0; i < self->nargs; i++) { if (PyArray_SIZE(mps[i]) == 1) loop->steps[i] = 0; else @@ -1663,8 +1693,10 @@ /* Finally, create memory for buffers if we need them */ - /* buffers for scalars are specially made small -- scalars are - not copied multiple times */ + /* + * Buffers for scalars are specially made small -- scalars are + * not copied multiple times + */ if (loop->meth == BUFFER_UFUNCLOOP) { int cnt = 0, cntcast = 0; /* keeps track of bytes to allocate */ int scnt = 0, scntcast = 0; @@ -1679,17 +1711,20 @@ PyArray_Descr *descr; /* compute the element size */ - for(i=0; inargs;i++) { - if (!loop->needbuffer[i]) continue; + for(i = 0; i < self->nargs; i++) { + if (!loop->needbuffer[i]) { + continue; + } if (arg_types[i] != mps[i]->descr->type_num) { descr = PyArray_DescrFromType(arg_types[i]); - if (loop->steps[i]) + if (loop->steps[i]) { cntcast += descr->elsize; - else + } + else { scntcast += descr->elsize; + } if (i < self->nin) { - loop->cast[i] = \ - PyArray_GetCastFunc(mps[i]->descr, + loop->cast[i] = PyArray_GetCastFunc(mps[i]->descr, arg_types[i]); } else { @@ -1697,26 +1732,40 @@ (descr, mps[i]->descr->type_num); } Py_DECREF(descr); - if (!loop->cast[i]) return -1; + if (!loop->cast[i]) { + return -1; + } } loop->swap[i] = !(PyArray_ISNOTSWAPPED(mps[i])); - if (loop->steps[i]) + if (loop->steps[i]) { cnt += mps[i]->descr->elsize; - else + } + else { 
scnt += mps[i]->descr->elsize; + } } memsize = loop->bufsize*(cnt+cntcast) + scbufsize*(scnt+scntcast); loop->buffer[0] = PyDataMem_NEW(memsize); - /* fprintf(stderr, "Allocated buffer at %p of size %d, cnt=%d, cntcast=%d\n", loop->buffer[0], loop->bufsize * (cnt + cntcast), cnt, cntcast); */ + /* debug + * fprintf(stderr, "Allocated buffer at %p of size %d, cnt=%d, cntcast=%d\n", + * loop->buffer[0], loop->bufsize * (cnt + cntcast), cnt, cntcast); + */ - if (loop->buffer[0] == NULL) {PyErr_NoMemory(); return -1;} - if (loop->obj) memset(loop->buffer[0], 0, memsize); + if (loop->buffer[0] == NULL) { + PyErr_NoMemory(); + return -1; + } + if (loop->obj) { + memset(loop->buffer[0], 0, memsize); + } castptr = loop->buffer[0] + loop->bufsize*cnt + scbufsize*scnt; bufptr = loop->buffer[0]; loop->objfunc = 0; - for(i=0; inargs; i++) { - if (!loop->needbuffer[i]) continue; + for(i = 0; i < self->nargs; i++) { + if (!loop->needbuffer[i]) { + continue; + } loop->buffer[i] = bufptr + (last_was_scalar ? scbufsize : \ loop->bufsize)*oldbufsize; last_was_scalar = (loop->steps[i] == 0); From numpy-svn at scipy.org Tue Jun 24 01:10:42 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Tue, 24 Jun 2008 00:10:42 -0500 (CDT) Subject: [Numpy-svn] r5313 - trunk/numpy/core/src Message-ID: <20080624051042.B3EF639C9F8@scipy.org> Author: charris Date: 2008-06-24 00:10:37 -0500 (Tue, 24 Jun 2008) New Revision: 5313 Modified: trunk/numpy/core/src/scalartypes.inc.src Log: Basic reindentation. Modified: trunk/numpy/core/src/scalartypes.inc.src =================================================================== --- trunk/numpy/core/src/scalartypes.inc.src 2008-06-24 03:43:53 UTC (rev 5312) +++ trunk/numpy/core/src/scalartypes.inc.src 2008-06-24 05:10:37 UTC (rev 5313) @@ -6,8 +6,8 @@ #include "numpy/arrayscalars.h" static PyBoolScalarObject _PyArrayScalar_BoolValues[2] = { - {PyObject_HEAD_INIT(&PyBoolArrType_Type) 0}, - {PyObject_HEAD_INIT(&PyBoolArrType_Type) 1}, + {PyObject_HEAD_INIT(&PyBoolArrType_Type) 0}, + {PyObject_HEAD_INIT(&PyBoolArrType_Type) 1}, }; /* Inheritance established later when tp_bases is set (or tp_base for @@ -20,109 +20,110 @@ */ static PyTypeObject Py at NAME@ArrType_Type = { - PyObject_HEAD_INIT(NULL) - 0, /*ob_size*/ - "numpy. at name@", /*tp_name*/ - sizeof(PyObject), /*tp_basicsize*/ + PyObject_HEAD_INIT(NULL) + 0, /*ob_size*/ + "numpy. 
at name@", /*tp_name*/ + sizeof(PyObject), /*tp_basicsize*/ }; /**end repeat**/ static void * scalar_value(PyObject *scalar, PyArray_Descr *descr) { - int type_num; - int align; - intp memloc; - if (descr == NULL) { - descr = PyArray_DescrFromScalar(scalar); - type_num = descr->type_num; - Py_DECREF(descr); - } else { - type_num = descr->type_num; - } - switch (type_num) { + int type_num; + int align; + intp memloc; + if (descr == NULL) { + descr = PyArray_DescrFromScalar(scalar); + type_num = descr->type_num; + Py_DECREF(descr); + } + else { + type_num = descr->type_num; + } + switch (type_num) { #define CASE(ut,lt) case NPY_##ut: return &(((Py##lt##ScalarObject *)scalar)->obval) - CASE(BOOL, Bool); - CASE(BYTE, Byte); - CASE(UBYTE, UByte); - CASE(SHORT, Short); - CASE(USHORT, UShort); - CASE(INT, Int); - CASE(UINT, UInt); - CASE(LONG, Long); - CASE(ULONG, ULong); - CASE(LONGLONG, LongLong); - CASE(ULONGLONG, ULongLong); - CASE(FLOAT, Float); - CASE(DOUBLE, Double); - CASE(LONGDOUBLE, LongDouble); - CASE(CFLOAT, CFloat); - CASE(CDOUBLE, CDouble); - CASE(CLONGDOUBLE, CLongDouble); - CASE(OBJECT, Object); + CASE(BOOL, Bool); + CASE(BYTE, Byte); + CASE(UBYTE, UByte); + CASE(SHORT, Short); + CASE(USHORT, UShort); + CASE(INT, Int); + CASE(UINT, UInt); + CASE(LONG, Long); + CASE(ULONG, ULong); + CASE(LONGLONG, LongLong); + CASE(ULONGLONG, ULongLong); + CASE(FLOAT, Float); + CASE(DOUBLE, Double); + CASE(LONGDOUBLE, LongDouble); + CASE(CFLOAT, CFloat); + CASE(CDOUBLE, CDouble); + CASE(CLONGDOUBLE, CLongDouble); + CASE(OBJECT, Object); #undef CASE case NPY_STRING: return (void *)PyString_AS_STRING(scalar); case NPY_UNICODE: return (void *)PyUnicode_AS_DATA(scalar); case NPY_VOID: return ((PyVoidScalarObject *)scalar)->obval; - } + } - /* Must be a user-defined type --- check to see which - scalar it inherits from. */ + /* Must be a user-defined type --- check to see which + scalar it inherits from. 
*/ #define _CHK(cls) (PyObject_IsInstance(scalar, \ - (PyObject *)&Py##cls##ArrType_Type)) + (PyObject *)&Py##cls##ArrType_Type)) #define _OBJ(lt) &(((Py##lt##ScalarObject *)scalar)->obval) #define _IFCASE(cls) if _CHK(cls) return _OBJ(cls) - if _CHK(Number) { - if _CHK(Integer) { - if _CHK(SignedInteger) { - _IFCASE(Byte); - _IFCASE(Short); - _IFCASE(Int); - _IFCASE(Long); - _IFCASE(LongLong); - } - else { /* Unsigned Integer */ - _IFCASE(UByte); - _IFCASE(UShort); - _IFCASE(UInt); - _IFCASE(ULong); - _IFCASE(ULongLong); - } - } - else { /* Inexact */ - if _CHK(Floating) { - _IFCASE(Float); - _IFCASE(Double); - _IFCASE(LongDouble); - } - else { /*ComplexFloating */ - _IFCASE(CFloat); - _IFCASE(CDouble); - _IFCASE(CLongDouble); - } - } + if _CHK(Number) { + if _CHK(Integer) { + if _CHK(SignedInteger) { + _IFCASE(Byte); + _IFCASE(Short); + _IFCASE(Int); + _IFCASE(Long); + _IFCASE(LongLong); + } + else { /* Unsigned Integer */ + _IFCASE(UByte); + _IFCASE(UShort); + _IFCASE(UInt); + _IFCASE(ULong); + _IFCASE(ULongLong); + } } - else if _CHK(Bool) return _OBJ(Bool); - else if _CHK(Flexible) { - if _CHK(String) return (void *)PyString_AS_STRING(scalar); - if _CHK(Unicode) return (void *)PyUnicode_AS_DATA(scalar); - if _CHK(Void) return ((PyVoidScalarObject *)scalar)->obval; + else { /* Inexact */ + if _CHK(Floating) { + _IFCASE(Float); + _IFCASE(Double); + _IFCASE(LongDouble); + } + else { /*ComplexFloating */ + _IFCASE(CFloat); + _IFCASE(CDouble); + _IFCASE(CLongDouble); + } } - else _IFCASE(Object); + } + else if _CHK(Bool) return _OBJ(Bool); + else if _CHK(Flexible) { + if _CHK(String) return (void *)PyString_AS_STRING(scalar); + if _CHK(Unicode) return (void *)PyUnicode_AS_DATA(scalar); + if _CHK(Void) return ((PyVoidScalarObject *)scalar)->obval; + } + else _IFCASE(Object); - /* Use the alignment flag to figure out where the data begins - after a PyObject_HEAD - */ - memloc = (intp)scalar; - memloc += sizeof(PyObject); - /* now round-up to the nearest alignment value - */ - align = descr->alignment; - if (align > 1) memloc = ((memloc + align - 1)/align)*align; - return (void *)memloc; + /* Use the alignment flag to figure out where the data begins + after a PyObject_HEAD + */ + memloc = (intp)scalar; + memloc += sizeof(PyObject); + /* now round-up to the nearest alignment value + */ + align = descr->alignment; + if (align > 1) memloc = ((memloc + align - 1)/align)*align; + return (void *)memloc; #undef _IFCASE #undef _OBJ #undef _CHK @@ -137,19 +138,19 @@ static void PyArray_ScalarAsCtype(PyObject *scalar, void *ctypeptr) { - PyArray_Descr *typecode; - void *newptr; - typecode = PyArray_DescrFromScalar(scalar); - newptr = scalar_value(scalar, typecode); + PyArray_Descr *typecode; + void *newptr; + typecode = PyArray_DescrFromScalar(scalar); + newptr = scalar_value(scalar, typecode); - if (PyTypeNum_ISEXTENDED(typecode->type_num)) { - void **ct = (void **)ctypeptr; - *ct = newptr; - } else { - memcpy(ctypeptr, newptr, typecode->elsize); - } - Py_DECREF(typecode); - return; + if (PyTypeNum_ISEXTENDED(typecode->type_num)) { + void **ct = (void **)ctypeptr; + *ct = newptr; + } else { + memcpy(ctypeptr, newptr, typecode->elsize); + } + Py_DECREF(typecode); + return; } /* The output buffer must be large-enough to receive the value */ @@ -167,34 +168,37 @@ PyArray_CastScalarToCtype(PyObject *scalar, void *ctypeptr, PyArray_Descr *outcode) { - PyArray_Descr* descr; - PyArray_VectorUnaryFunc* castfunc; + PyArray_Descr* descr; + PyArray_VectorUnaryFunc* castfunc; - descr = 
PyArray_DescrFromScalar(scalar); - castfunc = PyArray_GetCastFunc(descr, outcode->type_num); - if (castfunc == NULL) return -1; - if (PyTypeNum_ISEXTENDED(descr->type_num) || + descr = PyArray_DescrFromScalar(scalar); + castfunc = PyArray_GetCastFunc(descr, outcode->type_num); + if (castfunc == NULL) return -1; + if (PyTypeNum_ISEXTENDED(descr->type_num) || PyTypeNum_ISEXTENDED(outcode->type_num)) { - PyArrayObject *ain, *aout; + PyArrayObject *ain, *aout; - ain = (PyArrayObject *)PyArray_FromScalar(scalar, NULL); - if (ain == NULL) {Py_DECREF(descr); return -1;} - aout = (PyArrayObject *) - PyArray_NewFromDescr(&PyArray_Type, - outcode, - 0, NULL, - NULL, ctypeptr, - CARRAY, NULL); - if (aout == NULL) {Py_DECREF(ain); return -1;} - castfunc(ain->data, aout->data, 1, ain, aout); - Py_DECREF(ain); - Py_DECREF(aout); + ain = (PyArrayObject *)PyArray_FromScalar(scalar, NULL); + if (ain == NULL) { + Py_DECREF(descr); + return -1; } - else { - castfunc(scalar_value(scalar, descr), ctypeptr, 1, NULL, NULL); - } - Py_DECREF(descr); - return 0; + aout = (PyArrayObject *) + PyArray_NewFromDescr(&PyArray_Type, + outcode, + 0, NULL, + NULL, ctypeptr, + CARRAY, NULL); + if (aout == NULL) {Py_DECREF(ain); return -1;} + castfunc(ain->data, aout->data, 1, ain, aout); + Py_DECREF(ain); + Py_DECREF(aout); + } + else { + castfunc(scalar_value(scalar, descr), ctypeptr, 1, NULL, NULL); + } + Py_DECREF(descr); + return 0; } /*NUMPY_API @@ -204,13 +208,13 @@ PyArray_CastScalarDirect(PyObject *scalar, PyArray_Descr *indescr, void *ctypeptr, int outtype) { - PyArray_VectorUnaryFunc* castfunc; - void *ptr; - castfunc = PyArray_GetCastFunc(indescr, outtype); - if (castfunc == NULL) return -1; - ptr = scalar_value(scalar, indescr); - castfunc(ptr, ctypeptr, 1, NULL, NULL); - return 0; + PyArray_VectorUnaryFunc* castfunc; + void *ptr; + castfunc = PyArray_GetCastFunc(indescr, outtype); + if (castfunc == NULL) return -1; + ptr = scalar_value(scalar, indescr); + castfunc(ptr, ctypeptr, 1, NULL, NULL); + return 0; } /* 0-dim array from array-scalar object */ @@ -226,70 +230,70 @@ static PyObject * PyArray_FromScalar(PyObject *scalar, PyArray_Descr *outcode) { - PyArray_Descr *typecode; - PyObject *r; - char *memptr; - PyObject *ret; + PyArray_Descr *typecode; + PyObject *r; + char *memptr; + PyObject *ret; - /* convert to 0-dim array of scalar typecode */ - typecode = PyArray_DescrFromScalar(scalar); - if ((typecode->type_num == PyArray_VOID) && + /* convert to 0-dim array of scalar typecode */ + typecode = PyArray_DescrFromScalar(scalar); + if ((typecode->type_num == PyArray_VOID) && !(((PyVoidScalarObject *)scalar)->flags & OWNDATA) && outcode == NULL) { - r = PyArray_NewFromDescr(&PyArray_Type, - typecode, - 0, NULL, NULL, - ((PyVoidScalarObject *)scalar)->obval, - ((PyVoidScalarObject *)scalar)->flags, - NULL); - PyArray_BASE(r) = (PyObject *)scalar; - Py_INCREF(scalar); - return r; - } r = PyArray_NewFromDescr(&PyArray_Type, - typecode, - 0, NULL, - NULL, NULL, 0, NULL); - if (r==NULL) {Py_XDECREF(outcode); return NULL;} + typecode, + 0, NULL, NULL, + ((PyVoidScalarObject *)scalar)->obval, + ((PyVoidScalarObject *)scalar)->flags, + NULL); + PyArray_BASE(r) = (PyObject *)scalar; + Py_INCREF(scalar); + return r; + } + r = PyArray_NewFromDescr(&PyArray_Type, + typecode, + 0, NULL, + NULL, NULL, 0, NULL); + if (r==NULL) {Py_XDECREF(outcode); return NULL;} - if (PyDataType_FLAGCHK(typecode, NPY_USE_SETITEM)) { - if (typecode->f->setitem(scalar, PyArray_DATA(r), r) < 0) { - Py_XDECREF(outcode); Py_DECREF(r); - return 
NULL; - } - goto finish; + if (PyDataType_FLAGCHK(typecode, NPY_USE_SETITEM)) { + if (typecode->f->setitem(scalar, PyArray_DATA(r), r) < 0) { + Py_XDECREF(outcode); Py_DECREF(r); + return NULL; } + goto finish; + } - memptr = scalar_value(scalar, typecode); + memptr = scalar_value(scalar, typecode); #ifndef Py_UNICODE_WIDE - if (typecode->type_num == PyArray_UNICODE) { - PyUCS2Buffer_AsUCS4((Py_UNICODE *)memptr, - (PyArray_UCS4 *)PyArray_DATA(r), - PyUnicode_GET_SIZE(scalar), - PyArray_ITEMSIZE(r) >> 2); - } else + if (typecode->type_num == PyArray_UNICODE) { + PyUCS2Buffer_AsUCS4((Py_UNICODE *)memptr, + (PyArray_UCS4 *)PyArray_DATA(r), + PyUnicode_GET_SIZE(scalar), + PyArray_ITEMSIZE(r) >> 2); + } else #endif - { - memcpy(PyArray_DATA(r), memptr, PyArray_ITEMSIZE(r)); - if (PyDataType_FLAGCHK(typecode, NPY_ITEM_HASOBJECT)) { - Py_INCREF(*((PyObject **)memptr)); - } + { + memcpy(PyArray_DATA(r), memptr, PyArray_ITEMSIZE(r)); + if (PyDataType_FLAGCHK(typecode, NPY_ITEM_HASOBJECT)) { + Py_INCREF(*((PyObject **)memptr)); } + } - finish: - if (outcode == NULL) return r; +finish: + if (outcode == NULL) return r; - if (outcode->type_num == typecode->type_num) { - if (!PyTypeNum_ISEXTENDED(typecode->type_num) || - (outcode->elsize == typecode->elsize)) - return r; - } + if (outcode->type_num == typecode->type_num) { + if (!PyTypeNum_ISEXTENDED(typecode->type_num) || + (outcode->elsize == typecode->elsize)) + return r; + } - /* cast if necessary to desired output typecode */ - ret = PyArray_CastToType((PyArrayObject *)r, outcode, 0); - Py_DECREF(r); - return ret; + /* cast if necessary to desired output typecode */ + ret = PyArray_CastToType((PyArrayObject *)r, outcode, 0); + Py_DECREF(r); + return ret; } /*NUMPY_API @@ -301,192 +305,192 @@ static PyObject * PyArray_ScalarFromObject(PyObject *object) { - PyObject *ret=NULL; - if (PyArray_IsZeroDim(object)) { - return PyArray_ToScalar(PyArray_DATA(object), object); + PyObject *ret=NULL; + if (PyArray_IsZeroDim(object)) { + return PyArray_ToScalar(PyArray_DATA(object), object); + } + if (PyInt_Check(object)) { + ret = PyArrayScalar_New(Long); + if (ret == NULL) return NULL; + PyArrayScalar_VAL(ret, Long) = PyInt_AS_LONG(object); + } + else if (PyFloat_Check(object)) { + ret = PyArrayScalar_New(Double); + if (ret == NULL) return NULL; + PyArrayScalar_VAL(ret, Double) = PyFloat_AS_DOUBLE(object); + } + else if (PyComplex_Check(object)) { + ret = PyArrayScalar_New(CDouble); + if (ret == NULL) return NULL; + PyArrayScalar_VAL(ret, CDouble).real = + ((PyComplexObject *)object)->cval.real; + PyArrayScalar_VAL(ret, CDouble).imag = + ((PyComplexObject *)object)->cval.imag; + } + else if (PyLong_Check(object)) { + longlong val; + val = PyLong_AsLongLong(object); + if (val==-1 && PyErr_Occurred()) { + PyErr_Clear(); + return NULL; } - if (PyInt_Check(object)) { - ret = PyArrayScalar_New(Long); - if (ret == NULL) return NULL; - PyArrayScalar_VAL(ret, Long) = PyInt_AS_LONG(object); + ret = PyArrayScalar_New(LongLong); + if (ret == NULL) return NULL; + PyArrayScalar_VAL(ret, LongLong) = val; + } + else if (PyBool_Check(object)) { + if (object == Py_True) { + PyArrayScalar_RETURN_TRUE; } - else if (PyFloat_Check(object)) { - ret = PyArrayScalar_New(Double); - if (ret == NULL) return NULL; - PyArrayScalar_VAL(ret, Double) = PyFloat_AS_DOUBLE(object); + else { + PyArrayScalar_RETURN_FALSE; } - else if (PyComplex_Check(object)) { - ret = PyArrayScalar_New(CDouble); - if (ret == NULL) return NULL; - PyArrayScalar_VAL(ret, CDouble).real = \ - ((PyComplexObject 
*)object)->cval.real; - PyArrayScalar_VAL(ret, CDouble).imag = \ - ((PyComplexObject *)object)->cval.imag; - } - else if (PyLong_Check(object)) { - longlong val; - val = PyLong_AsLongLong(object); - if (val==-1 && PyErr_Occurred()) { - PyErr_Clear(); - return NULL; - } - ret = PyArrayScalar_New(LongLong); - if (ret == NULL) return NULL; - PyArrayScalar_VAL(ret, LongLong) = val; - } - else if (PyBool_Check(object)) { - if (object == Py_True) { - PyArrayScalar_RETURN_TRUE; - } - else { - PyArrayScalar_RETURN_FALSE; - } - } - return ret; + } + return ret; } static PyObject * gentype_alloc(PyTypeObject *type, Py_ssize_t nitems) { - PyObject *obj; - const size_t size = _PyObject_VAR_SIZE(type, nitems+1); + PyObject *obj; + const size_t size = _PyObject_VAR_SIZE(type, nitems+1); - obj = (PyObject *)_pya_malloc(size); - memset(obj, 0, size); - if (type->tp_itemsize == 0) - PyObject_INIT(obj, type); - else - (void) PyObject_INIT_VAR((PyVarObject *)obj, type, nitems); - return obj; + obj = (PyObject *)_pya_malloc(size); + memset(obj, 0, size); + if (type->tp_itemsize == 0) + PyObject_INIT(obj, type); + else + (void) PyObject_INIT_VAR((PyVarObject *)obj, type, nitems); + return obj; } static void gentype_dealloc(PyObject *v) { - v->ob_type->tp_free(v); + v->ob_type->tp_free(v); } static PyObject * gentype_power(PyObject *m1, PyObject *m2, PyObject *m3) { - PyObject *arr, *ret, *arg2; - char *msg="unsupported operand type(s) for ** or pow()"; + PyObject *arr, *ret, *arg2; + char *msg="unsupported operand type(s) for ** or pow()"; - if (!PyArray_IsScalar(m1,Generic)) { - if (PyArray_Check(m1)) { - ret = m1->ob_type->tp_as_number->nb_power(m1,m2, - Py_None); - } - else { - if (!PyArray_IsScalar(m2,Generic)) { - PyErr_SetString(PyExc_TypeError, msg); - return NULL; - } - arr = PyArray_FromScalar(m2, NULL); - if (arr == NULL) return NULL; - ret = arr->ob_type->tp_as_number->nb_power(m1, arr, - Py_None); - Py_DECREF(arr); - } - return ret; + if (!PyArray_IsScalar(m1,Generic)) { + if (PyArray_Check(m1)) { + ret = m1->ob_type->tp_as_number->nb_power(m1,m2, + Py_None); } - if (!PyArray_IsScalar(m2, Generic)) { - if (PyArray_Check(m2)) { - ret = m2->ob_type->tp_as_number->nb_power(m1,m2, - Py_None); - } - else { - if (!PyArray_IsScalar(m1, Generic)) { - PyErr_SetString(PyExc_TypeError, msg); - return NULL; - } - arr = PyArray_FromScalar(m1, NULL); - if (arr == NULL) return NULL; - ret = arr->ob_type->tp_as_number->nb_power(arr, m2, - Py_None); - Py_DECREF(arr); - } - return ret; + else { + if (!PyArray_IsScalar(m2,Generic)) { + PyErr_SetString(PyExc_TypeError, msg); + return NULL; + } + arr = PyArray_FromScalar(m2, NULL); + if (arr == NULL) return NULL; + ret = arr->ob_type->tp_as_number->nb_power(m1, arr, + Py_None); + Py_DECREF(arr); } - arr=arg2=NULL; - arr = PyArray_FromScalar(m1, NULL); - arg2 = PyArray_FromScalar(m2, NULL); - if (arr == NULL || arg2 == NULL) { - Py_XDECREF(arr); Py_XDECREF(arg2); return NULL; + return ret; + } + if (!PyArray_IsScalar(m2, Generic)) { + if (PyArray_Check(m2)) { + ret = m2->ob_type->tp_as_number->nb_power(m1,m2, + Py_None); } - ret = arr->ob_type->tp_as_number->nb_power(arr, arg2, Py_None); - Py_DECREF(arr); - Py_DECREF(arg2); + else { + if (!PyArray_IsScalar(m1, Generic)) { + PyErr_SetString(PyExc_TypeError, msg); + return NULL; + } + arr = PyArray_FromScalar(m1, NULL); + if (arr == NULL) return NULL; + ret = arr->ob_type->tp_as_number->nb_power(arr, m2, + Py_None); + Py_DECREF(arr); + } return ret; + } + arr=arg2=NULL; + arr = PyArray_FromScalar(m1, NULL); + arg2 = 
PyArray_FromScalar(m2, NULL); + if (arr == NULL || arg2 == NULL) { + Py_XDECREF(arr); Py_XDECREF(arg2); return NULL; + } + ret = arr->ob_type->tp_as_number->nb_power(arr, arg2, Py_None); + Py_DECREF(arr); + Py_DECREF(arg2); + return ret; } static PyObject * gentype_generic_method(PyObject *self, PyObject *args, PyObject *kwds, - char *str) + char *str) { - PyObject *arr, *meth, *ret; + PyObject *arr, *meth, *ret; - arr = PyArray_FromScalar(self, NULL); - if (arr == NULL) return NULL; - meth = PyObject_GetAttrString(arr, str); - if (meth == NULL) {Py_DECREF(arr); return NULL;} - if (kwds == NULL) - ret = PyObject_CallObject(meth, args); - else - ret = PyObject_Call(meth, args, kwds); - Py_DECREF(meth); - Py_DECREF(arr); - if (ret && PyArray_Check(ret)) - return PyArray_Return((PyArrayObject *)ret); - else - return ret; + arr = PyArray_FromScalar(self, NULL); + if (arr == NULL) return NULL; + meth = PyObject_GetAttrString(arr, str); + if (meth == NULL) {Py_DECREF(arr); return NULL;} + if (kwds == NULL) + ret = PyObject_CallObject(meth, args); + else + ret = PyObject_Call(meth, args, kwds); + Py_DECREF(meth); + Py_DECREF(arr); + if (ret && PyArray_Check(ret)) + return PyArray_Return((PyArrayObject *)ret); + else + return ret; } /**begin repeat + * + * #name=add, subtract, divide, remainder, divmod, lshift, rshift, and, xor, or, floor_divide, true_divide# + */ -#name=add, subtract, divide, remainder, divmod, lshift, rshift, and, xor, or, floor_divide, true_divide# -#PYNAME=Add, Subtract, Divide, Remainder, Divmod, Lshift, Rshift, And, Xor, Or, FloorDivide, TrueDivide# -*/ - static PyObject * gentype_ at name@(PyObject *m1, PyObject *m2) { - return PyArray_Type.tp_as_number->nb_ at name@(m1, m2); + return PyArray_Type.tp_as_number->nb_ at name@(m1, m2); } + /**end repeat**/ static PyObject * gentype_multiply(PyObject *m1, PyObject *m2) { - PyObject *ret=NULL; - long repeat; + PyObject *ret=NULL; + long repeat; - if (!PyArray_IsScalar(m1, Generic) && + if (!PyArray_IsScalar(m1, Generic) && ((m1->ob_type->tp_as_number == NULL) || (m1->ob_type->tp_as_number->nb_multiply == NULL))) { - /* Try to convert m2 to an int and try sequence - repeat */ - repeat = PyInt_AsLong(m2); - if (repeat == -1 && PyErr_Occurred()) return NULL; - ret = PySequence_Repeat(m1, (int) repeat); - } - else if (!PyArray_IsScalar(m2, Generic) && - ((m2->ob_type->tp_as_number == NULL) || - (m2->ob_type->tp_as_number->nb_multiply == NULL))) { - /* Try to convert m1 to an int and try sequence - repeat */ - repeat = PyInt_AsLong(m1); - if (repeat == -1 && PyErr_Occurred()) return NULL; - ret = PySequence_Repeat(m2, (int) repeat); - } - if (ret==NULL) { - PyErr_Clear(); /* no effect if not set */ - ret = PyArray_Type.tp_as_number->nb_multiply(m1, m2); - } - return ret; + /* Try to convert m2 to an int and try sequence + repeat */ + repeat = PyInt_AsLong(m2); + if (repeat == -1 && PyErr_Occurred()) return NULL; + ret = PySequence_Repeat(m1, (int) repeat); + } + else if (!PyArray_IsScalar(m2, Generic) && + ((m2->ob_type->tp_as_number == NULL) || + (m2->ob_type->tp_as_number->nb_multiply == NULL))) { + /* Try to convert m1 to an int and try sequence + repeat */ + repeat = PyInt_AsLong(m1); + if (repeat == -1 && PyErr_Occurred()) return NULL; + ret = PySequence_Repeat(m2, (int) repeat); + } + if (ret==NULL) { + PyErr_Clear(); /* no effect if not set */ + ret = PyArray_Type.tp_as_number->nb_multiply(m1, m2); + } + return ret; } /**begin repeat @@ -497,54 +501,54 @@ static PyObject * gentype_ at name@(PyObject *m1) { - PyObject *arr, 
*ret; + PyObject *arr, *ret; - arr = PyArray_FromScalar(m1, NULL); - if (arr == NULL) return NULL; - ret = arr->ob_type->tp_as_number->nb_ at name@(arr); - Py_DECREF(arr); - return ret; + arr = PyArray_FromScalar(m1, NULL); + if (arr == NULL) return NULL; + ret = arr->ob_type->tp_as_number->nb_ at name@(arr); + Py_DECREF(arr); + return ret; } /**end repeat**/ static int gentype_nonzero_number(PyObject *m1) { - PyObject *arr; - int ret; + PyObject *arr; + int ret; - arr = PyArray_FromScalar(m1, NULL); - if (arr == NULL) return -1; - ret = arr->ob_type->tp_as_number->nb_nonzero(arr); - Py_DECREF(arr); - return ret; + arr = PyArray_FromScalar(m1, NULL); + if (arr == NULL) return -1; + ret = arr->ob_type->tp_as_number->nb_nonzero(arr); + Py_DECREF(arr); + return ret; } static PyObject * gentype_str(PyObject *self) { - PyArrayObject *arr; - PyObject *ret; + PyArrayObject *arr; + PyObject *ret; - arr = (PyArrayObject *)PyArray_FromScalar(self, NULL); - if (arr==NULL) return NULL; - ret = PyObject_Str((PyObject *)arr); - Py_DECREF(arr); - return ret; + arr = (PyArrayObject *)PyArray_FromScalar(self, NULL); + if (arr==NULL) return NULL; + ret = PyObject_Str((PyObject *)arr); + Py_DECREF(arr); + return ret; } static PyObject * gentype_repr(PyObject *self) { - PyArrayObject *arr; - PyObject *ret; + PyArrayObject *arr; + PyObject *ret; - arr = (PyArrayObject *)PyArray_FromScalar(self, NULL); - if (arr==NULL) return NULL; - ret = PyObject_Str((PyObject *)arr); - Py_DECREF(arr); - return ret; + arr = (PyArrayObject *)PyArray_FromScalar(self, NULL); + if (arr==NULL) return NULL; + ret = PyObject_Str((PyObject *)arr); + Py_DECREF(arr); + return ret; } /**begin repeat @@ -556,21 +560,21 @@ format_ at name@(char *buf, size_t buflen, @name@ val, unsigned int precision) { - char *cp; + char *cp; - PyOS_snprintf(buf, buflen, "%.*" @PREFIX@@NAME at _FMT, precision, val); - cp = buf; - if (*cp == '-') - cp++; - for (; *cp != '\0'; cp++) { - if (!isdigit(Py_CHARMASK(*cp))) - break; - } - if (*cp == '\0') { - *cp++ = '.'; - *cp++ = '0'; - *cp++ = '\0'; - } + PyOS_snprintf(buf, buflen, "%.*" @PREFIX@@NAME at _FMT, precision, val); + cp = buf; + if (*cp == '-') + cp++; + for (; *cp != '\0'; cp++) { + if (!isdigit(Py_CHARMASK(*cp))) + break; + } + if (*cp == '\0') { + *cp++ = '.'; + *cp++ = '0'; + *cp++ = '\0'; + } } /**end repeat**/ @@ -590,20 +594,20 @@ static PyObject * @name at type_@form@(PyObject *self) { - const @type@ *dptr, *ip; - int len; - PyObject *new; - PyObject *ret; + const @type@ *dptr, *ip; + int len; + PyObject *new; + PyObject *ret; - ip = dptr = Py at Name@_AS_ at NAME@(self); - len = Py at Name@_GET_SIZE(self); - dptr += len-1; - while(len > 0 && *dptr-- == 0) len--; - new = Py at Name@_From at Name@@extra@(ip, len); - if (new == NULL) return PyString_FromString(""); - ret = Py at Name@_Type.tp_ at form@(new); - Py_DECREF(new); - return ret; + ip = dptr = Py at Name@_AS_ at NAME@(self); + len = Py at Name@_GET_SIZE(self); + dptr += len-1; + while(len > 0 && *dptr-- == 0) len--; + new = Py at Name@_From at Name@@extra@(ip, len); + if (new == NULL) return PyString_FromString(""); + ret = Py at Name@_Type.tp_ at form@(new); + Py_DECREF(new); + return ret; } /**end repeat**/ @@ -635,25 +639,25 @@ static PyObject * @name at type_@kind@(PyObject *self) { - static char buf[100]; - format_ at name@(buf, sizeof(buf), - ((Py at Name@ScalarObject *)self)->obval, @NAME at PREC_@KIND@); - return PyString_FromString(buf); + static char buf[100]; + format_ at name@(buf, sizeof(buf), + ((Py at 
Name@ScalarObject *)self)->obval, @NAME at PREC_@KIND@); + return PyString_FromString(buf); } static PyObject * c at name@type_ at kind@(PyObject *self) { - static char buf1[100]; - static char buf2[100]; - static char buf3[202]; - c at name@ x; - x = ((PyC at Name@ScalarObject *)self)->obval; - format_ at name@(buf1, sizeof(buf1), x.real, @NAME at PREC_@KIND@); - format_ at name@(buf2, sizeof(buf2), x.imag, @NAME at PREC_@KIND@); + static char buf1[100]; + static char buf2[100]; + static char buf3[202]; + c at name@ x; + x = ((PyC at Name@ScalarObject *)self)->obval; + format_ at name@(buf1, sizeof(buf1), x.real, @NAME at PREC_@KIND@); + format_ at name@(buf2, sizeof(buf2), x.imag, @NAME at PREC_@KIND@); - snprintf(buf3, sizeof(buf3), "(%s+%sj)", buf1, buf2); - return PyString_FromString(buf3); + snprintf(buf3, sizeof(buf3), "(%s+%sj)", buf1, buf2); + return PyString_FromString(buf3); } /**end repeat1**/ /**end repeat**/ @@ -675,59 +679,59 @@ static PyObject * @char at longdoubletype_@name@(PyObject *self) { - double dval; - PyObject *obj, *ret; + double dval; + PyObject *obj, *ret; - dval = (double)(((Py at CHAR@LongDoubleScalarObject *)self)->obval)@POST@; - obj = Py at KIND@_FromDouble(dval); - ret = obj->ob_type->tp_as_number->nb_ at name@(obj); - Py_DECREF(obj); - return ret; + dval = (double)(((Py at CHAR@LongDoubleScalarObject *)self)->obval)@POST@; + obj = Py at KIND@_FromDouble(dval); + ret = obj->ob_type->tp_as_number->nb_ at name@(obj); + Py_DECREF(obj); + return ret; } /**end repeat**/ static PyNumberMethods gentype_as_number = { - (binaryfunc)gentype_add, /*nb_add*/ - (binaryfunc)gentype_subtract, /*nb_subtract*/ - (binaryfunc)gentype_multiply, /*nb_multiply*/ - (binaryfunc)gentype_divide, /*nb_divide*/ - (binaryfunc)gentype_remainder, /*nb_remainder*/ - (binaryfunc)gentype_divmod, /*nb_divmod*/ - (ternaryfunc)gentype_power, /*nb_power*/ - (unaryfunc)gentype_negative, - (unaryfunc)gentype_positive, /*nb_pos*/ - (unaryfunc)gentype_absolute, /*(unaryfunc)gentype_abs,*/ - (inquiry)gentype_nonzero_number, /*nb_nonzero*/ - (unaryfunc)gentype_invert, /*nb_invert*/ - (binaryfunc)gentype_lshift, /*nb_lshift*/ - (binaryfunc)gentype_rshift, /*nb_rshift*/ - (binaryfunc)gentype_and, /*nb_and*/ - (binaryfunc)gentype_xor, /*nb_xor*/ - (binaryfunc)gentype_or, /*nb_or*/ - 0, /*nb_coerce*/ - (unaryfunc)gentype_int, /*nb_int*/ - (unaryfunc)gentype_long, /*nb_long*/ - (unaryfunc)gentype_float, /*nb_float*/ - (unaryfunc)gentype_oct, /*nb_oct*/ - (unaryfunc)gentype_hex, /*nb_hex*/ - 0, /*inplace_add*/ - 0, /*inplace_subtract*/ - 0, /*inplace_multiply*/ - 0, /*inplace_divide*/ - 0, /*inplace_remainder*/ - 0, /*inplace_power*/ - 0, /*inplace_lshift*/ - 0, /*inplace_rshift*/ - 0, /*inplace_and*/ - 0, /*inplace_xor*/ - 0, /*inplace_or*/ - (binaryfunc)gentype_floor_divide, /*nb_floor_divide*/ - (binaryfunc)gentype_true_divide, /*nb_true_divide*/ - 0, /*nb_inplace_floor_divide*/ - 0, /*nb_inplace_true_divide*/ + (binaryfunc)gentype_add, /*nb_add*/ + (binaryfunc)gentype_subtract, /*nb_subtract*/ + (binaryfunc)gentype_multiply, /*nb_multiply*/ + (binaryfunc)gentype_divide, /*nb_divide*/ + (binaryfunc)gentype_remainder, /*nb_remainder*/ + (binaryfunc)gentype_divmod, /*nb_divmod*/ + (ternaryfunc)gentype_power, /*nb_power*/ + (unaryfunc)gentype_negative, + (unaryfunc)gentype_positive, /*nb_pos*/ + (unaryfunc)gentype_absolute, /*(unaryfunc)gentype_abs,*/ + (inquiry)gentype_nonzero_number, /*nb_nonzero*/ + (unaryfunc)gentype_invert, /*nb_invert*/ + (binaryfunc)gentype_lshift, /*nb_lshift*/ + 
(binaryfunc)gentype_rshift, /*nb_rshift*/ + (binaryfunc)gentype_and, /*nb_and*/ + (binaryfunc)gentype_xor, /*nb_xor*/ + (binaryfunc)gentype_or, /*nb_or*/ + 0, /*nb_coerce*/ + (unaryfunc)gentype_int, /*nb_int*/ + (unaryfunc)gentype_long, /*nb_long*/ + (unaryfunc)gentype_float, /*nb_float*/ + (unaryfunc)gentype_oct, /*nb_oct*/ + (unaryfunc)gentype_hex, /*nb_hex*/ + 0, /*inplace_add*/ + 0, /*inplace_subtract*/ + 0, /*inplace_multiply*/ + 0, /*inplace_divide*/ + 0, /*inplace_remainder*/ + 0, /*inplace_power*/ + 0, /*inplace_lshift*/ + 0, /*inplace_rshift*/ + 0, /*inplace_and*/ + 0, /*inplace_xor*/ + 0, /*inplace_or*/ + (binaryfunc)gentype_floor_divide, /*nb_floor_divide*/ + (binaryfunc)gentype_true_divide, /*nb_true_divide*/ + 0, /*nb_inplace_floor_divide*/ + 0, /*nb_inplace_true_divide*/ #if PY_VERSION_HEX >= 0x02050000 - (unaryfunc)NULL, /* nb_index */ + (unaryfunc)NULL, /*nb_index*/ #endif }; @@ -735,137 +739,136 @@ static PyObject * gentype_richcompare(PyObject *self, PyObject *other, int cmp_op) { + PyObject *arr, *ret; - PyObject *arr, *ret; - - arr = PyArray_FromScalar(self, NULL); - if (arr == NULL) return NULL; - ret = arr->ob_type->tp_richcompare(arr, other, cmp_op); - Py_DECREF(arr); - return ret; + arr = PyArray_FromScalar(self, NULL); + if (arr == NULL) return NULL; + ret = arr->ob_type->tp_richcompare(arr, other, cmp_op); + Py_DECREF(arr); + return ret; } static PyObject * gentype_ndim_get(PyObject *self) { - return PyInt_FromLong(0); + return PyInt_FromLong(0); } static PyObject * gentype_flags_get(PyObject *self) { - return PyArray_NewFlagsObject(NULL); + return PyArray_NewFlagsObject(NULL); } static PyObject * voidtype_flags_get(PyVoidScalarObject *self) { - PyObject *flagobj; - flagobj = PyArrayFlags_Type.tp_alloc(&PyArrayFlags_Type, 0); - if (flagobj == NULL) return NULL; - ((PyArrayFlagsObject *)flagobj)->arr = NULL; - ((PyArrayFlagsObject *)flagobj)->flags = self->flags; - return flagobj; + PyObject *flagobj; + flagobj = PyArrayFlags_Type.tp_alloc(&PyArrayFlags_Type, 0); + if (flagobj == NULL) return NULL; + ((PyArrayFlagsObject *)flagobj)->arr = NULL; + ((PyArrayFlagsObject *)flagobj)->flags = self->flags; + return flagobj; } static PyObject * voidtype_dtypedescr_get(PyVoidScalarObject *self) { - Py_INCREF(self->descr); - return (PyObject *)self->descr; + Py_INCREF(self->descr); + return (PyObject *)self->descr; } static PyObject * gentype_data_get(PyObject *self) { - return PyBuffer_FromObject(self, 0, Py_END_OF_BUFFER); + return PyBuffer_FromObject(self, 0, Py_END_OF_BUFFER); } static PyObject * gentype_itemsize_get(PyObject *self) { - PyArray_Descr *typecode; - PyObject *ret; - int elsize; + PyArray_Descr *typecode; + PyObject *ret; + int elsize; - typecode = PyArray_DescrFromScalar(self); - elsize = typecode->elsize; + typecode = PyArray_DescrFromScalar(self); + elsize = typecode->elsize; #ifndef Py_UNICODE_WIDE - if (typecode->type_num == NPY_UNICODE) { - elsize >>= 1; - } + if (typecode->type_num == NPY_UNICODE) { + elsize >>= 1; + } #endif - ret = PyInt_FromLong((long) elsize); - Py_DECREF(typecode); - return ret; + ret = PyInt_FromLong((long) elsize); + Py_DECREF(typecode); + return ret; } static PyObject * gentype_size_get(PyObject *self) { - return PyInt_FromLong(1); + return PyInt_FromLong(1); } static void gentype_struct_free(void *ptr, void *arg) { - PyArrayInterface *arrif = (PyArrayInterface *)ptr; - Py_DECREF((PyObject *)arg); - Py_XDECREF(arrif->descr); - _pya_free(arrif->shape); - _pya_free(arrif); + PyArrayInterface *arrif = (PyArrayInterface *)ptr; + 
Py_DECREF((PyObject *)arg); + Py_XDECREF(arrif->descr); + _pya_free(arrif->shape); + _pya_free(arrif); } static PyObject * gentype_struct_get(PyObject *self) { - PyArrayObject *arr; - PyArrayInterface *inter; + PyArrayObject *arr; + PyArrayInterface *inter; - arr = (PyArrayObject *)PyArray_FromScalar(self, NULL); - inter = (PyArrayInterface *)_pya_malloc(sizeof(PyArrayInterface)); - inter->two = 2; - inter->nd = 0; - inter->flags = arr->flags; - inter->flags &= ~(UPDATEIFCOPY | OWNDATA); - inter->flags |= NPY_NOTSWAPPED; - inter->typekind = arr->descr->kind; - inter->itemsize = arr->descr->elsize; - inter->strides = NULL; - inter->shape = NULL; - inter->data = arr->data; - inter->descr = NULL; + arr = (PyArrayObject *)PyArray_FromScalar(self, NULL); + inter = (PyArrayInterface *)_pya_malloc(sizeof(PyArrayInterface)); + inter->two = 2; + inter->nd = 0; + inter->flags = arr->flags; + inter->flags &= ~(UPDATEIFCOPY | OWNDATA); + inter->flags |= NPY_NOTSWAPPED; + inter->typekind = arr->descr->kind; + inter->itemsize = arr->descr->elsize; + inter->strides = NULL; + inter->shape = NULL; + inter->data = arr->data; + inter->descr = NULL; - return PyCObject_FromVoidPtrAndDesc(inter, arr, gentype_struct_free); + return PyCObject_FromVoidPtrAndDesc(inter, arr, gentype_struct_free); } static PyObject * gentype_priority_get(PyObject *self) { - return PyFloat_FromDouble(NPY_SCALAR_PRIORITY); + return PyFloat_FromDouble(NPY_SCALAR_PRIORITY); } static PyObject * gentype_shape_get(PyObject *self) { - return PyTuple_New(0); + return PyTuple_New(0); } static PyObject * gentype_interface_get(PyObject *self) { - PyArrayObject *arr; - PyObject *inter; + PyArrayObject *arr; + PyObject *inter; - arr = (PyArrayObject *)PyArray_FromScalar(self, NULL); - if (arr == NULL) return NULL; - inter = PyObject_GetAttrString((PyObject *)arr, "__array_interface__"); - if (inter != NULL) PyDict_SetItemString(inter, "__ref", (PyObject *)arr); - Py_DECREF(arr); - return inter; + arr = (PyArrayObject *)PyArray_FromScalar(self, NULL); + if (arr == NULL) return NULL; + inter = PyObject_GetAttrString((PyObject *)arr, "__array_interface__"); + if (inter != NULL) PyDict_SetItemString(inter, "__ref", (PyObject *)arr); + Py_DECREF(arr); + return inter; } @@ -873,194 +876,194 @@ static PyObject * gentype_typedescr_get(PyObject *self) { - return (PyObject *)PyArray_DescrFromScalar(self); + return (PyObject *)PyArray_DescrFromScalar(self); } static PyObject * gentype_base_get(PyObject *self) { - Py_INCREF(Py_None); - return Py_None; + Py_INCREF(Py_None); + return Py_None; } static PyArray_Descr * _realdescr_fromcomplexscalar(PyObject *self, int *typenum) { - if (PyArray_IsScalar(self, CDouble)) { - *typenum = PyArray_CDOUBLE; - return PyArray_DescrFromType(PyArray_DOUBLE); - } - if (PyArray_IsScalar(self, CFloat)) { - *typenum = PyArray_CFLOAT; - return PyArray_DescrFromType(PyArray_FLOAT); - } - if (PyArray_IsScalar(self, CLongDouble)) { - *typenum = PyArray_CLONGDOUBLE; - return PyArray_DescrFromType(PyArray_LONGDOUBLE); - } - return NULL; + if (PyArray_IsScalar(self, CDouble)) { + *typenum = PyArray_CDOUBLE; + return PyArray_DescrFromType(PyArray_DOUBLE); + } + if (PyArray_IsScalar(self, CFloat)) { + *typenum = PyArray_CFLOAT; + return PyArray_DescrFromType(PyArray_FLOAT); + } + if (PyArray_IsScalar(self, CLongDouble)) { + *typenum = PyArray_CLONGDOUBLE; + return PyArray_DescrFromType(PyArray_LONGDOUBLE); + } + return NULL; } static PyObject * gentype_real_get(PyObject *self) { - PyArray_Descr *typecode; - PyObject *ret; - int typenum; 
+ PyArray_Descr *typecode; + PyObject *ret; + int typenum; - if (PyArray_IsScalar(self, ComplexFloating)) { - void *ptr; - typecode = _realdescr_fromcomplexscalar(self, &typenum); - ptr = scalar_value(self, NULL); - ret = PyArray_Scalar(ptr, typecode, NULL); - Py_DECREF(typecode); - return ret; - } - else if (PyArray_IsScalar(self, Object)) { - PyObject *obj = ((PyObjectScalarObject *)self)->obval; - ret = PyObject_GetAttrString(obj, "real"); - if (ret != NULL) return ret; - PyErr_Clear(); - } - Py_INCREF(self); - return (PyObject *)self; + if (PyArray_IsScalar(self, ComplexFloating)) { + void *ptr; + typecode = _realdescr_fromcomplexscalar(self, &typenum); + ptr = scalar_value(self, NULL); + ret = PyArray_Scalar(ptr, typecode, NULL); + Py_DECREF(typecode); + return ret; + } + else if (PyArray_IsScalar(self, Object)) { + PyObject *obj = ((PyObjectScalarObject *)self)->obval; + ret = PyObject_GetAttrString(obj, "real"); + if (ret != NULL) return ret; + PyErr_Clear(); + } + Py_INCREF(self); + return (PyObject *)self; } static PyObject * gentype_imag_get(PyObject *self) { - PyArray_Descr *typecode=NULL; - PyObject *ret; - int typenum; + PyArray_Descr *typecode=NULL; + PyObject *ret; + int typenum; - if (PyArray_IsScalar(self, ComplexFloating)) { - char *ptr; - typecode = _realdescr_fromcomplexscalar(self, &typenum); - ptr = (char *)scalar_value(self, NULL); - ret = PyArray_Scalar(ptr + typecode->elsize, - typecode, NULL); + if (PyArray_IsScalar(self, ComplexFloating)) { + char *ptr; + typecode = _realdescr_fromcomplexscalar(self, &typenum); + ptr = (char *)scalar_value(self, NULL); + ret = PyArray_Scalar(ptr + typecode->elsize, + typecode, NULL); + } + else if (PyArray_IsScalar(self, Object)) { + PyObject *obj = ((PyObjectScalarObject *)self)->obval; + PyArray_Descr *newtype; + ret = PyObject_GetAttrString(obj, "imag"); + if (ret == NULL) { + PyErr_Clear(); + obj = PyInt_FromLong(0); + newtype = PyArray_DescrFromType(PyArray_OBJECT); + ret = PyArray_Scalar((char *)&obj, newtype, NULL); + Py_DECREF(newtype); + Py_DECREF(obj); } - else if (PyArray_IsScalar(self, Object)) { - PyObject *obj = ((PyObjectScalarObject *)self)->obval; - PyArray_Descr *newtype; - ret = PyObject_GetAttrString(obj, "imag"); - if (ret == NULL) { - PyErr_Clear(); - obj = PyInt_FromLong(0); - newtype = PyArray_DescrFromType(PyArray_OBJECT); - ret = PyArray_Scalar((char *)&obj, newtype, NULL); - Py_DECREF(newtype); - Py_DECREF(obj); - } - } - else { - char *temp; - int elsize; - typecode = PyArray_DescrFromScalar(self); - elsize = typecode->elsize; - temp = PyDataMem_NEW(elsize); - memset(temp, '\0', elsize); - ret = PyArray_Scalar(temp, typecode, NULL); - PyDataMem_FREE(temp); - } + } + else { + char *temp; + int elsize; + typecode = PyArray_DescrFromScalar(self); + elsize = typecode->elsize; + temp = PyDataMem_NEW(elsize); + memset(temp, '\0', elsize); + ret = PyArray_Scalar(temp, typecode, NULL); + PyDataMem_FREE(temp); + } - Py_XDECREF(typecode); - return ret; + Py_XDECREF(typecode); + return ret; } static PyObject * gentype_flat_get(PyObject *self) { - PyObject *ret, *arr; + PyObject *ret, *arr; - arr = PyArray_FromScalar(self, NULL); - if (arr == NULL) return NULL; - ret = PyArray_IterNew(arr); - Py_DECREF(arr); - return ret; + arr = PyArray_FromScalar(self, NULL); + if (arr == NULL) return NULL; + ret = PyArray_IterNew(arr); + Py_DECREF(arr); + return ret; } static PyObject * gentype_transpose_get(PyObject *self) { - Py_INCREF(self); - return self; + Py_INCREF(self); + return self; } static PyGetSetDef 
gentype_getsets[] = { - {"ndim", - (getter)gentype_ndim_get, - (setter) 0, - "number of array dimensions"}, - {"flags", - (getter)gentype_flags_get, - (setter)0, - "integer value of flags"}, - {"shape", - (getter)gentype_shape_get, - (setter)0, - "tuple of array dimensions"}, - {"strides", - (getter)gentype_shape_get, - (setter) 0, - "tuple of bytes steps in each dimension"}, - {"data", - (getter)gentype_data_get, - (setter) 0, - "pointer to start of data"}, - {"itemsize", - (getter)gentype_itemsize_get, - (setter)0, - "length of one element in bytes"}, - {"size", - (getter)gentype_size_get, - (setter)0, - "number of elements in the gentype"}, - {"nbytes", - (getter)gentype_itemsize_get, - (setter)0, - "length of item in bytes"}, - {"base", - (getter)gentype_base_get, - (setter)0, - "base object"}, - {"dtype", - (getter)gentype_typedescr_get, - NULL, - "get array data-descriptor"}, - {"real", - (getter)gentype_real_get, - (setter)0, - "real part of scalar"}, - {"imag", - (getter)gentype_imag_get, - (setter)0, - "imaginary part of scalar"}, - {"flat", - (getter)gentype_flat_get, - (setter)0, - "a 1-d view of scalar"}, - {"T", - (getter)gentype_transpose_get, - (setter)0, - "transpose"}, - {"__array_interface__", - (getter)gentype_interface_get, - NULL, - "Array protocol: Python side"}, - {"__array_struct__", - (getter)gentype_struct_get, - NULL, - "Array protocol: struct"}, - {"__array_priority__", - (getter)gentype_priority_get, - NULL, - "Array priority."}, - {NULL, NULL, NULL, NULL} /* Sentinel */ + {"ndim", + (getter)gentype_ndim_get, + (setter) 0, + "number of array dimensions"}, + {"flags", + (getter)gentype_flags_get, + (setter)0, + "integer value of flags"}, + {"shape", + (getter)gentype_shape_get, + (setter)0, + "tuple of array dimensions"}, + {"strides", + (getter)gentype_shape_get, + (setter) 0, + "tuple of bytes steps in each dimension"}, + {"data", + (getter)gentype_data_get, + (setter) 0, + "pointer to start of data"}, + {"itemsize", + (getter)gentype_itemsize_get, + (setter)0, + "length of one element in bytes"}, + {"size", + (getter)gentype_size_get, + (setter)0, + "number of elements in the gentype"}, + {"nbytes", + (getter)gentype_itemsize_get, + (setter)0, + "length of item in bytes"}, + {"base", + (getter)gentype_base_get, + (setter)0, + "base object"}, + {"dtype", + (getter)gentype_typedescr_get, + NULL, + "get array data-descriptor"}, + {"real", + (getter)gentype_real_get, + (setter)0, + "real part of scalar"}, + {"imag", + (getter)gentype_imag_get, + (setter)0, + "imaginary part of scalar"}, + {"flat", + (getter)gentype_flat_get, + (setter)0, + "a 1-d view of scalar"}, + {"T", + (getter)gentype_transpose_get, + (setter)0, + "transpose"}, + {"__array_interface__", + (getter)gentype_interface_get, + NULL, + "Array protocol: Python side"}, + {"__array_struct__", + (getter)gentype_struct_get, + NULL, + "Array protocol: struct"}, + {"__array_priority__", + (getter)gentype_priority_get, + NULL, + "Array priority."}, + {NULL, NULL, NULL, NULL} /* Sentinel */ }; @@ -1071,16 +1074,16 @@ static PyObject * gentype_getarray(PyObject *scalar, PyObject *args) { - PyArray_Descr *outcode=NULL; - PyObject *ret; + PyArray_Descr *outcode=NULL; + PyObject *ret; - if (!PyArg_ParseTuple(args, "|O&", &PyArray_DescrConverter, - &outcode)) { - Py_XDECREF(outcode); - return NULL; - } - ret = PyArray_FromScalar(scalar, outcode); - return ret; + if (!PyArg_ParseTuple(args, "|O&", &PyArray_DescrConverter, + &outcode)) { + Py_XDECREF(outcode); + return NULL; + } + ret = PyArray_FromScalar(scalar, 
outcode); + return ret; } static char doc_sc_wraparray[] = "sc.__array_wrap__(obj) return scalar from array"; @@ -1088,21 +1091,21 @@ static PyObject * gentype_wraparray(PyObject *scalar, PyObject *args) { - PyObject *arr; + PyObject *arr; - if (PyTuple_Size(args) < 1) { - PyErr_SetString(PyExc_TypeError, - "only accepts 1 argument."); - return NULL; - } - arr = PyTuple_GET_ITEM(args, 0); - if (!PyArray_Check(arr)) { - PyErr_SetString(PyExc_TypeError, - "can only be called with ndarray object"); - return NULL; - } + if (PyTuple_Size(args) < 1) { + PyErr_SetString(PyExc_TypeError, + "only accepts 1 argument."); + return NULL; + } + arr = PyTuple_GET_ITEM(args, 0); + if (!PyArray_Check(arr)) { + PyErr_SetString(PyExc_TypeError, + "can only be called with ndarray object"); + return NULL; + } - return PyArray_Scalar(PyArray_DATA(arr), PyArray_DESCR(arr), arr); + return PyArray_Scalar(PyArray_DATA(arr), PyArray_DESCR(arr), arr); } @@ -1114,23 +1117,23 @@ static PyObject * gentype_ at name@(PyObject *self, PyObject *args) { - return gentype_generic_method(self, args, NULL, "@name@"); + return gentype_generic_method(self, args, NULL, "@name@"); } /**end repeat**/ static PyObject * gentype_itemset(PyObject *self, PyObject *args) { - PyErr_SetString(PyExc_ValueError, "array-scalars are immutable"); - return NULL; + PyErr_SetString(PyExc_ValueError, "array-scalars are immutable"); + return NULL; } static PyObject * gentype_squeeze(PyObject *self, PyObject *args) { - if (!PyArg_ParseTuple(args, "")) return NULL; - Py_INCREF(self); - return self; + if (!PyArg_ParseTuple(args, "")) return NULL; + Py_INCREF(self); + return self; } static Py_ssize_t @@ -1139,36 +1142,36 @@ static PyObject * gentype_byteswap(PyObject *self, PyObject *args) { - Bool inplace=FALSE; + Bool inplace=FALSE; - if (!PyArg_ParseTuple(args, "|O&", PyArray_BoolConverter, &inplace)) - return NULL; + if (!PyArg_ParseTuple(args, "|O&", PyArray_BoolConverter, &inplace)) + return NULL; - if (inplace) { - PyErr_SetString(PyExc_ValueError, - "cannot byteswap a scalar in-place"); - return NULL; - } - else { - /* get the data, copyswap it and pass it to a new Array scalar - */ - char *data; - int numbytes; - PyArray_Descr *descr; - PyObject *new; - char *newmem; + if (inplace) { + PyErr_SetString(PyExc_ValueError, + "cannot byteswap a scalar in-place"); + return NULL; + } + else { + /* get the data, copyswap it and pass it to a new Array scalar + */ + char *data; + int numbytes; + PyArray_Descr *descr; + PyObject *new; + char *newmem; - numbytes = gentype_getreadbuf(self, 0, (void **)&data); - descr = PyArray_DescrFromScalar(self); - newmem = _pya_malloc(descr->elsize); - if (newmem == NULL) {Py_DECREF(descr); return PyErr_NoMemory();} - else memcpy(newmem, data, descr->elsize); - byte_swap_vector(newmem, 1, descr->elsize); - new = PyArray_Scalar(newmem, descr, NULL); - _pya_free(newmem); - Py_DECREF(descr); - return new; - } + numbytes = gentype_getreadbuf(self, 0, (void **)&data); + descr = PyArray_DescrFromScalar(self); + newmem = _pya_malloc(descr->elsize); + if (newmem == NULL) {Py_DECREF(descr); return PyErr_NoMemory();} + else memcpy(newmem, data, descr->elsize); + byte_swap_vector(newmem, 1, descr->elsize); + new = PyArray_Scalar(newmem, descr, NULL); + _pya_free(newmem); + Py_DECREF(descr); + return new; + } } @@ -1180,198 +1183,197 @@ static PyObject * gentype_ at name@(PyObject *self, PyObject *args, PyObject *kwds) { - return gentype_generic_method(self, args, kwds, "@name@"); + return gentype_generic_method(self, args, kwds, 
"@name@"); } /**end repeat**/ static PyObject * voidtype_getfield(PyVoidScalarObject *self, PyObject *args, PyObject *kwds) { - PyObject *ret; + PyObject *ret; - ret = gentype_generic_method((PyObject *)self, args, kwds, "getfield"); - if (!ret) return ret; - if (PyArray_IsScalar(ret, Generic) && \ + ret = gentype_generic_method((PyObject *)self, args, kwds, "getfield"); + if (!ret) return ret; + if (PyArray_IsScalar(ret, Generic) && \ (!PyArray_IsScalar(ret, Void))) { - PyArray_Descr *new; - void *ptr; - if (!PyArray_ISNBO(self->descr->byteorder)) { - new = PyArray_DescrFromScalar(ret); - ptr = scalar_value(ret, new); - byte_swap_vector(ptr, 1, new->elsize); - Py_DECREF(new); - } + PyArray_Descr *new; + void *ptr; + if (!PyArray_ISNBO(self->descr->byteorder)) { + new = PyArray_DescrFromScalar(ret); + ptr = scalar_value(ret, new); + byte_swap_vector(ptr, 1, new->elsize); + Py_DECREF(new); } - return ret; + } + return ret; } static PyObject * gentype_setfield(PyObject *self, PyObject *args, PyObject *kwds) { - - PyErr_SetString(PyExc_TypeError, - "Can't set fields in a non-void array scalar."); - return NULL; + PyErr_SetString(PyExc_TypeError, + "Can't set fields in a non-void array scalar."); + return NULL; } static PyObject * voidtype_setfield(PyVoidScalarObject *self, PyObject *args, PyObject *kwds) { - PyArray_Descr *typecode=NULL; - int offset = 0; - PyObject *value, *src; - int mysize; - char *dptr; - static char *kwlist[] = {"value", "dtype", "offset", 0}; + PyArray_Descr *typecode=NULL; + int offset = 0; + PyObject *value, *src; + int mysize; + char *dptr; + static char *kwlist[] = {"value", "dtype", "offset", 0}; - if ((self->flags & WRITEABLE) != WRITEABLE) { - PyErr_SetString(PyExc_RuntimeError, - "Can't write to memory"); - return NULL; - } - if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO&|i", kwlist, - &value, - PyArray_DescrConverter, - &typecode, &offset)) { - Py_XDECREF(typecode); - return NULL; - } + if ((self->flags & WRITEABLE) != WRITEABLE) { + PyErr_SetString(PyExc_RuntimeError, + "Can't write to memory"); + return NULL; + } + if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO&|i", kwlist, + &value, + PyArray_DescrConverter, + &typecode, &offset)) { + Py_XDECREF(typecode); + return NULL; + } - mysize = self->ob_size; + mysize = self->ob_size; - if (offset < 0 || (offset + typecode->elsize) > mysize) { - PyErr_Format(PyExc_ValueError, - "Need 0 <= offset <= %d for requested type " \ - "but received offset = %d", - mysize-typecode->elsize, offset); - Py_DECREF(typecode); - return NULL; - } + if (offset < 0 || (offset + typecode->elsize) > mysize) { + PyErr_Format(PyExc_ValueError, + "Need 0 <= offset <= %d for requested type " \ + "but received offset = %d", + mysize-typecode->elsize, offset); + Py_DECREF(typecode); + return NULL; + } - dptr = self->obval + offset; + dptr = self->obval + offset; - if (typecode->type_num == PyArray_OBJECT) { - PyObject **temp; - Py_INCREF(value); - temp = (PyObject **)dptr; - Py_XDECREF(*temp); - memcpy(temp, &value, sizeof(PyObject *)); - Py_DECREF(typecode); - } - else { - /* Copy data from value to correct place in dptr */ - src = PyArray_FromAny(value, typecode, 0, 0, CARRAY, NULL); - if (src == NULL) return NULL; - typecode->f->copyswap(dptr, PyArray_DATA(src), - !PyArray_ISNBO(self->descr->byteorder), - src); - Py_DECREF(src); - } - Py_INCREF(Py_None); - return Py_None; + if (typecode->type_num == PyArray_OBJECT) { + PyObject **temp; + Py_INCREF(value); + temp = (PyObject **)dptr; + Py_XDECREF(*temp); + memcpy(temp, &value, 
sizeof(PyObject *)); + Py_DECREF(typecode); + } + else { + /* Copy data from value to correct place in dptr */ + src = PyArray_FromAny(value, typecode, 0, 0, CARRAY, NULL); + if (src == NULL) return NULL; + typecode->f->copyswap(dptr, PyArray_DATA(src), + !PyArray_ISNBO(self->descr->byteorder), + src); + Py_DECREF(src); + } + Py_INCREF(Py_None); + return Py_None; } static PyObject * gentype_reduce(PyObject *self, PyObject *args) { - PyObject *ret=NULL, *obj=NULL, *mod=NULL; - const char *buffer; - Py_ssize_t buflen; + PyObject *ret=NULL, *obj=NULL, *mod=NULL; + const char *buffer; + Py_ssize_t buflen; - /* Return a tuple of (callable object, arguments) */ + /* Return a tuple of (callable object, arguments) */ - ret = PyTuple_New(2); - if (ret == NULL) return NULL; - if (PyObject_AsReadBuffer(self, (const void **)&buffer, &buflen)<0) { - Py_DECREF(ret); return NULL; - } - mod = PyImport_ImportModule("numpy.core.multiarray"); - if (mod == NULL) return NULL; - obj = PyObject_GetAttrString(mod, "scalar"); - Py_DECREF(mod); - if (obj == NULL) return NULL; - PyTuple_SET_ITEM(ret, 0, obj); - obj = PyObject_GetAttrString((PyObject *)self, "dtype"); - if (PyArray_IsScalar(self, Object)) { - mod = ((PyObjectScalarObject *)self)->obval; - PyTuple_SET_ITEM(ret, 1, - Py_BuildValue("NO", obj, mod)); - } - else { + ret = PyTuple_New(2); + if (ret == NULL) return NULL; + if (PyObject_AsReadBuffer(self, (const void **)&buffer, &buflen)<0) { + Py_DECREF(ret); return NULL; + } + mod = PyImport_ImportModule("numpy.core.multiarray"); + if (mod == NULL) return NULL; + obj = PyObject_GetAttrString(mod, "scalar"); + Py_DECREF(mod); + if (obj == NULL) return NULL; + PyTuple_SET_ITEM(ret, 0, obj); + obj = PyObject_GetAttrString((PyObject *)self, "dtype"); + if (PyArray_IsScalar(self, Object)) { + mod = ((PyObjectScalarObject *)self)->obval; + PyTuple_SET_ITEM(ret, 1, + Py_BuildValue("NO", obj, mod)); + } + else { #ifndef Py_UNICODE_WIDE - /* We need to expand the buffer so that we always write - UCS4 to disk for pickle of unicode scalars. + /* We need to expand the buffer so that we always write + UCS4 to disk for pickle of unicode scalars. - This could be in a unicode_reduce function, but - that would require re-factoring. - */ - int alloc=0; - char *tmp; - int newlen; + This could be in a unicode_reduce function, but + that would require re-factoring. 
+ */ + int alloc=0; + char *tmp; + int newlen; - if (PyArray_IsScalar(self, Unicode)) { - tmp = _pya_malloc(buflen*2); - if (tmp == NULL) { - Py_DECREF(ret); - return PyErr_NoMemory(); - } - alloc = 1; - newlen = PyUCS2Buffer_AsUCS4((Py_UNICODE *)buffer, - (PyArray_UCS4 *)tmp, - buflen / 2, buflen / 2); - buflen = newlen*4; - buffer = tmp; - } + if (PyArray_IsScalar(self, Unicode)) { + tmp = _pya_malloc(buflen*2); + if (tmp == NULL) { + Py_DECREF(ret); + return PyErr_NoMemory(); + } + alloc = 1; + newlen = PyUCS2Buffer_AsUCS4((Py_UNICODE *)buffer, + (PyArray_UCS4 *)tmp, + buflen / 2, buflen / 2); + buflen = newlen*4; + buffer = tmp; + } #endif - mod = PyString_FromStringAndSize(buffer, buflen); - if (mod == NULL) { - Py_DECREF(ret); + mod = PyString_FromStringAndSize(buffer, buflen); + if (mod == NULL) { + Py_DECREF(ret); #ifndef Py_UNICODE_WIDE - ret = NULL; - goto fail; + ret = NULL; + goto fail; #else - return NULL; + return NULL; #endif - } - PyTuple_SET_ITEM(ret, 1, - Py_BuildValue("NN", obj, mod)); + } + PyTuple_SET_ITEM(ret, 1, + Py_BuildValue("NN", obj, mod)); #ifndef Py_UNICODE_WIDE - fail: - if (alloc) _pya_free((char *)buffer); +fail: + if (alloc) _pya_free((char *)buffer); #endif - } - return ret; + } + return ret; } /* ignores everything */ static PyObject * gentype_setstate(PyObject *self, PyObject *args) { - Py_INCREF(Py_None); - return (Py_None); + Py_INCREF(Py_None); + return (Py_None); } static PyObject * gentype_dump(PyObject *self, PyObject *args) { - PyObject *file=NULL; - int ret; + PyObject *file=NULL; + int ret; - if (!PyArg_ParseTuple(args, "O", &file)) - return NULL; - ret = PyArray_Dump(self, file, 2); - if (ret < 0) return NULL; - Py_INCREF(Py_None); - return Py_None; + if (!PyArg_ParseTuple(args, "O", &file)) + return NULL; + ret = PyArray_Dump(self, file, 2); + if (ret < 0) return NULL; + Py_INCREF(Py_None); + return Py_None; } static PyObject * gentype_dumps(PyObject *self, PyObject *args) { - if (!PyArg_ParseTuple(args, "")) - return NULL; - return PyArray_Dumps(self, 2); + if (!PyArg_ParseTuple(args, "")) + return NULL; + return PyArray_Dumps(self, 2); } @@ -1379,146 +1381,202 @@ static PyObject * gentype_setflags(PyObject *self, PyObject *args, PyObject *kwds) { - Py_INCREF(Py_None); - return Py_None; + Py_INCREF(Py_None); + return Py_None; } /* need to fill in doc-strings for these methods on import -- copy from array docstrings */ static PyMethodDef gentype_methods[] = { - {"tolist", (PyCFunction)gentype_tolist, 1, NULL}, - {"item", (PyCFunction)gentype_item, METH_VARARGS, NULL}, - {"itemset", (PyCFunction)gentype_itemset, METH_VARARGS, NULL}, - {"tofile", (PyCFunction)gentype_tofile, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"tostring", (PyCFunction)gentype_tostring, METH_VARARGS, NULL}, - {"byteswap", (PyCFunction)gentype_byteswap,1, NULL}, - {"astype", (PyCFunction)gentype_astype, 1, NULL}, - {"getfield", (PyCFunction)gentype_getfield, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"setfield", (PyCFunction)gentype_setfield, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"copy", (PyCFunction)gentype_copy, 1, NULL}, - {"resize", (PyCFunction)gentype_resize, - METH_VARARGS|METH_KEYWORDS, NULL}, + {"tolist", + (PyCFunction)gentype_tolist, 1, NULL}, + {"item", + (PyCFunction)gentype_item, METH_VARARGS, NULL}, + {"itemset", + (PyCFunction)gentype_itemset, METH_VARARGS, NULL}, + {"tofile", (PyCFunction)gentype_tofile, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"tostring", + (PyCFunction)gentype_tostring, METH_VARARGS, NULL}, + {"byteswap", + 
(PyCFunction)gentype_byteswap,1, NULL}, + {"astype", + (PyCFunction)gentype_astype, 1, NULL}, + {"getfield", + (PyCFunction)gentype_getfield, + METH_VARARGS | METH_KEYWORDS, NULL}, + {"setfield", + (PyCFunction)gentype_setfield, + METH_VARARGS | METH_KEYWORDS, NULL}, + {"copy", + (PyCFunction)gentype_copy, 1, NULL}, + {"resize", (PyCFunction)gentype_resize, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"__array__", + (PyCFunction)gentype_getarray, 1, doc_getarray}, + {"__array_wrap__", + (PyCFunction)gentype_wraparray, 1, doc_sc_wraparray}, - {"__array__", (PyCFunction)gentype_getarray, 1, doc_getarray}, - {"__array_wrap__", (PyCFunction)gentype_wraparray, 1, doc_sc_wraparray}, + /* for the copy module */ + {"__copy__", + (PyCFunction)gentype_copy, 1, NULL}, + {"__deepcopy__", + (PyCFunction)gentype___deepcopy__, 1, NULL}, - /* for the copy module */ - {"__copy__", (PyCFunction)gentype_copy, 1, NULL}, - {"__deepcopy__", (PyCFunction)gentype___deepcopy__, 1, NULL}, + {"__reduce__", + (PyCFunction) gentype_reduce, 1, NULL}, + /* For consistency does nothing */ + {"__setstate__", + (PyCFunction) gentype_setstate, 1, NULL}, + {"dumps", + (PyCFunction) gentype_dumps, 1, NULL}, + {"dump", + (PyCFunction) gentype_dump, 1, NULL}, - {"__reduce__", (PyCFunction) gentype_reduce, 1, NULL}, - /* For consistency does nothing */ - {"__setstate__", (PyCFunction) gentype_setstate, 1, NULL}, - - {"dumps", (PyCFunction) gentype_dumps, 1, NULL}, - {"dump", (PyCFunction) gentype_dump, 1, NULL}, - - /* Methods for array */ - {"fill", (PyCFunction)gentype_fill, - METH_VARARGS, NULL}, - {"transpose", (PyCFunction)gentype_transpose, - METH_VARARGS, NULL}, - {"take", (PyCFunction)gentype_take, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"put", (PyCFunction)gentype_put, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"repeat", (PyCFunction)gentype_repeat, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"choose", (PyCFunction)gentype_choose, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"sort", (PyCFunction)gentype_sort, - METH_VARARGS, NULL}, - {"argsort", (PyCFunction)gentype_argsort, - METH_VARARGS, NULL}, - {"searchsorted", (PyCFunction)gentype_searchsorted, - METH_VARARGS, NULL}, - {"argmax", (PyCFunction)gentype_argmax, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"argmin", (PyCFunction)gentype_argmin, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"reshape", (PyCFunction)gentype_reshape, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"squeeze", (PyCFunction)gentype_squeeze, - METH_VARARGS, NULL}, - {"view", (PyCFunction)gentype_view, - METH_VARARGS, NULL}, - {"swapaxes", (PyCFunction)gentype_swapaxes, - METH_VARARGS, NULL}, - {"max", (PyCFunction)gentype_max, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"min", (PyCFunction)gentype_min, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"ptp", (PyCFunction)gentype_ptp, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"mean", (PyCFunction)gentype_mean, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"trace", (PyCFunction)gentype_trace, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"diagonal", (PyCFunction)gentype_diagonal, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"clip", (PyCFunction)gentype_clip, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"conj", (PyCFunction)gentype_conj, - METH_VARARGS, NULL}, - {"conjugate", (PyCFunction)gentype_conjugate, - METH_VARARGS, NULL}, - {"nonzero", (PyCFunction)gentype_nonzero, - METH_VARARGS, NULL}, - {"std", (PyCFunction)gentype_std, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"var", (PyCFunction)gentype_var, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"sum", (PyCFunction)gentype_sum, - METH_VARARGS|METH_KEYWORDS, NULL}, - 
{"cumsum", (PyCFunction)gentype_cumsum, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"prod", (PyCFunction)gentype_prod, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"cumprod", (PyCFunction)gentype_cumprod, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"all", (PyCFunction)gentype_all, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"any", (PyCFunction)gentype_any, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"compress", (PyCFunction)gentype_compress, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"flatten", (PyCFunction)gentype_flatten, - METH_VARARGS, NULL}, - {"ravel", (PyCFunction)gentype_ravel, - METH_VARARGS, NULL}, - {"round", (PyCFunction)gentype_round, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"setflags", (PyCFunction)gentype_setflags, - METH_VARARGS|METH_KEYWORDS, NULL}, - {"newbyteorder", (PyCFunction)gentype_newbyteorder, - METH_VARARGS, NULL}, - {NULL, NULL} /* sentinel */ + /* Methods for array */ + {"fill", + (PyCFunction)gentype_fill, + METH_VARARGS, NULL}, + {"transpose", + (PyCFunction)gentype_transpose, + METH_VARARGS, NULL}, + {"take", + (PyCFunction)gentype_take, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"put", + (PyCFunction)gentype_put, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"repeat", + (PyCFunction)gentype_repeat, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"choose", + (PyCFunction)gentype_choose, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"sort", + (PyCFunction)gentype_sort, + METH_VARARGS, NULL}, + {"argsort", + (PyCFunction)gentype_argsort, + METH_VARARGS, NULL}, + {"searchsorted", + (PyCFunction)gentype_searchsorted, + METH_VARARGS, NULL}, + {"argmax", + (PyCFunction)gentype_argmax, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"argmin", + (PyCFunction)gentype_argmin, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"reshape", + (PyCFunction)gentype_reshape, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"squeeze", + (PyCFunction)gentype_squeeze, + METH_VARARGS, NULL}, + {"view", + (PyCFunction)gentype_view, + METH_VARARGS, NULL}, + {"swapaxes", + (PyCFunction)gentype_swapaxes, + METH_VARARGS, NULL}, + {"max", + (PyCFunction)gentype_max, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"min", + (PyCFunction)gentype_min, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"ptp", + (PyCFunction)gentype_ptp, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"mean", + (PyCFunction)gentype_mean, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"trace", + (PyCFunction)gentype_trace, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"diagonal", + (PyCFunction)gentype_diagonal, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"clip", + (PyCFunction)gentype_clip, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"conj", + (PyCFunction)gentype_conj, + METH_VARARGS, NULL}, + {"conjugate", + (PyCFunction)gentype_conjugate, + METH_VARARGS, NULL}, + {"nonzero", + (PyCFunction)gentype_nonzero, + METH_VARARGS, NULL}, + {"std", + (PyCFunction)gentype_std, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"var", + (PyCFunction)gentype_var, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"sum", + (PyCFunction)gentype_sum, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"cumsum", + (PyCFunction)gentype_cumsum, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"prod", + (PyCFunction)gentype_prod, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"cumprod", + (PyCFunction)gentype_cumprod, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"all", + (PyCFunction)gentype_all, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"any", + (PyCFunction)gentype_any, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"compress", + (PyCFunction)gentype_compress, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"flatten", + (PyCFunction)gentype_flatten, + METH_VARARGS, NULL}, + {"ravel", + (PyCFunction)gentype_ravel, 
+ METH_VARARGS, NULL}, + {"round", + (PyCFunction)gentype_round, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"setflags", + (PyCFunction)gentype_setflags, + METH_VARARGS|METH_KEYWORDS, NULL}, + {"newbyteorder", + (PyCFunction)gentype_newbyteorder, + METH_VARARGS, NULL}, + {NULL, NULL} /* sentinel */ }; static PyGetSetDef voidtype_getsets[] = { - {"flags", - (getter)voidtype_flags_get, - (setter)0, - "integer value of flags"}, - {"dtype", - (getter)voidtype_dtypedescr_get, - (setter)0, - "dtype object"}, - {NULL, NULL} + {"flags", + (getter)voidtype_flags_get, + (setter)0, + "integer value of flags"}, + {"dtype", + (getter)voidtype_dtypedescr_get, + (setter)0, + "dtype object"}, + {NULL, NULL} }; static PyMethodDef voidtype_methods[] = { - {"getfield", (PyCFunction)voidtype_getfield, - METH_VARARGS | METH_KEYWORDS, NULL}, - {"setfield", (PyCFunction)voidtype_setfield, - METH_VARARGS | METH_KEYWORDS, NULL}, - {NULL, NULL} + {"getfield", + (PyCFunction)voidtype_getfield, + METH_VARARGS | METH_KEYWORDS, NULL}, + {"setfield", + (PyCFunction)voidtype_setfield, + METH_VARARGS | METH_KEYWORDS, NULL}, + {NULL, NULL} }; /************* As_mapping functions for void array scalar ************/ @@ -1526,35 +1584,35 @@ static Py_ssize_t voidtype_length(PyVoidScalarObject *self) { - if (!self->descr->names) { - return 0; - } - else { /* return the number of fields */ - return (Py_ssize_t) PyTuple_GET_SIZE(self->descr->names); - } + if (!self->descr->names) { + return 0; + } + else { /* return the number of fields */ + return (Py_ssize_t) PyTuple_GET_SIZE(self->descr->names); + } } static PyObject * voidtype_item(PyVoidScalarObject *self, Py_ssize_t n) { - intp m; - PyObject *flist=NULL, *fieldinfo; + intp m; + PyObject *flist=NULL, *fieldinfo; - if (!(PyDescr_HASFIELDS(self->descr))) { - PyErr_SetString(PyExc_IndexError, - "can't index void scalar without fields"); - return NULL; - } - flist = self->descr->names; - m = PyTuple_GET_SIZE(flist); - if (n < 0) n += m; - if (n < 0 || n >= m) { - PyErr_Format(PyExc_IndexError, "invalid index (%d)", (int) n); - return NULL; - } - fieldinfo = PyDict_GetItem(self->descr->fields, - PyTuple_GET_ITEM(flist, n)); - return voidtype_getfield(self, fieldinfo, NULL); + if (!(PyDescr_HASFIELDS(self->descr))) { + PyErr_SetString(PyExc_IndexError, + "can't index void scalar without fields"); + return NULL; + } + flist = self->descr->names; + m = PyTuple_GET_SIZE(flist); + if (n < 0) n += m; + if (n < 0 || n >= m) { + PyErr_Format(PyExc_IndexError, "invalid index (%d)", (int) n); + return NULL; + } + fieldinfo = PyDict_GetItem(self->descr->fields, + PyTuple_GET_ITEM(flist, n)); + return voidtype_getfield(self, fieldinfo, NULL); } @@ -1562,132 +1620,130 @@ static PyObject * voidtype_subscript(PyVoidScalarObject *self, PyObject *ind) { - intp n; - PyObject *fieldinfo; + intp n; + PyObject *fieldinfo; - if (!(PyDescr_HASFIELDS(self->descr))) { - PyErr_SetString(PyExc_IndexError, - "can't index void scalar without fields"); - return NULL; - } + if (!(PyDescr_HASFIELDS(self->descr))) { + PyErr_SetString(PyExc_IndexError, + "can't index void scalar without fields"); + return NULL; + } - if (PyString_Check(ind) || PyUnicode_Check(ind)) { - /* look up in fields */ - fieldinfo = PyDict_GetItem(self->descr->fields, ind); - if (!fieldinfo) goto fail; - return voidtype_getfield(self, fieldinfo, NULL); - } + if (PyString_Check(ind) || PyUnicode_Check(ind)) { + /* look up in fields */ + fieldinfo = PyDict_GetItem(self->descr->fields, ind); + if (!fieldinfo) goto fail; + return 
voidtype_getfield(self, fieldinfo, NULL); + } - /* try to convert it to a number */ - n = PyArray_PyIntAsIntp(ind); - if (error_converting(n)) goto fail; + /* try to convert it to a number */ + n = PyArray_PyIntAsIntp(ind); + if (error_converting(n)) goto fail; - return voidtype_item(self, (Py_ssize_t)n); + return voidtype_item(self, (Py_ssize_t)n); - fail: - PyErr_SetString(PyExc_IndexError, "invalid index"); - return NULL; - +fail: + PyErr_SetString(PyExc_IndexError, "invalid index"); + return NULL; } static int voidtype_ass_item(PyVoidScalarObject *self, Py_ssize_t n, PyObject *val) { - intp m; - PyObject *flist=NULL, *fieldinfo, *newtup; - PyObject *res; + intp m; + PyObject *flist=NULL, *fieldinfo, *newtup; + PyObject *res; - if (!(PyDescr_HASFIELDS(self->descr))) { - PyErr_SetString(PyExc_IndexError, - "can't index void scalar without fields"); - return -1; - } + if (!(PyDescr_HASFIELDS(self->descr))) { + PyErr_SetString(PyExc_IndexError, + "can't index void scalar without fields"); + return -1; + } - flist = self->descr->names; - m = PyTuple_GET_SIZE(flist); - if (n < 0) n += m; - if (n < 0 || n >= m) goto fail; - fieldinfo = PyDict_GetItem(self->descr->fields, - PyTuple_GET_ITEM(flist, n)); - newtup = Py_BuildValue("(OOO)", val, - PyTuple_GET_ITEM(fieldinfo, 0), - PyTuple_GET_ITEM(fieldinfo, 1)); - res = voidtype_setfield(self, newtup, NULL); - Py_DECREF(newtup); - if (!res) return -1; - Py_DECREF(res); - return 0; + flist = self->descr->names; + m = PyTuple_GET_SIZE(flist); + if (n < 0) n += m; + if (n < 0 || n >= m) goto fail; + fieldinfo = PyDict_GetItem(self->descr->fields, + PyTuple_GET_ITEM(flist, n)); + newtup = Py_BuildValue("(OOO)", val, + PyTuple_GET_ITEM(fieldinfo, 0), + PyTuple_GET_ITEM(fieldinfo, 1)); + res = voidtype_setfield(self, newtup, NULL); + Py_DECREF(newtup); + if (!res) return -1; + Py_DECREF(res); + return 0; - fail: - PyErr_Format(PyExc_IndexError, "invalid index (%d)", (int) n); - return -1; - +fail: + PyErr_Format(PyExc_IndexError, "invalid index (%d)", (int) n); + return -1; } static int voidtype_ass_subscript(PyVoidScalarObject *self, PyObject *ind, PyObject *val) { - intp n; - char *msg = "invalid index"; - PyObject *fieldinfo, *newtup; - PyObject *res; + intp n; + char *msg = "invalid index"; + PyObject *fieldinfo, *newtup; + PyObject *res; - if (!PyDescr_HASFIELDS(self->descr)) { - PyErr_SetString(PyExc_IndexError, - "can't index void scalar without fields"); - return -1; - } + if (!PyDescr_HASFIELDS(self->descr)) { + PyErr_SetString(PyExc_IndexError, + "can't index void scalar without fields"); + return -1; + } - if (PyString_Check(ind) || PyUnicode_Check(ind)) { - /* look up in fields */ - fieldinfo = PyDict_GetItem(self->descr->fields, ind); - if (!fieldinfo) goto fail; - newtup = Py_BuildValue("(OOO)", val, - PyTuple_GET_ITEM(fieldinfo, 0), - PyTuple_GET_ITEM(fieldinfo, 1)); - res = voidtype_setfield(self, newtup, NULL); - Py_DECREF(newtup); - if (!res) return -1; - Py_DECREF(res); - return 0; - } + if (PyString_Check(ind) || PyUnicode_Check(ind)) { + /* look up in fields */ + fieldinfo = PyDict_GetItem(self->descr->fields, ind); + if (!fieldinfo) goto fail; + newtup = Py_BuildValue("(OOO)", val, + PyTuple_GET_ITEM(fieldinfo, 0), + PyTuple_GET_ITEM(fieldinfo, 1)); + res = voidtype_setfield(self, newtup, NULL); + Py_DECREF(newtup); + if (!res) return -1; + Py_DECREF(res); + return 0; + } - /* try to convert it to a number */ - n = PyArray_PyIntAsIntp(ind); - if (error_converting(n)) goto fail; - return voidtype_ass_item(self, (Py_ssize_t)n, val); 
+ /* try to convert it to a number */ + n = PyArray_PyIntAsIntp(ind); + if (error_converting(n)) goto fail; + return voidtype_ass_item(self, (Py_ssize_t)n, val); - fail: - PyErr_SetString(PyExc_IndexError, msg); - return -1; +fail: + PyErr_SetString(PyExc_IndexError, msg); + return -1; } static PyMappingMethods voidtype_as_mapping = { #if PY_VERSION_HEX >= 0x02050000 - (lenfunc)voidtype_length, /*mp_length*/ + (lenfunc)voidtype_length, /*mp_length*/ #else - (inquiry)voidtype_length, /*mp_length*/ + (inquiry)voidtype_length, /*mp_length*/ #endif - (binaryfunc)voidtype_subscript, /*mp_subscript*/ - (objobjargproc)voidtype_ass_subscript, /*mp_ass_subscript*/ + (binaryfunc)voidtype_subscript, /*mp_subscript*/ + (objobjargproc)voidtype_ass_subscript, /*mp_ass_subscript*/ }; static PySequenceMethods voidtype_as_sequence = { #if PY_VERSION_HEX >= 0x02050000 - (lenfunc)voidtype_length, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - (ssizeargfunc)voidtype_item, /*sq_item*/ - 0, /*sq_slice*/ - (ssizeobjargproc)voidtype_ass_item /*sq_ass_item*/ + (lenfunc)voidtype_length, /*sq_length*/ + 0, /*sq_concat*/ + 0, /*sq_repeat*/ + (ssizeargfunc)voidtype_item, /*sq_item*/ + 0, /*sq_slice*/ + (ssizeobjargproc)voidtype_ass_item /*sq_ass_item*/ #else - (inquiry)voidtype_length, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - (intargfunc)voidtype_item, /*sq_item*/ - 0, /*sq_slice*/ - (intobjargproc)voidtype_ass_item /*sq_ass_item*/ + (inquiry)voidtype_length, /*sq_length*/ + 0, /*sq_concat*/ + 0, /*sq_repeat*/ + (intargfunc)voidtype_item, /*sq_item*/ + 0, /*sq_slice*/ + (intobjargproc)voidtype_ass_item /*sq_ass_item*/ #endif }; @@ -1696,66 +1752,66 @@ static Py_ssize_t gentype_getreadbuf(PyObject *self, Py_ssize_t segment, void **ptrptr) { - int numbytes; - PyArray_Descr *outcode; + int numbytes; + PyArray_Descr *outcode; - if (segment != 0) { - PyErr_SetString(PyExc_SystemError, - "Accessing non-existent array segment"); - return -1; - } + if (segment != 0) { + PyErr_SetString(PyExc_SystemError, + "Accessing non-existent array segment"); + return -1; + } - outcode = PyArray_DescrFromScalar(self); - numbytes = outcode->elsize; - *ptrptr = (void *)scalar_value(self, outcode); + outcode = PyArray_DescrFromScalar(self); + numbytes = outcode->elsize; + *ptrptr = (void *)scalar_value(self, outcode); #ifndef Py_UNICODE_WIDE - if (outcode->type_num == NPY_UNICODE) { - numbytes >>= 1; - } + if (outcode->type_num == NPY_UNICODE) { + numbytes >>= 1; + } #endif - Py_DECREF(outcode); - return numbytes; + Py_DECREF(outcode); + return numbytes; } static Py_ssize_t gentype_getsegcount(PyObject *self, Py_ssize_t *lenp) { - PyArray_Descr *outcode; + PyArray_Descr *outcode; - outcode = PyArray_DescrFromScalar(self); - if (lenp) { - *lenp = outcode->elsize; + outcode = PyArray_DescrFromScalar(self); + if (lenp) { + *lenp = outcode->elsize; #ifndef Py_UNICODE_WIDE - if (outcode->type_num == NPY_UNICODE) { - *lenp >>= 1; - } + if (outcode->type_num == NPY_UNICODE) { + *lenp >>= 1; + } #endif - } - Py_DECREF(outcode); - return 1; + } + Py_DECREF(outcode); + return 1; } static Py_ssize_t gentype_getcharbuf(PyObject *self, Py_ssize_t segment, constchar **ptrptr) { - if (PyArray_IsScalar(self, String) || \ + if (PyArray_IsScalar(self, String) || \ PyArray_IsScalar(self, Unicode)) - return gentype_getreadbuf(self, segment, (void **)ptrptr); - else { - PyErr_SetString(PyExc_TypeError, - "Non-character array cannot be interpreted "\ - "as character buffer."); - return -1; - } + return gentype_getreadbuf(self, segment, (void 
**)ptrptr); + else { + PyErr_SetString(PyExc_TypeError, + "Non-character array cannot be interpreted "\ + "as character buffer."); + return -1; + } } static PyBufferProcs gentype_as_buffer = { - gentype_getreadbuf, /*bf_getreadbuffer*/ - NULL, /*bf_getwritebuffer*/ - gentype_getsegcount, /*bf_getsegcount*/ - gentype_getcharbuf, /*bf_getcharbuffer*/ + gentype_getreadbuf, /*bf_getreadbuffer*/ + NULL, /*bf_getwritebuffer*/ + gentype_getsegcount, /*bf_getsegcount*/ + gentype_getcharbuf, /*bf_getcharbuffer*/ }; @@ -1763,27 +1819,27 @@ #define LEAFFLAGS Py_TPFLAGS_DEFAULT | Py_TPFLAGS_CHECKTYPES static PyTypeObject PyGenericArrType_Type = { - PyObject_HEAD_INIT(NULL) - 0, /*ob_size*/ - "numpy.generic", /*tp_name*/ - sizeof(PyObject), /*tp_basicsize*/ + PyObject_HEAD_INIT(NULL) + 0, /*ob_size*/ + "numpy.generic", /*tp_name*/ + sizeof(PyObject), /*tp_basicsize*/ }; static void void_dealloc(PyVoidScalarObject *v) { - if (v->flags & OWNDATA) - PyDataMem_FREE(v->obval); - Py_XDECREF(v->descr); - Py_XDECREF(v->base); - v->ob_type->tp_free(v); + if (v->flags & OWNDATA) + PyDataMem_FREE(v->obval); + Py_XDECREF(v->descr); + Py_XDECREF(v->base); + v->ob_type->tp_free(v); } static void object_arrtype_dealloc(PyObject *v) { - Py_XDECREF(((PyObjectScalarObject *)v)->obval); - v->ob_type->tp_free(v); + Py_XDECREF(((PyObjectScalarObject *)v)->obval); + v->ob_type->tp_free(v); } /* string and unicode inherit from Python Type first and so GET_ITEM is different to get to the Python Type. @@ -1793,17 +1849,17 @@ */ #define _WORK(num) \ - if (type->tp_bases && (PyTuple_GET_SIZE(type->tp_bases)==2)) { \ - PyTypeObject *sup; \ - /* We are inheriting from a Python type as well so \ - give it first dibs on conversion */ \ - sup = (PyTypeObject *)PyTuple_GET_ITEM(type->tp_bases, num); \ - robj = sup->tp_new(type, args, kwds); \ - if (robj != NULL) goto finish; \ - if (PyTuple_GET_SIZE(args)!=1) return NULL; \ - PyErr_Clear(); \ - /* now do default conversion */ \ - } + if (type->tp_bases && (PyTuple_GET_SIZE(type->tp_bases)==2)) { \ + PyTypeObject *sup; \ + /* We are inheriting from a Python type as well so \ + give it first dibs on conversion */ \ + sup = (PyTypeObject *)PyTuple_GET_ITEM(type->tp_bases, num); \ + robj = sup->tp_new(type, args, kwds); \ + if (robj != NULL) goto finish; \ + if (PyTuple_GET_SIZE(args)!=1) return NULL; \ + PyErr_Clear(); \ + /* now do default conversion */ \ + } #define _WORK1 _WORK(1) #define _WORKz _WORK(0) @@ -1818,66 +1874,66 @@ static PyObject * @name at _arrtype_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { - PyObject *obj=NULL; - PyObject *robj; - PyObject *arr; - PyArray_Descr *typecode=NULL; - int itemsize; - void *dest, *src; + PyObject *obj=NULL; + PyObject *robj; + PyObject *arr; + PyArray_Descr *typecode=NULL; + int itemsize; + void *dest, *src; - _WORK at work@ + _WORK at work@ if (!PyArg_ParseTuple(args, "|O", &obj)) return NULL; - typecode = PyArray_DescrFromType(PyArray_ at TYPE@); - Py_INCREF(typecode); - if (obj == NULL) { + typecode = PyArray_DescrFromType(PyArray_ at TYPE@); + Py_INCREF(typecode); + if (obj == NULL) { #if @default@ == 0 - char *mem; - mem = malloc(sizeof(@name@)); - memset(mem, 0, sizeof(@name@)); - robj = PyArray_Scalar(mem, typecode, NULL); - free(mem); + char *mem; + mem = malloc(sizeof(@name@)); + memset(mem, 0, sizeof(@name@)); + robj = PyArray_Scalar(mem, typecode, NULL); + free(mem); #elif @default@ == 1 - robj = PyArray_Scalar(NULL, typecode, NULL); + robj = PyArray_Scalar(NULL, typecode, NULL); #elif @default@ == 2 - obj = Py_None; 
- robj = PyArray_Scalar(&obj, typecode, NULL); + obj = Py_None; + robj = PyArray_Scalar(&obj, typecode, NULL); #endif - goto finish; - } + goto finish; + } - arr = PyArray_FromAny(obj, typecode, 0, 0, FORCECAST, NULL); - if ((arr==NULL) || (PyArray_NDIM(arr) > 0)) return arr; - robj = PyArray_Return((PyArrayObject *)arr); + arr = PyArray_FromAny(obj, typecode, 0, 0, FORCECAST, NULL); + if ((arr==NULL) || (PyArray_NDIM(arr) > 0)) return arr; + robj = PyArray_Return((PyArrayObject *)arr); - finish: - if ((robj==NULL) || (robj->ob_type == type)) return robj; - /* Need to allocate new type and copy data-area over */ - if (type->tp_itemsize) { - itemsize = PyString_GET_SIZE(robj); - } - else itemsize = 0; - obj = type->tp_alloc(type, itemsize); - if (obj == NULL) {Py_DECREF(robj); return NULL;} - if (typecode==NULL) - typecode = PyArray_DescrFromType(PyArray_ at TYPE@); - dest = scalar_value(obj, typecode); - src = scalar_value(robj, typecode); - Py_DECREF(typecode); +finish: + if ((robj==NULL) || (robj->ob_type == type)) return robj; + /* Need to allocate new type and copy data-area over */ + if (type->tp_itemsize) { + itemsize = PyString_GET_SIZE(robj); + } + else itemsize = 0; + obj = type->tp_alloc(type, itemsize); + if (obj == NULL) {Py_DECREF(robj); return NULL;} + if (typecode==NULL) + typecode = PyArray_DescrFromType(PyArray_ at TYPE@); + dest = scalar_value(obj, typecode); + src = scalar_value(robj, typecode); + Py_DECREF(typecode); #if @default@ == 0 - *((npy_ at name@ *)dest) = *((npy_ at name@ *)src); + *((npy_ at name@ *)dest) = *((npy_ at name@ *)src); #elif @default@ == 1 - if (itemsize == 0) { - itemsize = ((PyUnicodeObject *)robj)->length << 2; - } - memcpy(dest, src, itemsize); + if (itemsize == 0) { + itemsize = ((PyUnicodeObject *)robj)->length << 2; + } + memcpy(dest, src, itemsize); #elif @default@ == 2 - memcpy(dest, src, sizeof(void *)); - Py_INCREF(*((PyObject **)dest)); + memcpy(dest, src, sizeof(void *)); + Py_INCREF(*((PyObject **)dest)); #endif - Py_DECREF(robj); - return obj; + Py_DECREF(robj); + return obj; } /**end repeat**/ @@ -1890,56 +1946,56 @@ static PyObject * bool_arrtype_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { - PyObject *obj=NULL; - PyObject *arr; + PyObject *obj=NULL; + PyObject *arr; - if (!PyArg_ParseTuple(args, "|O", &obj)) return NULL; - if (obj == NULL) - PyArrayScalar_RETURN_FALSE; - if (obj == Py_False) - PyArrayScalar_RETURN_FALSE; - if (obj == Py_True) - PyArrayScalar_RETURN_TRUE; - arr = PyArray_FROM_OTF(obj, PyArray_BOOL, FORCECAST); - if (arr && 0 == PyArray_NDIM(arr)) { - Bool val = *((Bool *)PyArray_DATA(arr)); - Py_DECREF(arr); - PyArrayScalar_RETURN_BOOL_FROM_LONG(val); - } - return PyArray_Return((PyArrayObject *)arr); + if (!PyArg_ParseTuple(args, "|O", &obj)) return NULL; + if (obj == NULL) + PyArrayScalar_RETURN_FALSE; + if (obj == Py_False) + PyArrayScalar_RETURN_FALSE; + if (obj == Py_True) + PyArrayScalar_RETURN_TRUE; + arr = PyArray_FROM_OTF(obj, PyArray_BOOL, FORCECAST); + if (arr && 0 == PyArray_NDIM(arr)) { + Bool val = *((Bool *)PyArray_DATA(arr)); + Py_DECREF(arr); + PyArrayScalar_RETURN_BOOL_FROM_LONG(val); + } + return PyArray_Return((PyArrayObject *)arr); } static PyObject * bool_arrtype_and(PyObject *a, PyObject *b) { - if (PyArray_IsScalar(a, Bool) && PyArray_IsScalar(b, Bool)) - PyArrayScalar_RETURN_BOOL_FROM_LONG - ((a == PyArrayScalar_True)&(b == PyArrayScalar_True)); - return PyGenericArrType_Type.tp_as_number->nb_and(a, b); + if (PyArray_IsScalar(a, Bool) && PyArray_IsScalar(b, Bool)) + 
PyArrayScalar_RETURN_BOOL_FROM_LONG + ((a == PyArrayScalar_True)&(b == PyArrayScalar_True)); + return PyGenericArrType_Type.tp_as_number->nb_and(a, b); } static PyObject * bool_arrtype_or(PyObject *a, PyObject *b) { - if (PyArray_IsScalar(a, Bool) && PyArray_IsScalar(b, Bool)) - PyArrayScalar_RETURN_BOOL_FROM_LONG - ((a == PyArrayScalar_True)|(b == PyArrayScalar_True)); - return PyGenericArrType_Type.tp_as_number->nb_or(a, b); + if (PyArray_IsScalar(a, Bool) && PyArray_IsScalar(b, Bool)) + PyArrayScalar_RETURN_BOOL_FROM_LONG + ((a == PyArrayScalar_True)|(b == PyArrayScalar_True)); + return PyGenericArrType_Type.tp_as_number->nb_or(a, b); } static PyObject * bool_arrtype_xor(PyObject *a, PyObject *b) { - if (PyArray_IsScalar(a, Bool) && PyArray_IsScalar(b, Bool)) - PyArrayScalar_RETURN_BOOL_FROM_LONG - ((a == PyArrayScalar_True)^(b == PyArrayScalar_True)); - return PyGenericArrType_Type.tp_as_number->nb_xor(a, b); + if (PyArray_IsScalar(a, Bool) && PyArray_IsScalar(b, Bool)) + PyArrayScalar_RETURN_BOOL_FROM_LONG + ((a == PyArrayScalar_True)^(b == PyArrayScalar_True)); + return PyGenericArrType_Type.tp_as_number->nb_xor(a, b); } static int bool_arrtype_nonzero(PyObject *a) { - return a == PyArrayScalar_True; + return a == PyArrayScalar_True; } #if PY_VERSION_HEX >= 0x02050000 @@ -1952,87 +2008,87 @@ static PyObject * @name at _index(PyObject *self) { - return @type@(PyArrayScalar_VAL(self, @Name@)); + return @type@(PyArrayScalar_VAL(self, @Name@)); } /**end repeat**/ static PyObject * bool_index(PyObject *a) { - return PyInt_FromLong(PyArrayScalar_VAL(a, Bool)); + return PyInt_FromLong(PyArrayScalar_VAL(a, Bool)); } #endif /* Arithmetic methods -- only so we can override &, |, ^. */ static PyNumberMethods bool_arrtype_as_number = { - 0, /* nb_add */ - 0, /* nb_subtract */ - 0, /* nb_multiply */ - 0, /* nb_divide */ - 0, /* nb_remainder */ - 0, /* nb_divmod */ - 0, /* nb_power */ - 0, /* nb_negative */ - 0, /* nb_positive */ - 0, /* nb_absolute */ - (inquiry)bool_arrtype_nonzero, /* nb_nonzero */ - 0, /* nb_invert */ - 0, /* nb_lshift */ - 0, /* nb_rshift */ - (binaryfunc)bool_arrtype_and, /* nb_and */ - (binaryfunc)bool_arrtype_xor, /* nb_xor */ - (binaryfunc)bool_arrtype_or, /* nb_or */ + 0, /* nb_add */ + 0, /* nb_subtract */ + 0, /* nb_multiply */ + 0, /* nb_divide */ + 0, /* nb_remainder */ + 0, /* nb_divmod */ + 0, /* nb_power */ + 0, /* nb_negative */ + 0, /* nb_positive */ + 0, /* nb_absolute */ + (inquiry)bool_arrtype_nonzero, /* nb_nonzero */ + 0, /* nb_invert */ + 0, /* nb_lshift */ + 0, /* nb_rshift */ + (binaryfunc)bool_arrtype_and, /* nb_and */ + (binaryfunc)bool_arrtype_xor, /* nb_xor */ + (binaryfunc)bool_arrtype_or, /* nb_or */ }; static PyObject * void_arrtype_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { - PyObject *obj, *arr; - ulonglong memu=1; - PyObject *new=NULL; - char *destptr; + PyObject *obj, *arr; + ulonglong memu=1; + PyObject *new=NULL; + char *destptr; - if (!PyArg_ParseTuple(args, "O", &obj)) return NULL; - /* For a VOID scalar first see if obj is an integer or long - and create new memory of that size (filled with 0) for the scalar - */ + if (!PyArg_ParseTuple(args, "O", &obj)) return NULL; + /* For a VOID scalar first see if obj is an integer or long + and create new memory of that size (filled with 0) for the scalar + */ - if (PyLong_Check(obj) || PyInt_Check(obj) || \ + if (PyLong_Check(obj) || PyInt_Check(obj) || \ PyArray_IsScalar(obj, Integer) || (PyArray_Check(obj) && PyArray_NDIM(obj)==0 && \ PyArray_ISINTEGER(obj))) { - new = 
obj->ob_type->tp_as_number->nb_long(obj); + new = obj->ob_type->tp_as_number->nb_long(obj); + } + if (new && PyLong_Check(new)) { + PyObject *ret; + memu = PyLong_AsUnsignedLongLong(new); + Py_DECREF(new); + if (PyErr_Occurred() || (memu > MAX_INT)) { + PyErr_Clear(); + PyErr_Format(PyExc_OverflowError, + "size must be smaller than %d", + (int) MAX_INT); + return NULL; } - if (new && PyLong_Check(new)) { - PyObject *ret; - memu = PyLong_AsUnsignedLongLong(new); - Py_DECREF(new); - if (PyErr_Occurred() || (memu > MAX_INT)) { - PyErr_Clear(); - PyErr_Format(PyExc_OverflowError, - "size must be smaller than %d", - (int) MAX_INT); - return NULL; - } - destptr = PyDataMem_NEW((int) memu); - if (destptr == NULL) return PyErr_NoMemory(); - ret = type->tp_alloc(type, 0); - if (ret == NULL) { - PyDataMem_FREE(destptr); - return PyErr_NoMemory(); - } - ((PyVoidScalarObject *)ret)->obval = destptr; - ((PyVoidScalarObject *)ret)->ob_size = (int) memu; - ((PyVoidScalarObject *)ret)->descr = \ - PyArray_DescrNewFromType(PyArray_VOID); - ((PyVoidScalarObject *)ret)->descr->elsize = (int) memu; - ((PyVoidScalarObject *)ret)->flags = BEHAVED | OWNDATA; - ((PyVoidScalarObject *)ret)->base = NULL; - memset(destptr, '\0', (size_t) memu); - return ret; + destptr = PyDataMem_NEW((int) memu); + if (destptr == NULL) return PyErr_NoMemory(); + ret = type->tp_alloc(type, 0); + if (ret == NULL) { + PyDataMem_FREE(destptr); + return PyErr_NoMemory(); } + ((PyVoidScalarObject *)ret)->obval = destptr; + ((PyVoidScalarObject *)ret)->ob_size = (int) memu; + ((PyVoidScalarObject *)ret)->descr = \ + PyArray_DescrNewFromType(PyArray_VOID); + ((PyVoidScalarObject *)ret)->descr->elsize = (int) memu; + ((PyVoidScalarObject *)ret)->flags = BEHAVED | OWNDATA; + ((PyVoidScalarObject *)ret)->base = NULL; + memset(destptr, '\0', (size_t) memu); + return ret; + } - arr = PyArray_FROM_OTF(obj, PyArray_VOID, FORCECAST); - return PyArray_Return((PyArrayObject *)arr); + arr = PyArray_FROM_OTF(obj, PyArray_VOID, FORCECAST); + return PyArray_Return((PyArrayObject *)arr); } @@ -2045,7 +2101,7 @@ static long @lname at _arrtype_hash(PyObject *obj) { - return (long)(((Py at name@ScalarObject *)obj)->obval); + return (long)(((Py at name@ScalarObject *)obj)->obval); } /**end repeat**/ @@ -2056,9 +2112,9 @@ static long @lname at _arrtype_hash(PyObject *obj) { - long x = (long)(((Py at name@ScalarObject *)obj)->obval); - if (x == -1) x=-2; - return x; + long x = (long)(((Py at name@ScalarObject *)obj)->obval); + if (x == -1) x=-2; + return x; } /**end repeat**/ @@ -2066,9 +2122,9 @@ static long int_arrtype_hash(PyObject *obj) { - long x = (long)(((PyIntScalarObject *)obj)->obval); - if (x == -1) x=-2; - return x; + long x = (long)(((PyIntScalarObject *)obj)->obval); + if (x == -1) x=-2; + return x; } #endif @@ -2082,23 +2138,23 @@ static long @char at longlong_arrtype_hash(PyObject *obj) { - long y; - @char at longlong x = (((Py at Char@LongLongScalarObject *)obj)->obval); + long y; + @char at longlong x = (((Py at Char@LongLongScalarObject *)obj)->obval); - if ((x <= LONG_MAX)@ext@) { - y = (long) x; - } - else { - union Mask { - long hashvals[2]; - @char at longlong v; - } both; + if ((x <= LONG_MAX)@ext@) { + y = (long) x; + } + else { + union Mask { + long hashvals[2]; + @char at longlong v; + } both; - both.v = x; - y = both.hashvals[0] + (1000003)*both.hashvals[1]; - } - if (y == -1) y = -2; - return y; + both.v = x; + y = both.hashvals[0] + (1000003)*both.hashvals[1]; + } + if (y == -1) y = -2; + return y; } #endif /**end repeat**/ @@ 
-2107,9 +2163,9 @@ static long ulonglong_arrtype_hash(PyObject *obj) { - long x = (long)(((PyULongLongScalarObject *)obj)->obval); - if (x == -1) x=-2; - return x; + long x = (long)(((PyULongLongScalarObject *)obj)->obval); + if (x == -1) x=-2; + return x; } #endif @@ -2123,230 +2179,230 @@ static long @lname at _arrtype_hash(PyObject *obj) { - return _Py_HashDouble((double) ((Py at name@ScalarObject *)obj)->obval); + return _Py_HashDouble((double) ((Py at name@ScalarObject *)obj)->obval); } /* borrowed from complex_hash */ static long c at lname@_arrtype_hash(PyObject *obj) { - long hashreal, hashimag, combined; - hashreal = _Py_HashDouble((double) \ - (((PyC at name@ScalarObject *)obj)->obval).real); + long hashreal, hashimag, combined; + hashreal = _Py_HashDouble((double) \ + (((PyC at name@ScalarObject *)obj)->obval).real); - if (hashreal == -1) return -1; - hashimag = _Py_HashDouble((double) \ - (((PyC at name@ScalarObject *)obj)->obval).imag); - if (hashimag == -1) return -1; + if (hashreal == -1) return -1; + hashimag = _Py_HashDouble((double) \ + (((PyC at name@ScalarObject *)obj)->obval).imag); + if (hashimag == -1) return -1; - combined = hashreal + 1000003 * hashimag; - if (combined == -1) combined = -2; - return combined; + combined = hashreal + 1000003 * hashimag; + if (combined == -1) combined = -2; + return combined; } /**end repeat**/ static long object_arrtype_hash(PyObject *obj) { - return PyObject_Hash(((PyObjectScalarObject *)obj)->obval); + return PyObject_Hash(((PyObjectScalarObject *)obj)->obval); } /* just hash the pointer */ static long void_arrtype_hash(PyObject *obj) { - return _Py_HashPointer((void *)(((PyVoidScalarObject *)obj)->obval)); + return _Py_HashPointer((void *)(((PyVoidScalarObject *)obj)->obval)); } /*object arrtype getattro and setattro */ static PyObject * object_arrtype_getattro(PyObjectScalarObject *obj, PyObject *attr) { - PyObject *res; + PyObject *res; - /* first look in object and then hand off to generic type */ + /* first look in object and then hand off to generic type */ - res = PyObject_GenericGetAttr(obj->obval, attr); - if (res) return res; - PyErr_Clear(); - return PyObject_GenericGetAttr((PyObject *)obj, attr); + res = PyObject_GenericGetAttr(obj->obval, attr); + if (res) return res; + PyErr_Clear(); + return PyObject_GenericGetAttr((PyObject *)obj, attr); } static int object_arrtype_setattro(PyObjectScalarObject *obj, PyObject *attr, PyObject *val) { - int res; - /* first look in object and then hand off to generic type */ + int res; + /* first look in object and then hand off to generic type */ - res = PyObject_GenericSetAttr(obj->obval, attr, val); - if (res >= 0) return res; - PyErr_Clear(); - return PyObject_GenericSetAttr((PyObject *)obj, attr, val); + res = PyObject_GenericSetAttr(obj->obval, attr, val); + if (res >= 0) return res; + PyErr_Clear(); + return PyObject_GenericSetAttr((PyObject *)obj, attr, val); } static PyObject * object_arrtype_concat(PyObjectScalarObject *self, PyObject *other) { - return PySequence_Concat(self->obval, other); + return PySequence_Concat(self->obval, other); } static Py_ssize_t object_arrtype_length(PyObjectScalarObject *self) { - return PyObject_Length(self->obval); + return PyObject_Length(self->obval); } static PyObject * object_arrtype_repeat(PyObjectScalarObject *self, Py_ssize_t count) { - return PySequence_Repeat(self->obval, count); + return PySequence_Repeat(self->obval, count); } static PyObject * object_arrtype_subscript(PyObjectScalarObject *self, PyObject *key) { - return 
PyObject_GetItem(self->obval, key); + return PyObject_GetItem(self->obval, key); } static int object_arrtype_ass_subscript(PyObjectScalarObject *self, PyObject *key, PyObject *value) { - return PyObject_SetItem(self->obval, key, value); + return PyObject_SetItem(self->obval, key, value); } static int object_arrtype_contains(PyObjectScalarObject *self, PyObject *ob) { - return PySequence_Contains(self->obval, ob); + return PySequence_Contains(self->obval, ob); } static PyObject * object_arrtype_inplace_concat(PyObjectScalarObject *self, PyObject *o) { - return PySequence_InPlaceConcat(self->obval, o); + return PySequence_InPlaceConcat(self->obval, o); } static PyObject * object_arrtype_inplace_repeat(PyObjectScalarObject *self, Py_ssize_t count) { - return PySequence_InPlaceRepeat(self->obval, count); + return PySequence_InPlaceRepeat(self->obval, count); } static PySequenceMethods object_arrtype_as_sequence = { #if PY_VERSION_HEX >= 0x02050000 - (lenfunc)object_arrtype_length, /*sq_length*/ - (binaryfunc)object_arrtype_concat, /*sq_concat*/ - (ssizeargfunc)object_arrtype_repeat, /*sq_repeat*/ - 0, /*sq_item*/ - 0, /*sq_slice*/ - 0, /* sq_ass_item */ - 0, /* sq_ass_slice */ - (objobjproc)object_arrtype_contains, /* sq_contains */ - (binaryfunc)object_arrtype_inplace_concat, /* sq_inplace_concat */ - (ssizeargfunc)object_arrtype_inplace_repeat, /* sq_inplace_repeat */ + (lenfunc)object_arrtype_length, /*sq_length*/ + (binaryfunc)object_arrtype_concat, /*sq_concat*/ + (ssizeargfunc)object_arrtype_repeat, /*sq_repeat*/ + 0, /*sq_item*/ + 0, /*sq_slice*/ + 0, /* sq_ass_item */ + 0, /* sq_ass_slice */ + (objobjproc)object_arrtype_contains, /* sq_contains */ + (binaryfunc)object_arrtype_inplace_concat, /* sq_inplace_concat */ + (ssizeargfunc)object_arrtype_inplace_repeat, /* sq_inplace_repeat */ #else - (inquiry)object_arrtype_length, /*sq_length*/ - (binaryfunc)object_arrtype_concat, /*sq_concat*/ - (intargfunc)object_arrtype_repeat, /*sq_repeat*/ - 0, /*sq_item*/ - 0, /*sq_slice*/ - 0, /* sq_ass_item */ - 0, /* sq_ass_slice */ - (objobjproc)object_arrtype_contains, /* sq_contains */ - (binaryfunc)object_arrtype_inplace_concat, /* sq_inplace_concat */ - (intargfunc)object_arrtype_inplace_repeat, /* sq_inplace_repeat */ + (inquiry)object_arrtype_length, /*sq_length*/ + (binaryfunc)object_arrtype_concat, /*sq_concat*/ + (intargfunc)object_arrtype_repeat, /*sq_repeat*/ + 0, /*sq_item*/ + 0, /*sq_slice*/ + 0, /* sq_ass_item */ + 0, /* sq_ass_slice */ + (objobjproc)object_arrtype_contains, /* sq_contains */ + (binaryfunc)object_arrtype_inplace_concat, /* sq_inplace_concat */ + (intargfunc)object_arrtype_inplace_repeat, /* sq_inplace_repeat */ #endif }; static PyMappingMethods object_arrtype_as_mapping = { #if PY_VERSION_HEX >= 0x02050000 - (lenfunc)object_arrtype_length, - (binaryfunc)object_arrtype_subscript, - (objobjargproc)object_arrtype_ass_subscript, + (lenfunc)object_arrtype_length, + (binaryfunc)object_arrtype_subscript, + (objobjargproc)object_arrtype_ass_subscript, #else - (inquiry)object_arrtype_length, - (binaryfunc)object_arrtype_subscript, - (objobjargproc)object_arrtype_ass_subscript, + (inquiry)object_arrtype_length, + (binaryfunc)object_arrtype_subscript, + (objobjargproc)object_arrtype_ass_subscript, #endif }; static Py_ssize_t object_arrtype_getsegcount(PyObjectScalarObject *self, Py_ssize_t *lenp) { - Py_ssize_t newlen; - int cnt; - PyBufferProcs *pb = self->obval->ob_type->tp_as_buffer; + Py_ssize_t newlen; + int cnt; + PyBufferProcs *pb = self->obval->ob_type->tp_as_buffer; - if 
(pb == NULL || \ + if (pb == NULL || \ pb->bf_getsegcount == NULL || \ (cnt = (*pb->bf_getsegcount)(self->obval, &newlen)) != 1) - return 0; + return 0; - if (lenp) - *lenp = newlen; + if (lenp) + *lenp = newlen; - return cnt; + return cnt; } static Py_ssize_t object_arrtype_getreadbuf(PyObjectScalarObject *self, Py_ssize_t segment, void **ptrptr) { - PyBufferProcs *pb = self->obval->ob_type->tp_as_buffer; + PyBufferProcs *pb = self->obval->ob_type->tp_as_buffer; - if (pb == NULL || \ + if (pb == NULL || \ pb->bf_getreadbuffer == NULL || pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a readable buffer object"); - return -1; - } + PyErr_SetString(PyExc_TypeError, + "expected a readable buffer object"); + return -1; + } - return (*pb->bf_getreadbuffer)(self->obval, segment, ptrptr); + return (*pb->bf_getreadbuffer)(self->obval, segment, ptrptr); } static Py_ssize_t object_arrtype_getwritebuf(PyObjectScalarObject *self, Py_ssize_t segment, void **ptrptr) { - PyBufferProcs *pb = self->obval->ob_type->tp_as_buffer; + PyBufferProcs *pb = self->obval->ob_type->tp_as_buffer; - if (pb == NULL || \ + if (pb == NULL || \ pb->bf_getwritebuffer == NULL || pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a writeable buffer object"); - return -1; - } + PyErr_SetString(PyExc_TypeError, + "expected a writeable buffer object"); + return -1; + } - return (*pb->bf_getwritebuffer)(self->obval, segment, ptrptr); + return (*pb->bf_getwritebuffer)(self->obval, segment, ptrptr); } static Py_ssize_t object_arrtype_getcharbuf(PyObjectScalarObject *self, Py_ssize_t segment, constchar **ptrptr) { - PyBufferProcs *pb = self->obval->ob_type->tp_as_buffer; + PyBufferProcs *pb = self->obval->ob_type->tp_as_buffer; - if (pb == NULL || \ + if (pb == NULL || \ pb->bf_getcharbuffer == NULL || pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a character buffer object"); - return -1; - } + PyErr_SetString(PyExc_TypeError, + "expected a character buffer object"); + return -1; + } - return (*pb->bf_getcharbuffer)(self->obval, segment, ptrptr); + return (*pb->bf_getcharbuffer)(self->obval, segment, ptrptr); } static PyBufferProcs object_arrtype_as_buffer = { #if PY_VERSION_HEX >= 0x02050000 - (readbufferproc)object_arrtype_getreadbuf, - (writebufferproc)object_arrtype_getwritebuf, - (segcountproc)object_arrtype_getsegcount, - (charbufferproc)object_arrtype_getcharbuf, + (readbufferproc)object_arrtype_getreadbuf, + (writebufferproc)object_arrtype_getwritebuf, + (segcountproc)object_arrtype_getsegcount, + (charbufferproc)object_arrtype_getcharbuf, #else - (getreadbufferproc)object_arrtype_getreadbuf, - (getwritebufferproc)object_arrtype_getwritebuf, - (getsegcountproc)object_arrtype_getsegcount, - (getcharbufferproc)object_arrtype_getcharbuf, + (getreadbufferproc)object_arrtype_getreadbuf, + (getwritebufferproc)object_arrtype_getwritebuf, + (getsegcountproc)object_arrtype_getsegcount, + (getcharbufferproc)object_arrtype_getcharbuf, #endif }; @@ -2357,27 +2413,27 @@ } static PyTypeObject PyObjectArrType_Type = { - PyObject_HEAD_INIT(NULL) - 0, /*ob_size*/ - "numpy.object_", /*tp_name*/ - sizeof(PyObjectScalarObject), /*tp_basicsize*/ - 0, /* tp_itemsize */ - (destructor)object_arrtype_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_compare */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - &object_arrtype_as_sequence, /* tp_as_sequence */ - &object_arrtype_as_mapping, /* tp_as_mapping */ - 0, /* 
tp_hash */ - (ternaryfunc)object_arrtype_call, /* tp_call */ - 0, /* tp_str */ - (getattrofunc)object_arrtype_getattro, /* tp_getattro */ - (setattrofunc)object_arrtype_setattro, /* tp_setattro */ - &object_arrtype_as_buffer, /* tp_as_buffer */ - 0, /* tp_flags */ + PyObject_HEAD_INIT(NULL) + 0, /*ob_size*/ + "numpy.object_", /*tp_name*/ + sizeof(PyObjectScalarObject), /*tp_basicsize*/ + 0, /* tp_itemsize */ + (destructor)object_arrtype_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + &object_arrtype_as_sequence, /* tp_as_sequence */ + &object_arrtype_as_mapping, /* tp_as_mapping */ + 0, /* tp_hash */ + (ternaryfunc)object_arrtype_call, /* tp_call */ + 0, /* tp_str */ + (getattrofunc)object_arrtype_getattro, /* tp_getattro */ + (setattrofunc)object_arrtype_setattro, /* tp_setattro */ + &object_arrtype_as_buffer, /* tp_as_buffer */ + 0, /* tp_flags */ }; @@ -2390,54 +2446,54 @@ static PyObject * gen_arrtype_subscript(PyObject *self, PyObject *key) { - /* Only [...], [...,], [, ...], - is allowed for indexing a scalar + /* Only [...], [...,], [, ...], + is allowed for indexing a scalar - These return a new N-d array with a copy of - the data where N is the number of None's in . + These return a new N-d array with a copy of + the data where N is the number of None's in . - */ - PyObject *res, *ret; - int N; + */ + PyObject *res, *ret; + int N; - if (key == Py_Ellipsis || key == Py_None || - PyTuple_Check(key)) { - res = PyArray_FromScalar(self, NULL); - } - else { - PyErr_SetString(PyExc_IndexError, - "invalid index to scalar variable."); - return NULL; - } + if (key == Py_Ellipsis || key == Py_None || + PyTuple_Check(key)) { + res = PyArray_FromScalar(self, NULL); + } + else { + PyErr_SetString(PyExc_IndexError, + "invalid index to scalar variable."); + return NULL; + } - if (key == Py_Ellipsis) - return res; + if (key == Py_Ellipsis) + return res; - if (key == Py_None) { - ret = add_new_axes_0d((PyArrayObject *)res, 1); - Py_DECREF(res); - return ret; - } - /* Must be a Tuple */ + if (key == Py_None) { + ret = add_new_axes_0d((PyArrayObject *)res, 1); + Py_DECREF(res); + return ret; + } + /* Must be a Tuple */ - N = count_new_axes_0d(key); - if (N < 0) return NULL; - ret = add_new_axes_0d((PyArrayObject *)res, N); - Py_DECREF(res); - return ret; + N = count_new_axes_0d(key); + if (N < 0) return NULL; + ret = add_new_axes_0d((PyArrayObject *)res, N); + Py_DECREF(res); + return ret; } /**begin repeat -#name=bool, string, unicode, void# -#NAME=Bool, String, Unicode, Void# -#ex=_,_,_,# -*/ + * #name=bool, string, unicode, void# + * #NAME=Bool, String, Unicode, Void# + * #ex=_,_,_,# + */ static PyTypeObject Py at NAME@ArrType_Type = { - PyObject_HEAD_INIT(NULL) - 0, /*ob_size*/ - "numpy. at name@@ex@", /*tp_name*/ - sizeof(Py at NAME@ScalarObject), /*tp_basicsize*/ + PyObject_HEAD_INIT(NULL) + 0, /*ob_size*/ + "numpy. at name@@ex@", /*tp_name*/ + sizeof(Py at NAME@ScalarObject), /*tp_basicsize*/ }; /**end repeat**/ @@ -2464,10 +2520,10 @@ #define _THIS_SIZE "256" #endif static PyTypeObject Py at NAME@ArrType_Type = { - PyObject_HEAD_INIT(NULL) - 0, /*ob_size*/ - "numpy. at name@" _THIS_SIZE, /*tp_name*/ - sizeof(Py at NAME@ScalarObject), /*tp_basicsize*/ + PyObject_HEAD_INIT(NULL) + 0, /*ob_size*/ + "numpy. 
at name@" _THIS_SIZE, /*tp_name*/ + sizeof(Py at NAME@ScalarObject), /*tp_basicsize*/ }; #undef _THIS_SIZE @@ -2475,9 +2531,9 @@ static PyMappingMethods gentype_as_mapping = { - NULL, - (binaryfunc)gen_arrtype_subscript, - NULL + NULL, + (binaryfunc)gen_arrtype_subscript, + NULL }; @@ -2509,28 +2565,28 @@ #define _THIS_SIZE1 "512" #endif static PyTypeObject Py at NAME@ArrType_Type = { - PyObject_HEAD_INIT(NULL) - 0, /*ob_size*/ - "numpy. at name@" _THIS_SIZE1, /*tp_name*/ - sizeof(Py at NAME@ScalarObject), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - 0, /*tp_dealloc*/ - 0, /*tp_print*/ - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - 0, /*tp_compare*/ - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash */ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT, /*tp_flags*/ - "Composed of two " _THIS_SIZE2 " bit floats", /* tp_doc */ + PyObject_HEAD_INIT(NULL) + 0, /*ob_size*/ + "numpy. at name@" _THIS_SIZE1, /*tp_name*/ + sizeof(Py at NAME@ScalarObject), /*tp_basicsize*/ + 0, /*tp_itemsize*/ + 0, /*tp_dealloc*/ + 0, /*tp_print*/ + 0, /*tp_getattr*/ + 0, /*tp_setattr*/ + 0, /*tp_compare*/ + 0, /*tp_repr*/ + 0, /*tp_as_number*/ + 0, /*tp_as_sequence*/ + 0, /*tp_as_mapping*/ + 0, /*tp_hash */ + 0, /*tp_call*/ + 0, /*tp_str*/ + 0, /*tp_getattro*/ + 0, /*tp_setattro*/ + 0, /*tp_as_buffer*/ + Py_TPFLAGS_DEFAULT, /*tp_flags*/ + "Composed of two " _THIS_SIZE2 " bit floats", /* tp_doc */ }; #undef _THIS_SIZE1 #undef _THIS_SIZE2 @@ -2545,178 +2601,178 @@ static void initialize_numeric_types(void) { - PyGenericArrType_Type.tp_dealloc = (destructor)gentype_dealloc; - PyGenericArrType_Type.tp_as_number = &gentype_as_number; - PyGenericArrType_Type.tp_as_buffer = &gentype_as_buffer; - PyGenericArrType_Type.tp_as_mapping = &gentype_as_mapping; - PyGenericArrType_Type.tp_flags = BASEFLAGS; - PyGenericArrType_Type.tp_methods = gentype_methods; - PyGenericArrType_Type.tp_getset = gentype_getsets; - PyGenericArrType_Type.tp_new = NULL; - PyGenericArrType_Type.tp_alloc = gentype_alloc; - PyGenericArrType_Type.tp_free = _pya_free; - PyGenericArrType_Type.tp_repr = gentype_repr; - PyGenericArrType_Type.tp_str = gentype_str; - PyGenericArrType_Type.tp_richcompare = gentype_richcompare; + PyGenericArrType_Type.tp_dealloc = (destructor)gentype_dealloc; + PyGenericArrType_Type.tp_as_number = &gentype_as_number; + PyGenericArrType_Type.tp_as_buffer = &gentype_as_buffer; + PyGenericArrType_Type.tp_as_mapping = &gentype_as_mapping; + PyGenericArrType_Type.tp_flags = BASEFLAGS; + PyGenericArrType_Type.tp_methods = gentype_methods; + PyGenericArrType_Type.tp_getset = gentype_getsets; + PyGenericArrType_Type.tp_new = NULL; + PyGenericArrType_Type.tp_alloc = gentype_alloc; + PyGenericArrType_Type.tp_free = _pya_free; + PyGenericArrType_Type.tp_repr = gentype_repr; + PyGenericArrType_Type.tp_str = gentype_str; + PyGenericArrType_Type.tp_richcompare = gentype_richcompare; - PyBoolArrType_Type.tp_as_number = &bool_arrtype_as_number; + PyBoolArrType_Type.tp_as_number = &bool_arrtype_as_number; #if PY_VERSION_HEX >= 0x02050000 - /* need to add dummy versions with filled-in nb_index - in-order for PyType_Ready to fill in .__index__() method - */ - /**begin repeat + /* need to add dummy versions with filled-in nb_index + in-order for PyType_Ready to fill in .__index__() method + */ + /**begin repeat #name=byte, short, int, long, longlong, ubyte, ushort, uint, ulong, ulonglong# #NAME=Byte, Short, Int, Long, LongLong, UByte, UShort, 
UInt, ULong, ULongLong# - */ - Py at NAME@ArrType_Type.tp_as_number = &@name at _arrtype_as_number; - Py at NAME@ArrType_Type.tp_as_number->nb_index = (unaryfunc)@name at _index; + */ + Py at NAME@ArrType_Type.tp_as_number = &@name at _arrtype_as_number; + Py at NAME@ArrType_Type.tp_as_number->nb_index = (unaryfunc)@name at _index; - /**end repeat**/ - PyBoolArrType_Type.tp_as_number->nb_index = (unaryfunc)bool_index; + /**end repeat**/ + PyBoolArrType_Type.tp_as_number->nb_index = (unaryfunc)bool_index; #endif - PyStringArrType_Type.tp_alloc = NULL; - PyStringArrType_Type.tp_free = NULL; + PyStringArrType_Type.tp_alloc = NULL; + PyStringArrType_Type.tp_free = NULL; - PyStringArrType_Type.tp_repr = stringtype_repr; - PyStringArrType_Type.tp_str = stringtype_str; + PyStringArrType_Type.tp_repr = stringtype_repr; + PyStringArrType_Type.tp_str = stringtype_str; - PyUnicodeArrType_Type.tp_repr = unicodetype_repr; - PyUnicodeArrType_Type.tp_str = unicodetype_str; + PyUnicodeArrType_Type.tp_repr = unicodetype_repr; + PyUnicodeArrType_Type.tp_str = unicodetype_str; - PyVoidArrType_Type.tp_methods = voidtype_methods; - PyVoidArrType_Type.tp_getset = voidtype_getsets; - PyVoidArrType_Type.tp_as_mapping = &voidtype_as_mapping; - PyVoidArrType_Type.tp_as_sequence = &voidtype_as_sequence; + PyVoidArrType_Type.tp_methods = voidtype_methods; + PyVoidArrType_Type.tp_getset = voidtype_getsets; + PyVoidArrType_Type.tp_as_mapping = &voidtype_as_mapping; + PyVoidArrType_Type.tp_as_sequence = &voidtype_as_sequence; - /**begin repeat + /**begin repeat #NAME=Number, Integer, SignedInteger, UnsignedInteger, Inexact, Floating, ComplexFloating, Flexible, Character# - */ - Py at NAME@ArrType_Type.tp_flags = BASEFLAGS; - /**end repeat**/ + */ + Py at NAME@ArrType_Type.tp_flags = BASEFLAGS; + /**end repeat**/ - /**begin repeat + /**begin repeat #name=bool, byte, short, int, long, longlong, ubyte, ushort, uint, ulong, ulonglong, float, double, longdouble, cfloat, cdouble, clongdouble, string, unicode, void, object# #NAME=Bool, Byte, Short, Int, Long, LongLong, UByte, UShort, UInt, ULong, ULongLong, Float, Double, LongDouble, CFloat, CDouble, CLongDouble, String, Unicode, Void, Object# - */ - Py at NAME@ArrType_Type.tp_flags = BASEFLAGS; - Py at NAME@ArrType_Type.tp_new = @name at _arrtype_new; - Py at NAME@ArrType_Type.tp_richcompare = gentype_richcompare; - /**end repeat**/ + */ + Py at NAME@ArrType_Type.tp_flags = BASEFLAGS; + Py at NAME@ArrType_Type.tp_new = @name at _arrtype_new; + Py at NAME@ArrType_Type.tp_richcompare = gentype_richcompare; + /**end repeat**/ - /**begin repeat + /**begin repeat #name=bool, byte, short, ubyte, ushort, uint, ulong, ulonglong, float, longdouble, cfloat, clongdouble, void, object# #NAME=Bool, Byte, Short, UByte, UShort, UInt, ULong, ULongLong, Float, LongDouble, CFloat, CLongDouble, Void, Object# - */ - Py at NAME@ArrType_Type.tp_hash = @name at _arrtype_hash; - /**end repeat**/ + */ + Py at NAME@ArrType_Type.tp_hash = @name at _arrtype_hash; + /**end repeat**/ #if SIZEOF_INT != SIZEOF_LONG - /* We won't be inheriting from Python Int type. */ - PyIntArrType_Type.tp_hash = int_arrtype_hash; + /* We won't be inheriting from Python Int type. */ + PyIntArrType_Type.tp_hash = int_arrtype_hash; #endif #if SIZEOF_LONG != SIZEOF_LONGLONG - /* We won't be inheriting from Python Int type. */ - PyLongLongArrType_Type.tp_hash = longlong_arrtype_hash; + /* We won't be inheriting from Python Int type. 
*/ + PyLongLongArrType_Type.tp_hash = longlong_arrtype_hash; #endif - /**begin repeat - *#name = repr, str# - */ - PyFloatArrType_Type.tp_ at name@ = floattype_ at name@; - PyCFloatArrType_Type.tp_ at name@ = cfloattype_ at name@; + /**begin repeat + *#name = repr, str# + */ + PyFloatArrType_Type.tp_ at name@ = floattype_ at name@; + PyCFloatArrType_Type.tp_ at name@ = cfloattype_ at name@; - PyDoubleArrType_Type.tp_ at name@ = doubletype_ at name@; - PyCDoubleArrType_Type.tp_ at name@ = cdoubletype_ at name@; - /**end repeat**/ + PyDoubleArrType_Type.tp_ at name@ = doubletype_ at name@; + PyCDoubleArrType_Type.tp_ at name@ = cdoubletype_ at name@; + /**end repeat**/ - /* These need to be coded specially because getitem does not - return a normal Python type - */ - PyLongDoubleArrType_Type.tp_as_number = &longdoubletype_as_number; - PyCLongDoubleArrType_Type.tp_as_number = &clongdoubletype_as_number; + /* These need to be coded specially because getitem does not + return a normal Python type + */ + PyLongDoubleArrType_Type.tp_as_number = &longdoubletype_as_number; + PyCLongDoubleArrType_Type.tp_as_number = &clongdoubletype_as_number; - /**begin repeat + /**begin repeat #name=int, long, hex, oct, float, repr, str# #kind=tp_as_number->nb*5, tp*2# - */ - PyLongDoubleArrType_Type. at kind@_ at name@ = longdoubletype_ at name@; - PyCLongDoubleArrType_Type. at kind@_ at name@ = clongdoubletype_ at name@; - /**end repeat**/ + */ + PyLongDoubleArrType_Type. at kind@_ at name@ = longdoubletype_ at name@; + PyCLongDoubleArrType_Type. at kind@_ at name@ = clongdoubletype_ at name@; + /**end repeat**/ - PyStringArrType_Type.tp_itemsize = sizeof(char); - PyVoidArrType_Type.tp_dealloc = (destructor) void_dealloc; + PyStringArrType_Type.tp_itemsize = sizeof(char); + PyVoidArrType_Type.tp_dealloc = (destructor) void_dealloc; - PyArrayIter_Type.tp_iter = PyObject_SelfIter; - PyArrayMapIter_Type.tp_iter = PyObject_SelfIter; + PyArrayIter_Type.tp_iter = PyObject_SelfIter; + PyArrayMapIter_Type.tp_iter = PyObject_SelfIter; } /* the order of this table is important */ static PyTypeObject *typeobjects[] = { - &PyBoolArrType_Type, - &PyByteArrType_Type, - &PyUByteArrType_Type, - &PyShortArrType_Type, - &PyUShortArrType_Type, - &PyIntArrType_Type, - &PyUIntArrType_Type, - &PyLongArrType_Type, - &PyULongArrType_Type, - &PyLongLongArrType_Type, - &PyULongLongArrType_Type, - &PyFloatArrType_Type, - &PyDoubleArrType_Type, - &PyLongDoubleArrType_Type, - &PyCFloatArrType_Type, - &PyCDoubleArrType_Type, - &PyCLongDoubleArrType_Type, - &PyObjectArrType_Type, - &PyStringArrType_Type, - &PyUnicodeArrType_Type, - &PyVoidArrType_Type + &PyBoolArrType_Type, + &PyByteArrType_Type, + &PyUByteArrType_Type, + &PyShortArrType_Type, + &PyUShortArrType_Type, + &PyIntArrType_Type, + &PyUIntArrType_Type, + &PyLongArrType_Type, + &PyULongArrType_Type, + &PyLongLongArrType_Type, + &PyULongLongArrType_Type, + &PyFloatArrType_Type, + &PyDoubleArrType_Type, + &PyLongDoubleArrType_Type, + &PyCFloatArrType_Type, + &PyCDoubleArrType_Type, + &PyCLongDoubleArrType_Type, + &PyObjectArrType_Type, + &PyStringArrType_Type, + &PyUnicodeArrType_Type, + &PyVoidArrType_Type }; static int _typenum_fromtypeobj(PyObject *type, int user) { - int typenum, i; + int typenum, i; - typenum = PyArray_NOTYPE; - i = 0; - while(i < PyArray_NTYPES) { - if (type == (PyObject *)typeobjects[i]) { - typenum = i; - break; - } - i++; + typenum = PyArray_NOTYPE; + i = 0; + while(i < PyArray_NTYPES) { + if (type == (PyObject *)typeobjects[i]) { + typenum = i; + break; } + 
i++; + } - if (!user) return typenum; + if (!user) return typenum; - /* Search any registered types */ - i = 0; - while (i < PyArray_NUMUSERTYPES) { - if (type == (PyObject *)(userdescrs[i]->typeobj)) { - typenum = i + PyArray_USERDEF; - break; - } - i++; + /* Search any registered types */ + i = 0; + while (i < PyArray_NUMUSERTYPES) { + if (type == (PyObject *)(userdescrs[i]->typeobj)) { + typenum = i + PyArray_USERDEF; + break; } - return typenum; + i++; + } + return typenum; } static PyArray_Descr * _descr_from_subtype(PyObject *type) { - PyObject *mro; - mro = ((PyTypeObject *)type)->tp_mro; - if (PyTuple_GET_SIZE(mro) < 2) { - return PyArray_DescrFromType(PyArray_OBJECT); - } - return PyArray_DescrFromTypeObject(PyTuple_GET_ITEM(mro, 1)); + PyObject *mro; + mro = ((PyTypeObject *)type)->tp_mro; + if (PyTuple_GET_SIZE(mro) < 2) { + return PyArray_DescrFromType(PyArray_OBJECT); + } + return PyArray_DescrFromTypeObject(PyTuple_GET_ITEM(mro, 1)); } /*New reference */ @@ -2725,64 +2781,64 @@ static PyArray_Descr * PyArray_DescrFromTypeObject(PyObject *type) { - int typenum; - PyArray_Descr *new, *conv=NULL; + int typenum; + PyArray_Descr *new, *conv=NULL; - /* if it's a builtin type, then use the typenumber */ - typenum = _typenum_fromtypeobj(type,1); - if (typenum != PyArray_NOTYPE) { - new = PyArray_DescrFromType(typenum); - return new; - } + /* if it's a builtin type, then use the typenumber */ + typenum = _typenum_fromtypeobj(type,1); + if (typenum != PyArray_NOTYPE) { + new = PyArray_DescrFromType(typenum); + return new; + } - /* Check the generic types */ - if ((type == (PyObject *) &PyNumberArrType_Type) || \ + /* Check the generic types */ + if ((type == (PyObject *) &PyNumberArrType_Type) || \ (type == (PyObject *) &PyInexactArrType_Type) || \ (type == (PyObject *) &PyFloatingArrType_Type)) - typenum = PyArray_DOUBLE; - else if (type == (PyObject *)&PyComplexFloatingArrType_Type) - typenum = PyArray_CDOUBLE; - else if ((type == (PyObject *)&PyIntegerArrType_Type) || \ - (type == (PyObject *)&PySignedIntegerArrType_Type)) - typenum = PyArray_LONG; - else if (type == (PyObject *) &PyUnsignedIntegerArrType_Type) - typenum = PyArray_ULONG; - else if (type == (PyObject *) &PyCharacterArrType_Type) - typenum = PyArray_STRING; - else if ((type == (PyObject *) &PyGenericArrType_Type) || \ - (type == (PyObject *) &PyFlexibleArrType_Type)) - typenum = PyArray_VOID; + typenum = PyArray_DOUBLE; + else if (type == (PyObject *)&PyComplexFloatingArrType_Type) + typenum = PyArray_CDOUBLE; + else if ((type == (PyObject *)&PyIntegerArrType_Type) || \ + (type == (PyObject *)&PySignedIntegerArrType_Type)) + typenum = PyArray_LONG; + else if (type == (PyObject *) &PyUnsignedIntegerArrType_Type) + typenum = PyArray_ULONG; + else if (type == (PyObject *) &PyCharacterArrType_Type) + typenum = PyArray_STRING; + else if ((type == (PyObject *) &PyGenericArrType_Type) || \ + (type == (PyObject *) &PyFlexibleArrType_Type)) + typenum = PyArray_VOID; - if (typenum != PyArray_NOTYPE) { - return PyArray_DescrFromType(typenum); - } + if (typenum != PyArray_NOTYPE) { + return PyArray_DescrFromType(typenum); + } - /* Otherwise --- type is a sub-type of an array scalar - not corresponding to a registered data-type object. - */ + /* Otherwise --- type is a sub-type of an array scalar + not corresponding to a registered data-type object. 
+ */ - /* Do special thing for VOID sub-types - */ - if (PyType_IsSubtype((PyTypeObject *)type, &PyVoidArrType_Type)) { - new = PyArray_DescrNewFromType(PyArray_VOID); + /* Do special thing for VOID sub-types + */ + if (PyType_IsSubtype((PyTypeObject *)type, &PyVoidArrType_Type)) { + new = PyArray_DescrNewFromType(PyArray_VOID); - conv = _arraydescr_fromobj(type); - if (conv) { - new->fields = conv->fields; - Py_INCREF(new->fields); - new->names = conv->names; - Py_INCREF(new->names); - new->elsize = conv->elsize; - new->subarray = conv->subarray; - conv->subarray = NULL; - Py_DECREF(conv); - } - Py_XDECREF(new->typeobj); - new->typeobj = (PyTypeObject *)type; - Py_INCREF(type); - return new; + conv = _arraydescr_fromobj(type); + if (conv) { + new->fields = conv->fields; + Py_INCREF(new->fields); + new->names = conv->names; + Py_INCREF(new->names); + new->elsize = conv->elsize; + new->subarray = conv->subarray; + conv->subarray = NULL; + Py_DECREF(conv); } - return _descr_from_subtype(type); + Py_XDECREF(new->typeobj); + new->typeobj = (PyTypeObject *)type; + Py_INCREF(type); + return new; + } + return _descr_from_subtype(type); } /*NUMPY_API @@ -2791,24 +2847,24 @@ static PyObject * PyArray_FieldNames(PyObject *fields) { - PyObject *tup; - PyObject *ret; - PyObject *_numpy_internal; + PyObject *tup; + PyObject *ret; + PyObject *_numpy_internal; - if (!PyDict_Check(fields)) { - PyErr_SetString(PyExc_TypeError, - "Fields must be a dictionary"); - return NULL; - } - _numpy_internal = PyImport_ImportModule("numpy.core._internal"); - if (_numpy_internal == NULL) return NULL; - tup = PyObject_CallMethod(_numpy_internal, "_makenames_list", "O", fields); - Py_DECREF(_numpy_internal); - if (tup == NULL) return NULL; - ret = PyTuple_GET_ITEM(tup, 0); - ret = PySequence_Tuple(ret); - Py_DECREF(tup); - return ret; + if (!PyDict_Check(fields)) { + PyErr_SetString(PyExc_TypeError, + "Fields must be a dictionary"); + return NULL; + } + _numpy_internal = PyImport_ImportModule("numpy.core._internal"); + if (_numpy_internal == NULL) return NULL; + tup = PyObject_CallMethod(_numpy_internal, "_makenames_list", "O", fields); + Py_DECREF(_numpy_internal); + if (tup == NULL) return NULL; + ret = PyTuple_GET_ITEM(tup, 0); + ret = PySequence_Tuple(ret); + Py_DECREF(tup); + return ret; } /* New reference */ @@ -2818,41 +2874,41 @@ static PyArray_Descr * PyArray_DescrFromScalar(PyObject *sc) { - int type_num; - PyArray_Descr *descr; + int type_num; + PyArray_Descr *descr; - if (PyArray_IsScalar(sc, Void)) { - descr = ((PyVoidScalarObject *)sc)->descr; - Py_INCREF(descr); - return descr; - } - descr = PyArray_DescrFromTypeObject((PyObject *)sc->ob_type); - if (descr->elsize == 0) { - PyArray_DESCR_REPLACE(descr); - type_num = descr->type_num; - if (type_num == PyArray_STRING) - descr->elsize = PyString_GET_SIZE(sc); - else if (type_num == PyArray_UNICODE) { - descr->elsize = PyUnicode_GET_DATA_SIZE(sc); + if (PyArray_IsScalar(sc, Void)) { + descr = ((PyVoidScalarObject *)sc)->descr; + Py_INCREF(descr); + return descr; + } + descr = PyArray_DescrFromTypeObject((PyObject *)sc->ob_type); + if (descr->elsize == 0) { + PyArray_DESCR_REPLACE(descr); + type_num = descr->type_num; + if (type_num == PyArray_STRING) + descr->elsize = PyString_GET_SIZE(sc); + else if (type_num == PyArray_UNICODE) { + descr->elsize = PyUnicode_GET_DATA_SIZE(sc); #ifndef Py_UNICODE_WIDE - descr->elsize <<= 1; + descr->elsize <<= 1; #endif - } - else { - descr->elsize = - ((PyVoidScalarObject *)sc)->ob_size; - descr->fields = 
PyObject_GetAttrString(sc, "fields"); - if (!descr->fields || !PyDict_Check(descr->fields) || - (descr->fields == Py_None)) { - Py_XDECREF(descr->fields); - descr->fields = NULL; - } - if (descr->fields) - descr->names = PyArray_FieldNames(descr->fields); - PyErr_Clear(); - } } - return descr; + else { + descr->elsize = + ((PyVoidScalarObject *)sc)->ob_size; + descr->fields = PyObject_GetAttrString(sc, "fields"); + if (!descr->fields || !PyDict_Check(descr->fields) || + (descr->fields == Py_None)) { + Py_XDECREF(descr->fields); + descr->fields = NULL; + } + if (descr->fields) + descr->names = PyArray_FieldNames(descr->fields); + PyErr_Clear(); + } + } + return descr; } /* New reference */ @@ -2862,13 +2918,13 @@ static PyObject * PyArray_TypeObjectFromType(int type) { - PyArray_Descr *descr; - PyObject *obj; + PyArray_Descr *descr; + PyObject *obj; - descr = PyArray_DescrFromType(type); - if (descr == NULL) return NULL; - obj = (PyObject *)descr->typeobj; - Py_XINCREF(obj); - Py_DECREF(descr); - return obj; + descr = PyArray_DescrFromType(type); + if (descr == NULL) return NULL; + obj = (PyObject *)descr->typeobj; + Py_XINCREF(obj); + Py_DECREF(descr); + return obj; } From numpy-svn at scipy.org Wed Jun 25 15:43:59 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 25 Jun 2008 14:43:59 -0500 (CDT) Subject: [Numpy-svn] r5314 - trunk/numpy/ma Message-ID: <20080625194359.5EB3DC7C01D@scipy.org> Author: pierregm Date: 2008-06-25 14:43:56 -0500 (Wed, 25 Jun 2008) New Revision: 5314 Modified: trunk/numpy/ma/core.py Log: MaskedArray.__new__ : compare data to cls and not to MaskedArray, in case __new__ is called from a subclass MaskedArray: add the iscontiguous method. Modified: trunk/numpy/ma/core.py =================================================================== --- trunk/numpy/ma/core.py 2008-06-24 05:10:37 UTC (rev 5313) +++ trunk/numpy/ma/core.py 2008-06-25 19:43:56 UTC (rev 5314) @@ -1201,7 +1201,8 @@ _data = np.array(data, dtype=dtype, copy=copy, subok=True, ndmin=ndmin) _baseclass = getattr(data, '_baseclass', type(_data)) _basedict = getattr(data, '_basedict', getattr(data, '__dict__', {})) - if not isinstance(data, MaskedArray) or not subok: + # Careful, cls might not always be MaskedArray... + if not isinstance(data, cls) or not subok: _data = _data.view(cls) else: _data = _data.view(type(data)) @@ -2079,6 +2080,11 @@ if self._mask is nomask: return (self.ctypes.data, id(nomask)) return (self.ctypes.data, self._mask.ctypes.data) + + def iscontiguous(self): + "Is the data contiguous?" + return self.flags['CONTIGUOUS'] + #............................................ 
def all(self, axis=None, out=None): """a.all(axis=None, out=None) From numpy-svn at scipy.org Wed Jun 25 15:47:56 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 25 Jun 2008 14:47:56 -0500 (CDT) Subject: [Numpy-svn] r5315 - trunk/numpy/ma/tests Message-ID: <20080625194756.94810C7C01D@scipy.org> Author: pierregm Date: 2008-06-25 14:47:54 -0500 (Wed, 25 Jun 2008) New Revision: 5315 Modified: trunk/numpy/ma/tests/ Log: Property changes on: trunk/numpy/ma/tests ___________________________________________________________________ Name: svn:ignore + test_mrecords_new.py test_core_new.py test_core.py.new From numpy-svn at scipy.org Thu Jun 26 00:32:57 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Wed, 25 Jun 2008 23:32:57 -0500 (CDT) Subject: [Numpy-svn] r5316 - trunk/numpy/core/src Message-ID: <20080626043257.4EEEF39C10E@scipy.org> Author: charris Date: 2008-06-25 23:32:54 -0500 (Wed, 25 Jun 2008) New Revision: 5316 Modified: trunk/numpy/core/src/arraytypes.inc.src Log: Test #825 fix. Modified: trunk/numpy/core/src/arraytypes.inc.src =================================================================== --- trunk/numpy/core/src/arraytypes.inc.src 2008-06-25 19:47:54 UTC (rev 5315) +++ trunk/numpy/core/src/arraytypes.inc.src 2008-06-26 04:32:54 UTC (rev 5316) @@ -248,49 +248,60 @@ return PyArray_Scalar(ip, ap->descr, NULL); } - /* UNICODE */ static PyObject * UNICODE_getitem(char *ip, PyArrayObject *ap) { - PyObject *obj; - int mysize; - PyArray_UCS4 *dptr; - char *buffer; - int alloc=0; + intp elsize = ap->descr->elsize; + intp mysize = elsize/sizeof(PyArray_UCS4); + int alloc = 0; + PyArray_UCS4 *buffer = NULL; + PyUnicodeObject *obj; + intp i; - mysize = ap->descr->elsize >> 2; - dptr = (PyArray_UCS4 *)ip + mysize-1; - while(mysize > 0 && *dptr-- == 0) mysize--; - if (!PyArray_ISBEHAVED(ap)) { - buffer = _pya_malloc(mysize << 2); - if (buffer == NULL) - return PyErr_NoMemory(); + if (!PyArray_ISBEHAVED_RO(ap)) { + buffer = malloc(elsize); + if (buffer == NULL) { + PyErr_NoMemory(); + goto fail; + } alloc = 1; - memcpy(buffer, ip, mysize << 2); + memcpy(buffer, ip, elsize); if (!PyArray_ISNOTSWAPPED(ap)) { - byte_swap_vector(buffer, mysize, 4); + byte_swap_vector(buffer, mysize, sizeof(PyArray_UCS4)); } } - else buffer = ip; + else { + buffer = (PyArray_UCS4 *)ip; + } + for (i = mysize; i > 0 && buffer[--i] == 0; mysize = i); + #ifdef Py_UNICODE_WIDE - obj = PyUnicode_FromUnicode((const Py_UNICODE *)buffer, mysize); + obj = (PyUnicodeObject *)PyUnicode_FromUnicode(buffer, mysize); #else /* create new empty unicode object of length mysize*2 */ - obj = MyPyUnicode_New(mysize*2); - if (obj == NULL) {if (alloc) _pya_free(buffer); return obj;} - mysize = PyUCS2Buffer_FromUCS4(((PyUnicodeObject *)obj)->str, - (PyArray_UCS4 *)buffer, mysize); + obj = (PyUnicodeObject *)MyPyUnicode_New(mysize*2); + if (obj == NULL) { + goto fail; + } + mysize = PyUCS2Buffer_FromUCS4(obj->str, buffer, mysize); /* reset length of unicode object to ucs2size */ - if (MyPyUnicode_Resize((PyUnicodeObject *)obj, mysize) < 0) { - if (alloc) _pya_free(buffer); + if (MyPyUnicode_Resize(obj, mysize) < 0) { Py_DECREF(obj); - return NULL; + goto fail; } #endif - if (alloc) _pya_free(buffer); - return obj; + if (alloc) { + free(buffer); + } + return (PyObject *)obj; + +fail: + if (alloc) { + free(buffer); + } + return NULL; } static int From numpy-svn at scipy.org Thu Jun 26 19:06:27 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Thu, 26 Jun 2008 18:06:27 -0500 (CDT) Subject: 
[Numpy-svn] r5317 - branches/1.1.x/numpy/core/src Message-ID: <20080626230627.56BF839C352@scipy.org> Author: charris Date: 2008-06-26 18:06:13 -0500 (Thu, 26 Jun 2008) New Revision: 5317 Modified: branches/1.1.x/numpy/core/src/arraytypes.inc.src Log: Fix bus error on SPARC. Modified: branches/1.1.x/numpy/core/src/arraytypes.inc.src =================================================================== --- branches/1.1.x/numpy/core/src/arraytypes.inc.src 2008-06-26 04:32:54 UTC (rev 5316) +++ branches/1.1.x/numpy/core/src/arraytypes.inc.src 2008-06-26 23:06:13 UTC (rev 5317) @@ -253,44 +253,56 @@ static PyObject * UNICODE_getitem(char *ip, PyArrayObject *ap) { - PyObject *obj; - int mysize; - PyArray_UCS4 *dptr; - char *buffer; - int alloc=0; + intp elsize = ap->descr->elsize; + intp mysize = elsize/sizeof(PyArray_UCS4); + int alloc = 0; + PyArray_UCS4 *buffer = NULL; + PyUnicodeObject *obj; + intp i; - mysize = ap->descr->elsize >> 2; - dptr = (PyArray_UCS4 *)ip + mysize-1; - while(mysize > 0 && *dptr-- == 0) mysize--; - if (!PyArray_ISBEHAVED(ap)) { - buffer = _pya_malloc(mysize << 2); - if (buffer == NULL) - return PyErr_NoMemory(); + if (!PyArray_ISBEHAVED_RO(ap)) { + buffer = malloc(elsize); + if (buffer == NULL) { + PyErr_NoMemory(); + goto fail; + } alloc = 1; - memcpy(buffer, ip, mysize << 2); + memcpy(buffer, ip, elsize); if (!PyArray_ISNOTSWAPPED(ap)) { - byte_swap_vector(buffer, mysize, 4); + byte_swap_vector(buffer, mysize, sizeof(PyArray_UCS4)); } } - else buffer = ip; + else { + buffer = (PyArray_UCS4 *)ip; + } + for (i = mysize; i > 0 && buffer[--i] == 0; mysize = i); + #ifdef Py_UNICODE_WIDE - obj = PyUnicode_FromUnicode((const Py_UNICODE *)buffer, mysize); + obj = (PyUnicodeObject *)PyUnicode_FromUnicode(buffer, mysize); #else /* create new empty unicode object of length mysize*2 */ - obj = MyPyUnicode_New(mysize*2); - if (obj == NULL) {if (alloc) _pya_free(buffer); return obj;} - mysize = PyUCS2Buffer_FromUCS4(((PyUnicodeObject *)obj)->str, - (PyArray_UCS4 *)buffer, mysize); + obj = (PyUnicodeObject *)MyPyUnicode_New(mysize*2); + if (obj == NULL) { + goto fail; + } + mysize = PyUCS2Buffer_FromUCS4(obj->str, buffer, mysize); /* reset length of unicode object to ucs2size */ - if (MyPyUnicode_Resize((PyUnicodeObject *)obj, mysize) < 0) { - if (alloc) _pya_free(buffer); + if (MyPyUnicode_Resize(obj, mysize) < 0) { Py_DECREF(obj); - return NULL; + goto fail; } #endif - if (alloc) _pya_free(buffer); - return obj; + if (alloc) { + free(buffer); + } + return (PyObject *)obj; + +fail: + if (alloc) { + free(buffer); + } + return NULL; } static int From numpy-svn at scipy.org Fri Jun 27 01:26:27 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Fri, 27 Jun 2008 00:26:27 -0500 (CDT) Subject: [Numpy-svn] r5318 - trunk/numpy/core/tests Message-ID: <20080627052627.92EA6C7C03E@scipy.org> Author: charris Date: 2008-06-27 00:26:25 -0500 (Fri, 27 Jun 2008) New Revision: 5318 Modified: trunk/numpy/core/tests/test_regression.py Log: Add regression test for #825. 
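The failure mode behind ticket #825 is easiest to see from the dtype layout: in a packed record dtype the unicode field can start at an odd byte offset, so its UCS4 storage is not 4-byte aligned, and the old UNICODE_getitem read it in place. The fixed version (r5316, backported in r5317 above) copies unaligned or byte-swapped data into a scratch buffer before building the Python unicode object. A small sketch of such a layout; the regression test added just below exercises string offsets 1 through 8::

    import numpy as np

    t = np.dtype([('a', 'S1'), ('b', 'U2')])
    print(t.fields['b'])     # the 'b' field sits at byte offset 1, unaligned for UCS4
    x = np.array([('a', u'b')], dtype=t)
    print(x['b'][0])         # 'b', read through the repaired code path
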
Modified: trunk/numpy/core/tests/test_regression.py =================================================================== --- trunk/numpy/core/tests/test_regression.py 2008-06-26 23:06:13 UTC (rev 5317) +++ trunk/numpy/core/tests/test_regression.py 2008-06-27 05:26:25 UTC (rev 5318) @@ -1152,6 +1152,14 @@ b = np.array(['1','2','3']) assert_equal(a,b) + def test_unaligned_unicode_access(self, level=rlevel) : + """Ticket #825""" + for i in range(1,9) : + msg = 'unicode offset: %d chars'%i + t = np.dtype([('a','S%d'%i),('b','U2')]) + x = np.array([('a',u'b')], dtype=t) + assert_equal(str(x), "[('a', u'b')]", err_msg=msg) + if __name__ == "__main__": run_module_suite() From numpy-svn at scipy.org Sat Jun 28 13:05:40 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 28 Jun 2008 12:05:40 -0500 (CDT) Subject: [Numpy-svn] r5319 - trunk/numpy/lib Message-ID: <20080628170540.2482F39C522@scipy.org> Author: charris Date: 2008-06-28 12:05:37 -0500 (Sat, 28 Jun 2008) New Revision: 5319 Modified: trunk/numpy/lib/format.py Log: Fix ticket #828 by explicitly sorting keys instead of relying on pprint. Thanks to Neil Muller for the analysis and patch. Modified: trunk/numpy/lib/format.py =================================================================== --- trunk/numpy/lib/format.py 2008-06-27 05:26:25 UTC (rev 5318) +++ trunk/numpy/lib/format.py 2008-06-28 17:05:37 UTC (rev 5319) @@ -42,10 +42,9 @@ "shape" : tuple of int The shape of the array. -For repeatability and readability, this dictionary is formatted using -pprint.pformat() so the keys are in alphabetic order. This is for convenience -only. A writer SHOULD implement this if possible. A reader MUST NOT depend on -this. +For repeatability and readability, the dictionary keys are sorted in alphabetic +order. This is for convenience only. A writer SHOULD implement this if possible. +A reader MUST NOT depend on this. Following the header comes the array data. If the dtype contains Python objects (i.e. dtype.hasobject is True), then the data is a Python pickle of the array. @@ -56,7 +55,6 @@ """ import cPickle -import pprint import struct import numpy @@ -102,9 +100,11 @@ """ magic_str = fp.read(MAGIC_LEN) if len(magic_str) != MAGIC_LEN: - raise ValueError("could not read %d characters for the magic string; got %r" % (MAGIC_LEN, magic_str)) + msg = "could not read %d characters for the magic string; got %r" + raise ValueError(msg % (MAGIC_LEN, magic_str)) if magic_str[:-2] != MAGIC_PREFIX: - raise ValueError("the magic string is not correct; expected %r, got %r" % (MAGIC_PREFIX, magic_str[:-2])) + msg = "the magic string is not correct; expected %r, got %r" + raise ValueError(msg % (MAGIC_PREFIX, magic_str[:-2])) major, minor = map(ord, magic_str[-2:]) return major, minor @@ -164,7 +164,11 @@ This has the appropriate entries for writing its string representation to the header of the file. """ - header = pprint.pformat(d) + header = "{" + for key, value in sorted(d.items()): + # Need to use repr here, since we eval these when reading + header += "'%s': %s, " % (key, repr(value)) + header += "}" # Pad the header with spaces and a final newline such that the magic string, # the header-length short and the header are aligned on a 16-byte boundary. 
# Hopefully, some system, possibly memory-mapping, can take advantage of From numpy-svn at scipy.org Sat Jun 28 13:51:58 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 28 Jun 2008 12:51:58 -0500 (CDT) Subject: [Numpy-svn] r5320 - branches/1.1.x/numpy/lib Message-ID: <20080628175158.0988839C09B@scipy.org> Author: charris Date: 2008-06-28 12:51:56 -0500 (Sat, 28 Jun 2008) New Revision: 5320 Modified: branches/1.1.x/numpy/lib/format.py Log: Backport fix for ticket #828. Modified: branches/1.1.x/numpy/lib/format.py =================================================================== --- branches/1.1.x/numpy/lib/format.py 2008-06-28 17:05:37 UTC (rev 5319) +++ branches/1.1.x/numpy/lib/format.py 2008-06-28 17:51:56 UTC (rev 5320) @@ -42,10 +42,9 @@ "shape" : tuple of int The shape of the array. -For repeatability and readability, this dictionary is formatted using -pprint.pformat() so the keys are in alphabetic order. This is for convenience -only. A writer SHOULD implement this if possible. A reader MUST NOT depend on -this. +For repeatability and readability, the dictionary keys are sorted in alphabetic +order. This is for convenience only. A writer SHOULD implement this if possible. +A reader MUST NOT depend on this. Following the header comes the array data. If the dtype contains Python objects (i.e. dtype.hasobject is True), then the data is a Python pickle of the array. @@ -56,7 +55,6 @@ """ import cPickle -import pprint import struct import numpy @@ -102,9 +100,11 @@ """ magic_str = fp.read(MAGIC_LEN) if len(magic_str) != MAGIC_LEN: - raise ValueError("could not read %d characters for the magic string; got %r" % (MAGIC_LEN, magic_str)) + msg = "could not read %d characters for the magic string; got %r" + raise ValueError(msg % (MAGIC_LEN, magic_str)) if magic_str[:-2] != MAGIC_PREFIX: - raise ValueError("the magic string is not correct; expected %r, got %r" % (MAGIC_PREFIX, magic_str[:-2])) + msg = "the magic string is not correct; expected %r, got %r" + raise ValueError(msg % (MAGIC_PREFIX, magic_str[:-2])) major, minor = map(ord, magic_str[-2:]) return major, minor @@ -164,7 +164,11 @@ This has the appropriate entries for writing its string representation to the header of the file. """ - header = pprint.pformat(d) + header = "{" + for key, value in sorted(d.items()): + # Need to use repr here, since we eval these when reading + header += "'%s': %s, " % (key, repr(value)) + header += "}" # Pad the header with spaces and a final newline such that the magic string, # the header-length short and the header are aligned on a 16-byte boundary. # Hopefully, some system, possibly memory-mapping, can take advantage of From numpy-svn at scipy.org Sat Jun 28 17:02:27 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 28 Jun 2008 16:02:27 -0500 (CDT) Subject: [Numpy-svn] r5321 - trunk/numpy/lib Message-ID: <20080628210227.E7802C7C0AC@scipy.org> Author: charris Date: 2008-06-28 16:02:25 -0500 (Sat, 28 Jun 2008) New Revision: 5321 Modified: trunk/numpy/lib/format.py Log: Shorten long lines. Modified: trunk/numpy/lib/format.py =================================================================== --- trunk/numpy/lib/format.py 2008-06-28 17:51:56 UTC (rev 5320) +++ trunk/numpy/lib/format.py 2008-06-28 21:02:25 UTC (rev 5321) @@ -1,12 +1,14 @@ -""" Define a simple format for saving numpy arrays to disk with the full +"""Define a simple format for saving numpy arrays to disk. 
+ +Define a simple format for saving numpy arrays to disk with the full information about them. WARNING: Due to limitations in the interpretation of structured dtypes, dtypes with fields with empty names will have the names replaced by 'f0', 'f1', etc. -Such arrays will not round-trip through the format entirely accurately. The data -is intact; only the field names will differ. We are working on a fix for this. -This fix will not require a change in the file format. The arrays with such -structures can still be saved and restored, and the correct dtype may be +Such arrays will not round-trip through the format entirely accurately. The +data is intact; only the field names will differ. We are working on a fix for +this. This fix will not require a change in the file format. The arrays with +such structures can still be saved and restored, and the correct dtype may be restored by using the `loadedarray.view(correct_dtype)` method. Format Version 1.0 @@ -24,11 +26,11 @@ The next 2 bytes form a little-endian unsigned short int: the length of the header data HEADER_LEN. -The next HEADER_LEN bytes form the header data describing the array's format. It -is an ASCII string which contains a Python literal expression of a dictionary. -It is terminated by a newline ('\\n') and padded with spaces ('\\x20') to make -the total length of the magic string + 4 + HEADER_LEN be evenly divisible by 16 -for alignment purposes. +The next HEADER_LEN bytes form the header data describing the array's format. +It is an ASCII string which contains a Python literal expression of a +dictionary. It is terminated by a newline ('\\n') and padded with spaces +('\\x20') to make the total length of the magic string + 4 + HEADER_LEN be +evenly divisible by 16 for alignment purposes. The dictionary contains three keys: @@ -43,8 +45,8 @@ The shape of the array. For repeatability and readability, the dictionary keys are sorted in alphabetic -order. This is for convenience only. A writer SHOULD implement this if possible. -A reader MUST NOT depend on this. +order. This is for convenience only. A writer SHOULD implement this if +possible. A reader MUST NOT depend on this. Following the header comes the array data. If the dtype contains Python objects (i.e. dtype.hasobject is True), then the data is a Python pickle of the array. @@ -52,6 +54,7 @@ fortran_order) bytes of the array. Consumers can figure out the number of bytes by multiplying the number of elements given by the shape (noting that shape=() means there is 1 element) by dtype.itemsize. + """ import cPickle @@ -112,10 +115,11 @@ """ Get a serializable descriptor from the dtype. The .descr attribute of a dtype object cannot be round-tripped through the - dtype() constructor. Simple types, like dtype('float32'), have a descr which - looks like a record array with one field with '' as a name. The dtype() - constructor interprets this as a request to give a default name. Instead, we - construct descriptor that can be passed to dtype(). + dtype() constructor. Simple types, like dtype('float32'), have a descr + which looks like a record array with one field with '' as a name. The + dtype() constructor interprets this as a request to give a default name. + Instead, we construct descriptor that can be passed to dtype(). + """ if dtype.names is not None: # This is a record array. The .descr is fine. 
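The header layout described above is easy to inspect directly. A small sketch (io.BytesIO is used purely for illustration and is not part of the format code); the sorted-key dictionary is the behaviour introduced by the ticket #828 fix::

    import struct
    from io import BytesIO

    import numpy as np

    buf = BytesIO()
    np.save(buf, np.arange(3, dtype='<i4'))
    raw = buf.getvalue()
    print(raw[:6])                            # the magic string b'\x93NUMPY'
    (hlen,) = struct.unpack('<H', raw[8:10])  # little-endian unsigned short header length
    print(raw[10:10 + hlen])                  # the dict with keys in sorted order, space padded
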
@@ -169,10 +173,10 @@ # Need to use repr here, since we eval these when reading header += "'%s': %s, " % (key, repr(value)) header += "}" - # Pad the header with spaces and a final newline such that the magic string, - # the header-length short and the header are aligned on a 16-byte boundary. - # Hopefully, some system, possibly memory-mapping, can take advantage of - # our premature optimization. + # Pad the header with spaces and a final newline such that the magic + # string, the header-length short and the header are aligned on a 16-byte + # boundary. Hopefully, some system, possibly memory-mapping, can take + # advantage of our premature optimization. current_header_len = MAGIC_LEN + 2 + len(header) + 1 # 1 for the newline topad = 16 - (current_header_len % 16) header = '%s%s\n' % (header, ' '*topad) @@ -210,7 +214,8 @@ # header. hlength_str = fp.read(2) if len(hlength_str) != 2: - raise ValueError("EOF at %s before reading array header length" % fp.tell()) + msg = "EOF at %s before reading array header length" + raise ValueError(msg % fp.tell()) header_length = struct.unpack(' Author: charris Date: 2008-06-28 21:28:43 -0500 (Sat, 28 Jun 2008) New Revision: 5322 Modified: trunk/numpy/core/records.py Log: Fix ticket #390. Modified: trunk/numpy/core/records.py =================================================================== --- trunk/numpy/core/records.py 2008-06-28 21:02:25 UTC (rev 5321) +++ trunk/numpy/core/records.py 2008-06-29 02:28:43 UTC (rev 5322) @@ -309,6 +309,10 @@ return obj.view(ndarray) return obj + def __repr__(self) : + ret = ndarray.__repr__(self) + return ret.replace("recarray", "rec.array", 1) + def field(self, attr, val=None): if isinstance(attr, int): names = ndarray.__getattribute__(self,'dtype').names From numpy-svn at scipy.org Sat Jun 28 23:26:02 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Sat, 28 Jun 2008 22:26:02 -0500 (CDT) Subject: [Numpy-svn] r5323 - trunk/numpy/core/tests Message-ID: <20080629032602.8A40639C65B@scipy.org> Author: charris Date: 2008-06-28 22:26:00 -0500 (Sat, 28 Jun 2008) New Revision: 5323 Modified: trunk/numpy/core/tests/test_records.py Log: Add test for ticket #390. 
Modified: trunk/numpy/core/tests/test_records.py =================================================================== --- trunk/numpy/core/tests/test_records.py 2008-06-29 02:28:43 UTC (rev 5322) +++ trunk/numpy/core/tests/test_records.py 2008-06-29 03:26:00 UTC (rev 5323) @@ -1,70 +1,72 @@ +import numpy as np +from numpy.testing import * from os import path -from numpy.testing import * set_package_path() -import numpy.core -reload(numpy.core) -import numpy -from numpy.core import * restore_path() class TestFromrecords(TestCase): def test_fromrecords(self): - r = rec.fromrecords([[456,'dbe',1.2],[2,'de',1.3]], + r = np.rec.fromrecords([[456,'dbe',1.2],[2,'de',1.3]], names='col1,col2,col3') - assert_equal(r[0].item(),(456, 'dbe', 1.2)) + assert_equal(r[0].item(), (456, 'dbe', 1.2)) def test_method_array(self): - r = rec.array('abcdefg'*100,formats='i2,a3,i4',shape=3,byteorder='big') - assert_equal(r[1].item(),(25444, 'efg', 1633837924)) + r = np.rec.array('abcdefg'*100,formats='i2,a3,i4',shape=3,byteorder='big') + assert_equal(r[1].item(), (25444, 'efg', 1633837924)) def test_method_array2(self): - r=rec.array([(1,11,'a'),(2,22,'b'),(3,33,'c'),(4,44,'d'),(5,55,'ex'), + r = np.rec.array([(1,11,'a'),(2,22,'b'),(3,33,'c'),(4,44,'d'),(5,55,'ex'), (6,66,'f'),(7,77,'g')],formats='u1,f4,a1') - assert_equal(r[1].item(),(2, 22.0, 'b')) + assert_equal(r[1].item(), (2, 22.0, 'b')) def test_recarray_slices(self): - r=rec.array([(1,11,'a'),(2,22,'b'),(3,33,'c'),(4,44,'d'),(5,55,'ex'), + r = np.rec.array([(1,11,'a'),(2,22,'b'),(3,33,'c'),(4,44,'d'),(5,55,'ex'), (6,66,'f'),(7,77,'g')],formats='u1,f4,a1') - assert_equal(r[1::2][1].item(),(4, 44.0, 'd')) + assert_equal(r[1::2][1].item(), (4, 44.0, 'd')) def test_recarray_fromarrays(self): - x1 = array([1,2,3,4]) - x2 = array(['a','dd','xyz','12']) - x3 = array([1.1,2,3,4]) - r = rec.fromarrays([x1,x2,x3],names='a,b,c') - assert_equal(r[1].item(),(2,'dd',2.0)) + x1 = np.array([1,2,3,4]) + x2 = np.array(['a','dd','xyz','12']) + x3 = np.array([1.1,2,3,4]) + r = np.rec.fromarrays([x1,x2,x3],names='a,b,c') + assert_equal(r[1].item(), (2,'dd',2.0)) x1[1] = 34 - assert_equal(r.a,array([1,2,3,4])) + assert_equal(r.a, np.array([1,2,3,4])) def test_recarray_fromfile(self): data_dir = path.join(path.dirname(__file__),'data') filename = path.join(data_dir,'recarray_from_file.fits') fd = open(filename) fd.seek(2880*2) - r = rec.fromfile(fd, formats='f8,i4,a5', shape=3, byteorder='big') + r = np.rec.fromfile(fd, formats='f8,i4,a5', shape=3, byteorder='big') def test_recarray_from_obj(self): count = 10 - a = zeros(count, dtype='O') - b = zeros(count, dtype='f8') - c = zeros(count, dtype='f8') + a = np.zeros(count, dtype='O') + b = np.zeros(count, dtype='f8') + c = np.zeros(count, dtype='f8') for i in range(len(a)): a[i] = range(1,10) - mine = numpy.rec.fromarrays([a,b,c], - names='date,data1,data2') + mine = np.rec.fromarrays([a,b,c], names='date,data1,data2') for i in range(len(a)): - assert(mine.date[i]==range(1,10)) - assert(mine.data1[i]==0.0) - assert(mine.data2[i]==0.0) + assert (mine.date[i] == range(1,10)) + assert (mine.data1[i] == 0.0) + assert (mine.data2[i] == 0.0) + def check_recarray_from_repr(self): + x = np.rec.array([ (1, 2)],dtype=[('a', np.int8), ('b', np.int8)]) + y = eval("np." 
+ repr(x)) + assert isinstance(y, np.recarray) + assert_equal(y, x) + def test_recarray_from_names(self): - ra = rec.array([ + ra = np.rec.array([ (1, 'abc', 3.7000002861022949, 0), (2, 'xy', 6.6999998092651367, 1), (0, ' ', 0.40000000596046448, 0)], names='c1, c2, c3, c4') - pa = rec.fromrecords([ + pa = np.rec.fromrecords([ (1, 'abc', 3.7000002861022949, 0), (2, 'xy', 6.6999998092651367, 1), (0, ' ', 0.40000000596046448, 0)], @@ -75,7 +77,7 @@ assert ra[k].item() == pa[k].item() def test_recarray_conflict_fields(self): - ra = rec.array([(1,'abc',2.3),(2,'xyz',4.2), + ra = np.rec.array([(1,'abc',2.3),(2,'xyz',4.2), (3,'wrs',1.3)], names='field, shape, mean') ra.mean = [1.1,2.2,3.3] @@ -91,7 +93,7 @@ class TestRecord(TestCase): def setUp(self): - self.data = rec.fromrecords([(1,2,3),(4,5,6)], + self.data = np.rec.fromrecords([(1,2,3),(4,5,6)], dtype=[("col1", " Author: charris Date: 2008-06-28 22:37:04 -0500 (Sat, 28 Jun 2008) New Revision: 5324 Modified: trunk/numpy/lib/format.py Log: Use join instead of += to build string. Modified: trunk/numpy/lib/format.py =================================================================== --- trunk/numpy/lib/format.py 2008-06-29 03:26:00 UTC (rev 5323) +++ trunk/numpy/lib/format.py 2008-06-29 03:37:04 UTC (rev 5324) @@ -168,11 +168,12 @@ This has the appropriate entries for writing its string representation to the header of the file. """ - header = "{" + header = ["{"] for key, value in sorted(d.items()): # Need to use repr here, since we eval these when reading - header += "'%s': %s, " % (key, repr(value)) - header += "}" + header.append("'%s': %s, " % (key, repr(value))) + header.append("}") + header = "".join(header) # Pad the header with spaces and a final newline such that the magic # string, the header-length short and the header are aligned on a 16-byte # boundary. Hopefully, some system, possibly memory-mapping, can take From numpy-svn at scipy.org Mon Jun 30 03:12:00 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 30 Jun 2008 02:12:00 -0500 (CDT) Subject: [Numpy-svn] r5325 - branches/1.1.x/numpy/ma Message-ID: <20080630071200.5CB9239C683@scipy.org> Author: pierregm Date: 2008-06-30 02:11:53 -0500 (Mon, 30 Jun 2008) New Revision: 5325 Modified: branches/1.1.x/numpy/ma/extras.py Log: implement itertools.groupby as a class Modified: branches/1.1.x/numpy/ma/extras.py =================================================================== --- branches/1.1.x/numpy/ma/extras.py 2008-06-29 03:37:04 UTC (rev 5324) +++ branches/1.1.x/numpy/ma/extras.py 2008-06-30 07:11:53 UTC (rev 5325) @@ -27,7 +27,6 @@ 'vander','vstack', ] -from itertools import groupby import core from core import MaskedArray, MAError, add, array, asarray, concatenate, count,\ @@ -42,6 +41,28 @@ from numpy.lib.polynomial import _lstsq, _single_eps, _double_eps #............................................................................... 
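The class added just below reimplements itertools.groupby for the 1.1.x branch, with the same behaviour: one (key, group-iterator) pair per run of consecutive equal keys. A quick sketch using the standard-library version, with purely illustrative data::

    from itertools import groupby   # the class below mirrors this behaviour

    data = [1, 1, 2, 2, 2, 1]
    for key, group in groupby(data):
        print(key, list(group))
    # 1 [1, 1]
    # 2 [2, 2, 2]
    # 1 [1]
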
+class groupby(object): + "Implements itertools.groupby for numpy 1.1.x" + def __init__(self, iterable, key=None): + if key is None: + key = lambda x: x + self.keyfunc = key + self.it = iter(iterable) + self.tgtkey = self.currkey = self.currvalue = xrange(0) + def __iter__(self): + return self + def next(self): + while self.currkey == self.tgtkey: + self.currvalue = self.it.next() # Exit on StopIteration + self.currkey = self.keyfunc(self.currvalue) + self.tgtkey = self.currkey + return (self.currkey, self._grouper(self.tgtkey)) + def _grouper(self, tgtkey): + while self.currkey == tgtkey: + yield self.currvalue + self.currvalue = self.it.next() # Exit on StopIteration + self.currkey = self.keyfunc(self.currvalue) + def issequence(seq): """Is seq a sequence (ndarray, list or tuple)?""" if isinstance(seq, ndarray): From numpy-svn at scipy.org Mon Jun 30 10:55:27 2008 From: numpy-svn at scipy.org (numpy-svn at scipy.org) Date: Mon, 30 Jun 2008 09:55:27 -0500 (CDT) Subject: [Numpy-svn] r5326 - in branches/1.1.x/numpy/ma: . tests Message-ID: <20080630145527.E7DAA39C300@scipy.org> Author: pierregm Date: 2008-06-30 09:55:14 -0500 (Mon, 30 Jun 2008) New Revision: 5326 Modified: branches/1.1.x/numpy/ma/tests/test_core.py branches/1.1.x/numpy/ma/tests/test_extras.py branches/1.1.x/numpy/ma/testutils.py Log: tests: * use the `import numpy as np` convention testutils: * assert_equal_records now uses getitem instead of getattr * assert_array_compare now calls numpy.testing.utils.assert_array_compare on filled data Modified: branches/1.1.x/numpy/ma/tests/test_core.py =================================================================== --- branches/1.1.x/numpy/ma/tests/test_core.py 2008-06-30 07:11:53 UTC (rev 5325) +++ branches/1.1.x/numpy/ma/tests/test_core.py 2008-06-30 14:55:14 UTC (rev 5326) @@ -1276,7 +1276,7 @@ # x = [1,4,2,3] sortedx = sort(x) - assert(not isinstance(sorted, MaskedArray)) + assert(not isinstance(sortedx, MaskedArray)) # x = array([0,1,-1,-2,2], mask=nomask, dtype=numpy.int8) sortedx = sort(x, endwith=False) Modified: branches/1.1.x/numpy/ma/tests/test_extras.py =================================================================== --- branches/1.1.x/numpy/ma/tests/test_extras.py 2008-06-30 07:11:53 UTC (rev 5325) +++ branches/1.1.x/numpy/ma/tests/test_extras.py 2008-06-30 14:55:14 UTC (rev 5326) @@ -11,21 +11,15 @@ __revision__ = "$Revision: 3473 $" __date__ = '$Date: 2007-10-29 17:18:13 +0200 (Mon, 29 Oct 2007) $' -import numpy as N -from numpy.testing import NumpyTest, NumpyTestCase -from numpy.testing.utils import build_err_msg - -import numpy.ma.testutils +import numpy +from numpy.testing import * from numpy.ma.testutils import * - -import numpy.ma.core from numpy.ma.core import * -import numpy.ma.extras from numpy.ma.extras import * class TestAverage(NumpyTestCase): "Several tests of average. Why so many ? Good point..." - def check_testAverage1(self): + def test_testAverage1(self): "Test of average." ott = array([0.,1.,2.,3.], mask=[1,0,0,0]) assert_equal(2.0, average(ott,axis=0)) @@ -44,7 +38,7 @@ result, wts = average(ott, axis=0, returned=1) assert_equal(wts, [1., 0.]) - def check_testAverage2(self): + def test_testAverage2(self): "More tests of average." w1 = [0,1,1,1,1,0] w2 = [[0,1,1,1,1,0],[1,0,0,0,0,1]] @@ -52,12 +46,15 @@ assert_equal(average(x, axis=0), 2.5) assert_equal(average(x, axis=0, weights=w1), 2.5) y = array([arange(6, dtype=float_), 2.0*arange(6)]) - assert_equal(average(y, None), N.add.reduce(N.arange(6))*3./12.) 
- assert_equal(average(y, axis=0), N.arange(6) * 3./2.) - assert_equal(average(y, axis=1), [average(x,axis=0), average(x,axis=0) * 2.0]) + assert_equal(average(y, None), np.add.reduce(np.arange(6))*3./12.) + assert_equal(average(y, axis=0), np.arange(6) * 3./2.) + assert_equal(average(y, axis=1), + [average(x,axis=0), average(x,axis=0) * 2.0]) assert_equal(average(y, None, weights=w2), 20./6.) - assert_equal(average(y, axis=0, weights=w2), [0.,1.,2.,3.,4.,10.]) - assert_equal(average(y, axis=1), [average(x,axis=0), average(x,axis=0) * 2.0]) + assert_equal(average(y, axis=0, weights=w2), + [0.,1.,2.,3.,4.,10.]) + assert_equal(average(y, axis=1), + [average(x,axis=0), average(x,axis=0) * 2.0]) m1 = zeros(6) m2 = [0,0,1,1,0,0] m3 = [[0,0,1,1,0,0],[0,1,1,1,1,0]] @@ -74,7 +71,7 @@ assert_equal(average(z, axis=1), [2.5, 5.0]) assert_equal(average(z,axis=0, weights=w2), [0.,1., 99., 99., 4.0, 10.0]) - def check_testAverage3(self): + def test_testAverage3(self): "Yet more tests of average!" a = arange(6) b = arange(6) * 3 @@ -100,7 +97,7 @@ class TestConcatenator(NumpyTestCase): "Tests for mr_, the equivalent of r_ for masked arrays." - def check_1d(self): + def test_1d(self): "Tests mr_ on 1D arrays." assert_array_equal(mr_[1,2,3,4,5,6],array([1,2,3,4,5,6])) b = ones(5) @@ -111,30 +108,30 @@ assert_array_equal(c,[1,1,1,1,1,0,0,1,1,1,1,1]) assert_array_equal(c.mask, mr_[m,0,0,m]) - def check_2d(self): + def test_2d(self): "Tests mr_ on 2D arrays." a_1 = rand(5,5) a_2 = rand(5,5) - m_1 = N.round_(rand(5,5),0) - m_2 = N.round_(rand(5,5),0) + m_1 = np.round_(rand(5,5),0) + m_2 = np.round_(rand(5,5),0) b_1 = masked_array(a_1,mask=m_1) b_2 = masked_array(a_2,mask=m_2) d = mr_['1',b_1,b_2] # append columns assert(d.shape == (5,10)) assert_array_equal(d[:,:5],b_1) assert_array_equal(d[:,5:],b_2) - assert_array_equal(d.mask, N.r_['1',m_1,m_2]) + assert_array_equal(d.mask, np.r_['1',m_1,m_2]) d = mr_[b_1,b_2] assert(d.shape == (10,5)) assert_array_equal(d[:5,:],b_1) assert_array_equal(d[5:,:],b_2) - assert_array_equal(d.mask, N.r_[m_1,m_2]) + assert_array_equal(d.mask, np.r_[m_1,m_2]) class TestNotMasked(NumpyTestCase): "Tests notmasked_edges and notmasked_contiguous." - def check_edges(self): + def test_edges(self): "Tests unmasked_edges" - a = masked_array(N.arange(24).reshape(3,8), + a = masked_array(np.arange(24).reshape(3,8), mask=[[0,0,0,0,1,1,1,0], [1,1,1,1,1,1,1,1], [0,0,0,0,0,0,1,0],]) @@ -149,9 +146,9 @@ assert_equal(tmp[0], (array([0,2,]), array([0,0]))) assert_equal(tmp[1], (array([0,2,]), array([7,7]))) - def check_contiguous(self): + def test_contiguous(self): "Tests notmasked_contiguous" - a = masked_array(N.arange(24).reshape(3,8), + a = masked_array(np.arange(24).reshape(3,8), mask=[[0,0,0,0,1,1,1,1], [1,1,1,1,1,1,1,1], [0,0,0,0,0,0,1,0],]) @@ -174,9 +171,9 @@ class Test2DFunctions(NumpyTestCase): "Tests 2D functions" - def check_compress2d(self): + def test_compress2d(self): "Tests compress2d" - x = array(N.arange(9).reshape(3,3), mask=[[1,0,0],[0,0,0],[0,0,0]]) + x = array(np.arange(9).reshape(3,3), mask=[[1,0,0],[0,0,0],[0,0,0]]) assert_equal(compress_rowcols(x), [[4,5],[7,8]] ) assert_equal(compress_rowcols(x,0), [[3,4,5],[6,7,8]] ) assert_equal(compress_rowcols(x,1), [[1,2],[4,5],[7,8]] ) @@ -193,9 +190,9 @@ assert_equal(compress_rowcols(x,0).size, 0 ) assert_equal(compress_rowcols(x,1).size, 0 ) # - def check_mask_rowcols(self): + def test_mask_rowcols(self): "Tests mask_rowcols." 
-        x = array(N.arange(9).reshape(3,3), mask=[[1,0,0],[0,0,0],[0,0,0]])
+        x = array(np.arange(9).reshape(3,3), mask=[[1,0,0],[0,0,0],[0,0,0]])
         assert_equal(mask_rowcols(x).mask, [[1,1,1],[1,0,0],[1,0,0]] )
         assert_equal(mask_rowcols(x,0).mask, [[1,1,1],[0,0,0],[0,0,0]] )
         assert_equal(mask_rowcols(x,1).mask, [[1,0,0],[1,0,0],[1,0,0]] )
@@ -208,13 +205,16 @@
         assert_equal(mask_rowcols(x,0).mask, [[1,1,1],[1,1,1],[0,0,0]] )
         assert_equal(mask_rowcols(x,1,).mask, [[1,1,0],[1,1,0],[1,1,0]] )
         x = array(x._data, mask=[[1,0,0],[0,1,0],[0,0,1]])
-        assert(mask_rowcols(x).all())
-        assert(mask_rowcols(x,0).all())
-        assert(mask_rowcols(x,1).all())
+        assert(mask_rowcols(x).all() is masked)
+        assert(mask_rowcols(x,0).all() is masked)
+        assert(mask_rowcols(x,1).all() is masked)
+        assert(mask_rowcols(x).mask.all())
+        assert(mask_rowcols(x,0).mask.all())
+        assert(mask_rowcols(x,1).mask.all())
     #
     def test_dot(self):
         "Tests dot product"
-        n = N.arange(1,7)
+        n = np.arange(1,7)
         #
         m = [1,0,0,0,0,0]
         a = masked_array(n, mask=m).reshape(2,3)
@@ -224,9 +224,9 @@
         c = dot(b,a,True)
         assert_equal(c.mask, [[1,1,1],[1,0,0],[1,0,0]])
         c = dot(a,b,False)
-        assert_equal(c, N.dot(a.filled(0), b.filled(0)))
+        assert_equal(c, np.dot(a.filled(0), b.filled(0)))
         c = dot(b,a,False)
-        assert_equal(c, N.dot(b.filled(0), a.filled(0)))
+        assert_equal(c, np.dot(b.filled(0), a.filled(0)))
         #
         m = [0,0,0,0,0,1]
         a = masked_array(n, mask=m).reshape(2,3)
@@ -236,10 +236,10 @@
         c = dot(b,a,True)
         assert_equal(c.mask, [[0,0,1],[0,0,1],[1,1,1]])
         c = dot(a,b,False)
-        assert_equal(c, N.dot(a.filled(0), b.filled(0)))
+        assert_equal(c, np.dot(a.filled(0), b.filled(0)))
         assert_equal(c, dot(a,b))
         c = dot(b,a,False)
-        assert_equal(c, N.dot(b.filled(0), a.filled(0)))
+        assert_equal(c, np.dot(b.filled(0), a.filled(0)))
         #
         m = [0,0,0,0,0,0]
         a = masked_array(n, mask=m).reshape(2,3)
@@ -254,37 +254,37 @@
         c = dot(a,b,True)
         assert_equal(c.mask,[[1,1],[0,0]])
         c = dot(a,b,False)
-        assert_equal(c, N.dot(a.filled(0),b.filled(0)))
+        assert_equal(c, np.dot(a.filled(0),b.filled(0)))
         c = dot(b,a,True)
         assert_equal(c.mask,[[1,0,0],[1,0,0],[1,0,0]])
         c = dot(b,a,False)
-        assert_equal(c, N.dot(b.filled(0),a.filled(0)))
+        assert_equal(c, np.dot(b.filled(0),a.filled(0)))
         #
         a = masked_array(n, mask=[0,0,0,0,0,1]).reshape(2,3)
         b = masked_array(n, mask=[0,0,0,0,0,0]).reshape(3,2)
         c = dot(a,b,True)
         assert_equal(c.mask,[[0,0],[1,1]])
         c = dot(a,b)
-        assert_equal(c, N.dot(a.filled(0),b.filled(0)))
+        assert_equal(c, np.dot(a.filled(0),b.filled(0)))
         c = dot(b,a,True)
         assert_equal(c.mask,[[0,0,1],[0,0,1],[0,0,1]])
         c = dot(b,a,False)
-        assert_equal(c, N.dot(b.filled(0), a.filled(0)))
+        assert_equal(c, np.dot(b.filled(0), a.filled(0)))
         #
         a = masked_array(n, mask=[0,0,0,0,0,1]).reshape(2,3)
         b = masked_array(n, mask=[0,0,1,0,0,0]).reshape(3,2)
         c = dot(a,b,True)
         assert_equal(c.mask,[[1,0],[1,1]])
         c = dot(a,b,False)
-        assert_equal(c, N.dot(a.filled(0),b.filled(0)))
+        assert_equal(c, np.dot(a.filled(0),b.filled(0)))
         c = dot(b,a,True)
         assert_equal(c.mask,[[0,0,1],[1,1,1],[0,0,1]])
         c = dot(b,a,False)
-        assert_equal(c, N.dot(b.filled(0),a.filled(0)))
+        assert_equal(c, np.dot(b.filled(0),a.filled(0)))
 
     def test_ediff1d(self):
         "Tests mediff1d"
-        x = masked_array(N.arange(5), mask=[1,0,0,0,1])
+        x = masked_array(np.arange(5), mask=[1,0,0,0,1])
         difx_d = (x._data[1:]-x._data[:-1])
         difx_m = (x._mask[1:]-x._mask[:-1])
         dx = ediff1d(x)
@@ -292,33 +292,33 @@
         assert_equal(dx._mask, difx_m)
         #
         dx = ediff1d(x, to_begin=masked)
-        assert_equal(dx._data, N.r_[0,difx_d])
-        assert_equal(dx._mask, N.r_[1,difx_m])
+        assert_equal(dx._data, np.r_[0,difx_d])
+        assert_equal(dx._mask, np.r_[1,difx_m])
         dx = ediff1d(x, to_begin=[1,2,3])
-        assert_equal(dx._data, N.r_[[1,2,3],difx_d])
-        assert_equal(dx._mask, N.r_[[0,0,0],difx_m])
+        assert_equal(dx._data, np.r_[[1,2,3],difx_d])
+        assert_equal(dx._mask, np.r_[[0,0,0],difx_m])
         #
         dx = ediff1d(x, to_end=masked)
-        assert_equal(dx._data, N.r_[difx_d,0])
-        assert_equal(dx._mask, N.r_[difx_m,1])
+        assert_equal(dx._data, np.r_[difx_d,0])
+        assert_equal(dx._mask, np.r_[difx_m,1])
         dx = ediff1d(x, to_end=[1,2,3])
-        assert_equal(dx._data, N.r_[difx_d,[1,2,3]])
-        assert_equal(dx._mask, N.r_[difx_m,[0,0,0]])
+        assert_equal(dx._data, np.r_[difx_d,[1,2,3]])
+        assert_equal(dx._mask, np.r_[difx_m,[0,0,0]])
         #
         dx = ediff1d(x, to_end=masked, to_begin=masked)
-        assert_equal(dx._data, N.r_[0,difx_d,0])
-        assert_equal(dx._mask, N.r_[1,difx_m,1])
+        assert_equal(dx._data, np.r_[0,difx_d,0])
+        assert_equal(dx._mask, np.r_[1,difx_m,1])
         dx = ediff1d(x, to_end=[1,2,3], to_begin=masked)
-        assert_equal(dx._data, N.r_[0,difx_d,[1,2,3]])
-        assert_equal(dx._mask, N.r_[1,difx_m,[0,0,0]])
+        assert_equal(dx._data, np.r_[0,difx_d,[1,2,3]])
+        assert_equal(dx._mask, np.r_[1,difx_m,[0,0,0]])
         #
         dx = ediff1d(x._data, to_end=masked, to_begin=masked)
-        assert_equal(dx._data, N.r_[0,difx_d,0])
-        assert_equal(dx._mask, N.r_[1,0,0,0,0,1])
+        assert_equal(dx._data, np.r_[0,difx_d,0])
+        assert_equal(dx._mask, np.r_[1,0,0,0,0,1])
 
 class TestApplyAlongAxis(NumpyTestCase):
     "Tests 2D functions"
-    def check_3d(self):
+    def test_3d(self):
         a = arange(12.).reshape(2,2,3)
         def myfunc(b):
             return b[1]

Modified: branches/1.1.x/numpy/ma/testutils.py
===================================================================
--- branches/1.1.x/numpy/ma/testutils.py	2008-06-30 07:11:53 UTC (rev 5325)
+++ branches/1.1.x/numpy/ma/testutils.py	2008-06-30 14:55:14 UTC (rev 5326)
@@ -10,16 +10,18 @@
 __date__ = "$Date: 2007-11-13 10:01:14 +0200 (Tue, 13 Nov 2007) $"
 
-import numpy as N
-from numpy.core import ndarray
-from numpy.core.numerictypes import float_
+import operator
+
+import numpy as np
+from numpy import ndarray, float_
 import numpy.core.umath as umath
-from numpy.testing import NumpyTest, NumpyTestCase
+from numpy.testing import *
 from numpy.testing.utils import build_err_msg, rand
+import numpy.testing.utils as utils
 
 import core
 from core import mask_or, getmask, getmaskarray, masked_array, nomask, masked
-from core import filled, equal, less
+from core import fix_invalid, filled, equal, less
 
 #------------------------------------------------------------------------------
 def approx (a, b, fill_value=True, rtol=1.e-5, atol=1.e-8):
@@ -35,12 +37,13 @@
     d1 = filled(a)
     d2 = filled(b)
     if d1.dtype.char == "O" or d2.dtype.char == "O":
-        return N.equal(d1,d2).ravel()
+        return np.equal(d1,d2).ravel()
     x = filled(masked_array(d1, copy=False, mask=m), fill_value).astype(float_)
     y = filled(masked_array(d2, copy=False, mask=m), 1).astype(float_)
-    d = N.less_equal(umath.absolute(x-y), atol + rtol * umath.absolute(y))
+    d = np.less_equal(umath.absolute(x-y), atol + rtol * umath.absolute(y))
     return d.ravel()
 
+
 def almost(a, b, decimal=6, fill_value=True):
     """Returns True if a and b are equal up to decimal places.
     If fill_value is True, masked values considered equal. Otherwise, masked values
@@ -50,10 +53,10 @@
     d1 = filled(a)
     d2 = filled(b)
     if d1.dtype.char == "O" or d2.dtype.char == "O":
-        return N.equal(d1,d2).ravel()
+        return np.equal(d1,d2).ravel()
     x = filled(masked_array(d1, copy=False, mask=m), fill_value).astype(float_)
     y = filled(masked_array(d2, copy=False, mask=m), 1).astype(float_)
-    d = N.around(N.abs(x-y),decimal) <= 10.0**(-decimal)
+    d = np.around(np.abs(x-y),decimal) <= 10.0**(-decimal)
     return d.ravel()
 
@@ -69,11 +72,12 @@
     """Asserts that two records are equal. Pretty crude for now."""
     assert_equal(a.dtype, b.dtype)
     for f in a.dtype.names:
-        (af, bf) = (getattr(a,f), getattr(b,f))
+        (af, bf) = (operator.getitem(a,f), operator.getitem(b,f))
         if not (af is masked) and not (bf is masked):
-            assert_equal(getattr(a,f), getattr(b,f))
+            assert_equal(operator.getitem(a,f), operator.getitem(b,f))
     return
 
+
 def assert_equal(actual,desired,err_msg=''):
     """Asserts that two items are equal.
     """
@@ -95,16 +99,18 @@
     # Case #4. arrays or equivalent
     if ((actual is masked) and not (desired is masked)) or \
             ((desired is masked) and not (actual is masked)):
-        msg = build_err_msg([actual, desired], err_msg, header='', names=('x', 'y'))
+        msg = build_err_msg([actual, desired],
+                            err_msg, header='', names=('x', 'y'))
         raise ValueError(msg)
-    actual = N.array(actual, copy=False, subok=True)
-    desired = N.array(desired, copy=False, subok=True)
-    if actual.dtype.char in "OS" and desired.dtype.char in "OS":
+    actual = np.array(actual, copy=False, subok=True)
+    desired = np.array(desired, copy=False, subok=True)
+    if actual.dtype.char in "OSV" and desired.dtype.char in "OSV":
         return _assert_equal_on_sequences(actual.tolist(),
                                           desired.tolist(),
                                           err_msg='')
     return assert_array_equal(actual, desired, err_msg)
-#.............................
+
+
 def fail_if_equal(actual,desired,err_msg='',):
     """Raises an assertion error if two items are equal.
     """
@@ -120,119 +126,91 @@
     for k in range(len(desired)):
         fail_if_equal(actual[k], desired[k], 'item=%r\n%s' % (k,err_msg))
     return
-    if isinstance(actual, N.ndarray) or isinstance(desired, N.ndarray):
+    if isinstance(actual, np.ndarray) or isinstance(desired, np.ndarray):
         return fail_if_array_equal(actual, desired, err_msg)
     msg = build_err_msg([actual, desired], err_msg)
     assert desired != actual, msg
 assert_not_equal = fail_if_equal
-#............................
-def assert_almost_equal(actual,desired,decimal=7,err_msg=''):
+
+
+def assert_almost_equal(actual, desired, decimal=7, err_msg='', verbose=True):
     """Asserts that two items are almost equal.
     The test is equivalent to abs(desired-actual) < 0.5 * 10**(-decimal)
     """
-    if isinstance(actual, N.ndarray) or isinstance(desired, N.ndarray):
-        return assert_array_almost_equal(actual, desired, decimal, err_msg)
-    msg = build_err_msg([actual, desired], err_msg)
+    if isinstance(actual, np.ndarray) or isinstance(desired, np.ndarray):
+        return assert_array_almost_equal(actual, desired, decimal=decimal,
+                                         err_msg=err_msg, verbose=verbose)
+    msg = build_err_msg([actual, desired],
+                        err_msg=err_msg, verbose=verbose)
     assert round(abs(desired - actual),decimal) == 0, msg
-#............................
-def assert_array_compare(comparison, x, y, err_msg='', header='',
+
+
+assert_close = assert_almost_equal
+
+
+def assert_array_compare(comparison, x, y, err_msg='', verbose=True, header='',
                          fill_value=True):
     """Asserts that a comparison relation between two masked arrays is satisfied
     elementwise."""
+    # Fill the data first
     xf = filled(x)
     yf = filled(y)
+    # Allocate a common mask and refill
     m = mask_or(getmask(x), getmask(y))
-
-    x = masked_array(xf, copy=False, subok=False, mask=m).filled(fill_value)
-    y = masked_array(yf, copy=False, subok=False, mask=m).filled(fill_value)
-
+    x = masked_array(xf, copy=False, mask=m)
+    y = masked_array(yf, copy=False, mask=m)
     if ((x is masked) and not (y is masked)) or \
             ((y is masked) and not (x is masked)):
-        msg = build_err_msg([x, y], err_msg, header=header, names=('x', 'y'))
+        msg = build_err_msg([x, y], err_msg=err_msg, verbose=verbose,
+                            header=header, names=('x', 'y'))
         raise ValueError(msg)
+    # OK, now run the basic tests on filled versions
+    return utils.assert_array_compare(comparison,
+                                      x.filled(fill_value), y.filled(fill_value),
+                                      err_msg=err_msg,
+                                      verbose=verbose, header=header)
-
-    if (x.dtype.char != "O") and (x.dtype.char != "S"):
-        x = x.astype(float_)
-        if isinstance(x, N.ndarray) and x.size > 1:
-            x[N.isnan(x)] = 0
-        elif N.isnan(x):
-            x = 0
-    if (y.dtype.char != "O") and (y.dtype.char != "S"):
-        y = y.astype(float_)
-        if isinstance(y, N.ndarray) and y.size > 1:
-            y[N.isnan(y)] = 0
-        elif N.isnan(y):
-            y = 0
-    try:
-        cond = (x.shape==() or y.shape==()) or x.shape == y.shape
-        if not cond:
-            msg = build_err_msg([x, y],
-                                err_msg
-                                + '\n(shapes %s, %s mismatch)' % (x.shape,
-                                                                  y.shape),
-                                header=header,
-                                names=('x', 'y'))
-            assert cond, msg
-        val = comparison(x,y)
-        if m is not nomask and fill_value:
-            val = masked_array(val, mask=m, copy=False)
-        if isinstance(val, bool):
-            cond = val
-            reduced = [0]
-        else:
-            reduced = val.ravel()
-            cond = reduced.all()
-            reduced = reduced.tolist()
-        if not cond:
-            match = 100-100.0*reduced.count(1)/len(reduced)
-            msg = build_err_msg([x, y],
-                                err_msg
-                                + '\n(mismatch %s%%)' % (match,),
-                                header=header,
-                                names=('x', 'y'))
-            assert cond, msg
-    except ValueError:
-        msg = build_err_msg([x, y], err_msg, header=header, names=('x', 'y'))
-        raise ValueError(msg)
-#............................
-def assert_array_equal(x, y, err_msg=''):
+
+def assert_array_equal(x, y, err_msg='', verbose=True):
     """Checks the elementwise equality of two masked arrays."""
-    assert_array_compare(equal, x, y, err_msg=err_msg,
+    assert_array_compare(equal, x, y, err_msg=err_msg, verbose=verbose,
                          header='Arrays are not equal')
-##............................
-def fail_if_array_equal(x, y, err_msg=''):
+
+
+def fail_if_array_equal(x, y, err_msg='', verbose=True):
     "Raises an assertion error if two masked arrays are not equal (elementwise)."
     def compare(x,y):
-
-        return (not N.alltrue(approx(x, y)))
-    assert_array_compare(compare, x, y, err_msg=err_msg,
+        return (not np.alltrue(approx(x, y)))
+    assert_array_compare(compare, x, y, err_msg=err_msg, verbose=verbose,
                          header='Arrays are not equal')
-#............................
-def assert_array_approx_equal(x, y, decimal=6, err_msg=''):
+
+
+def assert_array_approx_equal(x, y, decimal=6, err_msg='', verbose=True):
     """Checks the elementwise equality of two masked arrays, up to a given
     number of decimals."""
     def compare(x, y):
         "Returns the result of the loose comparison between x and y)."
         return approx(x,y, rtol=10.**-decimal)
-    assert_array_compare(compare, x, y, err_msg=err_msg,
+    assert_array_compare(compare, x, y, err_msg=err_msg, verbose=verbose,
                          header='Arrays are not almost equal')
-#............................
-def assert_array_almost_equal(x, y, decimal=6, err_msg=''):
+
+
+def assert_array_almost_equal(x, y, decimal=6, err_msg='', verbose=True):
     """Checks the elementwise equality of two masked arrays, up to a given
     number of decimals."""
     def compare(x, y):
         "Returns the result of the loose comparison between x and y)."
         return almost(x,y,decimal)
-    assert_array_compare(compare, x, y, err_msg=err_msg,
+    assert_array_compare(compare, x, y, err_msg=err_msg, verbose=verbose,
                          header='Arrays are not almost equal')
-#............................
-def assert_array_less(x, y, err_msg=''):
+
+
+def assert_array_less(x, y, err_msg='', verbose=True):
     "Checks that x is smaller than y elementwise."
-    assert_array_compare(less, x, y, err_msg=err_msg,
+    assert_array_compare(less, x, y, err_msg=err_msg, verbose=verbose,
                          header='Arrays are not less-ordered')
-#............................
-assert_close = assert_almost_equal
-#............................
+
+
 def assert_mask_equal(m1, m2):
     """Asserts the equality of two masks."""
     if m1 is nomask:
@@ -240,6 +218,3 @@
     if m2 is nomask:
         assert(m1 is nomask)
     assert_array_equal(m1, m2)
-
-if __name__ == '__main__':
-    pass
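The r5326 change to assert_array_compare boils down to one idea: merge the two masks, fill both arrays at the masked positions with a common value, and hand the element-wise check over to numpy.testing on plain ndarrays. The following is a minimal sketch of that fill-then-delegate idea using only public numpy APIs; the helper name assert_masked_equal and its fill_value=0 default are illustrative and are not part of the commit.

    import numpy.ma as ma
    from numpy.testing import assert_array_equal

    def assert_masked_equal(x, y, fill_value=0):
        # Merge the two masks so both arrays get filled at the same positions.
        m = ma.mask_or(ma.getmask(x), ma.getmask(y))
        xf = ma.masked_array(ma.filled(x), mask=m).filled(fill_value)
        yf = ma.masked_array(ma.filled(y), mask=m).filled(fill_value)
        # Delegate the element-wise comparison to numpy.testing on plain ndarrays.
        assert_array_equal(xf, yf)

    # Masked slots are ignored even though the underlying data differ there.
    a = ma.array([1., 2., 3.], mask=[0, 0, 1])
    b = ma.array([1., 2., 99.], mask=[0, 0, 1])
    assert_masked_equal(a, b)

Filling only after the masks have been merged is what lets the comparison tolerate arbitrary data under the mask, which is why the patched testutils can defer the actual check to numpy.testing.utils once both operands are filled.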