I started an enhancement request in the GitHub bug tracker at
https://github.com/numpy/numpy/issues/8331 , but Jaime Frio recommended I
bring it to the mailing list.
`in1d` takes two arrays, `ar1` and `ar2`, and returns a 1d array with the
same number of elements as `ar1`. The logical extension would be a function
that does the same thing but returns a (possibly multi-dimensional) array
of the same shape as `ar1`. The code already has a comment suggesting this
could be done (see https://github.com/numpy/numpy).
I agree that changing the behavior of the existing function isn't an
option, since it would break backwards compatibility. I'm not sure adding
a `keep_shape` option is a good idea either, since the function's name
("1d") would no longer match what it does (return an array that might not
be 1d). I think a new function is the way to go. This would be it, more or
less:
def items_in(ar1, ar2, **kwargs):
    return np.in1d(ar1, ar2, **kwargs).reshape(ar1.shape)
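For a quick sanity check, here is that wrapper applied to a 2-D input
(redefined here so the snippet is self-contained; `items_in` is only the
tentative name from above):

```python
import numpy as np

def items_in(ar1, ar2, **kwargs):
    # proposed wrapper: run in1d on the flattened input, then restore
    # ar1's original shape (asarray so plain lists work too)
    return np.in1d(ar1, ar2, **kwargs).reshape(np.asarray(ar1).shape)

ar1 = np.array([[1, 2], [3, 4]])
ar2 = [2, 4, 6]
print(np.in1d(ar1, ar2))   # flat result: [False  True False  True]
print(items_in(ar1, ar2))  # same values, but with shape (2, 2)
```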
Questions I have are:
* Function name? I was thinking something like `items_in` or `item_in`: the
function returns whether each item in `ar1` is in `ar2`. Is "item" or
"element" the right term here?
* Are there any other changes that need to happen in arraysetops.py? Or
other files? I ask this because although the file says "Set operations for
1D numeric arrays" right at the top, it's growing increasingly not 1D:
`unique` recently changed to operate on multidimensional arrays, and I'm
proposing a multidimensional version of `in1d`. `ediff1d` could probably be
tweaked into a version that operates along an axis the same way unique does
now, fwiw. Mostly I want to know if I should put my code changes in this
file or somewhere else.
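As a rough sketch of that axis idea (using ``np.diff``, which already
accepts an ``axis`` argument, as a stand-in for what an extended
``ediff1d`` might look like):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
# ediff1d always flattens its input first:
print(np.ediff1d(a))       # 11 differences of the flattened array
# an axis-aware variant would behave like np.diff does today:
print(np.diff(a, axis=0))  # row-wise differences, shape (2, 4)
print(np.diff(a, axis=1))  # column-wise differences, shape (3, 3)
```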
I am pleased to announce the release of NumPy 1.12.0rc1. This release
supports Python 2.7 and 3.4 - 3.6 and is the result of 406 pull requests
submitted by 139 contributors and comprises a large number of fixes and
improvements. Among the many improvements it is difficult to pick out just
a few as standing above the others, but the following may be of particular
interest or indicate areas likely to have future consequences.
* Order of operations in ``np.einsum`` can now be optimized for large speed
improvements.
* New ``signature`` argument to ``np.vectorize`` for vectorizing with core
dimensions.
* The ``keepdims`` argument was added to many functions.
* New context manager for testing warnings
* Support for BLIS in numpy.distutils
* Much improved support for PyPy (not yet finished)
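Two of these can be illustrated briefly (a minimal sketch; the array sizes
are made up):

```python
import numpy as np

# einsum can now pick a cheaper contraction order via optimize=True
a = np.random.rand(8, 9)
b = np.random.rand(9, 10)
c = np.random.rand(10, 4)
plain = np.einsum('ij,jk,kl->il', a, b, c)
fast = np.einsum('ij,jk,kl->il', a, b, c, optimize=True)
assert np.allclose(plain, fast)  # same result, potentially much faster

# vectorize with core dimensions via the new signature argument:
# each call consumes a pair of length-n vectors and yields a scalar,
# and vectorize loops over the leading dimensions
dot = np.vectorize(lambda x, y: np.dot(x, y), signature='(n),(n)->()')
print(dot(np.ones((3, 4)), np.ones((3, 4))))  # [4. 4. 4.]
```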
The release notes are quite sizable and rather than put them inline I've
attached them as a file. They may also be viewed at Github
<https://github.com/numpy/numpy/releases/tag/v1.12.0rc1>. Zip files and
tarballs may also be found at the GitHub link. Wheels and a zip archive are
available from PyPI, which is the recommended method of installation.
I'm pleased to announce the release of NumPy 1.11.3. This is a single bug fix
release to take care of a bug that could corrupt large files opened in
append mode and then used as an argument to ndarray.tofile. Thanks to Pavel
Potocek for the fix.
NumPy 1.11.3 Release Notes
Numpy 1.11.3 fixes a bug that leads to file corruption when very large files
opened in append mode are used in ``ndarray.tofile``. It supports Python
versions 2.6 - 2.7 and 3.2 - 3.5. Wheels for Linux, Windows, and OS X can be
found on PyPI.
Contributors to maintenance/1.11.3
A total of 2 people contributed to this release. People with a "+" by their
names contributed a patch for the first time.
- Charles Harris
- Pavel Potocek +
Pull Requests Merged
- `#8341 <https://github.com/numpy/numpy/pull/8341>`__: BUG: Fix
  ndarray.tofile large file corruption in append mode.
- `#8346 <https://github.com/numpy/numpy/pull/8346>`__: TST: Fix tests in
  PR #8341 for NumPy 1.11.x
It seems that PyPI will only accept one source file at this time, e.g.,
numpy-1.11.3.zip and numpy-1.11.3.tar.gz are considered duplicates. Does
anyone know if this is intentional or a bug on the PyPI end? It makes sense
in a screwy sort of way.
I'm trying to subclass ndarray as shown here:
My problem is that when I save the new class' state with pickle, the new
attributes are lost. I don't seem to be able to override __getstate__ or
__setstate__ to achieve this.
Is it possible to allow new state to be serialized when subclassing ndarray?
In my example below, __setstate__ gets called by pickle but not
In the final line, a RealisticInfoArray has been deserialized, but it has
no .info attribute.
import cPickle as pickle
import numpy as np

class RealisticInfoArray(np.ndarray):
    def __new__(cls, arr, info):
        obj = np.asarray(arr).view(cls)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        if obj is None:
            return
        self.info = getattr(obj, "info", None)

    def __setstate__(self, *args):
        assert False, "EXPLODE"

arr = np.zeros((2, 3), int)
arr = RealisticInfoArray(arr, "blarg")
arr2 = pickle.loads(pickle.dumps(arr))
print arr2.info  # no .info attribute!
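For what it's worth, a sketch of the usual workaround: have ``__reduce__``
append the extra attribute to the state tuple ndarray builds for itself, and
peel it back off in ``__setstate__`` (the class name ``InfoArray`` is mine;
written with plain ``pickle`` and ``print()`` so it runs on either Python):

```python
import pickle
import numpy as np

class InfoArray(np.ndarray):
    def __new__(cls, arr, info):
        obj = np.asarray(arr).view(cls)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        if obj is None:
            return
        self.info = getattr(obj, "info", None)

    def __reduce__(self):
        # append .info to the state tuple ndarray builds for itself
        constructor, args, state = super(InfoArray, self).__reduce__()
        return (constructor, args, state + (self.info,))

    def __setstate__(self, state):
        # peel .info back off, hand the rest to ndarray.__setstate__
        self.info = state[-1]
        super(InfoArray, self).__setstate__(state[:-1])

arr = InfoArray(np.zeros((2, 3), int), "blarg")
arr2 = pickle.loads(pickle.dumps(arr))
print(arr2.info)  # blarg
```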
I am pleased to announce release 2016.4 of SfePy.
SfePy (simple finite elements in Python) is software for solving systems of
coupled partial differential equations by the finite element method or by
isogeometric analysis (limited support). It is distributed under the new BSD
license.
Home page: http://sfepy.org
Mailing list: http://groups.google.com/group/sfepy-devel
Git (source) repository, issue tracker: https://github.com/sfepy/sfepy
Highlights of this release
- support tensor product element meshes with one-level hanging nodes
- improve homogenization support for large deformations
- parallel calculation of homogenized coefficients and related sub-problems
- evaluation of second derivatives of Lagrange basis functions
For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1
(rather long and technical).
Contributors to this release in alphabetical order: