I have generalized np.atleast_1d, np.atleast_2d, and np.atleast_3d with a
function np.atleast_nd in PR #7804.
As a result of this PR, I have a couple of questions about
`np.atleast_3d`. `np.atleast_3d` appears to do something weird with
the dimensions: If the input is 1D, it prepends and appends a size-1
dimension. If the input is 2D, it appends a size-1 dimension. This is
inconsistent with `np.atleast_2d`, which always prepends (as does `np.atleast_nd`).
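Concretely (the shapes in the comments are what the current functions return):

import numpy as np

np.atleast_2d(np.ones(3)).shape        # (1, 3)    -> prepends
np.atleast_3d(np.ones(3)).shape        # (1, 3, 1) -> prepends and appends
np.atleast_3d(np.ones((2, 3))).shape   # (2, 3, 1) -> appends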
- Is there any reason for this behavior?
- Can it be cleaned up (e.g., by reimplementing `np.atleast_3d` in
terms of `np.atleast_nd`, which is actually much simpler; see the sketch
after this list)? This would be a slight API change since the output
would not be exactly the same.
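For illustration, a minimal sketch of what a prepend-only atleast_nd could
look like (the actual name and signature in the PR may differ):

import numpy as np

def atleast_nd(ary, ndim):
    # Prepend size-1 dimensions until the array has at least ndim dimensions.
    ary = np.asanyarray(ary)
    if ary.ndim < ndim:
        ary = ary.reshape((1,) * (ndim - ary.ndim) + ary.shape)
    return ary

atleast_nd(np.ones(3), 3).shape   # (1, 1, 3), unlike np.atleast_3d's (1, 3, 1)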
The behavior of a ufunc's reduceat on empty slices seems a little strange,
and I wonder if there's a reason behind it / if there's a route to
potentially changing it. First, I'll go into a little background.
I've been making a lot of use of the reduceat functionality of ufuncs on staggered
arrays. In general, I'll have "n" arrays, with their sizes stored in an array "s", and I'll
store them in one array "x" such that "s.sum() == x.size". reduceat is
great because I use
ufunc.reduceat(x, np.insert(s[:-1].cumsum(), 0, 0))
to get some summary information about each array. However, reduceat seems
to behave strangely for empty slices. To make things concrete, let's assume:
import numpy as np
s = np.array([3, 0, 2])
x = np.arange(s.sum())
inds = np.insert(s[:-1].cumsum(), 0, 0)
# inds is [0, 3, 3]
np.add.reduceat(x, inds)
# [3, 3, 7], not [3, 0, 7]
# This is distinct from
np.fromiter(map(np.add.reduce, np.array_split(x, inds[1:])), x.dtype, s.size)
# [3, 0, 7] what I wanted
The current documentation
on reduceat first states:
For i in range(len(indices)), reduceat computes
ufunc.reduce(a[indices[i]:indices[i+1]])
That would suggest the outcome that I expected. However, in the examples
section it goes into a bunch of examples which contradict that statement
and instead suggest that the actual algorithm is more akin to:
ufunc.reduce(a[indices[i]:indices[i+1]]) if indices[i+1] > indices[i] else a[indices[i]]
Looking at the source
it seems like it's copying a[indices[i]], and then while there are more
elements to process it keeps reducing, resulting in this unexpected
behavior. It seems like the proper thing to do would be to start with
ufunc.identity and then reduce. This is slightly less performant than
what's implemented, but more "correct." There could, of course, just be a
switch to copy the identity only when the slice is empty.
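For concreteness, here is a pure-Python sketch of those proposed semantics
(this is not how reduceat is implemented, and it is much slower; it only
illustrates the behavior I would expect):

import numpy as np

def reduceat_from_identity(ufunc, a, indices):
    # Each output element is ufunc.reduce over the (possibly empty) slice
    # a[indices[i]:indices[i+1]]; reduce starts from ufunc.identity, so an
    # empty slice yields the identity instead of a[indices[i]].
    ends = np.append(indices[1:], len(a))
    return np.array([ufunc.reduce(a[lo:hi]) for lo, hi in zip(indices, ends)])

reduceat_from_identity(np.add, x, inds)
# [3, 0, 7]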
Is there a reason it's implemented like this? Is it just for performance,
or is this strange behavior *useful* somewhere? It seems like "fixing" this
would be bad because you'll be changing somewhat documented functionality
in a backwards-incompatible way. What would the best approach to "fixing"
this be? Add another function "reduceat_"? Add a flag to reduceat to do the
proper thing for empty slices?
Finally, is there a good way to work around this? I think for now I'm just
going to mask out the empty slices and use insert to add them back in, but
if I'm missing an obvious solution, I'll look at that too. I need to mask
them out because np.add.reduceat(x, 5) would ideally return 0, but instead
it throws an error since 5 is out of range...
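In case it helps anyone else, this is roughly the masking workaround I have
in mind (just a sketch, reusing the same s and x as above):

nonempty = s > 0
starts = np.insert(s[:-1].cumsum(), 0, 0)
# Run reduceat over the non-empty slices only, then np.insert the identity
# (0 for np.add) back at the positions of the empty slices.
partial = np.add.reduceat(x, starts[nonempty])
result = np.insert(partial, nonempty.cumsum()[~nonempty], 0)
# result -> [3, 0, 7]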
Thanks for indulging my curiosity,
I work at Machinalis, where we use a lot of numpy (and the pydata stack in
general). Recently we've also been getting involved with mypy, which is a
tool to type-check annotated Python code (not at runtime; think of it as a
linter). The way of annotating Python types has recently been standardized
in PEP 484.
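As a tiny example of what PEP 484 annotations look like (mypy checks them
statically; they have no effect at runtime):

from typing import List

def mean(values: List[float]) -> float:
    return sum(values) / len(values)

mean([1.0, 2.0, 3.0])   # fine
# mean("some string")   # rejected by mypy before the code is ever run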
As part of that involvement we've started creating type annotations for the
Python libraries we use most, which include numpy. Mypy provides a way to
specify types with annotations in separate files in case you don't have
control over a library, so we have created an initial proof of concept at
, and we are actively improving it. You can find some additional
information about it and some problems we've found on the way at this
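To give an idea, here is a hypothetical, deliberately simplified excerpt of
what a separate stub file (e.g. a numpy/__init__.pyi) can look like; these
are not the actual signatures from our proof of concept:

from typing import Any, Sequence, Union

class ndarray:
    def reshape(self, *shape: int) -> 'ndarray': ...
    def __add__(self, other: Any) -> 'ndarray': ...

def array(object: Any, dtype: Any = ...) -> ndarray: ...
def zeros(shape: Union[int, Sequence[int]], dtype: Any = ...) -> ndarray: ...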
What I wanted to ask is whether the people involved in the numpy project are
aware of PEP 484 and whether you have some interest in starting to use it. The
main benefit is that annotations serve as clear (and automatically
testable) documentation for users; secondary benefits are that users
discover bugs more quickly and that some IDEs (like PyCharm) are starting
to use this information for smart editor features (autocompletion, online
checking, refactoring tools); eventually tools like Jupyter could take
advantage of these annotations as well. And the cost of writing and
including these is relatively low.
We're doing the work anyway, but contributing our typespecs back could make
it easier for users to benefit from this, and for us to maintain it and
keep it in sync with future releases.
If you've never heard about PEP 484 or mypy (it happens a lot), I'll be happy
to clarify anything about them that might help understand this situation.
Daniel F. Moisset - UK Country Manager
On behalf of the Bokeh team, I am pleased to announce the release of version 0.12.1 of Bokeh!
This is a minor, incremental update that adds a few new small features and fixes several bugs.
Please see the announcement post at:
which has much more information as well as live demonstrations. And as always, see the CHANGELOG and Release Notes for full details.
If you are using Anaconda/miniconda, you can install it with conda:
conda install -c bokeh bokeh
Alternatively, you can also install it with pip:
pip install bokeh
Full information, including details about how to use and obtain BokehJS, is at:
Issues, enhancement requests, and pull requests can be made on the Bokeh Github page: https://github.com/bokeh/bokeh
Documentation is available at http://bokeh.pydata.org/en/0.12.1
Questions can be directed to the Bokeh mailing list: bokeh(a)continuum.io or the Gitter Chat room: https://gitter.im/bokeh/bokeh
Bryan Van de Ven
Bokeh is a Python interactive visualization library that targets modern web browsers for presentation. Its goal is to provide elegant, concise construction of versatile graphics with high-performance interactivity over very large or streaming datasets. Bokeh can help anyone who would like to quickly and easily create interactive plots, dashboards, and data applications.
I have been trying to compile a relatively simple pair of Fortran files,
one referencing a subroutine from another file (mainmodule.f90 references
othermodule.f90). I have been able to compile them using a Fortran
compiler, but receive a NotImplementedError when using f2py.
Steps I use for f2py:
$gfortran -shared -o othermodule.so othermodule.f90 -fPIC
$f2py -c -l/path/othermodule -m mainmodule mainmodule.f90
I am running this on Linux and wasn't sure how to correct the error.
integer*1 :: i
integer*8 :: fact,len
real*8,dimension(:),allocatable :: ex
integer*1 :: ii
integer*8 :: ff
real*8 :: exex
end subroutine submarine
end module moderator
if (i==1) then
   print*,"here's your ",i,"st number: ",ex(i)
elseif (i==2) then
   print*,"here's your ",i,"nd number: ",ex(i)
elseif (i==3) then
   print*,"here's your ",i,"rd number: ",ex(i)
else
   print*,"here's your ",i,"th number: ",ex(i)
end if
Thanks for the help,
Thank you so much for your answer.
I finally figured out that the problem was that NumPy 1.9 was not linked
to BLAS. I do not know why, because I simply installed numpy 1.9 via the commands:
apt-get install python3-numpy
If anybody has the same problem, you may want to take a look into this:
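For reference, one quick way to check which BLAS/LAPACK libraries a NumPy
build is linked against is to print its build configuration:

import numpy as np
np.show_config()   # empty BLAS/LAPACK sections suggest the unoptimized fallback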
On Fri, Jul 22, 2016 at 5:19 PM Ecem sogancıoglu <ecemsogancioglu(a)gmail.com>
> Dear Ralf,
> Thank you so much for your answer.
> I finally figured out that the problem was because Numpy 1.9 was not
> linked to BLAS. I do not know why because I simply installed numpy 1.9 via
> the commands:
> apt-get install python3-numpy
> If anybody has the same problem, you may want to take a look into this:
> Best Regards,
> On Tue, Jul 19, 2016 at 9:44 PM Ralf Gommers <ralf.gommers(a)gmail.com>
>> On Tue, Jul 19, 2016 at 3:53 PM, Ecem sogancıoglu <
>> ecemsogancioglu(a)gmail.com> wrote:
>>> Hello All,
>>> there seems to be a performance issue with the covariance function in
>>> numpy 1.9 and later.
>>> Code example:
>>> In numpy 1.8, this line of code requires 4.5755 seconds.
>>> In numpy 1.9 and later, the same line of code requires more than 30.3709
>>> s execution time.
>> Hi Ecem, can you make sure to use the exact same random array as input to
>> np.cov when testing this? Also timing just the function call you're
>> interested in would be good; the creation of your 2-D array takes longer
>> than the np.cov call:
>> In : np.random.seed(1234)
>> In : x = np.random.randn(700,37000)
>> In : %timeit np.cov(x)
>> 1 loops, best of 3: 572 ms per loop
>> In : %timeit np.random.randn(700, 37000)
>> 1 loops, best of 3: 1.26 s per loop
>>> Has anyone else observed this problem and is there a known bugfix?
StackOverflow now also has documentation, and there already is a NumPy tag:
Not sure what, if anything, we want to do with this, nor how to avoid having
two different sources with the same information. Any thoughts?
( > <) This is Bunny. Copy Bunny into your signature and help him with his plans
for world domination.
See issue #7780: https://github.com/numpy/numpy/issues/7780
It would be good if the numpy.polynomial fit methods (e.g. numpy.polynomial.legendre.Legendre.fit) could return the covariance matrix of the fitted parameters, in the same way as numpy.polyfit does.
Let me know if there are any suggestions/concerns. Otherwise I’m happy to add the code and update the docs.
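For reference, numpy.polyfit already exposes this through its cov argument; a
rough sketch of what the analogous option on the class-based interface might
look like (the keyword name here is only an assumption, not a settled API):

import numpy as np

x = np.linspace(-1, 1, 50)
y = 1.0 + 2.0 * x + 0.1 * np.random.randn(50)

# Existing: polyfit returns the coefficient covariance matrix when cov=True.
coeffs, cov = np.polyfit(x, y, 1, cov=True)

# Hypothetical analogue for the polynomial classes:
# series, cov = numpy.polynomial.legendre.Legendre.fit(x, y, 1, cov=True)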