Hello,
I'm happy to announce the first beta release of NumPy 1.9.0.
1.9.0 will be a new feature release supporting Python 2.6 - 2.7 and 3.2
- 3.4.
Due to low demand, Windows binaries for the beta are only available for
Python 2.7, 3.3 and 3.4.
Please try it and report any issues to the numpy-discussion mailing list
or on GitHub.
The 1.9 release will consist mainly of many small improvements and
bugfixes. The highlights are:
* Addition of __numpy_ufunc__ to allow overriding ufuncs in ndarray
subclasses. Please note that there are still some known issues with this
mechanism which we hope to resolve before the final release (e.g. #4753)
* Numerous performance improvements in various areas, most notably
indexing and operations on small arrays are significantly faster.
Indexing operations now also release the GIL.
* Addition of nanmedian and nanpercentile rounds out the nanfunction set.
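For illustration, a quick sketch of the new nanfunctions on invented toy data (NaNs are ignored rather than propagated):

```python
import numpy as np

a = np.array([[1.0, np.nan, 3.0],
              [4.0, 5.0, np.nan]])
print(np.nanmedian(a))          # median over the non-NaN values: 3.5
print(np.nanpercentile(a, 50))  # the 50th percentile equals the median: 3.5
print(np.nanmedian(a, axis=1))  # per-row medians: [2.  4.5]
```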
The release involves a lot of small changes that might affect some
applications; please read the release notes for the full details on all
changes:
https://github.com/numpy/numpy/blob/maintenance/1.9.x/doc/release/1.9.0-not…
Please also take special note of the future changes section which will
apply to the following release 1.10.0 and make sure to check if your
applications would be affected by them.
Source tarballs, Windows installers and release notes can be found at
https://sourceforge.net/projects/numpy/files/NumPy/1.9.0b1
Cheers,
Julian Taylor

NumPy seems to define BLAS and LAPACK functions with gfortran ABI:
https://github.com/numpy/numpy/blob/master/numpy/linalg/umath_linalg.c.src#…
extern float
FNAME(sdot)(int *n,
            float *sx, int *incx,
            float *sy, int *incy);
What happens on OS X where Accelerate Framework uses f2c ABI? SciPy has C
wrappers for Accelerate to deal with this. Should NumPy adopt those as
well?
And while we are at it, why are LAPACK subroutines declared to return int?
I thought a Fortran subroutine should map to a void function in C. From the
same file:
extern int
FNAME(dgesv)(int *n, int *nrhs,
             double a[], int *lda,
             int ipiv[],
             double b[], int *ldb,
             int *info);
Sturla

Hello everybody,
I have a problem with three minimize algorithms in SciPy. I need an
algorithm that supports minimization with bounds. My favorites are
'l-bfgs-b', 'slsqp' and 'tnc', but all of them fail while solving my
problem.
When I call the SLSQP method, it fails in slsqp.py at line 397 with:
"failed in converting 8th argument 'g' of _slsqp.slsqp to C/Fortran".
My Jacobian is the derivative of the jccurve formula (in the code below),
and I wrote it as a lambda. The variable "x" holds the parameters being
optimized, i.e. x[0]=a, x[1]=b, x[2]=n, x[3]=c.
In my case sigma and epsilon describe the flow curve of a metal, and I
fit the Johnson-Cook curve with this code. That means sigma is an array
with 128 elements, and epsilon is also an array with 128 elements. The
parameters a, b, n and c are scalars.
The solver l-bfgs-b fails in lbfgsb.py at line 264 in the call
g = fac(x, *args). That function is a reference to my Jacobian "jcjac",
and it gives me a TypeError: "'builtin_function_or_method' object has no
attribute '__getitem__'". I don't know what that means. The same failure
occurs with the TNC solver.
I hope I have explained my problem clearly and that somebody can help me.
Here is an extract from my code:
jcjac = lambda x, sig, eps, peps: array[(1 + x[3]*log(peps/1)),
    ((eps**x[2]) + (eps**x[2])*log(peps/1) + x[0]*log(peps/1)),
    (x[0]*log(peps/1)),
    (x[1]*(eps**x[2])*log(eps) + (eps**x[2])*log(eps)*log(peps/1))]

def __jccurve__(self, a, b, n, c, epsilon, pointepsilon):
    sig_jc = (a + b*epsilon**n) * (1 + c*numpy.log(pointepsilon/1))
    return sig_jc

def __residuals__(self, const, sig, epsilon, pointepsilon):
    return sig - self.__jccurve__(const[0], const[1], const[2], const[3],
                                  epsilon, pointepsilon)

p_guess = a1, b1, n1, c
max_a = a1 + (a1*0.9)
min_a = a1 - (a1*0.9)
min_a = 0
max_b = b1 + (b1*0.9)
min_b = b1 - (b1*0.9)
min_b = 0
max_n = n1 + (n1*10)
min_n = n1 - (n1*10)
min_n = 0
max_c = c + (c*100)
min_c = c - (c*100)
min_c = 0
optimize.minimize(self.__residuals__, p_guess,
                  args=(sigma, epsilon, pointepsilon), method='slsqp',
                  jac=jcjac, hess=None, hessp=None,
                  bounds=((min_a, max_a), (min_b, max_b),
                          (min_n, max_n), (min_c, max_c)))
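For what it's worth, a self-contained sketch of what a working setup might look like (the data and starting values below are invented for illustration, not taken from the original code). Two things stand out in the extract above: `array[...]` indexes the numpy.array *function* instead of calling it, which is exactly the "'builtin_function_or_method' object has no attribute '__getitem__'" TypeError; and minimize() expects a *scalar* objective, so the residual vector has to be reduced, e.g. to a sum of squares, with a matching 4-element gradient:

```python
import numpy as np
from scipy import optimize

def jccurve(a, b, n, c, eps, peps):
    # Johnson-Cook flow curve
    return (a + b * eps**n) * (1 + c * np.log(peps / 1))

def objective(x, sig, eps, peps):
    r = sig - jccurve(x[0], x[1], x[2], x[3], eps, peps)
    return np.sum(r**2)          # scalar, as minimize() requires

def gradient(x, sig, eps, peps):
    a, b, n, c = x
    r = sig - jccurve(a, b, n, c, eps, peps)
    f = 1 + c * np.log(peps / 1)
    # partial derivatives of the model w.r.t. a, b, n, c
    d = [f,
         eps**n * f,
         b * eps**n * np.log(eps) * f,
         (a + b * eps**n) * np.log(peps / 1)]
    # chain rule: d/dp sum(r**2) = -2 * sum(r * dmodel/dp)
    return np.array([-2.0 * np.sum(r * dk) for dk in d])

# invented stand-in for the 128-point measured flow curve
eps = np.linspace(0.05, 1.0, 128)
peps = 2.0
sig = jccurve(200.0, 300.0, 0.3, 0.02, eps, peps)

res = optimize.minimize(objective, x0=[150.0, 250.0, 0.5, 0.01],
                        args=(sig, eps, peps), method='SLSQP',
                        jac=gradient,
                        bounds=[(0, 400), (0, 600), (0, 1), (0, 1)])
```

Note that the gradient here matches the scalar sum-of-squares objective, not the per-point residuals; a shape mismatch between `jac` and the objective is a plausible cause of the "failed in converting 8th argument 'g'" error as well.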
best greets
Lars

On Wed, Jun 4, 2014 at 7:18 AM, Travis Oliphant <travis(a)continuum.io> wrote:
> Even relatively simple changes can have significant impact at this point.
> Nathaniel has laid out a fantastic list of great features. These are the
> kind of features I have been eager to see as well. This is why I have been
> working to fund and help explore these ideas in the Numba array object as
> well as in Blaze. Gnumpy, Theano, Pandas, and other projects also have
> useful tales to tell regarding a potential NumPy 2.0.
I think this is somewhat missing the main point of my message :-). I
was specifically laying out a list of features that we could start
working on *right now*, *without* waiting for the mythical "numpy
2.0".
> Ultimately, I do think it is time to talk seriously about NumPy 2.0, and
> what it might look like. I personally think it looks a lot more like a
> re-write, than a continuation of the modifications of Numeric that became
> NumPy 1.0. Right out of the gate, for example, I would make sure that
> NumPy 2.0 objects somehow used PyObject_VAR_HEAD so that they were
> variable-sized objects where the strides and dimension information was
> stored directly in the object structure itself instead of allocated
> separately (thus requiring additional loads and stores from memory). This
> would be a relatively simple change. But, it can't be done and preserve ABI
> compatibility. It may also, at this point, have impact on Cython code, or
> other code that is deeply-aware of the NumPy code-structure. Some of the
> changes that should be made will ultimately require a porting exercise for
> new code --- at which point why not just use a new project.
I'm not aware of any obstacles to packing strides/dimension/data into
the ndarray object right now, tomorrow if you like -- we've even
discussed doing this recently in the tracker. PyObject_VAR_HEAD in
particular seems... irrelevant? All it is is syntactic sugar for
adding an integer field called "ob_size" to a Python object struct,
plus a few macros for working with this field. We don't need or want
such a field anyway (for shape/strides it would be redundant with
ndim), and even if we did want such a field we could add it any time
without breaking ABI. And if someday we do discover some compelling
advantage to breaking ABI by rearranging the ndarray struct, then we
can do this with a bit of planning by using #ifdef's to make the
rearrangement coincide with a new Python release. E.g., people
building against python 3.5 get the new struct layout, people building
against 3.4 get the old, and in a few years we drop support for the
old. No compatibility breaks needed, never mind rewrites.
More generally: I wouldn't rule out "numpy 2.0" entirely, but we need
to remember the immense costs that a rewrite-and-replace strategy will
incur. Writing a new library is very expensive, so that's one cost.
But that cost is nothing compared to the costs of getting that new
library to the same level of maturity that numpy has already reached.
And those costs, in turn, are absolutely dwarfed by the transition
costs of moving the whole ecosystem from one foundation to a
different, incompatible one. And probably even these costs are small
compared to the opportunity costs -- all the progress that *doesn't*
get made in the mean time because fragmented ecosystems suck and make
writing code hard, and the best hackers are busy porting code instead
of writing awesome new stuff. I'm sure dynd is great, but we have to
be realistic: the hard truth is that even if it's production-ready
today, that only brings us a fraction of a fraction of a percent
closer to making it a real replacement for numpy.
Consider the python 2 to python 3 transition: Python 3 itself was an
immense amount of work for a large number of people, with intense
community scrutiny of the design. It came out in 2008. 6 years and
many many improvements later, it's maybe sort-of starting to look like
a plurality of users might start transitioning soonish? It'll be years
yet before portable libraries can start taking advantage of python 3's
new awesomeness. And in the mean time, the progress of the whole
Python ecosystem has been seriously disrupted: think of how much
awesome stuff we'd have if all the time that's been spent porting and
testing different packages had been spent on moving them forward
instead. We also have experience closer to home -- did anyone enjoy
the numeric/numarray->numpy transition so much they want to do it
again? And numpy will be much harder to replace than numeric --
numeric wasn't the most-imported package in the pythonverse ;-). And
my biggest worry is that if anyone even tries to convince everyone to
make this kind of transition, then if they're successful at all then
they'll create a substantial period where the ecosystem is a big
incompatible mess (and they might still eventually fail, providing no
long-term benefit to make up for the immediate costs). This scenario
is a nightmare for end-users all around.
By comparison, if we improve numpy incrementally, then we can in most
cases preserve compatibility totally, and in the rare cases where it's
necessary to break something we can do it mindfully, minimally, and
with a managed transition. (Downstream packages are already used to
handling a few limited API changes at a time, it's not that hard to
support both APIs during the transition period, etc., so this way we
bring the ecosystem with us.) Every incremental improvement to numpy
immediately benefits its immense user base, and gets feedback and
testing from that immense user base. And if we incrementally improve
interoperability between numpy and other libraries like dynd, then
instead of creating fragmentation, it will let downstream packages use
both in a complementary way, switching back and forth depending on
which provides more utility on a case-by-case basis. If this means
that numpy eventually withers away because users vote with their feet,
then great, that'd be compelling evidence that whatever they were
migrating to really is better, which I trust a lot more than any
guesses we make on a mailing list. The gradual approach does require
that we be grown-ups and hold our noses while refactoring out legacy
spaghetti and writing unaesthetic compatibility hacks. But if you
compare this to the alternative... the benefits of incrementalism are,
IMO, overwhelming.
The only exception is when two specific criteria are met: (1) there
are changes that are absolutely necessary for the ecosystem's long
term health (e.g., py3's unicode-for-mere-mortals and true division),
AND (2) it's absolutely impossible to make these changes incrementally
(unicode and true division first entered Python in 2000 and 2001,
respectively, and immense effort went into finding the smoothest
transition, so it's pretty clear that as painful as py3 has been,
there isn't really anything better).
What features could meet these two criteria in numpy's case? If I were
the numpy ecosystem and you tried to convince me to suffer through a
big-bang transition for the sake of PyObject_VAR_HEAD then I think I'd
be kinda unconvinced. And it only took me a few minutes to rattle off
a whole list of incremental changes that haven't even been tried yet.
-n
--
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org

I just downloaded numpy-1.8.1 and tried to build it. I uncommented:
[fftw]
libraries = fftw3
This is Fedora 20. fftw3 (and -devel) is installed as fftw.
I see nothing written to stderr during the build that has any reference to fftw.

Hi!
Maybe there's someone who can answer the following question at stackoverflow:
http://stackoverflow.com/q/24019099/1606022?sem=2
It is about extending meshgrid functionality to 2D input arrays...
Regards,
--
Marek

Dear all,
It seems that there is no percentile function for masked arrays in numpy
or scipy?
I checked numpy.percentile and scipy.percentile; it seems they support
only non-masked arrays. And there is no percentile function in
scipy.stats.mstats, so I guess I have to use np.percentile(arr.compressed()).
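A minimal sketch of that compressed() workaround, on invented data, in case it helps someone searching the archive:

```python
import numpy as np

# mask the outlier, then take the percentile of the remaining values only
arr = np.ma.array([1.0, 2.0, 3.0, 4.0, 100.0],
                  mask=[False, False, False, False, True])
p50 = np.percentile(arr.compressed(), 50)  # -> 2.5; the masked 100.0 is ignored
```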
Thanks for any comments.
Best,
Chao
--
please visit:
http://www.globalcarbonatlas.org/
***********************************************************************************
Chao YUE
Laboratoire des Sciences du Climat et de l'Environnement (LSCE-IPSL)
UMR 1572 CEA-CNRS-UVSQ
Batiment 712 - Pe 119
91191 GIF Sur YVETTE Cedex
Tel: (33) 01 69 08 29 02; Fax:01.69.08.77.16
************************************************************************************

Hi,
I'm sorry to ask this, I guess I should know, but is there any way in
distutils or numpy.distutils to check whether a compiler flag is valid
before building extensions?
I'm thinking of something like this, to check whether the compiler can
handle '-fopenmp':
have_openmp = check_compiler_flag('-fopenmp')
flags = ['-fopenmp'] if have_openmp else []
ext = Extension('myext', ['myext.c'],
                extra_compile_args=flags,
                extra_link_args=flags)
I guess this would have to go somewhere in the main setup() call in
order to pick up custom compilers on the command line and such?
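I don't know of a built-in helper for this, but as a sketch, one could shell out to the compiler directly. The name check_compiler_flag is hypothetical (matching the snippet above), and this assumes the compiler errors out on unknown flags rather than merely warning, as gcc does:

```python
import os
import shutil
import subprocess
import tempfile

def check_compiler_flag(flag, cc=None):
    """Return True if the C compiler accepts `flag` when compiling a
    trivial translation unit."""
    cc = cc or os.environ.get('CC') or shutil.which('cc') or shutil.which('gcc')
    if cc is None:
        return False
    with tempfile.TemporaryDirectory() as tmpdir:
        src = os.path.join(tmpdir, 'flagtest.c')
        obj = os.path.join(tmpdir, 'flagtest.o')
        with open(src, 'w') as f:
            f.write('int main(void) { return 0; }\n')
        result = subprocess.run([cc, flag, '-c', src, '-o', obj],
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        return result.returncode == 0

flags = ['-fopenmp'] if check_compiler_flag('-fopenmp') else []
```

This probes whatever compiler CC (or cc/gcc on PATH) points at; hooking it into a custom build_ext command would indeed be needed to honor a compiler selected on the setup.py command line.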
Cheers,
Matthew