Hi,
For a numpy array:
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
I do some calculation with 0, 1 and get a value = 2.5, then use this value
to repeat the same calculation with the next element, for example:
2.5, 2 and get a value = 3.1
3.1, 3 and get a value = 4.2
4.2, 4 and get a value = 5.1
....
.... and get a value = 8.5
8.5, 9 and get a value = 9.8
So I should be getting a new array like array([0, 2.5, 3.1, 4.2, 5.1, ...,
8.5, 9.8])
Is this where numpy or scipy can help?
Thanks
Vishal
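Such a running recurrence, where each output depends on the previous output, cannot be vectorised directly; a plain Python loop over the array is the usual approach. A minimal sketch, with `step` as a made-up placeholder for the actual calculation:

```python
import numpy as np

a = np.arange(10)

def step(prev, elem):
    # Placeholder rule; substitute the real calculation here.
    return (prev + elem) / 2.0 + 1.0

out = np.empty(len(a), dtype=float)
out[0] = a[0]                        # first value carried over unchanged
for i in range(1, len(a)):
    out[i] = step(out[i - 1], a[i])
```

The explicit loop is the clearest way to express the dependence of each result on the previous one.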

Hello,
I'm passing a numpy array into a C-extension. I would like my C-extension to
take ownership of the data and handle deallocating the memory when it is no
longer needed. (The data is large so I want to avoid unnecessarily copying
the data).
So my question is: what is the best way to ensure I'm using the correct
memory deallocator for the buffer, i.e. the deallocator matching whatever
allocator numpy used to allocate the array?
Thanks
Jeremy
Jeremy Lewi
Engineering Scientist
The Intellisis Corporation
jlewi(a)intellisis.com

Hello all,
I'm relatively new to numpy. I'm working with text images as 512x512 arrays. I would like to set to zero the elements of the array whose values fall within a specified range (e.g. 23 < x < 45). Any advice is much appreciated.
Sean
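For what it's worth, a boolean mask does exactly this; a sketch with a random stand-in for one of the 512x512 images:

```python
import numpy as np

img = np.random.randint(0, 256, size=(512, 512))

# Zero out every element strictly inside the range (23, 45).
img[(img > 23) & (img < 45)] = 0
```

The parentheses around each comparison matter, since & binds more tightly than the comparison operators.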

Hi,
Does anyone have an idea how the fft functions are implemented? Are they
pure Python, based on BLAS/LAPACK, or do they use FFTW?
I successfully used numpy.fft in 3D. I would like to know if I can
calculate a specific plane using numpy.fft.
I have in 3D:
r(x, y, z) = \sum_{h=0}^{N-1} \sum_{k=0}^{M-1} \sum_{l=0}^{O-1} f_{hkl}
\exp(-2\pi i (hx/N + ky/M + lz/O))
So for the plane, z is no longer independent.
I need to solve the system:
ax+by+cz+d=0
r(x, y, z) = \sum_{h=0}^{N-1} \sum_{k=0}^{M-1} \sum_{l=0}^{O-1} f_{hkl}
\exp(-2\pi i (hx/N + ky/M + lz/O))
Do you think it's possible to use numpy.fft for this?
Regards,
Pascal
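For integer sample points x, y, z, the sum above is exactly the forward 3-D DFT, so the full volume comes straight from numpy.fft.fftn, and an axis-aligned plane (e.g. z = z0) is then just a slice of the result. An arbitrary plane ax + by + cz + d = 0 has no such shortcut and would need an explicit sum over its sample points. A sketch with made-up sizes:

```python
import numpy as np

N, M, O = 4, 6, 8
f = np.random.rand(N, M, O)          # the f_{hkl} coefficients

# r(x, y, z) = sum_{h,k,l} f_{hkl} exp(-2*pi*i*(h*x/N + k*y/M + l*z/O))
r = np.fft.fftn(f)

# One axis-aligned plane is just a slice of the transformed volume:
z0 = 2
plane = r[:, :, z0]                  # shape (N, M)
```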

Hi,
In my setup.py, I have
from numpy.distutils.misc_util import Configuration

fflags = '-fdefault-real-8 -ffixed-form'
config = Configuration(
    'foo',
    parent_package=None,
    top_path=None,
    f2py_options="--f77flags='%s' --f90flags='%s'" % (fflags, fflags),
)
However, I am still getting values returned in 'real' variables as
dtype=float32. Am I doing something wrong?
Thanks,
David

> Message: 5
> Date: Sun, 28 Mar 2010 00:24:01 +0000
> From: Andrea Gavana <andrea.gavana(a)gmail.com>
> Subject: [Numpy-discussion] Interpolation question
> To: Discussion of Numerical Python <numpy-discussion(a)scipy.org>
> Message-ID:
> <d5ff27201003271724o6c82ec75v225d819c84140b46(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi All,
>
> I have an interpolation problem and I am having some difficulties
> in tackling it. I hope I can explain myself clearly enough.
>
> Basically, I have a whole bunch of 3D fluid flow simulations (close to
> 1000), and they are a result of different combinations of parameters.
> I was planning to use the Radial Basis Functions in scipy, but for the
> moment let's assume, to simplify things, that I am dealing only with
> one parameter (x). In 1000 simulations, this parameter x has 1000
> values, obviously. The problem is, the outcome of every single
> simulation is a vector of oil production over time (let's say 40
> values per simulation, one per year), and I would like to be able to
> interpolate my x parameter (1000 values) against all the simulations
> (1000x40) and get an approximating function that, given another x
> parameter (of size 1x1) will give me back an interpolated production
> profile (of size 1x40).
[I posted the following earlier but forgot to change the subject - it
appears as a new thread called "NumPy-Discussion Digest, Vol 42, Issue
85" - please ignore that thread]
Andrea, may I suggest an alternative to RBFs.
Realize that the 40 values in each row of Y are not independent of each
other (they will be correlated). First build a
principal component analysis (PCA) model on this 1000 x 40 matrix and
reduce it down to a 1000 x A matrix, called your scores matrix, where
A is the number of independent components. A is selected so that it
adequately summarizes Y without over-fitting and you will find A <<
40, maybe A = 2 or 3. There are tools, such as cross-validation, that
will help select a reasonable value of A.
Then you can relate your single column of X to these independent
columns of the scores matrix using a tool such as least squares: one
least squares model per column in the scores matrix. This works because
each column in the scores matrix is independent of the others (it
contains totally orthogonal information). But I would be surprised if
this works well enough, unless A = 1.
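A rough numpy-only sketch of the PCA-plus-least-squares idea (random data as a stand-in for X and Y; the PCA is done via the SVD, and A = 2 is an arbitrary choice — in practice A would come from cross-validation):

```python
import numpy as np

np.random.seed(0)
X = np.random.rand(1000, 1)               # one parameter per simulation
Y = np.random.rand(1000, 40)              # 40 production values each

# Autoscale Y: centre each column, then divide by its standard deviation.
Ym, Ys = Y.mean(axis=0), Y.std(axis=0)
Z = (Y - Ym) / Ys

# PCA via the SVD, keeping A components: T is the 1000 x A scores matrix.
A = 2
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
T = U[:, :A] * s[:A]                      # scores
P = Vt[:A]                                # loadings, A x 40

# One least-squares model per score column, regressing T on X.
Xd = np.hstack([np.ones((len(X), 1)), X]) # prepend an intercept column
B, _, _, _ = np.linalg.lstsq(Xd, T, rcond=None)

# Predict a full 40-value profile for a new x.
x_new = np.array([[1.0, 0.5]])            # [intercept, x]
y_new = (x_new @ B @ P) * Ys + Ym         # back to the original units
```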
But it sounds like you don't just have a single column in your
X-variables (you hinted that the single column was just for
simplification). In that case, I would build a projection to latent
structures (PLS) model that builds a single latent-variable
model that simultaneously models the X-matrix, the Y-matrix as well as
providing the maximal covariance between these two matrices.
If you need some references and an outline of code, then I can readily
provide these.
This is a standard problem with data from spectroscopic instruments
and with batch processes. They produce hundreds, sometimes thousands, of
samples per row. PCA and PLS are very effective at summarizing these
down to a much smaller number of independent columns, very often just
a handful, and relating them (i.e. building a predictive model) to
other data matrices.
I also just saw the suggestions of others to center the data by
subtracting the mean from each column in Y and scaling (by dividing
through by the standard deviation). This is a standard data
preprocessing step, called autoscaling, and it makes sense for any data
analysis, as you already discovered.
Hope that helps,
Kevin
> Something along these lines:
>
> import numpy as np
> from scipy.interpolate import Rbf
>
> # x.shape = (1000, 1)
> # y.shape = (1000, 40)
>
> rbf = Rbf(x, y)
>
> # New result with xi.shape = (1, 1) --> fi.shape = (1, 40)
> fi = rbf(xi)
>
>
> Does anyone have a suggestion on how I could implement this? Sorry if
> it sounds confused... Please feel free to correct any wrong
> assumptions I have made, or to propose other approaches if you think
> RBFs are not suitable for this kind of problem.
>
> Thank you in advance for your suggestions.
>
> Andrea.
>
> "Imagination Is The Only Weapon In The War Against Reality."
> http://xoomer.alice.it/infinity77/
>
> ==> Never *EVER* use RemovalGroup for your house removal. You'll
> regret it forever.
> http://thedoomedcity.blogspot.com/2010/03/removal-group-nightmare.html <==

Hi,
In an array I want to replace all NaNs with some number, say 100. I found
a method *nan_to_num* but it only replaces them with zero.
Any solution for this?
Thanks
Vishal
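A boolean mask over np.isnan does this directly, either into a new array or in place:

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0, np.nan])

# Non-destructive: build a new array with every NaN replaced by 100.
b = np.where(np.isnan(a), 100.0, a)

# In place, on the original array:
a[np.isnan(a)] = 100.0
```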

This one bit me again, and I am trying to understand it better so I can
anticipate when it will happen.
What I want to do is get rid of singleton dimensions, and index into the
last dimension with an array.
In [1]: import numpy as np
In [2]: x=np.zeros((10,1,1,1,14,1024))
In [3]: x[:,0,0,0,:,[1,2,3]].shape
Out[3]: (3, 10, 14)
Whoa! Indexing the last dimension with a list ends up moving that
dimension to the front!
In [4]: np.__version__
Out[4]: '1.3.0'
...
In [7]: x[:,:,:,:,:,[1,2,3]].shape
Out[7]: (10, 1, 1, 1, 14, 3)
This looks right...
In [8]: x[...,[1,2,3]].shape
Out[8]: (10, 1, 1, 1, 14, 3)
and this...
In [9]: x[...,[1,2,3]][:,0,0,0].shape
Out[9]: (10, 14, 3)
...
In [11]: x[:,0,0,0][...,[1,2,3]].shape
Out[11]: (10, 14, 3)
Either of the last 2 attempts above results in what I want, so I can do
that... I just need some help deciphering when and why the first thing
happens.
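What's happening is NumPy's rule for combining basic and advanced indexing: when integers are mixed with an array index they behave like 0-d advanced indices, and when the advanced indices are separated by a slice, NumPy cannot decide where the broadcast dimension belongs and moves it to the front of the result. A small demonstration:

```python
import numpy as np

x = np.zeros((10, 1, 1, 1, 14, 1024))

# Advanced indices (0, 0, 0 and [1, 2, 3]) separated by a slice:
# the broadcast dimension of length 3 is moved to the front.
s1 = x[:, 0, 0, 0, :, [1, 2, 3]].shape      # (3, 10, 14)

# A single contiguous advanced index: the new dimension stays in place.
s2 = x[..., [1, 2, 3]].shape                # (10, 1, 1, 1, 14, 3)
```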

Hello Amenity,
Spring is upon us and arrangements for SciPy 2010 are in full swing.
We're already nearing on some important deadlines for conference
participants: April 11th is the deadline for submitting an abstract
for a paper, and April 15th is the deadline for submitting a tutorial
proposal.
Help choose tutorials for SciPy 2010...
We set up a UserVoice page to brainstorm tutorial topics last week and
we already have some great ideas. The top ones at the moment are:
Effective multi-core programming with Cython and Python
Building your own app with Mayavi
High performance computing with Python
Propose your own or vote on the existing suggestions here.
...Or instruct a tutorial and cover your conference costs.
Did you know that we're awarding generous stipends to tutorial
instructors this year? So if you believe you could lead a tutorial,
by all means submit your proposal — soon! They're due April 15th.
Call for Papers Continues
Submitting a paper to present at SciPy 2010 is easy, so remember
to prepare one and have your friends and colleagues follow suit. Send
us your abstract before April 11th and let us know whether you'd like
to speak at the main conference or one of the specialized tracks.
Details here.
Have you registered?
Booking your tickets early should save you money — not to mention the
early registration prices you will qualify for if you register before
May 10th.
Best,
The SciPy 2010 Team
@SciPy2010 on Twitter
You are receiving this email because you have registered for the SciPy
2010 conference in Austin, TX.
Our mailing address is:
Enthought, Inc.
515 Congress Ave.
Austin, TX 78701
Copyright (C) 2010 Enthought, Inc. All rights reserved.

Hi,
Currently, when building numpy with python 3, the 2to3 conversion
happens before calling any distutils command. Is there a reason for
doing it the way it is done now?
I would like to make a proper numpy.distutils command for it, so that it
can be more finely controlled (in particular, using the -j option). It
would also avoid duplication in scipy.
cheers,
David