On Wed, Dec 27, 2006 at 10:33:28AM +0200, Stefan van der Walt wrote:
On Sun, Dec 24, 2006 at 10:53:04PM -0400, A. M. Archibald wrote:
On 24/12/06, Matt Knox <mattknox_ca@hotmail.com> wrote:
Ok, thanks. So it sounds like there is no easy way to do this in general without doing some looping in Python (for the standard deviation, anyway) or writing some C code. I'm more interested in the general strategy for doing these kinds of calculations than in the specific examples themselves, although they are useful for making pretty charts sometimes.
Fancy indexing will do the job, more or less.
This is the general idea I was trying to get at:
from numpy import arange, newaxis, average, var

chunk = 10
data = arange(10000) / 10000.
# Each row of `indices` selects one window of length `chunk`.
indices = arange(len(data) - chunk + 1)[:, newaxis] + arange(chunk)[newaxis, :]
data_chunks = data[indices]

average(data_chunks, axis=1)   # moving average: one value per window
var(data_chunks, axis=1)       # moving variance

and so on...
The only problem is that it expands your data rather a lot (by a factor of chunk). Personally, I wouldn't worry about that until my application was written. I posted, some time ago, a function that allows easy construction of data_chunks as a view of the original array, that is, with no data copying. (Sorry, I don't remember how to find the thread.)
http://thread.gmane.org/gmane.comp.python.scientific.user/9570/focus=9579
Or even better: http://thread.gmane.org/gmane.comp.python.numeric.general/12365/focus=12367

Cheers
Stéfan
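
For reference, here is a minimal sketch of the no-copy view approach described above, using numpy.lib.stride_tricks.as_strided. The function name sliding_window and its details are illustrative assumptions, not necessarily the function posted in the linked threads:

import numpy as np
from numpy.lib.stride_tricks import as_strided

def sliding_window(data, chunk):
    # Return a (len(data) - chunk + 1, chunk) view of a 1-D contiguous
    # array in which row i is data[i:i+chunk]; no data is copied.
    data = np.ascontiguousarray(data)
    n = data.shape[0] - chunk + 1
    stride = data.strides[0]
    return as_strided(data, shape=(n, chunk), strides=(stride, stride))

data = np.arange(10000) / 10000.
windows = sliding_window(data, 10)
moving_avg = windows.mean(axis=1)   # same values as the fancy-indexing version
moving_std = windows.std(axis=1)

Because the rows are overlapping views into the same buffer, writing to the result can modify several windows at once, so it is best treated as read-only.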