Hi all! I'm pretty new here, and I'm using my free time to learn a bit more about Python. I just discovered a new word: 'vectorizing'. Here is my question: I have a matrix (the size doesn't matter) and I want to apply some mathematical operations to it, like the mean, standard deviation, variance, etc.
I'll post my results so they can be useful for other newbies trying to understand what vectorizing means.
Mean over a 3x3 window using two for loops (a = matrix of interest, b = same size as a, filled with zeros at the start):
for i in range(1, row - 1):
    for j in range(1, col - 1):
        b[i][j] = numpy.mean([a[i-1][j-1], a[i][j-1], a[i+1][j-1],
                              a[i-1][j],   a[i][j],   a[i+1][j],
                              a[i-1][j+1], a[i][j+1], a[i+1][j+1]])
Very disappointing in terms of time. This is with vectorizing (a = matrix of interest, c = same size as a, filled with zeros at the start):
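(The original vectorized snippet isn't shown in the post; below is a sketch of the kind of slicing-based version being described. The test matrix `a` is my own example, not from the post.)

```python
import numpy as np

# Hypothetical test matrix; the original post does not show one.
a = np.arange(25, dtype=float).reshape(5, 5)
c = np.zeros_like(a)

# Each slice below is the 5x5 interior shifted by one step in some
# direction, so summing the nine shifts and dividing by 9 gives the
# 3x3 windowed mean for every interior element at once -- no loops.
c[1:-1, 1:-1] = (
    a[:-2, :-2]  + a[:-2, 1:-1]  + a[:-2, 2:]
    + a[1:-1, :-2] + a[1:-1, 1:-1] + a[1:-1, 2:]
    + a[2:, :-2]  + a[2:, 1:-1]  + a[2:, 2:]
) / 9.0
```

The border of `c` stays zero, matching the loop version, which also only fills the interior.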
I have seen that I get a big speed advantage! But my question is:
If I want to calculate the variance in a 3x3 window, can I use the vectorized method with 'numpy.var', or do I have to write out the variance formula explicitly?
I don't know if the question is understandable! I tried something like:
But it doesn't work, because I think numpy.var operates over the whole matrix and not just over the 3x3 window. Is that correct?
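That suspicion is right: with no `axis` argument, `numpy.var` reduces over the entire array and returns a single scalar. A quick check (the array `a` is my own example):

```python
import numpy as np

a = np.arange(25, dtype=float).reshape(5, 5)

# With no axis argument, numpy.var collapses the whole array
# to one number -- not a matrix of per-window variances.
whole = np.var(a)

# A windowed variance therefore has to be built per window,
# e.g. for the 3x3 window centred at (2, 2):
window = a[1:4, 1:4]
win_var = np.var(window)
```

So to get a variance *matrix* you need either a loop over windows or a vectorized construction like the one in the answer below.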
Thanks in advance for any answers!!!
I'm not sure if I am addressing your question on vectorizing directly, but consider the following code, which does (maybe?) what you're asking.
import scipy.signal
from numpy import ones, zeros, arange, array

# initialize output array
B = zeros(A.shape)

# calculate the windowed averages
filt = ones([3, 3]) / 9.0   # the weighting matrix
B[1:-1, 1:-1] = (scipy.signal.convolve2d(A, filt, mode='same'))[1:-1, 1:-1]

# initialize variance matrix for the interior elements
B_var = zeros(array(A.shape) - 2)

nr, nc = A.shape
for i in arange(-1, 2):
    for j in arange(-1, 2):
        B_var += (A[(1+i):(nr-1+i), (1+j):(nc-1+j)] - B[1:-1, 1:-1])**2

B_var /= 9.0   # variance of the interior elements (window size 9)
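Put together with a small test matrix (my addition; the snippet above assumes `A` is already defined), the whole thing runs like this and agrees with `numpy.mean`/`numpy.var` applied directly to one window:

```python
import numpy as np
from scipy.signal import convolve2d

A = np.arange(100, dtype=float).reshape(10, 10)  # hypothetical input
nr, nc = A.shape

# windowed mean via convolution
filt = np.ones((3, 3)) / 9.0
B = np.zeros(A.shape)
B[1:-1, 1:-1] = convolve2d(A, filt, mode='same')[1:-1, 1:-1]

# windowed variance: accumulate squared deviations over the
# nine shifted copies of the interior, then divide by 9
B_var = np.zeros(np.array(A.shape) - 2)
for i in np.arange(-1, 2):
    for j in np.arange(-1, 2):
        B_var += (A[(1+i):(nr-1+i), (1+j):(nc-1+j)] - B[1:-1, 1:-1]) ** 2
B_var /= 9.0
```

For a 10x10 input this gives the 8x8 variance matrix of the interior elements, which matches a direct per-window computation.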
This may or may not be the best or fastest way to get your answer. In particular, it may be just as fast to use the same loop structure for B as for B_var, since convolve2d is not known as a speed demon. On the whole, though, I usually prefer scipy/numpy functions when they're available.
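Along those lines, one fully loop-free alternative (my own sketch, not from the thread) uses the identity var(x) = E[x²] − (E[x])², so the windowed variance falls out of two convolutions:

```python
import numpy as np
from scipy.signal import convolve2d

A = np.arange(100, dtype=float).reshape(10, 10)  # hypothetical input
filt = np.ones((3, 3)) / 9.0

mean = convolve2d(A, filt, mode='same')       # E[x] per window
mean_sq = convolve2d(A * A, filt, mode='same')  # E[x^2] per window
var = mean_sq - mean ** 2                     # var = E[x^2] - (E[x])^2

# only the interior values correspond to full 3x3 windows
var_interior = var[1:-1, 1:-1]
```

One caveat: for data with a large mean relative to its spread, subtracting two large nearly-equal numbers this way can lose floating-point precision, so the shifted-differences approach above is numerically safer.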
If you need to do this calculation a lot, it would be pretty straightforward to parallelize in C or CUDA. You'd just need to take care to minimize memory accesses (for example, by computing both the average and the variance after a single fetch of the nine elements).