[Numpy-discussion] ignore NAN in numpy.true_divide()
questions.anon at gmail.com
Mon Dec 5 17:29:43 EST 2011
Maybe I am asking the wrong question, or could go about this another way.
I have thousands of NumPy arrays to flick through; could I just identify
which arrays contain NaNs and, for now, ignore those arrays entirely? Is
there a simple way to do this?
Any feedback will be greatly appreciated.
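One simple approach, a minimal sketch assuming the arrays are already loaded into a Python list (the names `arrays` and `clean` are made up for illustration), is to test each array with `numpy.isnan(...).any()` and skip any array that contains a NaN:

```python
import numpy as np

# Stand-ins for the thousands of arrays to flick through
arrays = [
    np.array([1.0, 2.0, 3.0]),          # no NaNs
    np.array([1.0, np.nan, 3.0]),       # contains a NaN
]

# Keep only the arrays that contain no NaNs at all
clean = [a for a in arrays if not np.isnan(a).any()]

print(len(clean))  # only the first array survives
```

Note that `np.isnan` only works on floating-point data; for masked arrays built with `numpy.ma`, checking `a.mask.any()` would be the analogous test.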
On Thu, Dec 1, 2011 at 12:16 PM, questions anon <questions.anon at gmail.com> wrote:
> I am trying to calculate the mean across many netCDF files. I cannot use
> numpy.mean because there are too many files to concatenate and I end up
> with a memory error. The code below does what I need, but some of my
> arrays contain NaN values. Is there a way to ignore these somewhere in
> my code? I seem to face this problem often, so I would love a command
> that ignores blanks in my array before I continue on to the next
> processing step.
> Any feedback is greatly appreciated.
> for dir in glob.glob(MainFolder + '*/01/') + glob.glob(MainFolder + '*/02/') + glob.glob(MainFolder + '*/12/'):
>     for ncfile in glob.glob(dir + '*.nc'):
>         print netCDF_list
>         for filename in netCDF_list:
>             TSFC = MA.masked_values(TSFC, fillvalue)
>             for i in xrange(0, len(TSFC) - 1, 1):
>                 slice_counter += 1
>                 #print slice_counter
>                 try:
>                     running_sum = N.add(running_sum, TSFC[i])
>                 except NameError:
>                     print "Initiating the running total of my"
>                     running_sum = TSFC[i]
> TSFC_avg = N.true_divide(running_sum, slice_counter)
> print "the TSFC_avg is:", TSFC_avg
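If discarding whole arrays throws away too much data, the quoted running-sum loop can instead be made NaN-aware by keeping a per-cell count of valid values and dividing by that count at the end. A minimal sketch, using small made-up 2-D arrays in place of the TSFC slices read from the netCDF files (the names `slices`, `running_sum`, and `valid_count` are illustrative):

```python
import numpy as np

# Stand-ins for the TSFC slices read one file at a time
slices = [
    np.array([[1.0, 2.0], [3.0, np.nan]]),
    np.array([[3.0, np.nan], [5.0, 4.0]]),
]

running_sum = np.zeros_like(slices[0])
valid_count = np.zeros_like(slices[0])

for s in slices:
    valid = ~np.isnan(s)           # True where this slice has real data
    running_sum[valid] += s[valid] # accumulate only the valid cells
    valid_count += valid           # count valid observations per cell

# Per-cell mean over the valid values; cells with no data at all stay NaN
mean = np.where(valid_count > 0,
                running_sum / np.maximum(valid_count, 1),
                np.nan)

print(mean)  # [[2. 2.] [4. 4.]]
```

The memory footprint stays at two arrays regardless of how many files are processed, which matches the original goal of avoiding a full concatenation.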