losing-end-of-row values when manipulating CSV input
nawijn at gmail.com
Wed Jul 13 22:44:45 CEST 2011
On Jul 13, 10:22 pm, Neil Berg <nb... at atmos.ucla.edu> wrote:
> Hello all,
> I am having an issue with my attempts to accurately filter some data from a CSV file I am importing. I have attached both a sample of the CSV data and my script.
> The attached CSV file contains two rows and 27 columns of data. The first column is the station ID "BLS", the second column is the sensor number "4", the third column is the date, and the remaining 24 columns are hourly temperature readings.
> In my attached script, I read in row[3:] to extract just the temperatures, do a sanity check to make sure there are 24 values, remove any missing or "m" values, and then append the non-missing values into the "hour_list".
> Strangely, the first seven rows appear to be empty after reading in the CSV file, which is why I had to incorporate the if len(temps) == 24 check.
> But the real issue is that for days with no missing values, for example the second row of data, the length of the hour_list should be 24. My script, however, is returning 23. I think this is because the end-of-row values have a trailing "\", which makes them fail the isdigit() check, so they are dropped by my filter line. I've tried several ways to remove this trailing "\", but without success.
> Do you have any suggestions on how to fix this issue?
> Many thanks in advance,
> Neil Berg
Don't know if this is a double post (the previous post seems to be gone),
but val = val.rstrip('\\') should fix your problem. Note the double
backslash: inside a string literal a single backslash would escape the
closing quote, so the literal backslash character has to be written as '\\'.