[Tutor] Parsing data from a set of files iteratively
s.charonis at gmail.com
Sun May 27 17:47:47 CEST 2012
Returning to the original problem: I have restructured my program from a
single long procedure into three functions, which do the following:
serialize_pipeline_model(f): takes a file as input, reads it, and parses
the numerical entries in the file into a list
write_to_binary(): writes the generated list to a binary file (pickles it)
read_binary(): unpickles the aggregate of the merged lists, which should be
a single list
The code goes like so:
import pickle, shelve

z_coords1 = []          # z_coords1 has been declared global
charged_groups = (lys_charged_group + arg_charged_group + his_charged_group
                  + asp_charged_group + glu_charged_group)
for i in range(len(charged_groups)):
    # ... parse the z-coordinates for this group into z_coords1 ...

print '\nPickling z-coordinates list'
# iteratively write each successively generated z_coords1 to a binary file
f = open("z_coords1.dat", "ab")
pickle.dump(z_coords1, f)
f.close()

print '\nUnpickling z-coordinates list'
# read the binary list back
f = open("z_coords1.dat", "rb")
z_coords1 = pickle.load(f)
f.close()

### LOOP OVER DIRECTORY
for f in ...:           # iterate over the ~500 input files (elided here)
    serialize_pipeline_model(f)

print '\n Z-VALUES FOR ALL CHARGED RESIDUES'
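One likely culprit, assuming write_to_binary() appends one pickle per input file in "ab" mode: the file then contains many pickled lists back to back, and a single pickle.load() only returns the first of them (an empty list, if the first dump happened before z_coords1 was filled). A minimal sketch of reading them all back until end-of-file (the file name matches yours; the demo values are invented):

```python
import os
import pickle

def write_to_binary(coords, path="z_coords1.dat"):
    # "ab" appends, so each call adds one more pickled list to the file
    with open(path, "ab") as f:
        pickle.dump(coords, f)

def read_binary(path="z_coords1.dat"):
    # A single pickle.load() returns only the FIRST list that was dumped;
    # keep loading until EOFError to recover and merge all of them.
    merged = []
    with open(path, "rb") as f:
        while True:
            try:
                merged.extend(pickle.load(f))
            except EOFError:
                break
    return merged

# demo with a fresh file, standing in for two of the ~500 real parses
if os.path.exists("z_coords1.dat"):
    os.remove("z_coords1.dat")
write_to_binary([1.2, 3.4])
write_to_binary([5.6])
print(read_binary())  # -> [1.2, 3.4, 5.6]
```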
The problem is that the list (z_coords1) comes back as an empty list. I
know the code works in a procedural format (z_coords1 is generated
correctly; the full script is too large to post here), so as a diagnostic
I included a print statement in the serialize function to verify that the
list is generated for each of the 500 files. Short of some intricacy with
the scopes of the program that I may be missing, I am not sure why this is
happening. Does anybody have any ideas? Many thanks for your time.
On Fri, May 18, 2012 at 7:23 PM, Spyros Charonis <s.charonis at gmail.com> wrote:
> Dear Python community,
> I have a set of ~500 files which I would like to run a script on. My
> script extracts certain information and
> generates several lists with items I need. For one of these lists, I need
> to combine the information from all
> 500 files into one super-list. Is there a way in which I can iteratively
> execute my script over all 500 files
> and get them to write the list I need into a new file? Many thanks in
> advance for your time.
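On the quoted question of running the script over all ~500 files and merging one super-list, a glob loop is one common approach. This is only a sketch: the directory layout, file extension, and one-float-per-line format are assumptions standing in for the real parser.

```python
import glob
import os
import tempfile

def parse_file(path):
    # stand-in for the real extraction logic: one float per line
    with open(path) as fh:
        return [float(line) for line in fh if line.strip()]

def build_super_list(pattern):
    # sorted() makes the merge order reproducible across runs
    super_list = []
    for path in sorted(glob.glob(pattern)):
        super_list.extend(parse_file(path))
    return super_list

# demo: two small files standing in for the ~500 real ones
d = tempfile.mkdtemp()
for name, body in [("a.txt", "1.0\n2.0\n"), ("b.txt", "3.0\n")]:
    with open(os.path.join(d, name), "w") as fh:
        fh.write(body)
merged = build_super_list(os.path.join(d, "*.txt"))
print(merged)  # -> [1.0, 2.0, 3.0]
```

The merged list can then be pickled once at the end with pickle.dump, which avoids the append-many-pickles issue entirely.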