[Tutor] Need help to print outputs to separate files and avoid some unwanted error messages which can stop my code at some mid-point
David L Neil
PyTutor at DancesWithMice.info
Thu Feb 27 17:42:17 EST 2020
On 28/02/20 9:53 AM, SATYABRATA DATTA wrote:
> There is some code
> import math
>
> class Vector():
>     def __init__(self, vx, vy, vz):
>         self.x = vx
>         self.y = vy
>         self.z = vz
>
>     def norm(self):
>         xx = self.x**2
>         yy = self.y**2
>         zz = self.z**2
>         return math.sqrt(xx + yy + zz)
>
> Now in the run file
> import math
> import numpy as np
>
> from Desktop import Test
>
> def random_range(n, min, max):
>     return min + np.random.random(n) * (max - min)
>
> filenumber = 0
> x = random_range(20, 2, 9)
> y = random_range(20, 2, 9)
> z = random_range(20, 2, 9)
>
> trial_args = np.stack((x, y, z), axis=-1)
> for x, y, z in trial_args:
>     model = Test.Vector(x, y, z)
>     if model.norm() > 5:
>         filenumber += 1
>         filename = str(filenumber)
>         with open(filename + '.txt', 'w') as f:
>             print(x, y, z, '=>', model.norm(), file=f)
>
> Now let us say I have a similar program which needs the output of two
> operations:
>     model.operation1  # produces some output which is used by operation2
>     model.operation2
> Now I want to print:
>     print(x, y, z, '=>', model.operation1, '\n', model.operation2, file=f)
> where f is open in my specified folder. How do I write that? (In my
> original program, model.operation1 and model.operation2, when given a
> print command, give their output in the Python shell.) But there happen
> to be errors at some specific points (x, y, z) which cause my program to
> stop at some intermediate point. So my trick is to print each output to a
> separate file, whether erroneous or correct, and then I can manually
> search for which outputs are OK for me.
Firstly, are you aware of Python's exception handling features? An
"exception" may be the result of some fault, but does not have to be an
"error". For example, 'raising an exception' (which is not a fault) is a
common way to 'escape' from multiple layers of program-logic, ie loops
within loops within loops...
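A minimal sketch of that 'escape' idiom (the exception class and data are
illustrative, not from the original post):

```python
# Raising an exception (not a fault!) to escape several nested loops at once.
class Found(Exception):
    """Signal that the search has succeeded - not an error."""
    pass

grid = [[1, 5, 9], [2, 7, 3]]  # sample data for the sketch

try:
    for row in grid:
        for value in row:
            if value == 7:
                raise Found  # jumps straight out of both loops
except Found:
    print('found 7 - escaped both loops at once')
```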
Accordingly, exception handling may be a good way to deal with those
"some specific points(x,y,z)". As well as reporting the 'discovery' (and
relevant source-data as the cause), it might also be possible to
continue processing the rest of the data-set!
Speaking personally, (from the users' perspective) I prefer to find
'all' such errors during a single run of the input phase, rather than to
be forced to run once, find one 'error', correct it, run again, find
another/later 'error', correct that, run again... This is efficiently
accomplished by 'trapping' each error, as above, and setting a flag. At
the end of all input processing, the flag can be checked, and if it is
showing 'data is good' status, then the main process/analysis can proceed.
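A sketch of that trap-and-flag pattern, using a norm() like the one in the
original post (the sample data and messages are illustrative):

```python
import math

def norm(x, y, z):
    return math.sqrt(x**2 + y**2 + z**2)

# One record is deliberately bad, to show the flag in action.
points = [(3, 4, 0), ("bad", 1, 2), (1, 2, 2)]

data_is_good = True
for x, y, z in points:
    try:
        value = norm(x, y, z)
    except TypeError as err:
        # Report the discovery and its cause, then keep processing.
        print('Bad record:', (x, y, z), '->', err)
        data_is_good = False

# Only proceed to the main analysis if every record passed.
if data_is_good:
    print('All input clean - proceed to analysis')
else:
    print('Fix the reported records and re-run')
```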
Suggested reading/WebRef should appear here*
Are you aware that we are permitted to have more than one file open at a
time? Previously we coded three steps: open the file, read-from/write-to
the file, and close the file. People quickly comprehended that because
each file had an identifier (a "file descriptor"), it was quite possible
to have multiple files (and fd-s), and therefore to output different
types of data according to the purpose of each file.
Recommend reading about files*
These days we have an elegant and powerful construct at our disposal:
the Context Manager (ie the with... construct). Interestingly, it is a
common misunderstanding that only one entity can be handled (by one
with... statement), at a time. In this case (which is perhaps the most
common example of using a Context Manager!), that thinking leads to the
self-imposed idea that one may only access a single file at a time.
(which may or may not be the case for you - just something I've noticed
with other learner-coders)
There is a version of with ... which allows for multiple 'contexts'
within a single code-block. (see manual) Accordingly, no reason why you
shouldn't code multiple file-objects to be used in a single code-context!
Recommend reading about Context Managers and the with... statement*
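For instance, one with-statement can manage two files at once - here
splitting 'good' and 'bad' records between them (file names are
illustrative):

```python
# Two context managers in a single with-statement: both files are
# guaranteed to be closed when the block ends.
with open('good.txt', 'w') as good, open('bad.txt', 'w') as bad:
    for n in [3, -1, 7]:
        target = good if n > 0 else bad  # route each record by a test
        print(n, file=target)
# both file-objects are closed automatically here
```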
That said, please review another discussion 'here' on the list, answered
a few minutes ago: "Logging all requests...".
For a while now, I have been making heavy use of the Python Standard
Library's logging facility in all of my statistical/big-data projects -
and not just for "logging" in its narrowest, ComSc, sense! The logging
library performs a lot of 'file management' functions on my behalf. I
will often have three 'logs' (better termed "output files"), recording
the activities of the three phases, eg data-cleaning/input/selection,
analysis, and reporting (yes, much of the time, even the 'output report'
has also been produced as if it were a "log"!)
In the case of the input- and analytical-phases, 'messages' are highly
uniform in format. Once a format is devised, the logger will happily
churn-out line-after-line!
The beauty of this approach, combined with some thoughts expressed above
(IMHO), is that once the data/selection process runs 'clean', that
entire log file can be comfortably ignored by the users, who are more
interested in checking the analysis against their hypothesis! Plus,
'printing' to a log (actually a disk file) is *much* faster than having
myriad debug-print statements cluttering the console (which is often
very slow compared to the time required for the actual statistical
analysis!)
Recommend reading about the logging library*
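A small sketch of the one-logger-per-phase idea (logger names, file names,
and the message format are illustrative choices, not prescribed by the
logging library):

```python
import logging

def make_logger(name, filename):
    """Build a logger that writes its messages to its own file."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(filename, mode='w')
    handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))
    logger.addHandler(handler)
    return logger

# One 'log' (better: output file) per phase of the run.
input_log = make_logger('input', 'input_phase.log')
analysis_log = make_logger('analysis', 'analysis_phase.log')

input_log.info('record accepted: x=1 y=2 z=2')
analysis_log.info('norm computed: 3.0')
```

Once the format string is devised, each logger happily churns out
line-after-line to its own file, and the logging library handles the file
management.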
Lastly, am not sure what happened when the code-example was copy-pasted
into the original post. All the extra asterisks (*), letter-fs, etc,
present a severe challenge to (old) eyes...
* Plus, I apologise for not providing Web.Refs - my 'broadband'
connection is operating at tens of KB/sec, so waiting for 'heavy' web
pages to display is more than mildly painful! I trust you will be able
to find your way around the comprehensive Python 'docs' documentation
web site!
--
Regards =dn