correct way to catch exception with Python 'with' statement
torriem at gmail.com
Fri Dec 2 12:32:57 EST 2016
On 12/01/2016 08:39 PM, Ned Batchelder wrote:
> On Thursday, December 1, 2016 at 7:26:18 PM UTC-5, DFS wrote:
>> How is it possible that the 'if' portion runs, then 44/100,000ths of a
>> second later my process yields to another process which deletes the
>> file, then my process continues.
> A modern computer is running dozens or hundreds (or thousands!) of
> processes "all at once". How they are actually interleaved on the
> small number of actual processors is completely unpredictable. There
> can be an arbitrary amount of time passing between any two processor
> instructions.
I'm not seeing DFS's original messages, so I'm assuming he's blocked on
the mailing list somehow (I have a vague memory of that).
But reading your quotes of his message reminds me that when I was a
system administrator, I saw this behavior all the time when using rsync
on a busy server. It was very common for files to disappear out from
under rsync. Often we just ignored it, but it could lead to incomplete
backups, depending on what disappeared. It was a great day when I moved
to using ZFS snapshots in conjunction with rsync to prevent this problem.
Anyway, the point is to confirm what you're saying: yes, this race
condition is a very real problem, and it actually happens in the real world.
He's obviously thinking far too narrowly about the issue, neglecting
the effects of processes other than his running Python program. It
could even be something as simple as trying to read a config file that
the user has open in an editor. Many editors move or rename the old
file, write the new version, and then delete the old one. If the read
is attempted after the old file has been renamed but before the new
version has been written, the file simply isn't there. This has
happened to me before, especially when the load average was high and
the disk had fallen behind.
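This failure mode is why checking for the file first and opening it
afterward can never be made safe; the robust pattern (and the answer to
the subject line) is to just open the file and catch the exception. A
minimal sketch, using a hypothetical "settings.conf" as the file name:

```python
def read_config(path):
    """Read a config file, tolerating its disappearance.

    There is no window between a check and the open: the open itself
    is the check, so a file deleted or renamed by another process at
    any moment is handled by the except clause.
    """
    try:
        with open(path) as f:      # the 'with' block only runs if open() succeeds
            return f.read()
    except FileNotFoundError:      # file was deleted/renamed out from under us
        return ""                  # fall back to empty defaults (illustrative)

config_text = read_config("settings.conf")
```

Note the try/except wraps the `with` statement itself rather than going
inside it; `open()` is what raises, and the `with` block never starts if
it fails.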