[Tutor] SOLVED: Re: Trapping HTTP Authentication Failure
michael at trollope.org
Sat Sep 11 20:44:59 CEST 2010
It is bloody Winblows. The script works as designed and traps the 401
exception on my slackware box ... something in the implementation of
urllib2 on Windoze is broken. This has to be a known issue; I just
haven't seen it documented anywhere.
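For reference, the trapping pattern under discussion looks roughly like this (a minimal sketch with a hypothetical URL and credentials; under Python 3 urllib2 was folded into urllib.request, so the import is aliased):

```python
try:
    import urllib2  # Python 2, as used in the thread
    from urllib2 import HTTPError, URLError
except ImportError:
    import urllib.request as urllib2  # Python 3 merged urllib2 into urllib.request
    from urllib.error import HTTPError, URLError

# Hypothetical endpoint and credentials, for illustration only.
pwd_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
pwd_mgr.add_password(None, "http://example.com/api/", "user", "bogus-password")
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(pwd_mgr))

def getData(url):
    # A failed Basic-auth handshake should surface here as HTTPError 401
    # once urllib2's internal retries are exhausted.
    try:
        return opener.open(url, timeout=10).read()
    except HTTPError as e:   # 401, 404, ...
        print("HTTP error: %d" % e.code)
    except URLError as e:    # bad hostname, refused connection, ...
        print("URL error: %s" % e.reason)
    return None
```

On a working stack this prints "HTTP error: 401" for bad credentials; the Windows behaviour described above is that the exception never surfaces at all.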
On Sat, Sep 11, 2010 at 02:16:07PM -0400, Michael Powe wrote:
> On Sat, Sep 11, 2010 at 10:48:13AM -0400, Michael Powe wrote:
> > On Sat, Sep 11, 2010 at 02:25:24PM +0200, Evert Rol wrote:
> > > <snip />
> > > >> I'm not sure what you're exactly doing here, or what you're getting,
> > > >> but I did get curious and dug around urllib2.py. Apparently, there is
> > > >> a hardcoded 5 retries before the authentication really fails. So any
> > > >> stack trace would be the normal stack trace times 5. Not the 30 you
> > > >> mentioned, but annoying enough anyway (I don't see how it would fail
> > > >> for every element in the loop though. Once it raises an exception,
> > > >> the program basically ends).
> > > > It never throws an exception. Or, if it does, something about the way
> > > > I'm calling suppresses it. IOW, I can put in a bogus credential and
> > > > start the script and sit here for 5 minutes and see nothing. Then ^C
> > > > and I get a huge stacktrace that shows the repeated calls. After the
> > > > timeout on one element in the list, it goes to the next element, times
> > > > out, goes to the next.
> More experimentation revealed that one problem was testing the script
> in IDLE. IDLE does something to suppress the script failure in that
> particular case (IOW, it correctly raises HTTPError for things like
> '404' and URLError for things like a bad domain name).
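The distinction drawn here works because HTTPError is a subclass of URLError, so the catch order matters: an `except URLError` placed first would swallow the 404 case too. A quick check, using the Python 3 names for the same classes (the thread's code imports them from urllib2):

```python
try:
    from urllib2 import HTTPError, URLError       # Python 2
except ImportError:
    from urllib.error import HTTPError, URLError  # Python 3

# HTTPError carries a status code (401, 404, ...); URLError covers
# lower-level failures such as a bad domain name.
e = HTTPError("http://example.com/", 404, "Not Found", hdrs=None, fp=None)
print(isinstance(e, URLError), e.code)
```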
> When I run the script from the command line (cmd), it seemingly
> ignores the '5' retry limit. I added another catch block:
>     except Exception as e:
>         print "exception: ", e
> That prints out "exception: maximum recursion depth exceeded."
> I wonder if there is something hinky in Windows that is causing this
> to happen.
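The "maximum recursion depth exceeded" message can be reproduced without urllib2 at all; in Python 3 it arrives as RecursionError, which subclasses RuntimeError, so even a broad catch block like the one above sees it (a minimal sketch):

```python
def recurse():
    # Deliberately unbounded recursion, standing in for urllib2
    # re-issuing the same 401 request until the interpreter gives up.
    return recurse()

try:
    recurse()
except RuntimeError as e:  # RecursionError subclasses RuntimeError in Python 3
    message = str(e)       # typically "maximum recursion depth exceeded"

print("exception:", message)
```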
> > > Ok, now I had to try and recreate something myself. So my processData is:
> > > def processData(f):
> > >     global overview_url
> > >     overview_url = baseurl + f
> > >     getData(authHeaders)
> > >
> > > (f being a filename, out of a list of many). Other code same as yours.
> > > It definitely throws a 401 exception after 5 retries. No time-outs,
> > > no long waits. In fact, a time-out would, to me, indicate another
> > > problem (it still should throw an exception, though). So, unless
> > > you're catching the exception in processData somehow, I don't see
> > > where things could go wrong.
> > > I assume you have no problem with correct credentials or simply
> > > using a webbrowser?
> > Hello,
> > Yes, I can retrieve data without any problem. I can break the URL and
> > generate a 404 exception that is trapped and I can break it in other
> > ways that generate other types of exceptions. And trap them.
> > I went back and looked at the code in urllib2.py and I see the
> > retry counter and that it raises an HTTPError after 5 tries. But I
> > don't get anything back. If I just let the code run to completion, I
> > get sent back to the prompt. I put a try/catch in the method and I
> > already have one on the call in main.
> > > >> I don't know why it's hard-coded that way, and not just an option
> > > >> with a default of 5, but that's currently how it is (maybe someone
> > > >> else on this list knows?).
> > > >
> > > > I don't know, but even if I could set it to 1, I'm not helped unless
> > > > there's a way for me to make it throw an exception and exit the loop.
> > Actually, there's a comment in the code about why it is set to 5 --
> > it's arbitrary, and allows for the Password Manager to prompt for
> > credentials while not letting the request be reissued until 'recursion
> > depth is exceeded.'
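The counter being described works roughly like this (a simplified, hypothetical stand-in for the `retried` attribute in urllib2's auth handler, not the real class):

```python
class FakeAuthHandler:
    """Simplified model of urllib2's hardcoded retry limit."""
    MAX_RETRIES = 5  # hardcoded in urllib2, not configurable

    def __init__(self):
        self.retried = 0

    def http_error_401(self):
        # Each 401 response bumps the counter; past the limit the
        # handler gives up and lets the error propagate to the caller.
        self.retried += 1
        if self.retried > self.MAX_RETRIES:
            raise RuntimeError("401 retries exhausted")
        return "retrying with credentials"

h = FakeAuthHandler()
results = [h.http_error_401() for _ in range(5)]  # five silent retries
try:
    h.http_error_401()                            # the sixth raises
except RuntimeError as e:
    outcome = str(e)
print(len(results), outcome)
```

The failure mode reported above is consistent with this counter never tripping (or being reset) on the Windows build, so the request keeps being reissued until the recursion limit, not the retry limit, is hit.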
> > I guess I'll have to go back to ground zero and write a stub to
> > generate the error and then build back up to where it disappears.
> > Thanks.
> > mp
> > --
> > Michael Powe michael at trollope.org Naugatuck CT USA
> > It turns out that it will be easier to simply block the top offenders
> > manually; the rules for pattern matching are too arcane, obscure, and
> > difficult to program. -- t. pascal, comp.mail.misc, "procmail to
> > filter spam"
> > _______________________________________________
> > Tutor maillist - Tutor at python.org
> > To unsubscribe or change subscription options:
> > http://mail.python.org/mailman/listinfo/tutor
> Michael Powe michael at trollope.org Naugatuck CT USA
> I hate a fellow whom pride, or cowardice, or laziness drives into a
> corner, and who does nothing when he is there but sit and <growl>; let
> him come out as I do, and <bark>. -- Samuel Johnson
Michael Powe michael at trollope.org Naugatuck CT USA
"If you don't like the news, go out and make some of your own."
-- Scoop Nisker, KSAN-FM, San Francisco