[Tutor] Re: Tutor Testing for response from a Web site

Charlie Clark charlie@begeistert.org
Wed Apr 9 03:33:01 2003

> To All:
> How can I test the response of a server on a Web site to a request that 
> occurs when I attempt to access the Web site? I am using the Windows XP 
> operating system.
> For example, if I select a link to a Web site, the server can respond in 
> (I think) only 3 different ways:
> 1)  It can simply ignore the request.
> 2)  It can honor the request and I can therefore download
>      some data, or...
> 3)  It might send my PC some other response that I can 
>      test for in a Python program, that tells the program that the site 
>      is not accessible or available.  
> Here is the code I am using:
> _________________________________________________
>    import urllib
>    metURL = "http://isl715.nws.noaa.gov/tdl/forecast/tdl_etamet.txt" 
>    f_object = open("c:\\python_pgms\\plot_guidance\\met_data.txt", 'w')
>    try:
>       metall = urllib.urlopen(metURL).read()
>       f_object.write(metall)
>       f_object.close()
>    except:
>       print "Could not obtain data from Web"
>       f_object.close()

> As it stands now, the program will just say the data is old, when I 
> really should say something like "could not connect with the site."
> What additional test(s) can I perform to catch other responses from a 
> server, especially one in which the server will not honor my request?

Hi Henry,

First tip: use "/" rather than "\\" in your paths; it'll make your life 
easier and your code more portable, since Python makes sure the path is 
correct on Windows. Does anybody know why Microsoft / Seattle Computer 
Products decided to use "\" instead of "/" as the path separator in QDOS? 
Was it like this in CP/M?
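A quick check you can run (written against the temp directory rather than 
your c:\ path, so it works anywhere):

```python
import os
import tempfile

# Build a path with forward slashes; Python (and Windows itself, at the
# API level) accepts "/" as a separator, so this works on both platforms.
path = tempfile.gettempdir() + "/met_data.txt"

f = open(path, "w")
f.write("test line\n")
f.close()

print(os.path.exists(path))  # prints True
```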

I don't understand how your script can say that the data is old - you 
aren't comparing it with anything. The reason it's not telling you it 
can't connect is that your bare except clause catches everything. If 
Python can't open a URL it will raise an IOError ("[Errno socket error] 
host not found") for an invalid host, but not for a missing page - those 
just return empty strings when read. So catch IOError explicitly rather 
than using a bare except.

If I understand you correctly, you want to know whether a file is on the 
server and, if so, how old it is. To do this you need to look at the HTTP 
headers returned by the server, and urllib.urlopen() is too high level 
for this. Like many things in Python, urllib actually uses other modules 
to do the work for it, i.e. the socket module to make connections. 
Depending on what you want to do you might want to use socket directly, 
but you should also look at urllib2, which is supposed to be customisable 
for exactly this kind of thing.
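For instance, once you have the headers (urlopen() gives you them via 
.info()), you can turn the server's Last-Modified line into an age in 
seconds. A sketch - the header value here is made up, and not every 
server sends one:

```python
import calendar
import time

# Hypothetical Last-Modified value, as a server might send it;
# in practice you would read it out of urlopen(metURL).info()
last_modified = "Wed, 09 Apr 2003 01:15:00 GMT"

# Parse the RFC 1123 date format that HTTP headers use
parsed = time.strptime(last_modified, "%a, %d %b %Y %H:%M:%S %Z")

# timegm() treats the tuple as GMT and converts it to seconds since
# the epoch; subtracting from the current time gives the file's age
age = time.time() - calendar.timegm(parsed)
print("data is %.0f seconds old" % age)
```

If the age is larger than your threshold you can fall back to the "data 
is old" message; if urlopen() raised IOError instead, report that you 
could not connect.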