Python and remote sensing

Richard Barrett R.Barrett at ftel.co.uk
Mon Jun 24 12:23:55 CEST 2002


At 11:23 24/06/2002 +0200, Siegfried Gonzi wrote:
>Hi,
>
>
>one of our technicians asked me whether it is possible to do the following:
>
>
>Starting a script on Unix at, let's say, 2 am, calling a remote site 
>(http://www.whatever...), and having that site processed automatically 
>(the site holds 41 observing stations). Is it possible to start a 
>script (automatically), connect to an HTTP page, and retrieve information 
>from that HTML page?
>I answered that Perl or Python can do the job. But I am not sure whether 
>that is correct.
>
>I am sure the above topic has been asked 1000 times. Or is there a tool or 
>program available which can do the job better? I mean, our technician has 
>never heard of Python, and I am not interested in doing web-based (put in 
>the right term here) programming or the like.
>
>If Python can do it - okay; if not please say so (if Python is dead - 
>okay; if Python lives - okay)!
>
>
>Thanks,
>S. Gonzi

You can write a script in Python or Perl to be executed by the UNIX cron 
daemon at the time you specify (see the UNIX crontab command man page). 
There is nothing out of the ordinary about such scripts beyond their being 
listed in the appropriate user's crontab; just make sure the script does 
not prompt for input parameters.
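For example, a crontab entry to run such a script every day at 2 am might 
look like this (the script path here is made up; substitute your own):

```
# min  hour  day-of-month  month  day-of-week  command
0      2     *             *      *            /usr/bin/env python /home/observer/fetch_stations.py
```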

In the case of Python you can simply use urllib, urllib2 or httplib to 
retrieve data over HTTP; these modules are listed in order of increasing 
complexity, with urllib the simplest.

Try using urllib from the Python interactive prompt and you'll see how easy 
it is.

Probably the biggest issue will be if you have to use some form of 
authentication to get a response from the HTTP server. But even that isn't 
much of a problem.
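For the curious: HTTP basic authentication is nothing more than the 
username and password joined by a colon and base64-encoded into a request 
header. A minimal sketch of what goes over the wire (the credentials are 
made up; this is written for current Python, where base64 operates on 
bytes, whereas the 2.2-era library works on plain strings):

```python
import base64

# Hypothetical credentials, for illustration only
username = 'station_user'
password = 'secret'

# Basic authentication sends "username:password", base64-encoded,
# in the Authorization header of every request
credentials = '%s:%s' % (username, password)
token = base64.b64encode(credentials.encode('ascii')).decode('ascii')
auth_header = 'Authorization: Basic ' + token
print(auth_header)
```

The FancyURLopener subclass in the script below takes care of all this for 
you; it only needs the username and password.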

Perl almost certainly has equivalent modules if you prefer it to Python.

With Python 2.2.1, a basic script like the one that follows will do it even 
if you have to use HTTP basic authentication. You'll need to add some 
extras to handle errors when opening the URLs, to supply parameters if the 
URLs need to incorporate them, and so on. But basically it's really quite 
easy to do.


#! /usr/bin/env python

import urllib

# Hardwired HTTP basic authentication parameters
basic_auth_username = '....'
basic_auth_passwd = '....'
# URL we want data from
url = 'http://.......'
# Destination filename for data retrieved
destination_filename = '.....'

class myURLopener(urllib.FancyURLopener):

    # Called by FancyURLopener when the server demands credentials
    def prompt_user_passwd(self, host, realm):
        return basic_auth_username, basic_auth_passwd

# Install our opener so urllib.urlretrieve() uses it
urllib._urlopener = myURLopener()

# urlretrieve() returns a (local_filename, headers) tuple
stuff = urllib.urlretrieve(url)

# Copy the retrieved temporary file to the destination
open(destination_filename, 'w').write(open(stuff[0], 'r').read())
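If the URLs do need to incorporate parameters, the standard library will 
build a properly escaped query string for you. A sketch (the parameter 
names are made up; in Python 2 the function is urllib.urlencode, while in 
current Python it lives in urllib.parse):

```python
from urllib.parse import urlencode

# Hypothetical query parameters for one observing station
params = {'station': '41', 'format': 'csv'}

query = urlencode(params)  # e.g. 'station=41&format=csv'
full_url = 'http://www.example.com/data?' + query
print(full_url)
```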

More information about the Python-list mailing list