[Tutor] Downloading from http

Mark Kels mark.kels at gmail.com
Sat Feb 12 20:47:49 CET 2005


On Sat, 12 Feb 2005 09:25:10 -0500, Jacob S. <keridee at jayco.net> wrote:
> urllib, urllib2, or maybe httplib?
> 
>         urlopen( url[, data])
> 
>   Open the URL url, which can be either a string or a Request object.
>   data should be a string, which specifies additional data to send to the
> server. In HTTP requests, which are the only ones that support data, it
> should be a buffer in the format of application/x-www-form-urlencoded, for
> example one returned from urllib.urlencode().
> 
>   This function returns a file-like object with two additional methods:
> 
>     - geturl() -- return the URL of the resource retrieved
>     - info() -- return the meta-information of the page, as a
> dictionary-like object
>   Raises URLError on errors.
> 
>   Note that None may be returned if no handler handles the request (though
> the default installed global OpenerDirector uses UnknownHandler to ensure
> this never happens).
> 
> This is taken from the docs on urllib2. I think that's what you want, right?
> The tutorial called "Dive into python" goes into accessing web
> pages in a little more depth than the docs do, I think.  You can google for
> it, I believe.
> 
> HTH,
> Jacob
> 
I'm sorry, but I didn't understand a thing (maybe it's because of my
bad English, and maybe it's because I'm just dumb :). Anyway, can you
give me a code example or a link to a tutorial that covers this?
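
Is something like this little sketch what you mean? I tried to piece it
together from the docs you quoted, but I'm not sure it's right (the URL
is just a placeholder):

    import urllib2

    url = "http://www.example.com/somepage.html"   # placeholder URL

    try:
        # urlopen() returns a file-like object, as the quoted docs say
        page = urllib2.urlopen(url)
        data = page.read()             # the raw contents of the page
        print page.geturl()            # the URL that was actually retrieved
        print page.info()              # the page's headers / meta-information
        print len(data), "bytes downloaded"
    except urllib2.URLError, e:
        print "Couldn't download the page:", e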

Thanks a lot.

-- 
1. The day Microsoft makes something that doesn't suck is probably the
day they start making vacuum cleaners.
2. Unix is user friendly - it's just picky about its friends.
3. Documentation is like sex: when it is good, it is very, very good.
And when it is bad, it is better than nothing. - Dick Brandon

