downloading from links within a webpage

Joel Goldstick joel.goldstick at gmail.com
Tue Oct 14 16:57:22 CEST 2014


On Tue, Oct 14, 2014 at 10:54 AM, Chris Angelico <rosuav at gmail.com> wrote:
> On Wed, Oct 15, 2014 at 1:42 AM, Shiva
> <shivaji_tn at yahoo.com.dmarc.invalid> wrote:
>> Here is a small code that I wrote that downloads images from a webpage url
>> specified (you can limit to how many downloads you want). However, I am
>> looking at adding functionality and searching external links from this page
>> and downloading the same number of images from that page as well.(And
>> limiting the depth it can go to)
>>
>> Any ideas?  (I am using Python 3.4 & I am a beginner)
>
> First idea: Use wget, it does all this for you :)

You might look at the Requests and BeautifulSoup Python modules.  Requests
is easier to use than urllib for many tasks, and BeautifulSoup is an HTML
parser that is both easier and more robust than trying to extract links
with regexes.
>
> ChrisA
> --
> https://mail.python.org/mailman/listinfo/python-list
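To make the depth-limiting idea concrete, here is a minimal sketch of a
depth-limited crawl.  It uses only the standard library's html.parser and
urllib.parse so it runs on a bare Python 3.4 install; the same structure
carries over directly if you swap in Requests for fetching and
BeautifulSoup for parsing.  The function and parameter names (crawl,
fetch, max_images) are just illustrative choices, not anything from an
existing API, and the fetch callable is passed in rather than hard-coded
so the crawl logic can be exercised without network access.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkImageParser(HTMLParser):
    """Collect <img src> and <a href> URLs, resolved against a base URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.images = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'img' and attrs.get('src'):
            self.images.append(urljoin(self.base_url, attrs['src']))
        elif tag == 'a' and attrs.get('href'):
            self.links.append(urljoin(self.base_url, attrs['href']))


def crawl(url, depth, max_images, fetch, seen=None):
    """Return image URLs reachable from `url`, following links at most
    `depth` levels deep and taking at most `max_images` images per page.

    `fetch` is any callable mapping a URL to its HTML text -- e.g. a thin
    wrapper around urllib.request.urlopen or requests.get.  The `seen` set
    prevents revisiting pages that link to each other.
    """
    if seen is None:
        seen = set()
    if depth < 0 or url in seen:
        return []
    seen.add(url)
    parser = LinkImageParser(url)
    parser.feed(fetch(url))
    images = parser.images[:max_images]
    for link in parser.links:
        images += crawl(link, depth - 1, max_images, fetch, seen)
    return images
```

With depth=0 you get only the images on the starting page; depth=1 also
follows each link on that page one level down, and so on.  Downloading
the collected image URLs is then a separate loop over the returned list.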



-- 
Joel Goldstick
http://joelgoldstick.com
