Hi all,<br><br>I'm learning web scraping with Python from the following tutorial:<br clear="all"><a href="http://www.packtpub.com/article/web-scraping-with-python">http://www.packtpub.com/article/web-scraping-with-python</a><br>
<br>To work through it, mechanize needs to be installed. I installed it with:<br><br><div style="margin-left: 40px;">sudo apt-get install python-mechanize<br></div><br>Following the tutorial, I tried the code below:<br>
<br><div style="margin-left: 40px;">import mechanize<br></div><div style="margin-left: 40px;">BASE_URL = "<a href="http://www.packtpub.com/article-network">http://www.packtpub.com/article-network</a>"<br></div><div style="margin-left: 40px;">
br = mechanize.Browser()<br></div><div style="margin-left: 40px;">data = br.open(BASE_URL).get_data()<br></div><br>and received the following error:<br><br>File "webscrap.py", line 4, in <module><br>    data = br.open(BASE_URL).get_data()<br>
  File "/usr/lib/python2.6/dist-packages/mechanize/_mechanize.py", line 209, in open<br>    return self._mech_open(url, data, timeout=timeout)<br>  File "/usr/lib/python2.6/dist-packages/mechanize/_mechanize.py", line 261, in _mech_open<br>
    raise response<br>mechanize._response.httperror_seek_wrapper: HTTP Error 403: request disallowed by robots.txt<br><br><br>Any ideas are welcome!<br><br>
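For context on what the error means: mechanize fetches the site's robots.txt before opening a URL and raises this 403 wrapper when the requested path is disallowed. The same check can be reproduced offline with the standard library's robots.txt parser — a minimal sketch, where the rules string is made up for illustration (the real file lives at http://www.packtpub.com/robots.txt). If the site's terms permit it, mechanize can also be told to skip this check via br.set_handle_robots(False).

```python
# Demonstrates the kind of robots.txt check mechanize performs internally.
# The rules below are hypothetical, for illustration only.
try:
    from urllib.robotparser import RobotFileParser  # Python 3
except ImportError:
    from robotparser import RobotFileParser         # Python 2

rules = """
User-agent: *
Disallow: /article-network
""".splitlines()

rp = RobotFileParser()
rp.modified()   # mark the rules as loaded so can_fetch() will evaluate them
rp.parse(rules)

# A disallowed path is what causes mechanize to raise the 403 wrapper.
print(rp.can_fetch("Python-urllib", "http://www.packtpub.com/article-network"))  # False
print(rp.can_fetch("Python-urllib", "http://www.packtpub.com/faq"))              # True
```

Whether bypassing robots.txt is appropriate depends on the site's terms of use, so check those before disabling the handler.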