[Python-checkins] cpython (2.7): Issue 21469: Mitigate risk of false positives with robotparser.
raymond.hettinger
python-checkins at python.org
Tue May 13 07:19:01 CEST 2014
http://hg.python.org/cpython/rev/d4fd55278cec
changeset: 90682:d4fd55278cec
branch: 2.7
parent: 90656:670fb496f1f6
user: Raymond Hettinger <python at rcn.com>
date: Mon May 12 22:18:50 2014 -0700
summary:
Issue 21469: Mitigate risk of false positives with robotparser.
* Repair the broken link to norobots-rfc.txt.
* HTTP response codes >= 500 are now treated as a failed read rather than as
'not found'. 'Not found' means that we can assume the entire site is allowed;
a 5xx server error tells us nothing (the new mapping is restated in a small
sketch after the robotparser.py diff below).
* A successful read() or parse() updates the mtime (which is defined to be "the
time the robots.txt file was last fetched").
* The can_fetch() method returns False unless we've had a read() with a 2xx or
4xx response. This avoids false positives in the case where a user calls
can_fetch() before calling read() (see the short sketch after this list).
* I don't see any easy way to test this patch without hitting internet
resources that might change, or without using mock objects that wouldn't
provide much reassurance.
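For illustration only, here is a minimal sketch of the new behavior. The user
agent and URLs are made up, and parse() is used instead of read() so that
nothing is fetched over the network:

    import robotparser

    rp = robotparser.RobotFileParser()

    # With this change, can_fetch() answers False until the rules have
    # actually been loaded, instead of optimistically returning True.
    print rp.can_fetch("ExampleBot", "http://www.example.com/page")        # False
    print rp.mtime()                                                       # 0

    # parse() now calls modified(), so mtime() reflects when the rules
    # were last loaded; a successful read() has the same effect.
    rp.parse([
        "User-agent: *",
        "Disallow: /private/",
    ])
    print rp.mtime() > 0                                                   # True
    print rp.can_fetch("ExampleBot", "http://www.example.com/page")        # True
    print rp.can_fetch("ExampleBot", "http://www.example.com/private/x")   # False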
files:
Lib/robotparser.py | 14 ++++++++++++--
Misc/NEWS | 4 ++++
2 files changed, 16 insertions(+), 2 deletions(-)
diff --git a/Lib/robotparser.py b/Lib/robotparser.py
--- a/Lib/robotparser.py
+++ b/Lib/robotparser.py
@@ -7,7 +7,8 @@
2) PSF license for Python 2.2
The robots.txt Exclusion Protocol is implemented as specified in
- http://info.webcrawler.com/mak/projects/robots/norobots-rfc.html
+ http://www.robotstxt.org/norobots-rfc.txt
+
"""
import urlparse
import urllib
@@ -60,7 +61,7 @@
self.errcode = opener.errcode
if self.errcode in (401, 403):
self.disallow_all = True
- elif self.errcode >= 400:
+ elif self.errcode >= 400 and self.errcode < 500:
self.allow_all = True
elif self.errcode == 200 and lines:
self.parse(lines)
@@ -86,6 +87,7 @@
linenumber = 0
entry = Entry()
+ self.modified()
for line in lines:
linenumber += 1
if not line:
@@ -131,6 +133,14 @@
return False
if self.allow_all:
return True
+
+ # Until the robots.txt file has been read or found not
+ # to exist, we must assume that no url is allowable.
+ # This prevents false positives when a user erroneously
+ # calls can_fetch() before calling read().
+ if not self.last_checked:
+ return False
+
# search for given user agent matches
# the first match counts
parsed_url = urlparse.urlparse(urllib.unquote(url))
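For clarity, the new errcode handling in read() boils down to the mapping
sketched below; classify_robots_response() is a hypothetical helper written
only for this illustration and is not part of the patch or the module:

    def classify_robots_response(errcode):
        # Restates the branch added above, purely for illustration.
        if errcode in (401, 403):
            return "disallow_all"   # access restricted: assume nothing may be fetched
        elif 400 <= errcode < 500:
            return "allow_all"      # robots.txt not found: the whole site is allowed
        elif errcode == 200:
            return "parse"          # parse the fetched rules (when any lines came back)
        else:
            return "no decision"    # 5xx and friends: a failed read tells us nothing

    print classify_robots_response(404)   # allow_all
    print classify_robots_response(503)   # no decision (was allow_all before this change)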
diff --git a/Misc/NEWS b/Misc/NEWS
--- a/Misc/NEWS
+++ b/Misc/NEWS
@@ -52,6 +52,10 @@
- Issue #21306: Backport hmac.compare_digest from Python 3. This is part of PEP
466.
+- Issue #21469: Reduced the risk of false positives in robotparser by
+ checking to make sure that robots.txt has been read or does not exist
+ prior to returning True in can_fetch().
+
- Issue #21321: itertools.islice() now releases the reference to the source
iterator when the slice is exhausted. Patch by Anton Afanasyev.
--
Repository URL: http://hg.python.org/cpython