[New-bugs-announce] [issue21469] Hazards in robots.txt parser

Raymond Hettinger report at bugs.python.org
Sat May 10 18:55:10 CEST 2014

New submission from Raymond Hettinger:

* The can_fetch() method does not check whether read() has been called, so it returns false positives when no rules have been loaded yet.

* When read() is called, it fails to call modified(), so mtime() returns an incorrect result.  The user has to call modified() manually to update the mtime.

>>> from urllib.robotparser import RobotFileParser
>>> rp = RobotFileParser('http://en.wikipedia.org/robots.txt')
>>> rp.can_fetch('UbiCrawler', 'http://en.wikipedia.org/index.html')
True                    # false positive: no rules have been read yet
>>> rp.read()
>>> rp.can_fetch('UbiCrawler', 'http://en.wikipedia.org/index.html')
False                   # correct answer once the rules are actually loaded
>>> rp.mtime()
0                       # wrong: read() did not record a modification time
>>> rp.modified()
>>> rp.mtime()          # only now does mtime() return the current time

Suggested improvements:

1) Trigger internal calls to modified() every time the parser is modified using read() or add_entry().  That would ensure that mtime() actually reflects the modification time.

2) Raise an exception or return False whenever can_fetch() is called and the mtime() is zero (meaning that the parser has not been initialized with any rules).
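Both suggestions could be prototyped today in a small subclass.  The sketch below (the SafeRobotFileParser name is made up for illustration, not part of the stdlib) takes the return-False option from suggestion 2; the demo feeds rules through parse() to avoid a network fetch:

```python
from urllib.robotparser import RobotFileParser

class SafeRobotFileParser(RobotFileParser):
    """Hypothetical sketch of the two suggested fixes."""

    def read(self):
        # Suggestion 1: record the modification time whenever rules are read.
        super().read()
        self.modified()

    def can_fetch(self, useragent, url):
        # Suggestion 2: refuse access until the parser holds any rules.
        if self.mtime() == 0:
            return False
        return super().can_fetch(useragent, url)

# Demo without hitting the network: load rules via parse() directly.
rp = SafeRobotFileParser()
before = rp.can_fetch('UbiCrawler', 'http://example.com/index.html')
rp.parse(['User-agent: *', 'Disallow: /private/'])
rp.modified()   # parse() does not update mtime() either, so mark it by hand
after = rp.can_fetch('UbiCrawler', 'http://example.com/index.html')
```

With this subclass, `before` is False (no rules yet) and `after` is True (the loaded rules allow /index.html); read() behaves as before but mtime() stays in sync automatically.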

components: Library (Lib)
messages: 218226
nosy: rhettinger
priority: normal
severity: normal
status: open
title: Hazards in robots.txt parser
type: behavior
versions: Python 2.7, Python 3.4, Python 3.5

Python tracker <report at bugs.python.org>

More information about the New-bugs-announce mailing list