From python-checkins at python.org Wed Feb 1 01:03:24 2006 From: python-checkins at python.org (andrew.kuchling) Date: Wed, 1 Feb 2006 01:03:24 +0100 (CET) Subject: [Python-checkins] r42213 - sandbox/trunk/pycon/parse_sched.py Message-ID: <20060201000324.3AAF71E4004@bag.python.org> Author: andrew.kuchling Date: Wed Feb 1 01:03:22 2006 New Revision: 42213 Modified: sandbox/trunk/pycon/parse_sched.py Log: Patch from Duncan McGreggor: add iCal output to parse_sched Modified: sandbox/trunk/pycon/parse_sched.py ============================================================================== --- sandbox/trunk/pycon/parse_sched.py (original) +++ sandbox/trunk/pycon/parse_sched.py Wed Feb 1 01:03:22 2006 @@ -199,11 +199,44 @@ format_day(day_data, output) +def event (day, talk, output): + location, texttime, duration, title = talk + idx = title.replace('#', '') + try: + title = "%s (%s)" % (talks.talk_dict.get(int(idx)), title) + except ValueError: + pass + talk_title = talks.talk_dict.get(title.replace('#', '')) or title + date = list(day) + [ int(x) for x in texttime.split(':') ] + date = datetime.datetime(*date).strftime("%Y%m%dT%H%M00") + print >>output, 'BEGIN:VEVENT\n' + print >>output, 'DTSTART;TZID=US-Eastern:%s\n' % date + print >>output, 'LOCATION:Dallas, TX\n' + print >>output, 'SUMMARY: %s\n' % talk_title + print >>output, 'UID:%s@%s\n' % (date, 'pycon.org') + print >>output, 'SEQUENCE:1\n' + # XXX what are these two? + print >>output, 'DTSTAMP: %s\n' % date + print >>output, 'DURATION:PT%iM\n' % duration + print >>output, 'END:VEVENT\n\n' + +def output_ical (d, output): + print >>output, 'BEGIN:VCALENDAR\n' + print >>output, 'VERSION:2.0\n' + print >>output, 'PRODID:-//Conference Software//EN\n' + print >>output, 'CALSCALE:GREGORIAN\n' + print >>output, 'X-WR-CALNAME: PyCon 2006 Talks\n' + print >>output, '\n' + for day in d: + for talk in d[day]: + event(day, talk, output) + print >>output, 'END:VCALENDAR\r\n' + def main (): parser = optparse.OptionParser(usage="usage: %prog [options] < final-schedule") parser.add_option('--format', type='choice', - choices=['pickle', 'python', 'print', 'html'], + choices=['pickle', 'python', 'print', 'html', 'ical'], default='print', action="store", dest="format", help = "Select output format") @@ -221,6 +254,8 @@ cPickle.dump(d, sys.stdout) elif fmt == 'html': output_html(d, sys.stdout) + elif fmt == 'ical': + output_ical(d, sys.stdout) else: print >>sys.stderr, "Unknown format %r" % fmt sys.exit(1) From python-checkins at python.org Wed Feb 1 01:05:16 2006 From: python-checkins at python.org (andrew.kuchling) Date: Wed, 1 Feb 2006 01:05:16 +0100 (CET) Subject: [Python-checkins] r42214 - sandbox/trunk/pycon/parse_sched.py Message-ID: <20060201000516.8E85B1E4004@bag.python.org> Author: andrew.kuchling Date: Wed Feb 1 01:05:12 2006 New Revision: 42214 Modified: sandbox/trunk/pycon/parse_sched.py Log: Change title Modified: sandbox/trunk/pycon/parse_sched.py ============================================================================== --- sandbox/trunk/pycon/parse_sched.py (original) +++ sandbox/trunk/pycon/parse_sched.py Wed Feb 1 01:05:12 2006 @@ -225,7 +225,7 @@ print >>output, 'VERSION:2.0\n' print >>output, 'PRODID:-//Conference Software//EN\n' print >>output, 'CALSCALE:GREGORIAN\n' - print >>output, 'X-WR-CALNAME: PyCon 2006 Talks\n' + print >>output, 'X-WR-CALNAME: PyCon 2006 Events\n' print >>output, '\n' for day in d: for talk in d[day]: From python-checkins at python.org Wed Feb 1 01:08:16 2006 From: python-checkins at python.org 
(andrew.kuchling) Date: Wed, 1 Feb 2006 01:08:16 +0100 (CET) Subject: [Python-checkins] r42215 - sandbox/trunk/pycon/parse_sched.py Message-ID: <20060201000816.6C7921E4004@bag.python.org> Author: andrew.kuchling Date: Wed Feb 1 01:08:14 2006 New Revision: 42215 Modified: sandbox/trunk/pycon/parse_sched.py Log: Include room location Modified: sandbox/trunk/pycon/parse_sched.py ============================================================================== --- sandbox/trunk/pycon/parse_sched.py (original) +++ sandbox/trunk/pycon/parse_sched.py Wed Feb 1 01:08:14 2006 @@ -211,7 +211,8 @@ date = datetime.datetime(*date).strftime("%Y%m%dT%H%M00") print >>output, 'BEGIN:VEVENT\n' print >>output, 'DTSTART;TZID=US-Eastern:%s\n' % date - print >>output, 'LOCATION:Dallas, TX\n' + if location != '---': + print >>output, 'LOCATION: %s\n' % location print >>output, 'SUMMARY: %s\n' % talk_title print >>output, 'UID:%s@%s\n' % (date, 'pycon.org') print >>output, 'SEQUENCE:1\n' From python-checkins at python.org Wed Feb 1 01:23:07 2006 From: python-checkins at python.org (andrew.kuchling) Date: Wed, 1 Feb 2006 01:23:07 +0100 (CET) Subject: [Python-checkins] r42216 - sandbox/trunk/pycon/Makefile Message-ID: <20060201002307.63EDD1E4004@bag.python.org> Author: andrew.kuchling Date: Wed Feb 1 01:23:05 2006 New Revision: 42216 Modified: sandbox/trunk/pycon/Makefile Log: Add target to make iCal file Modified: sandbox/trunk/pycon/Makefile ============================================================================== --- sandbox/trunk/pycon/Makefile (original) +++ sandbox/trunk/pycon/Makefile Wed Feb 1 01:23:05 2006 @@ -2,3 +2,7 @@ html: ./parse_sched.py --format=html schedule.html +# Make iCal file and put it in web tree; only works on AMK's Mac. +WEBROOT=$$HOME/source/p/pydotorg/pydotorg/pycon/2006 +ical: + ./parse_sched.py --format=ical $(WEBROOT)/schedule.ics From python-checkins at python.org Wed Feb 1 01:52:56 2006 From: python-checkins at python.org (david.goodger) Date: Wed, 1 Feb 2006 01:52:56 +0100 (CET) Subject: [Python-checkins] r42217 - sandbox/trunk/pycon/sched2sessions.py Message-ID: <20060201005256.D45DB1E4004@bag.python.org> Author: david.goodger Date: Wed Feb 1 01:52:48 2006 New Revision: 42217 Added: sandbox/trunk/pycon/sched2sessions.py (contents, props changed) Log: process FinalSchedule data into reST sessions Added: sandbox/trunk/pycon/sched2sessions.py ============================================================================== --- (empty file) +++ sandbox/trunk/pycon/sched2sessions.py Wed Feb 1 01:52:48 2006 @@ -0,0 +1,107 @@ +#! /usr/bin/env python + +""" +Processes the schedule data retrieved by get.sh from +http://wiki.python.org/moin/PyCon2006/FinalSchedule into reST session tables +suitable for http://wiki.python.org/moin/PyCon2006/SessionChairs. +""" + +import sys +import calendar +import datetime +from pprint import pprint, pformat +import parse_sched +import talks + + +start_date = (2006, 02, 24) +end_date = (2006, 02, 26) +weekdays = 'Monday Tuesday Wednesday Thursday Friday Saturday Sunday'.split() +rooms = ('Ballroom A-E', 'Ballroom F-J', 'Preston Trail') + +# template strings: + +list_table = """ +.. 
list-table:: + :header-rows: 1 + :widths: 1 3 3 3 + + * - + - Ballroom A-E + - Ballroom F-J + - Preston Trail +""" + +time_start = """ + * - %s +""" + +session_start = u""" + - Session %s \u2014 Chair: UNASSIGNED + +""" + +empty_session = """ + - (no talks) +""" + +talk_line = """\ + * %s + (`%s `__) +""" + +def process_sessions(data): + dates = data.keys() + dates.sort() + all_sessions = [] + for date in dates: + if date < start_date: + continue + if date > end_date: + break + times = [] + all_sessions.append((date, times)) + for room, time, duration, talk in data[date]: + if room == '---': + sessions = None + if talk.startswith('#'): + talk = int(talk[1:]) + else: + continue + if sessions is None: + sessions = {} + times.append((time, sessions)) + sessions.setdefault(room, []).append(talk) + return all_sessions + +def format_sessions(sessions): + lines = [] + s = 0 + for date, times in sessions: + weekday = weekdays[datetime.date(*date).weekday()] + datestr = '%s, February %s' % (weekday, date[2]) + lines.append('\n\n%s\n%s\n%s' % (datestr, '=' * len(datestr), + list_table)) + for time, sessions in times: + lines.append(time_start % time) + for room in rooms: + s += 1 + if room in sessions: + lines.append(session_start % s) + for talk in sessions[room]: + title = talks.get_title(talk) + lines.append(talk_line % (title, talk, talk)) + else: + lines.append(empty_session) + return ''.join(lines) + +def main(): + lines = sys.stdin.readlines() + data = parse_sched.parse(lines) + sessions = process_sessions(data) + output = format_sessions(sessions) + sys.stdout.write(output.encode('utf-8')) + + +if __name__ == '__main__': + main() From python-checkins at python.org Wed Feb 1 03:01:39 2006 From: python-checkins at python.org (andrew.kuchling) Date: Wed, 1 Feb 2006 03:01:39 +0100 (CET) Subject: [Python-checkins] r42218 - sandbox/trunk/pycon/parse_sched.py Message-ID: <20060201020139.625271E4005@bag.python.org> Author: andrew.kuchling Date: Wed Feb 1 03:01:37 2006 New Revision: 42218 Modified: sandbox/trunk/pycon/parse_sched.py Log: Remove extra newlines Modified: sandbox/trunk/pycon/parse_sched.py ============================================================================== --- sandbox/trunk/pycon/parse_sched.py (original) +++ sandbox/trunk/pycon/parse_sched.py Wed Feb 1 03:01:37 2006 @@ -209,25 +209,25 @@ talk_title = talks.talk_dict.get(title.replace('#', '')) or title date = list(day) + [ int(x) for x in texttime.split(':') ] date = datetime.datetime(*date).strftime("%Y%m%dT%H%M00") - print >>output, 'BEGIN:VEVENT\n' - print >>output, 'DTSTART;TZID=US-Eastern:%s\n' % date + print >>output, 'BEGIN:VEVENT' + print >>output, 'DTSTART;TZID=US-Eastern:%s' % date if location != '---': - print >>output, 'LOCATION: %s\n' % location - print >>output, 'SUMMARY: %s\n' % talk_title - print >>output, 'UID:%s@%s\n' % (date, 'pycon.org') - print >>output, 'SEQUENCE:1\n' + print >>output, 'LOCATION: %s' % location + print >>output, 'SUMMARY: %s' % talk_title + print >>output, 'UID:%s@%s' % (date, 'pycon.org') + print >>output, 'SEQUENCE:1' # XXX what are these two? 
- print >>output, 'DTSTAMP: %s\n' % date - print >>output, 'DURATION:PT%iM\n' % duration - print >>output, 'END:VEVENT\n\n' + print >>output, 'DTSTAMP: %s' % date + print >>output, 'DURATION:PT%iM' % duration + print >>output, 'END:VEVENT\n' def output_ical (d, output): - print >>output, 'BEGIN:VCALENDAR\n' - print >>output, 'VERSION:2.0\n' - print >>output, 'PRODID:-//Conference Software//EN\n' - print >>output, 'CALSCALE:GREGORIAN\n' - print >>output, 'X-WR-CALNAME: PyCon 2006 Events\n' - print >>output, '\n' + print >>output, 'BEGIN:VCALENDAR' + print >>output, 'VERSION:2.0' + print >>output, 'PRODID:-//Conference Software//EN' + print >>output, 'CALSCALE:GREGORIAN' + print >>output, 'X-WR-CALNAME: PyCon 2006 Events' + print >>output for day in d: for talk in d[day]: event(day, talk, output) From python-checkins at python.org Wed Feb 1 03:32:33 2006 From: python-checkins at python.org (david.goodger) Date: Wed, 1 Feb 2006 03:32:33 +0100 (CET) Subject: [Python-checkins] r42219 - peps/trunk/pep-0355.txt Message-ID: <20060201023233.3ED1F1E4005@bag.python.org> Author: david.goodger Date: Wed Feb 1 03:32:32 2006 New Revision: 42219 Modified: peps/trunk/pep-0355.txt Log: update from B. Lindqvist Modified: peps/trunk/pep-0355.txt ============================================================================== --- peps/trunk/pep-0355.txt (original) +++ peps/trunk/pep-0355.txt Wed Feb 1 03:32:32 2006 @@ -18,6 +18,26 @@ and recommended. +Background + + The ideas expressed in this PEP are not recent, but have been + debated in the Python community for many years. Many have felt + that the API for manipulating file paths as offered in the os.path + module is inadequate. The first proposal for a Path object was + raised by Just van Rossum on python-dev in 2001 [2]. In 2003, + Jason Orendorff released version 1.0 of the "path module" which + was the first public implementation that used objects to represent + paths [3]. + + The path module quickly became very popular and numerous attempts + were made to get the path module included in the Python standard + library; [4], [5], [6], [7]. + + This PEP summarizes the the ideas and suggestions people have + expressed about the path module and proposes that a modified + version should be included in the standard library. + + Motivation Dealing with filesystem paths is a common task in any programming @@ -180,19 +200,23 @@ def islink(self): ... def ismount(self): ... def samefile(self, other): ... [1] - def getatime(self): ... - def getmtime(self): ... - def getctime(self): ... - def getsize(self): ... + def atime(self): ... + """Last access time of the file.""" + def mtime(self): ... + """Last-modified time of the file.""" + def ctime(self): ... + """ + Return the system's ctime which, on some systems (like + Unix) is the time of the last change, and, on others (like + Windows), is the creation time for path. + """ + def size(self): ... def access(self, mode): ... [1] def stat(self): ... def lstat(self): ... def statvfs(self): ... [1] def pathconf(self, name): ... [1] - # Filesystem properties for path. - atime, mtime, ctime, size - # Methods for manipulating information about the filesystem # path. 
def utime(self, times) => None @@ -318,10 +342,10 @@ islink() os.path.islink() ismount() os.path.ismount() samefile() os.path.samefile() - getatime()/atime os.path.getatime() - getctime()/ctime os.path.getctime() - getmtime()/mtime os.path.getmtime() - getsize()/size os.path.getsize() + atime() os.path.getatime() + ctime() os.path.getctime() + mtime() os.path.getmtime() + size() os.path.getsize() cwd() os.getcwd() access() os.access() stat() os.stat() @@ -371,6 +395,18 @@ the Path object. These changes obsoleted the problematic joinpath() method which was removed. + * The methods and the properties getatime()/atime, + getctime()/ctime, getmtime()/mtime and getsize()/size duplicated + each other. These methods and properties have been merged to + atime(), ctime(), mtime() and size(). The reason they are not + properties instead, is because there is a possibility that they + may change unexpectedly. The following example is not + guaranteed to always pass the assertion: + + p = Path("foobar") + s = p.size() + assert p.size() == s + Open Issues @@ -392,12 +428,31 @@ * The name obviously has to be either "path" or "Path," but where should it live? In its own module or in os? - * Path implements two ways to retrieve some filesystem - information. Both the properties atime, mtime, ctime and size - and the getters getatime(), getmtime(), getctime() and - getsize(). This is clearly not optimal, the information should - *either* be retrieved using properties or getters. Both methods - have advantages and disadvantages. + * Due to Path subclassing either str or unicode, the following + non-magic, public methods are availible on Path objects: + + capitalize(), center(), count(), decode(), encode(), + endswith(), expandtabs(), find(), index(), isalnum(), + isalpha(), isdigit(), islower(), isspace(), istitle(), + isupper(), join(), ljust(), lower(), lstrip(), replace(), + rfind(), rindex(), rjust(), rsplit(), rstrip(), split(), + splitlines(), startswith(), strip(), swapcase(), title(), + translate(), upper(), zfill() + + On python-dev it has been argued whether this inheritance is + sane or not. Most persons debating said that most string + methods doesn't make sense in the context of filesystem paths -- + they are just dead weight. The other position, also argued on + python-dev, is that inheriting from string is very convenient + because it allows code to "just work" with Path objects without + having to be adapted for them. + + One of the problems is that at the Python level, there is no way + to make an object "string-like enough," so that it can be passed + to the builtin function open() (and other builtins expecting a + string or buffer), unless the object inherits from either str or + unicode. Therefore, to not inherit from string requires changes + in CPython's core. The functions and modules that this new module is trying to replace (os.path, shutil, fnmatch, glob and parts of os) are @@ -472,24 +527,22 @@ [1] Method is not guaranteed to be availible on all platforms. 
- Related articles & threads: - - * http://mail.python.org/pipermail/python-dev/2005-June/054439.html - - * http://mail.python.org/pipermail/python-list/2005-July/291071.html + [2] "(idea) subclassable string: path object?", van Rossum, 2001 + http://mail.python.org/pipermail/python-dev/2001-August/016663.html - * http://mail.python.org/pipermail/python-list/2003-July/174289.html + [3] "path module v1.0 released", Orendorff, 2003 + http://mail.python.org/pipermail/python-announce-list/2003-January/001984.html - * "(idea) subclassable string: path object?", van Rossum, 2001 - http://mail.python.org/pipermail/python-dev/2001-August/016663.html + [4] "Some RFE for review", Birkenfeld, 2005 + http://mail.python.org/pipermail/python-dev/2005-June/054438.html - * "path module v1.0 released", Orendorff, 2003 - http://mail.python.org/pipermail/python-announce-list/2003-January/001984.html + [5] "path module", Orendorff, 2003 + http://mail.python.org/pipermail/python-list/2003-July/174289.html - * http://wiki.python.org/moin/PathClass + [6] "PRE-PEP: new Path class", Roth, 2004 + http://mail.python.org/pipermail/python-list/2004-January/201672.html - * "PRE-PEP: new Path class", - http://mail.python.org/pipermail/python-list/2004-January/201672.html + [7] http://wiki.python.org/moin/PathClass Copyright From martin at v.loewis.de Wed Feb 1 07:21:02 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 01 Feb 2006 07:21:02 +0100 Subject: [Python-checkins] r42219 - peps/trunk/pep-0355.txt In-Reply-To: <20060201023233.3ED1F1E4005@bag.python.org> References: <20060201023233.3ED1F1E4005@bag.python.org> Message-ID: <43E0534E.5030706@v.loewis.de> david.goodger wrote: > update from B. Lindqvist > > Modified: peps/trunk/pep-0355.txt If such edits are frequent, it would also be possible to give the PEP author temporary commit privileges. He should be told to modify only his own PEP(s), and they should be revoked once the PEP goes into some final state, or becomes dormant. Regards, Martin From python-checkins at python.org Wed Feb 1 21:04:48 2006 From: python-checkins at python.org (andrew.kuchling) Date: Wed, 1 Feb 2006 21:04:48 +0100 (CET) Subject: [Python-checkins] r42220 - sandbox/trunk/pycon/parse_sched.py Message-ID: <20060201200448.688D31E4005@bag.python.org> Author: andrew.kuchling Date: Wed Feb 1 21:04:47 2006 New Revision: 42220 Modified: sandbox/trunk/pycon/parse_sched.py Log: Correction from Jeffrey Harris of OSAF: remove extra space after DTSTAMP; include VTIMEZONE section, and correct timezone to Central Modified: sandbox/trunk/pycon/parse_sched.py ============================================================================== --- sandbox/trunk/pycon/parse_sched.py (original) +++ sandbox/trunk/pycon/parse_sched.py Wed Feb 1 21:04:47 2006 @@ -210,14 +210,14 @@ date = list(day) + [ int(x) for x in texttime.split(':') ] date = datetime.datetime(*date).strftime("%Y%m%dT%H%M00") print >>output, 'BEGIN:VEVENT' - print >>output, 'DTSTART;TZID=US-Eastern:%s' % date + print >>output, 'DTSTART;TZID=US-Central:%s' % date if location != '---': print >>output, 'LOCATION: %s' % location print >>output, 'SUMMARY: %s' % talk_title print >>output, 'UID:%s@%s' % (date, 'pycon.org') print >>output, 'SEQUENCE:1' # XXX what are these two? 
- print >>output, 'DTSTAMP: %s' % date + print >>output, 'DTSTAMP:%s' % date print >>output, 'DURATION:PT%iM' % duration print >>output, 'END:VEVENT\n' @@ -228,6 +228,23 @@ print >>output, 'CALSCALE:GREGORIAN' print >>output, 'X-WR-CALNAME: PyCon 2006 Events' print >>output + print >>output, """BEGIN:VTIMEZONE +TZID:US/Central +LAST-MODIFIED:20060201T013214Z +BEGIN:STANDARD +DTSTART:20051030T070000 +TZOFFSETTO:-0600 +TZOFFSETFROM:+0000 +TZNAME:CST +END:STANDARD +BEGIN:DAYLIGHT +DTSTART:20060402T010000 +TZOFFSETTO:-0500 +TZOFFSETFROM:-0600 +TZNAME:CDT +END:DAYLIGHT +END:VTIMEZONE""" + for day in d: for talk in d[day]: event(day, talk, output) From python-checkins at python.org Wed Feb 1 22:32:05 2006 From: python-checkins at python.org (thomas.wouters) Date: Wed, 1 Feb 2006 22:32:05 +0100 (CET) Subject: [Python-checkins] r42221 - python/trunk/Objects/longobject.c Message-ID: <20060201213205.AAAA71E4005@bag.python.org> Author: thomas.wouters Date: Wed Feb 1 22:32:04 2006 New Revision: 42221 Modified: python/trunk/Objects/longobject.c Log: As discussed on python-dev, silence three gcc-4.0.x warnings, using assert() to protect against actual uninitialized usage. Objects/longobject.c: In function ?PyLong_AsDouble?: Objects/longobject.c:655: warning: ?e? may be used uninitialized in this function Objects/longobject.c: In function ?long_true_divide?: Objects/longobject.c:2263: warning: ?aexp? may be used uninitialized in this function Objects/longobject.c:2263: warning: ?bexp? may be used uninitialized in this function Modified: python/trunk/Objects/longobject.c ============================================================================== --- python/trunk/Objects/longobject.c (original) +++ python/trunk/Objects/longobject.c Wed Feb 1 22:32:04 2006 @@ -652,7 +652,7 @@ double PyLong_AsDouble(PyObject *vv) { - int e; + int e = -1; double x; if (vv == NULL || !PyLong_Check(vv)) { @@ -662,6 +662,9 @@ x = _PyLong_AsScaledDouble(vv, &e); if (x == -1.0 && PyErr_Occurred()) return -1.0; + /* 'e' initialized to -1 to silence gcc-4.0.x, but it should be + set correctly after a successful _PyLong_AsScaledDouble() call */ + assert(e >= 0); if (e > INT_MAX / SHIFT) goto overflow; errno = 0; @@ -2260,7 +2263,7 @@ { PyLongObject *a, *b; double ad, bd; - int aexp, bexp, failed; + int failed, aexp = -1, bexp = -1; CONVERT_BINOP(v, w, &a, &b); ad = _PyLong_AsScaledDouble((PyObject *)a, &aexp); @@ -2270,6 +2273,10 @@ Py_DECREF(b); if (failed) return NULL; + /* 'aexp' and 'bexp' were initialized to -1 to silence gcc-4.0.x, + but should really be set correctly after sucessful calls to + _PyLong_AsScaledDouble() */ + assert(aexp >= 0 && bexp >= 0); if (bd == 0.0) { PyErr_SetString(PyExc_ZeroDivisionError, From python-checkins at python.org Wed Feb 1 23:05:31 2006 From: python-checkins at python.org (david.goodger) Date: Wed, 1 Feb 2006 23:05:31 +0100 (CET) Subject: [Python-checkins] r42222 - sandbox/trunk/pycon/parse_sched.py Message-ID: <20060201220531.C69FB1E401B@bag.python.org> Author: david.goodger Date: Wed Feb 1 23:05:31 2006 New Revision: 42222 Modified: sandbox/trunk/pycon/parse_sched.py Log: * The grammar of RFC 2445 specifies CRLF line endings, so I added a wrapper for sys.stdout. * Added unique IDs (UID values), based on a combination of date, time, & room of the event. * Removed extra spaces after "LOCATION:" and "SUMMARY:" labels. 
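The wrapper mentioned in the first log point can be sketched in a few lines. This is only an illustration: the class name below is hypothetical (the class actually added by this checkin appears in the diff that follows), and it assumes the callers only ever invoke write(), which is all the script's print >> statements need:

    import sys

    class CRLFWriter:
        # Translate '\n' to '\r\n' on output, as RFC 2445 requires for iCalendar data.
        def __init__(self, stream):
            self.stream = stream
        def write(self, text):
            self.stream.write(text.replace('\n', '\r\n'))

    # usage: print >>CRLFWriter(sys.stdout), 'BEGIN:VCALENDAR'
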
Modified: sandbox/trunk/pycon/parse_sched.py ============================================================================== --- sandbox/trunk/pycon/parse_sched.py (original) +++ sandbox/trunk/pycon/parse_sched.py Wed Feb 1 23:05:31 2006 @@ -16,6 +16,8 @@ line_pat = re.compile('[|]{2}.*[|]{2}\s*$') talk_pat = re.compile('#(\d+)') +datestamp_utc = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ") + def parse (lines): lines = map(string.strip, lines) d = {} @@ -198,6 +200,18 @@ print >>output, date.strftime('
%A, %B %d %Y
') format_day(day_data, output) +location_codes = {'---': 'P', # plenary, or break/lunch + 'Ballroom': 'BR', + 'Ballroom A-E': 'BR-AE', + 'Ballroom F-J': 'BR-FJ', + 'Bent Tree': 'BT', + 'Bent Tree I': 'BT1', + 'Bent Tree II': 'BT2', + 'Bent Tree III': 'BT3', + 'Preston Trail': 'PT', + 'Preston Trail I': 'PT1', + 'Preston Trail II': 'PT2', + 'Preston Trail III': 'PT3',} def event (day, talk, output): location, texttime, duration, title = talk @@ -212,12 +226,14 @@ print >>output, 'BEGIN:VEVENT' print >>output, 'DTSTART;TZID=US-Central:%s' % date if location != '---': - print >>output, 'LOCATION: %s' % location - print >>output, 'SUMMARY: %s' % talk_title - print >>output, 'UID:%s@%s' % (date, 'pycon.org') + print >>output, 'LOCATION:%s' % location + print >>output, 'SUMMARY:%s' % talk_title + # Unique ID: + print >>output, 'UID:%s-%s at pycon.org' % (date, location_codes[location]) + # revision number, per VEVENT: print >>output, 'SEQUENCE:1' - # XXX what are these two? - print >>output, 'DTSTAMP:%s' % date + # date stamp must be UTC (ends with 'Z'): + print >>output, 'DTSTAMP:%s' % datestamp_utc print >>output, 'DURATION:PT%iM' % duration print >>output, 'END:VEVENT\n' @@ -243,12 +259,22 @@ TZOFFSETFROM:-0600 TZNAME:CDT END:DAYLIGHT -END:VTIMEZONE""" - +END:VTIMEZONE +""" for day in d: for talk in d[day]: event(day, talk, output) - print >>output, 'END:VCALENDAR\r\n' + print >>output, 'END:VCALENDAR' + + +class CRLF_LineEndingWrapper: + + def __init__(self, stream): + self.stream = stream + + def write(self, text): + self.stream.write(text.replace('\n', '\r\n')) + def main (): parser = optparse.OptionParser(usage="usage: %prog [options] < final-schedule") @@ -273,7 +299,7 @@ elif fmt == 'html': output_html(d, sys.stdout) elif fmt == 'ical': - output_ical(d, sys.stdout) + output_ical(d, CRLF_LineEndingWrapper(sys.stdout)) else: print >>sys.stderr, "Unknown format %r" % fmt sys.exit(1) From python-checkins at python.org Wed Feb 1 23:35:30 2006 From: python-checkins at python.org (brett.cannon) Date: Wed, 1 Feb 2006 23:35:30 +0100 (CET) Subject: [Python-checkins] r42223 - peps/trunk/pep-0352.txt Message-ID: <20060201223530.D41801E4005@bag.python.org> Author: brett.cannon Date: Wed Feb 1 23:35:30 2006 New Revision: 42223 Modified: peps/trunk/pep-0352.txt Log: Add mention of Michael Hudson's patch for new-style exceptions as a basis for any patch to implement this PEP. Also wrap a few lines that went past the 70 character fill length for Emacs. Modified: peps/trunk/pep-0352.txt ============================================================================== --- peps/trunk/pep-0352.txt (original) +++ peps/trunk/pep-0352.txt Wed Feb 1 23:35:30 2006 @@ -190,13 +190,14 @@ specifically listed. Deprecation of features in Python 2.9 is optional. This is because it -is not known at this time if Python 2.9 (which is slated to be the last version -in the 2.x series) will actively deprecate features that will not be in 3.0 . -It is conceivable that no deprecation warnings will be used in 2.9 since -there could be such a difference between 2.9 and 3.0 that it would make 2.9 too -"noisy" in terms of warnings. Thus the proposed deprecation warnings for -Python 2.9 will be revisited when development of that version begins to -determine if they are still desired. +is not known at this time if Python 2.9 (which is slated to be the +last version in the 2.x series) will actively deprecate features that +will not be in 3.0 . 
It is conceivable that no deprecation warnings +will be used in 2.9 since there could be such a difference between 2.9 +and 3.0 that it would make 2.9 too "noisy" in terms of warnings. Thus +the proposed deprecation warnings for Python 2.9 will be revisited +when development of that version begins to determine if they are still +desired. * Python 2.5 @@ -231,6 +232,16 @@ - drop ``args`` and ``__getitem__`` +Implementation +============== + +An initial patch to make exceptions new-style classes has been +authored by Michael Hudson can be found at SF patch #1104669 +[#SF_1104669]_. While it does not implement all points mentioned in +this PEP, it will most likely be used as a basis for the final path +to implement this PEP. + + References ========== @@ -240,6 +251,9 @@ .. [#hierarchy-good] python-dev Summary for 2004-08-01 through 2004-08-15 http://www.python.org/dev/summary/2004-08-01_2004-08-15.html#an-exception-is-an-exception-unless-it-doesn-t-inherit-from-exception +.. [#SF_1104669] SF patch #1104669 (new-style exceptions) + http://www.python.org/sf/1104669 + Copyright ========= From python-checkins at python.org Thu Feb 2 02:01:06 2006 From: python-checkins at python.org (andrew.kuchling) Date: Thu, 2 Feb 2006 02:01:06 +0100 (CET) Subject: [Python-checkins] r42224 - sandbox/trunk/pycon/parse_sched.py Message-ID: <20060202010106.023D71E4005@bag.python.org> Author: andrew.kuchling Date: Thu Feb 2 02:01:05 2006 New Revision: 42224 Modified: sandbox/trunk/pycon/parse_sched.py Log: Make timezone name match the VTIMEZONE name Modified: sandbox/trunk/pycon/parse_sched.py ============================================================================== --- sandbox/trunk/pycon/parse_sched.py (original) +++ sandbox/trunk/pycon/parse_sched.py Thu Feb 2 02:01:05 2006 @@ -224,7 +224,7 @@ date = list(day) + [ int(x) for x in texttime.split(':') ] date = datetime.datetime(*date).strftime("%Y%m%dT%H%M00") print >>output, 'BEGIN:VEVENT' - print >>output, 'DTSTART;TZID=US-Central:%s' % date + print >>output, 'DTSTART;TZID=US/Central:%s' % date if location != '---': print >>output, 'LOCATION:%s' % location print >>output, 'SUMMARY:%s' % talk_title From python-checkins at python.org Thu Feb 2 03:11:12 2006 From: python-checkins at python.org (david.goodger) Date: Thu, 2 Feb 2006 03:11:12 +0100 (CET) Subject: [Python-checkins] r42225 - sandbox/trunk/pycon/parse_sched.py Message-ID: <20060202021112.CCB661E4002@bag.python.org> Author: david.goodger Date: Thu Feb 2 03:11:09 2006 New Revision: 42225 Modified: sandbox/trunk/pycon/parse_sched.py Log: datestamp is required but irrelevant: avoid unnecessary checkin diffs (after the next one) Modified: sandbox/trunk/pycon/parse_sched.py ============================================================================== --- sandbox/trunk/pycon/parse_sched.py (original) +++ sandbox/trunk/pycon/parse_sched.py Thu Feb 2 03:11:09 2006 @@ -16,8 +16,6 @@ line_pat = re.compile('[|]{2}.*[|]{2}\s*$') talk_pat = re.compile('#(\d+)') -datestamp_utc = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ") - def parse (lines): lines = map(string.strip, lines) d = {} @@ -232,8 +230,9 @@ print >>output, 'UID:%s-%s at pycon.org' % (date, location_codes[location]) # revision number, per VEVENT: print >>output, 'SEQUENCE:1' - # date stamp must be UTC (ends with 'Z'): - print >>output, 'DTSTAMP:%s' % datestamp_utc + # date stamp is required, must be UTC (ends with 'Z'), + # but the information itself is irrelevant: + print >>output, 'DTSTAMP:20060201T000000Z' print >>output, 'DURATION:PT%iM' % 
duration print >>output, 'END:VEVENT\n' From python-checkins at python.org Thu Feb 2 22:58:56 2006 From: python-checkins at python.org (fredrik.lundh) Date: Thu, 2 Feb 2006 22:58:56 +0100 (CET) Subject: [Python-checkins] r42226 - python/trunk/Doc/ref/ref6.tex Message-ID: <20060202215856.621B11E4005@bag.python.org> Author: fredrik.lundh Date: Thu Feb 2 22:58:55 2006 New Revision: 42226 Modified: python/trunk/Doc/ref/ref6.tex Log: SF patch #1421726 fixed typo in language reference Modified: python/trunk/Doc/ref/ref6.tex ============================================================================== --- python/trunk/Doc/ref/ref6.tex (original) +++ python/trunk/Doc/ref/ref6.tex Thu Feb 2 22:58:55 2006 @@ -875,7 +875,7 @@ Python statements which is then executed (unless a syntax error occurs). If it is an open file, the file is parsed until \EOF{} and executed. If it is a code object, it is simply executed. In all -cases, the code that's executed is expected to be be valid as file +cases, the code that's executed is expected to be valid as file input (see section~\ref{file-input}, ``File input''). Be aware that the \keyword{return} and \keyword{yield} statements may not be used outside of function definitions even within the context of code passed From python-checkins at python.org Fri Feb 3 05:41:25 2006 From: python-checkins at python.org (barry.warsaw) Date: Fri, 3 Feb 2006 05:41:25 +0100 (CET) Subject: [Python-checkins] r42227 - in python/branches/release23-maint/Lib/email: _parseaddr.py test/test_email.py Message-ID: <20060203044125.234421E4005@bag.python.org> Author: barry.warsaw Date: Fri Feb 3 05:41:24 2006 New Revision: 42227 Modified: python/branches/release23-maint/Lib/email/_parseaddr.py python/branches/release23-maint/Lib/email/test/test_email.py Log: parsedate_tz(): Return a 1 in the tm_yday field so that the value is acceptable to Python 2.4's time.strftime(). This fix mirrors the behavior in email 3.0. That field is documented as being "not useable" so it might as well not be buggy too . Add a test for this behavior and update a few tests that were expecting a 0 in this field. After committing I will run the entire Python 2.3 test suite to ensure this doesn't break any Python tests. Modified: python/branches/release23-maint/Lib/email/_parseaddr.py ============================================================================== --- python/branches/release23-maint/Lib/email/_parseaddr.py (original) +++ python/branches/release23-maint/Lib/email/_parseaddr.py Fri Feb 3 05:41:24 2006 @@ -1,4 +1,4 @@ -# Copyright (C) 2002 Python Software Foundation +# Copyright (C) 2002-2006 Python Software Foundation """Email address parsing code. 
@@ -123,8 +123,7 @@ else: tzsign = 1 tzoffset = tzsign * ( (tzoffset/100)*3600 + (tzoffset % 100)*60) - tuple = (yy, mm, dd, thh, tmm, tss, 0, 0, 0, tzoffset) - return tuple + return yy, mm, dd, thh, tmm, tss, 0, 1, 0, tzoffset def parsedate(data): Modified: python/branches/release23-maint/Lib/email/test/test_email.py ============================================================================== --- python/branches/release23-maint/Lib/email/test/test_email.py (original) +++ python/branches/release23-maint/Lib/email/test/test_email.py Fri Feb 3 05:41:24 2006 @@ -1949,12 +1949,21 @@ def test_parsedate_no_dayofweek(self): eq = self.assertEqual eq(Utils.parsedate_tz('25 Feb 2003 13:47:26 -0800'), - (2003, 2, 25, 13, 47, 26, 0, 0, 0, -28800)) + (2003, 2, 25, 13, 47, 26, 0, 1, 0, -28800)) def test_parsedate_compact_no_dayofweek(self): eq = self.assertEqual eq(Utils.parsedate_tz('5 Feb 2003 13:47:26 -0800'), - (2003, 2, 5, 13, 47, 26, 0, 0, 0, -28800)) + (2003, 2, 5, 13, 47, 26, 0, 1, 0, -28800)) + + def test_parsedate_acceptable_to_time_functions(self): + eq = self.assertEqual + timetup = Utils.parsedate('5 Feb 2003 13:47:26 -0800') + eq(int(time.mktime(timetup)), 1044470846) + eq(int(time.strftime('%Y', timetup)), 2003) + timetup = Utils.parsedate_tz('5 Feb 2003 13:47:26 -0800') + eq(int(time.mktime(timetup[:9])), 1044470846) + eq(int(time.strftime('%Y', timetup[:9])), 2003) def test_parseaddr_empty(self): self.assertEqual(Utils.parseaddr('<>'), ('', '')) From python-checkins at python.org Fri Feb 3 05:44:52 2006 From: python-checkins at python.org (barry.warsaw) Date: Fri, 3 Feb 2006 05:44:52 +0100 (CET) Subject: [Python-checkins] r42228 - in python/trunk/Lib/email: _parseaddr.py test/test_email.py Message-ID: <20060203044452.999D81E4005@bag.python.org> Author: barry.warsaw Date: Fri Feb 3 05:44:52 2006 New Revision: 42228 Modified: python/trunk/Lib/email/_parseaddr.py python/trunk/Lib/email/test/test_email.py Log: parsedate_tz(): Minor cleanup. Port from Python 2.3/email 2.5: Add a test for the tm_yday field is 1 in the return of parsedate(). Modified: python/trunk/Lib/email/_parseaddr.py ============================================================================== --- python/trunk/Lib/email/_parseaddr.py (original) +++ python/trunk/Lib/email/_parseaddr.py Fri Feb 3 05:44:52 2006 @@ -1,4 +1,4 @@ -# Copyright (C) 2002-2004 Python Software Foundation +# Copyright (C) 2002-2006 Python Software Foundation # Contact: email-sig at python.org """Email address parsing code. 
@@ -117,8 +117,7 @@ else: tzsign = 1 tzoffset = tzsign * ( (tzoffset//100)*3600 + (tzoffset % 100)*60) - tuple = (yy, mm, dd, thh, tmm, tss, 0, 1, 0, tzoffset) - return tuple + return yy, mm, dd, thh, tmm, tss, 0, 1, 0, tzoffset def parsedate(data): Modified: python/trunk/Lib/email/test/test_email.py ============================================================================== --- python/trunk/Lib/email/test/test_email.py (original) +++ python/trunk/Lib/email/test/test_email.py Fri Feb 3 05:44:52 2006 @@ -2104,6 +2104,15 @@ eq(Utils.parsedate_tz('5 Feb 2003 13:47:26 -0800'), (2003, 2, 5, 13, 47, 26, 0, 1, 0, -28800)) + def test_parsedate_acceptable_to_time_functions(self): + eq = self.assertEqual + timetup = Utils.parsedate('5 Feb 2003 13:47:26 -0800') + eq(int(time.mktime(timetup)), 1044470846) + eq(int(time.strftime('%Y', timetup)), 2003) + timetup = Utils.parsedate_tz('5 Feb 2003 13:47:26 -0800') + eq(int(time.mktime(timetup[:9])), 1044470846) + eq(int(time.strftime('%Y', timetup[:9])), 2003) + def test_parseaddr_empty(self): self.assertEqual(Utils.parseaddr('<>'), ('', '')) self.assertEqual(Utils.formataddr(Utils.parseaddr('<>')), '') From python-checkins at python.org Fri Feb 3 06:41:36 2006 From: python-checkins at python.org (barry.warsaw) Date: Fri, 3 Feb 2006 06:41:36 +0100 (CET) Subject: [Python-checkins] r42229 - in python/branches/release24-maint/Lib/email: _parseaddr.py test/test_email.py Message-ID: <20060203054136.53DE81E4005@bag.python.org> Author: barry.warsaw Date: Fri Feb 3 06:41:33 2006 New Revision: 42229 Modified: python/branches/release24-maint/Lib/email/_parseaddr.py python/branches/release24-maint/Lib/email/test/test_email.py Log: Port r42228 from the trunk. Modified: python/branches/release24-maint/Lib/email/_parseaddr.py ============================================================================== --- python/branches/release24-maint/Lib/email/_parseaddr.py (original) +++ python/branches/release24-maint/Lib/email/_parseaddr.py Fri Feb 3 06:41:33 2006 @@ -1,4 +1,4 @@ -# Copyright (C) 2002-2004 Python Software Foundation +# Copyright (C) 2002-2006 Python Software Foundation # Contact: email-sig at python.org """Email address parsing code. 
@@ -117,8 +117,7 @@ else: tzsign = 1 tzoffset = tzsign * ( (tzoffset//100)*3600 + (tzoffset % 100)*60) - tuple = (yy, mm, dd, thh, tmm, tss, 0, 1, 0, tzoffset) - return tuple + return yy, mm, dd, thh, tmm, tss, 0, 1, 0, tzoffset def parsedate(data): Modified: python/branches/release24-maint/Lib/email/test/test_email.py ============================================================================== --- python/branches/release24-maint/Lib/email/test/test_email.py (original) +++ python/branches/release24-maint/Lib/email/test/test_email.py Fri Feb 3 06:41:33 2006 @@ -2104,6 +2104,15 @@ eq(Utils.parsedate_tz('5 Feb 2003 13:47:26 -0800'), (2003, 2, 5, 13, 47, 26, 0, 1, 0, -28800)) + def test_parsedate_acceptable_to_time_functions(self): + eq = self.assertEqual + timetup = Utils.parsedate('5 Feb 2003 13:47:26 -0800') + eq(int(time.mktime(timetup)), 1044470846) + eq(int(time.strftime('%Y', timetup)), 2003) + timetup = Utils.parsedate_tz('5 Feb 2003 13:47:26 -0800') + eq(int(time.mktime(timetup[:9])), 1044470846) + eq(int(time.strftime('%Y', timetup[:9])), 2003) + def test_parseaddr_empty(self): self.assertEqual(Utils.parseaddr('<>'), ('', '')) self.assertEqual(Utils.formataddr(Utils.parseaddr('<>')), '') From python-checkins at python.org Fri Feb 3 13:54:19 2006 From: python-checkins at python.org (martin.v.loewis) Date: Fri, 3 Feb 2006 13:54:19 +0100 (CET) Subject: [Python-checkins] r42230 - in python/trunk: Misc/NEWS Modules/posixmodule.c Message-ID: <20060203125419.14A581E4005@bag.python.org> Author: martin.v.loewis Date: Fri Feb 3 13:54:16 2006 New Revision: 42230 Modified: python/trunk/Misc/NEWS python/trunk/Modules/posixmodule.c Log: Drop C library for stat/fstat on Windows. Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Fri Feb 3 13:54:16 2006 @@ -216,6 +216,10 @@ Extension Modules ----------------- +- Use Win32 API to implement os.stat/fstat. As a result, subsecond timestamps + are reported, the limit on path name lengths is removed, and stat reports + WindowsError now (instead of OSError). + - Add bsddb.db.DBEnv.set_tx_timestamp allowing time based database recovery. - Bug #1413192, fix seg fault in bsddb if a transaction was deleted Modified: python/trunk/Modules/posixmodule.c ============================================================================== --- python/trunk/Modules/posixmodule.c (original) +++ python/trunk/Modules/posixmodule.c Fri Feb 3 13:54:16 2006 @@ -277,9 +277,9 @@ /* choose the appropriate stat and fstat functions and return structs */ #undef STAT #if defined(MS_WIN64) || defined(MS_WINDOWS) -# define STAT _stati64 -# define FSTAT _fstati64 -# define STRUCT_STAT struct _stati64 +# define STAT win32_stat +# define FSTAT win32_fstat +# define STRUCT_STAT struct win32_stat #else # define STAT stat # define FSTAT fstat @@ -668,6 +668,188 @@ return Py_None; } +#ifdef MS_WINDOWS +/* The CRT of Windows has a number of flaws wrt. its stat() implementation: + - time stamps are restricted to second resolution + - file modification times suffer from forth-and-back conversions between + UTC and local time + Therefore, we implement our own stat, based on the Win32 API directly. 
+*/ +#define HAVE_STAT_NSEC 1 + +struct win32_stat{ + int st_dev; + __int64 st_ino; + unsigned short st_mode; + int st_nlink; + int st_uid; + int st_gid; + int st_rdev; + __int64 st_size; + int st_atime; + int st_atime_nsec; + int st_mtime; + int st_mtime_nsec; + int st_ctime; + int st_ctime_nsec; +}; + +static __int64 secs_between_epochs = 11644473600; /* Seconds between 1.1.1601 and 1.1.1970 */ + +static void +FILE_TIME_to_time_t_nsec(FILETIME *in_ptr, int *time_out, int* nsec_out) +{ + /* XXX endianness */ + __int64 in = *(__int64*)in_ptr; + *nsec_out = (int)(in % 10000000) * 100; /* FILETIME is in units of 100 nsec. */ + /* XXX Win32 supports time stamps past 2038; we currently don't */ + *time_out = Py_SAFE_DOWNCAST((in / 10000000) - secs_between_epochs, __int64, int); +} + +/* Below, we *know* that ugo+r is 0444 */ +#if _S_IREAD != 0400 +#error Unsupported C library +#endif +static int +attributes_to_mode(DWORD attr) +{ + int m = 0; + if (attr & FILE_ATTRIBUTE_DIRECTORY) + m |= _S_IFDIR | 0111; /* IFEXEC for user,group,other */ + else + m |= _S_IFREG; + if (attr & FILE_ATTRIBUTE_READONLY) + m |= 0444; + else + m |= 0666; + return m; +} + +static int +attribute_data_to_stat(WIN32_FILE_ATTRIBUTE_DATA *info, struct win32_stat *result) +{ + memset(result, 0, sizeof(*result)); + result->st_mode = attributes_to_mode(info->dwFileAttributes); + result->st_size = (((__int64)info->nFileSizeHigh)<<32) + info->nFileSizeLow; + FILE_TIME_to_time_t_nsec(&info->ftCreationTime, &result->st_ctime, &result->st_ctime_nsec); + FILE_TIME_to_time_t_nsec(&info->ftLastWriteTime, &result->st_mtime, &result->st_mtime_nsec); + FILE_TIME_to_time_t_nsec(&info->ftLastAccessTime, &result->st_atime, &result->st_atime_nsec); + + return 0; +} + +static int +win32_stat(const char* path, struct win32_stat *result) +{ + WIN32_FILE_ATTRIBUTE_DATA info; + int code; + char *dot; + /* XXX not supported on Win95 and NT 3.x */ + if (!GetFileAttributesExA(path, GetFileExInfoStandard, &info)) { + /* Protocol violation: we explicitly clear errno, instead of + setting it to a POSIX error. Callers should use GetLastError. */ + errno = 0; + return -1; + } + code = attribute_data_to_stat(&info, result); + if (code != 0) + return code; + /* Set S_IFEXEC if it is an .exe, .bat, ... */ + dot = strrchr(path, '.'); + if (dot) { + if (stricmp(dot, ".bat") == 0 || + stricmp(dot, ".cmd") == 0 || + stricmp(dot, ".exe") == 0 || + stricmp(dot, ".com") == 0) + result->st_mode |= 0111; + } + return code; +} + +static int +win32_wstat(const wchar_t* path, struct win32_stat *result) +{ + int code; + const wchar_t *dot; + WIN32_FILE_ATTRIBUTE_DATA info; + /* XXX not supported on Win95 and NT 3.x */ + if (!GetFileAttributesExW(path, GetFileExInfoStandard, &info)) { + /* Protocol violation: we explicitly clear errno, instead of + setting it to a POSIX error. Callers should use GetLastError. */ + errno = 0; + return -1; + } + code = attribute_data_to_stat(&info, result); + if (code < 0) + return code; + /* Set IFEXEC if it is an .exe, .bat, ... */ + dot = wcsrchr(path, '.'); + if (dot) { + if (_wcsicmp(dot, L".bat") == 0 || + _wcsicmp(dot, L".cmd") == 0 || + _wcsicmp(dot, L".exe") == 0 || + _wcsicmp(dot, L".com") == 0) + result->st_mode |= 0111; + } + return code; +} + +static int +win32_fstat(int file_number, struct win32_stat *result) +{ + BY_HANDLE_FILE_INFORMATION info; + HANDLE h; + int type; + + h = (HANDLE)_get_osfhandle(file_number); + + /* Protocol violation: we explicitly clear errno, instead of + setting it to a POSIX error. 
Callers should use GetLastError. */ + errno = 0; + + if (h == INVALID_HANDLE_VALUE) { + /* This is really a C library error (invalid file handle). + We set the Win32 error to the closes one matching. */ + SetLastError(ERROR_INVALID_HANDLE); + return -1; + } + memset(result, 0, sizeof(*result)); + + type = GetFileType(h); + if (type == FILE_TYPE_UNKNOWN) { + DWORD error = GetLastError(); + if (error != 0) { + return -1; + } + /* else: valid but unknown file */ + } + + if (type != FILE_TYPE_DISK) { + if (type == FILE_TYPE_CHAR) + result->st_mode = _S_IFCHR; + else if (type == FILE_TYPE_PIPE) + result->st_mode = _S_IFIFO; + return 0; + } + + if (!GetFileInformationByHandle(h, &info)) { + return -1; + } + + /* similar to stat() */ + result->st_mode = attributes_to_mode(info.dwFileAttributes); + result->st_size = (((__int64)info.nFileSizeHigh)<<32) + info.nFileSizeLow; + FILE_TIME_to_time_t_nsec(&info.ftCreationTime, &result->st_ctime, &result->st_ctime_nsec); + FILE_TIME_to_time_t_nsec(&info.ftLastWriteTime, &result->st_mtime, &result->st_mtime_nsec); + FILE_TIME_to_time_t_nsec(&info.ftLastAccessTime, &result->st_atime, &result->st_atime_nsec); + /* specific to fstat() */ + result->st_nlink = info.nNumberOfLinks; + result->st_ino = (((__int64)info.nFileIndexHigh)<<32) + info.nFileIndexLow; + return 0; +} + +#endif /* MS_WINDOWS */ + PyDoc_STRVAR(stat_result__doc__, "stat_result: Result from stat or lstat.\n\n\ This object may be accessed either as a tuple of\n\ @@ -861,76 +1043,78 @@ /* pack a system stat C structure into the Python stat tuple (used by posix_stat() and posix_fstat()) */ static PyObject* -_pystat_fromstructstat(STRUCT_STAT st) +_pystat_fromstructstat(STRUCT_STAT *st) { unsigned long ansec, mnsec, cnsec; PyObject *v = PyStructSequence_New(&StatResultType); if (v == NULL) return NULL; - PyStructSequence_SET_ITEM(v, 0, PyInt_FromLong((long)st.st_mode)); + PyStructSequence_SET_ITEM(v, 0, PyInt_FromLong((long)st->st_mode)); #ifdef HAVE_LARGEFILE_SUPPORT PyStructSequence_SET_ITEM(v, 1, - PyLong_FromLongLong((PY_LONG_LONG)st.st_ino)); + PyLong_FromLongLong((PY_LONG_LONG)st->st_ino)); #else - PyStructSequence_SET_ITEM(v, 1, PyInt_FromLong((long)st.st_ino)); + PyStructSequence_SET_ITEM(v, 1, PyInt_FromLong((long)st->st_ino)); #endif #if defined(HAVE_LONG_LONG) && !defined(MS_WINDOWS) PyStructSequence_SET_ITEM(v, 2, - PyLong_FromLongLong((PY_LONG_LONG)st.st_dev)); + PyLong_FromLongLong((PY_LONG_LONG)st->st_dev)); #else - PyStructSequence_SET_ITEM(v, 2, PyInt_FromLong((long)st.st_dev)); + PyStructSequence_SET_ITEM(v, 2, PyInt_FromLong((long)st->st_dev)); #endif - PyStructSequence_SET_ITEM(v, 3, PyInt_FromLong((long)st.st_nlink)); - PyStructSequence_SET_ITEM(v, 4, PyInt_FromLong((long)st.st_uid)); - PyStructSequence_SET_ITEM(v, 5, PyInt_FromLong((long)st.st_gid)); + PyStructSequence_SET_ITEM(v, 3, PyInt_FromLong((long)st->st_nlink)); + PyStructSequence_SET_ITEM(v, 4, PyInt_FromLong((long)st->st_uid)); + PyStructSequence_SET_ITEM(v, 5, PyInt_FromLong((long)st->st_gid)); #ifdef HAVE_LARGEFILE_SUPPORT PyStructSequence_SET_ITEM(v, 6, - PyLong_FromLongLong((PY_LONG_LONG)st.st_size)); + PyLong_FromLongLong((PY_LONG_LONG)st->st_size)); #else - PyStructSequence_SET_ITEM(v, 6, PyInt_FromLong(st.st_size)); + PyStructSequence_SET_ITEM(v, 6, PyInt_FromLong(st->st_size)); #endif -#ifdef HAVE_STAT_TV_NSEC - ansec = st.st_atim.tv_nsec; - mnsec = st.st_mtim.tv_nsec; - cnsec = st.st_ctim.tv_nsec; -#else -#ifdef HAVE_STAT_TV_NSEC2 - ansec = st.st_atimespec.tv_nsec; - mnsec = st.st_mtimespec.tv_nsec; - cnsec 
= st.st_ctimespec.tv_nsec; +#if defined(HAVE_STAT_TV_NSEC) + ansec = st->st_atim.tv_nsec; + mnsec = st->st_mtim.tv_nsec; + cnsec = st->st_ctim.tv_nsec; +#elif defined(HAVE_STAT_TV_NSEC2) + ansec = st->st_atimespec.tv_nsec; + mnsec = st->st_mtimespec.tv_nsec; + cnsec = st->st_ctimespec.tv_nsec; +#elif defined(HAVE_STAT_NSEC) + ansec = st->st_atime_nsec; + mnsec = st->st_mtime_nsec; + cnsec = st->st_ctime_nsec; #else ansec = mnsec = cnsec = 0; #endif -#endif - fill_time(v, 7, st.st_atime, ansec); - fill_time(v, 8, st.st_mtime, mnsec); - fill_time(v, 9, st.st_ctime, cnsec); + fill_time(v, 7, st->st_atime, ansec); + fill_time(v, 8, st->st_mtime, mnsec); + fill_time(v, 9, st->st_ctime, cnsec); #ifdef HAVE_STRUCT_STAT_ST_BLKSIZE PyStructSequence_SET_ITEM(v, ST_BLKSIZE_IDX, - PyInt_FromLong((long)st.st_blksize)); + PyInt_FromLong((long)st->st_blksize)); #endif #ifdef HAVE_STRUCT_STAT_ST_BLOCKS PyStructSequence_SET_ITEM(v, ST_BLOCKS_IDX, - PyInt_FromLong((long)st.st_blocks)); + PyInt_FromLong((long)st->st_blocks)); #endif #ifdef HAVE_STRUCT_STAT_ST_RDEV PyStructSequence_SET_ITEM(v, ST_RDEV_IDX, - PyInt_FromLong((long)st.st_rdev)); + PyInt_FromLong((long)st->st_rdev)); #endif #ifdef HAVE_STRUCT_STAT_ST_GEN PyStructSequence_SET_ITEM(v, ST_GEN_IDX, - PyInt_FromLong((long)st.st_gen)); + PyInt_FromLong((long)st->st_gen)); #endif #ifdef HAVE_STRUCT_STAT_ST_BIRTHTIME { PyObject *val; unsigned long bsec,bnsec; - bsec = (long)st.st_birthtime; + bsec = (long)st->st_birthtime; #ifdef HAVE_STAT_TV_NSEC2 - bnsec = st.st_birthtimespec.tv_nsec; + bnsec = st.st_birthtimespec->tv_nsec; #else bnsec = 0; #endif @@ -945,7 +1129,7 @@ #endif #ifdef HAVE_STRUCT_STAT_ST_FLAGS PyStructSequence_SET_ITEM(v, ST_FLAGS_IDX, - PyInt_FromLong((long)st.st_flags)); + PyInt_FromLong((long)st->st_flags)); #endif if (PyErr_Occurred()) { @@ -1031,12 +1215,7 @@ char *path = NULL; /* pass this to stat; do not free() it */ char *pathfree = NULL; /* this memory must be free'd */ int res; - -#ifdef MS_WINDOWS - int pathlen; - char pathcopy[MAX_PATH]; -#endif /* MS_WINDOWS */ - + PyObject *result; #ifdef Py_WIN_WIDE_FILENAMES /* If on wide-character-capable OS see if argument @@ -1044,43 +1223,17 @@ if (unicode_file_names()) { PyUnicodeObject *po; if (PyArg_ParseTuple(args, wformat, &po)) { - Py_UNICODE wpath[MAX_PATH+1]; - pathlen = wcslen(PyUnicode_AS_UNICODE(po)); - /* the library call can blow up if the file name is too long! */ - if (pathlen > MAX_PATH) { - errno = ENAMETOOLONG; - return posix_error(); - } - wcscpy(wpath, PyUnicode_AS_UNICODE(po)); - /* Remove trailing slash or backslash, unless it's the current - drive root (/ or \) or a specific drive's root (like c:\ or c:/). - */ - if (pathlen > 0) { - if (ISSLASHW(wpath[pathlen-1])) { - /* It does end with a slash -- exempt the root drive cases. */ - if (pathlen == 1 || (pathlen == 3 && wpath[1] == L':') || - IsUNCRootW(wpath, pathlen)) - /* leave it alone */; - else { - /* nuke the trailing backslash */ - wpath[pathlen-1] = L'\0'; - } - } - else if (ISSLASHW(wpath[1]) && pathlen < ARRAYSIZE(wpath)-1 && - IsUNCRootW(wpath, pathlen)) { - /* UNC root w/o trailing slash: add one when there's room */ - wpath[pathlen++] = L'\\'; - wpath[pathlen] = L'\0'; - } - } + Py_UNICODE *wpath = PyUnicode_AS_UNICODE(po); + Py_BEGIN_ALLOW_THREADS /* PyUnicode_AS_UNICODE result OK without thread lock as it is a simple dereference. 
*/ res = wstatfunc(wpath, &st); Py_END_ALLOW_THREADS + if (res != 0) - return posix_error_with_unicode_filename(wpath); - return _pystat_fromstructstat(st); + return win32_error_unicode("stat", wpath); + return _pystat_fromstructstat(&st); } /* Drop the argument parsing error as narrow strings are also valid. */ @@ -1093,53 +1246,24 @@ return NULL; pathfree = path; -#ifdef MS_WINDOWS - pathlen = strlen(path); - /* the library call can blow up if the file name is too long! */ - if (pathlen > MAX_PATH) { - PyMem_Free(pathfree); - errno = ENAMETOOLONG; - return posix_error(); - } - - /* Remove trailing slash or backslash, unless it's the current - drive root (/ or \) or a specific drive's root (like c:\ or c:/). - */ - if (pathlen > 0) { - if (ISSLASHA(path[pathlen-1])) { - /* It does end with a slash -- exempt the root drive cases. */ - if (pathlen == 1 || (pathlen == 3 && path[1] == ':') || - IsUNCRootA(path, pathlen)) - /* leave it alone */; - else { - /* nuke the trailing backslash */ - strncpy(pathcopy, path, pathlen); - pathcopy[pathlen-1] = '\0'; - path = pathcopy; - } - } - else if (ISSLASHA(path[1]) && pathlen < ARRAYSIZE(pathcopy)-1 && - IsUNCRootA(path, pathlen)) { - /* UNC root w/o trailing slash: add one when there's room */ - strncpy(pathcopy, path, pathlen); - pathcopy[pathlen++] = '\\'; - pathcopy[pathlen] = '\0'; - path = pathcopy; - } - } -#endif /* MS_WINDOWS */ - Py_BEGIN_ALLOW_THREADS res = (*statfunc)(path, &st); Py_END_ALLOW_THREADS - if (res != 0) - return posix_error_with_allocated_filename(pathfree); + + if (res != 0) { +#ifdef MS_WINDOWS + result = win32_error("stat", pathfree); +#else + result = posix_error_with_filename(pathfree); +#endif + } + else + result = _pystat_fromstructstat(&st); PyMem_Free(pathfree); - return _pystat_fromstructstat(st); + return result; } - /* POSIX methods */ PyDoc_STRVAR(posix_access__doc__, @@ -1940,7 +2064,7 @@ posix_stat(PyObject *self, PyObject *args) { #ifdef MS_WINDOWS - return posix_do_stat(self, args, "et:stat", STAT, "U:stat", _wstati64); + return posix_do_stat(self, args, "et:stat", STAT, "U:stat", win32_wstat); #else return posix_do_stat(self, args, "et:stat", STAT, NULL, NULL); #endif @@ -5051,7 +5175,7 @@ return posix_do_stat(self, args, "et:lstat", lstat, NULL, NULL); #else /* !HAVE_LSTAT */ #ifdef MS_WINDOWS - return posix_do_stat(self, args, "et:lstat", STAT, "U:lstat", _wstati64); + return posix_do_stat(self, args, "et:lstat", STAT, "U:lstat", win32_wstat); #else return posix_do_stat(self, args, "et:lstat", STAT, NULL, NULL); #endif @@ -5497,10 +5621,15 @@ Py_BEGIN_ALLOW_THREADS res = FSTAT(fd, &st); Py_END_ALLOW_THREADS - if (res != 0) + if (res != 0) { +#ifdef MS_WINDOWS + return win32_error("fstat", NULL); +#else return posix_error(); +#endif + } - return _pystat_fromstructstat(st); + return _pystat_fromstructstat(&st); } From python-checkins at python.org Sat Feb 4 01:03:26 2006 From: python-checkins at python.org (phillip.eby) Date: Sat, 4 Feb 2006 01:03:26 +0100 (CET) Subject: [Python-checkins] r42231 - sandbox/trunk/setuptools/pkg_resources.py Message-ID: <20060204000326.2EFCC1E4005@bag.python.org> Author: phillip.eby Date: Sat Feb 4 01:03:25 2006 New Revision: 42231 Modified: sandbox/trunk/setuptools/pkg_resources.py Log: Honor get_platform() for Mac OS X if it starts with 'macosx-' Modified: sandbox/trunk/setuptools/pkg_resources.py ============================================================================== --- sandbox/trunk/setuptools/pkg_resources.py (original) +++ 
sandbox/trunk/setuptools/pkg_resources.py Sat Feb 4 01:03:25 2006 @@ -142,7 +142,9 @@ XXX Currently this is the same as ``distutils.util.get_platform()``, but it needs some hacks for Linux and Mac OS X. """ - if sys.platform == "darwin": + from distutils.util import get_platform + plat = get_platform() + if sys.platform == "darwin" and not plat.startswith('macosx-'): try: version = _macosx_vers() machine = os.uname()[4].replace(" ", "_") @@ -152,9 +154,7 @@ # if someone is running a non-Mac darwin system, this will fall # through to the default implementation pass - - from distutils.util import get_platform - return get_platform() + return plat macosVersionString = re.compile(r"macosx-(\d+)\.(\d+)-(.*)") darwinVersionString = re.compile(r"darwin-(\d+)\.(\d+)\.(\d+)-(.*)") From python-checkins at python.org Sat Feb 4 04:26:21 2006 From: python-checkins at python.org (neal.norwitz) Date: Sat, 4 Feb 2006 04:26:21 +0100 (CET) Subject: [Python-checkins] r42232 - python/trunk/Lib/test/test_pty.py Message-ID: <20060204032621.44A0F1E4005@bag.python.org> Author: neal.norwitz Date: Sat Feb 4 04:26:20 2006 New Revision: 42232 Modified: python/trunk/Lib/test/test_pty.py Log: Fix typo Modified: python/trunk/Lib/test/test_pty.py ============================================================================== --- python/trunk/Lib/test/test_pty.py (original) +++ python/trunk/Lib/test/test_pty.py Sat Feb 4 04:26:20 2006 @@ -4,7 +4,7 @@ TEST_STRING_1 = "I wish to buy a fish license.\n" TEST_STRING_2 = "For my pet fish, Eric.\n" -# Solaris (at least 2.9 and 2.10) seem to have a ficke isatty(). The first +# Solaris (at least 2.9 and 2.10) seem to have a fickle isatty(). The first # test below, testing the result of os.openpty() for tty-ness, sometimes # (but not always) fails. The second isatty test, in the sub-process, always # works. Allow that fickle first test to fail on these platforms, since it From python-checkins at python.org Sat Feb 4 04:29:56 2006 From: python-checkins at python.org (neal.norwitz) Date: Sat, 4 Feb 2006 04:29:56 +0100 (CET) Subject: [Python-checkins] r42233 - python/branches/release24-maint/Lib/test/test_pty.py Message-ID: <20060204032956.74CB81E4005@bag.python.org> Author: neal.norwitz Date: Sat Feb 4 04:29:52 2006 New Revision: 42233 Modified: python/branches/release24-maint/Lib/test/test_pty.py Log: Fix typo Modified: python/branches/release24-maint/Lib/test/test_pty.py ============================================================================== --- python/branches/release24-maint/Lib/test/test_pty.py (original) +++ python/branches/release24-maint/Lib/test/test_pty.py Sat Feb 4 04:29:52 2006 @@ -4,7 +4,7 @@ TEST_STRING_1 = "I wish to buy a fish license.\n" TEST_STRING_2 = "For my pet fish, Eric.\n" -# Solaris (at least 2.9 and 2.10) seem to have a ficke isatty(). The first +# Solaris (at least 2.9 and 2.10) seem to have a fickle isatty(). The first # test below, testing the result of os.openpty() for tty-ness, sometimes # (but not always) fails. The second isatty test, in the sub-process, always # works. Allow that fickle first test to fail on these platforms, since it From python-checkins at python.org Sat Feb 4 11:34:32 2006 From: python-checkins at python.org (martin.v.loewis) Date: Sat, 4 Feb 2006 11:34:32 +0100 (CET) Subject: [Python-checkins] r42234 - peps/trunk/pep-0101.txt Message-ID: <20060204103432.618931E4005@bag.python.org> Author: martin.v.loewis Date: Sat Feb 4 11:34:32 2006 New Revision: 42234 Modified: peps/trunk/pep-0101.txt Log: Mention updating general.ht. 
Modified: peps/trunk/pep-0101.txt ============================================================================== --- peps/trunk/pep-0101.txt (original) +++ peps/trunk/pep-0101.txt Sat Feb 4 11:34:32 2006 @@ -548,6 +548,10 @@ python-announce at python.org python-dev at python.org + ___ Mention the release as the most recent stable one in + pydotorg:doc/faq/general.ht (section "How stable is + Python?") + ___ Send a SourceForge News Item about the release. From the project's "menu bar", select the "News" link; once in News, select the "Submit" link. Type a suitable subject (e.g. "Python From python-checkins at python.org Sat Feb 4 20:12:39 2006 From: python-checkins at python.org (martin.v.loewis) Date: Sat, 4 Feb 2006 20:12:39 +0100 (CET) Subject: [Python-checkins] r42235 - in python/trunk: Doc/lib/libnis.tex Misc/ACKS Misc/NEWS Modules/nismodule.c Message-ID: <20060204191239.5A7571E4005@bag.python.org> Author: martin.v.loewis Date: Sat Feb 4 20:12:37 2006 New Revision: 42235 Modified: python/trunk/Doc/lib/libnis.tex python/trunk/Misc/ACKS python/trunk/Misc/NEWS python/trunk/Modules/nismodule.c Log: Patch #1422385: Changes to nis module to support multiple NIS domains Modified: python/trunk/Doc/lib/libnis.tex ============================================================================== --- python/trunk/Doc/lib/libnis.tex (original) +++ python/trunk/Doc/lib/libnis.tex Sat Feb 4 20:12:37 2006 @@ -15,7 +15,7 @@ The \module{nis} module defines the following functions: -\begin{funcdesc}{match}{key, mapname} +\begin{funcdesc}{match}{key, mapname[, domain=default_domain]} Return the match for \var{key} in map \var{mapname}, or raise an error (\exception{nis.error}) if there is none. Both should be strings, \var{key} is 8-bit clean. @@ -24,9 +24,13 @@ Note that \var{mapname} is first checked if it is an alias to another name. + +\versionchanged[The \var{domain} argument allows to override +the NIS domain used for the lookup. If unspecified, lookup is in the +default NIS domain]{2.5} \end{funcdesc} -\begin{funcdesc}{cat}{mapname} +\begin{funcdesc}{cat}{mapname[, domain=default_domain]} Return a dictionary mapping \var{key} to \var{value} such that \code{match(\var{key}, \var{mapname})==\var{value}}. Note that both keys and values of the dictionary are arbitrary @@ -34,12 +38,23 @@ Note that \var{mapname} is first checked if it is an alias to another name. + +\versionchanged[The \var{domain} argument allows to override +the NIS domain used for the lookup. If unspecified, lookup is in the +default NIS domain]{2.5} \end{funcdesc} -\begin{funcdesc}{maps}{} + \begin{funcdesc}{maps}{[domain=default_domain]} Return a list of all valid maps. + +\versionchanged[The \var{domain} argument allows to override +the NIS domain used for the lookup. If unspecified, lookup is in the +default NIS domain]{2.5} \end{funcdesc} + \begin{funcdesc}{get_default_domain}{} +Return the system default NIS domain. \versionadded{2.5} +\end{funcdesc} The \module{nis} module defines the following exception: Modified: python/trunk/Misc/ACKS ============================================================================== --- python/trunk/Misc/ACKS (original) +++ python/trunk/Misc/ACKS Sat Feb 4 20:12:37 2006 @@ -47,6 +47,7 @@ Robin Becker Bill Bedford Reimer Behrends +Ben Bell Thomas Bellman Juan M. 
Bello Rivas Alexander Belopolsky Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Sat Feb 4 20:12:37 2006 @@ -216,6 +216,9 @@ Extension Modules ----------------- +- Patch #1422385: The nis module now supports access to domains other + than the system default domain. + - Use Win32 API to implement os.stat/fstat. As a result, subsecond timestamps are reported, the limit on path name lengths is removed, and stat reports WindowsError now (instead of OSError). Modified: python/trunk/Modules/nismodule.c ============================================================================== --- python/trunk/Modules/nismodule.c (original) +++ python/trunk/Modules/nismodule.c Sat Feb 4 20:12:37 2006 @@ -23,6 +23,27 @@ extern int yp_get_default_domain(char **); #endif +PyDoc_STRVAR(get_default_domain__doc__, +"get_default_domain() -> str\n\ +Corresponds to the C library yp_get_default_domain() call, returning\n\ +the default NIS domain.\n"); + +PyDoc_STRVAR(match__doc__, +"match(key, map, domain = defaultdomain)\n\ +Corresponds to the C library yp_match() call, returning the value of\n\ +key in the given map. Optionally domain can be specified but it\n\ +defaults to the system default domain.\n"); + +PyDoc_STRVAR(cat__doc__, +"cat(map, domain = defaultdomain)\n\ +Returns the entire map as a dictionary. Optionally domain can be\n\ +specified but it defaults to the system default domain.\n"); + +PyDoc_STRVAR(maps__doc__, +"maps(domain = defaultdomain)\n\ +Returns an array of all available NIS maps within a domain. If domain\n\ +is not specified it defaults to the system default domain.\n"); + static PyObject *NisError; static PyObject * @@ -116,19 +137,36 @@ } static PyObject * -nis_match (PyObject *self, PyObject *args) +nis_get_default_domain (PyObject *self) { - char *match; char *domain; + int err; + PyObject *res; + + if ((err = yp_get_default_domain(&domain)) != 0) + return nis_error(err); + + res = PyString_FromStringAndSize (domain, strlen(domain)); + return res; +} + +static PyObject * +nis_match (PyObject *self, PyObject *args, PyObject *kwdict) +{ + char *match; + char *domain = NULL; int keylen, len; char *key, *map; int err; PyObject *res; int fix; + static const char *kwlist[] = {"key", "map", "domain", NULL}; - if (!PyArg_ParseTuple(args, "t#s:match", &key, &keylen, &map)) + if (!PyArg_ParseTupleAndKeywords(args, kwdict, + "t#s|s:match", kwlist, + &key, &keylen, &map, &domain)) return NULL; - if ((err = yp_get_default_domain(&domain)) != 0) + if (!domain && ((err = yp_get_default_domain(&domain)) != 0)) return nis_error(err); map = nis_mapname (map, &fix); if (fix) @@ -146,18 +184,20 @@ } static PyObject * -nis_cat (PyObject *self, PyObject *args) +nis_cat (PyObject *self, PyObject *args, PyObject *kwdict) { - char *domain; + char *domain = NULL; char *map; struct ypall_callback cb; struct ypcallback_data data; PyObject *dict; int err; + static const char *kwlist[] = {"map", "domain", NULL}; - if (!PyArg_ParseTuple(args, "s:cat", &map)) + if (!PyArg_ParseTupleAndKeywords(args, kwdict, "s|s:cat", + kwlist, &map, &domain)) return NULL; - if ((err = yp_get_default_domain(&domain)) != 0) + if (!domain && ((err = yp_get_default_domain(&domain)) != 0)) return nis_error(err); dict = PyDict_New (); if (dict == NULL) @@ -301,19 +341,12 @@ static nismaplist * -nis_maplist (void) +nis_maplist (char *dom) { nisresp_maplist *list; - char *dom; CLIENT *cl; char *server = NULL; int 
mapi = 0; - int err; - - if ((err = yp_get_default_domain (&dom)) != 0) { - nis_error(err); - return NULL; - } while (!server && aliases[mapi].map != 0L) { yp_master (dom, aliases[mapi].map, &server); @@ -344,12 +377,23 @@ } static PyObject * -nis_maps (PyObject *self) +nis_maps (PyObject *self, PyObject *args, PyObject *kwdict) { + char *domain = NULL; nismaplist *maps; PyObject *list; + int err; + static const char *kwlist[] = {"domain", NULL}; - if ((maps = nis_maplist ()) == NULL) + if (!PyArg_ParseTupleAndKeywords(args, kwdict, + "|s:maps", kwlist, &domain)) + return NULL; + if (!domain && ((err = yp_get_default_domain (&domain)) != 0)) { + nis_error(err); + return NULL; + } + + if ((maps = nis_maplist (domain)) == NULL) return NULL; if ((list = PyList_New(0)) == NULL) return NULL; @@ -368,17 +412,29 @@ } static PyMethodDef nis_methods[] = { - {"match", nis_match, METH_VARARGS}, - {"cat", nis_cat, METH_VARARGS}, - {"maps", (PyCFunction)nis_maps, METH_NOARGS}, - {NULL, NULL} /* Sentinel */ + {"match", (PyCFunction)nis_match, + METH_VARARGS | METH_KEYWORDS, + match__doc__}, + {"cat", (PyCFunction)nis_cat, + METH_VARARGS | METH_KEYWORDS, + cat__doc__}, + {"maps", (PyCFunction)nis_maps, + METH_VARARGS | METH_KEYWORDS, + maps__doc__}, + {"get_default_domain", (PyCFunction)nis_get_default_domain, + METH_NOARGS, + get_default_domain__doc__}, + {NULL, NULL} /* Sentinel */ }; +PyDoc_STRVAR(nis__doc__, +"This module contains functions for accessing NIS maps.\n"); + void initnis (void) { PyObject *m, *d; - m = Py_InitModule("nis", nis_methods); + m = Py_InitModule3("nis", nis_methods, nis__doc__); if (m == NULL) return; d = PyModule_GetDict(m); From python-checkins at python.org Sat Feb 4 23:59:59 2006 From: python-checkins at python.org (neal.norwitz) Date: Sat, 4 Feb 2006 23:59:59 +0100 (CET) Subject: [Python-checkins] r42236 - python/branches/release24-maint/Doc/lib/libsubprocess.tex Message-ID: <20060204225959.C3CC51E4005@bag.python.org> Author: neal.norwitz Date: Sat Feb 4 23:59:56 2006 New Revision: 42236 Modified: python/branches/release24-maint/Doc/lib/libsubprocess.tex Log: SF bug 1415455, fix typo in module name Modified: python/branches/release24-maint/Doc/lib/libsubprocess.tex ============================================================================== --- python/branches/release24-maint/Doc/lib/libsubprocess.tex (original) +++ python/branches/release24-maint/Doc/lib/libsubprocess.tex Sat Feb 4 23:59:56 2006 @@ -371,7 +371,7 @@ (child_stdout, child_stdin) = (p.stdout, p.stdin) \end{verbatim} -The popen2.Popen3 and popen3.Popen4 basically works as subprocess.Popen, +The popen2.Popen3 and popen2.Popen4 basically works as subprocess.Popen, except that: \begin{itemize} From python-checkins at python.org Sun Feb 5 00:00:53 2006 From: python-checkins at python.org (neal.norwitz) Date: Sun, 5 Feb 2006 00:00:53 +0100 (CET) Subject: [Python-checkins] r42237 - python/trunk/Doc/lib/libsubprocess.tex Message-ID: <20060204230053.121B71E401B@bag.python.org> Author: neal.norwitz Date: Sun Feb 5 00:00:48 2006 New Revision: 42237 Modified: python/trunk/Doc/lib/libsubprocess.tex Log: SF bug 1415455, fix typo in module name Modified: python/trunk/Doc/lib/libsubprocess.tex ============================================================================== --- python/trunk/Doc/lib/libsubprocess.tex (original) +++ python/trunk/Doc/lib/libsubprocess.tex Sun Feb 5 00:00:48 2006 @@ -387,7 +387,7 @@ (child_stdout, child_stdin) = (p.stdout, p.stdin) \end{verbatim} -The popen2.Popen3 and popen3.Popen4 basically 
works as subprocess.Popen, +The popen2.Popen3 and popen2.Popen4 basically works as subprocess.Popen, except that: \begin{itemize} From python-checkins at python.org Sun Feb 5 00:32:27 2006 From: python-checkins at python.org (barry.warsaw) Date: Sun, 5 Feb 2006 00:32:27 +0100 (CET) Subject: [Python-checkins] r42238 - python/trunk/Lib/email/test/test_email.py Message-ID: <20060204233227.40C271E4005@bag.python.org> Author: barry.warsaw Date: Sun Feb 5 00:32:26 2006 New Revision: 42238 Modified: python/trunk/Lib/email/test/test_email.py Log: Resolves SF bug #1423972. Modified: python/trunk/Lib/email/test/test_email.py ============================================================================== --- python/trunk/Lib/email/test/test_email.py (original) +++ python/trunk/Lib/email/test/test_email.py Sun Feb 5 00:32:26 2006 @@ -2107,10 +2107,12 @@ def test_parsedate_acceptable_to_time_functions(self): eq = self.assertEqual timetup = Utils.parsedate('5 Feb 2003 13:47:26 -0800') - eq(int(time.mktime(timetup)), 1044470846) + t = int(time.mktime(timetup)) + eq(time.localtime(t)[:6], timetup[:6]) eq(int(time.strftime('%Y', timetup)), 2003) timetup = Utils.parsedate_tz('5 Feb 2003 13:47:26 -0800') - eq(int(time.mktime(timetup[:9])), 1044470846) + t = int(time.mktime(timetup[:9])) + eq(time.localtime(t)[:6], timetup[:6]) eq(int(time.strftime('%Y', timetup[:9])), 2003) def test_parseaddr_empty(self): From python-checkins at python.org Sun Feb 5 00:45:14 2006 From: python-checkins at python.org (barry.warsaw) Date: Sun, 5 Feb 2006 00:45:14 +0100 (CET) Subject: [Python-checkins] r42239 - python/branches/release24-maint/Lib/email/test/test_email.py Message-ID: <20060204234514.13B441E4005@bag.python.org> Author: barry.warsaw Date: Sun Feb 5 00:45:12 2006 New Revision: 42239 Modified: python/branches/release24-maint/Lib/email/test/test_email.py Log: Resolves SF bug #1423972. Modified: python/branches/release24-maint/Lib/email/test/test_email.py ============================================================================== --- python/branches/release24-maint/Lib/email/test/test_email.py (original) +++ python/branches/release24-maint/Lib/email/test/test_email.py Sun Feb 5 00:45:12 2006 @@ -2107,10 +2107,12 @@ def test_parsedate_acceptable_to_time_functions(self): eq = self.assertEqual timetup = Utils.parsedate('5 Feb 2003 13:47:26 -0800') - eq(int(time.mktime(timetup)), 1044470846) + t = int(time.mktime(timetup)) + eq(time.localtime(t)[:6], timetup[:6]) eq(int(time.strftime('%Y', timetup)), 2003) timetup = Utils.parsedate_tz('5 Feb 2003 13:47:26 -0800') - eq(int(time.mktime(timetup[:9])), 1044470846) + t = int(time.mktime(timetup[:9])) + eq(time.localtime(t)[:6], timetup[:6]) eq(int(time.strftime('%Y', timetup[:9])), 2003) def test_parseaddr_empty(self): From python-checkins at python.org Sun Feb 5 00:48:23 2006 From: python-checkins at python.org (barry.warsaw) Date: Sun, 5 Feb 2006 00:48:23 +0100 (CET) Subject: [Python-checkins] r42240 - python/branches/release23-maint/Lib/email/test/test_email.py Message-ID: <20060204234823.6C32E1E4005@bag.python.org> Author: barry.warsaw Date: Sun Feb 5 00:48:22 2006 New Revision: 42240 Modified: python/branches/release23-maint/Lib/email/test/test_email.py Log: Resolves SF bug #1423972. 
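The test change in r42238 (backported in r42239 and r42240) drops the hard-coded epoch value because time.mktime() interprets its 9-tuple argument as local time, so 1044470846 is only obtained when the test happens to run in the timezone that constant was computed for. A short sketch of the timezone-independent check the tests now perform, using nothing beyond what the patch itself uses (Python 2.x, email.Utils):

    import time
    from email import Utils

    timetup = Utils.parsedate('5 Feb 2003 13:47:26 -0800')
    # mktime() treats the tuple as local wall-clock time, so the epoch value
    # it returns varies with the machine's timezone setting.
    t = int(time.mktime(timetup))
    # Round-tripping through localtime() verifies consistency without
    # assuming any particular timezone.
    assert time.localtime(t)[:6] == timetup[:6]
    assert int(time.strftime('%Y', timetup)) == 2003
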
Modified: python/branches/release23-maint/Lib/email/test/test_email.py ============================================================================== --- python/branches/release23-maint/Lib/email/test/test_email.py (original) +++ python/branches/release23-maint/Lib/email/test/test_email.py Sun Feb 5 00:48:22 2006 @@ -1959,10 +1959,12 @@ def test_parsedate_acceptable_to_time_functions(self): eq = self.assertEqual timetup = Utils.parsedate('5 Feb 2003 13:47:26 -0800') - eq(int(time.mktime(timetup)), 1044470846) + t = int(time.mktime(timetup)) + eq(time.localtime(t)[:6], timetup[:6]) eq(int(time.strftime('%Y', timetup)), 2003) timetup = Utils.parsedate_tz('5 Feb 2003 13:47:26 -0800') - eq(int(time.mktime(timetup[:9])), 1044470846) + t = int(time.mktime(timetup[:9])) + eq(time.localtime(t)[:6], timetup[:6]) eq(int(time.strftime('%Y', timetup[:9])), 2003) def test_parseaddr_empty(self): From python-checkins at python.org Sun Feb 5 02:54:32 2006 From: python-checkins at python.org (neal.norwitz) Date: Sun, 5 Feb 2006 02:54:32 +0100 (CET) Subject: [Python-checkins] r42241 - in python/branches/ast-objects: Include/compile.h Include/pythonrun.h Include/symtable.h Parser/asdl_c.py Python/Python-ast.c Python/ast.c Python/compile.c Python/future.c Python/import.c Python/pythonrun.c Python/symtable.c Message-ID: <20060205015432.186251E4005@bag.python.org> Author: neal.norwitz Date: Sun Feb 5 02:54:29 2006 New Revision: 42241 Modified: python/branches/ast-objects/Include/compile.h python/branches/ast-objects/Include/pythonrun.h python/branches/ast-objects/Include/symtable.h python/branches/ast-objects/Parser/asdl_c.py python/branches/ast-objects/Python/Python-ast.c python/branches/ast-objects/Python/ast.c python/branches/ast-objects/Python/compile.c python/branches/ast-objects/Python/future.c python/branches/ast-objects/Python/import.c python/branches/ast-objects/Python/pythonrun.c python/branches/ast-objects/Python/symtable.c Log: Changes from Simon Burton to get everything to compile. I made some formatting changes and fixed a few warnings. Here are some of his notes: (0) still a lot of XDECREF'ing to do (and probably some other refcount woes) (1) runs simple code OK, site.py OK (2) using function intrumentation, I've checked the AST control flow against the trunk (42025) and it's OK (except it seems that the trunk has changed ast_for_try_stmt slightly) (3) bombs on "import sys" -> python: Python/compile.c:2809: compiler_nameop: Assertion `scope || (((PyStringObject *)(name))->ob_sval)[0] == '_'' failed. Bombs for me in a different place. 
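Going back to r42235: the new optional domain argument means all three nis lookup functions can now be pointed at a domain other than the system default, and get_default_domain() exposes that default. A minimal usage sketch, assuming a host actually bound to an NIS server; the map names are standard NIS maps, but 'other.example.com' and the key are made-up values:

    import nis

    # New helper added in r42235.
    print nis.get_default_domain()
    # Same behaviour as before: lookup in the default domain.
    print nis.match('someuser', 'passwd.byname')
    # New: query a different NIS domain explicitly.
    print nis.maps(domain='other.example.com')
    print nis.cat('hosts.byname', domain='other.example.com')
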
Modified: python/branches/ast-objects/Include/compile.h ============================================================================== --- python/branches/ast-objects/Include/compile.h (original) +++ python/branches/ast-objects/Include/compile.h Sun Feb 5 02:54:29 2006 @@ -23,9 +23,9 @@ #define FUTURE_GENERATORS "generators" #define FUTURE_DIVISION "division" -PyAPI_FUNC(PyCodeObject *) PyAST_Compile(PyTypeObject *, const char *, +PyAPI_FUNC(PyCodeObject *) PyAST_Compile(PyObject *, const char *, PyCompilerFlags *); -PyAPI_FUNC(PyFutureFeatures *) PyFuture_FromAST(PyTypeObject *, const char *); +PyAPI_FUNC(PyFutureFeatures *) PyFuture_FromAST(PyObject *, const char *); #define ERR_LATE_FUTURE \ "from __future__ imports must occur at the beginning of the file" Modified: python/branches/ast-objects/Include/pythonrun.h ============================================================================== --- python/branches/ast-objects/Include/pythonrun.h (original) +++ python/branches/ast-objects/Include/pythonrun.h Sun Feb 5 02:54:29 2006 @@ -36,9 +36,9 @@ PyAPI_FUNC(int) PyRun_InteractiveOneFlags(FILE *, const char *, PyCompilerFlags *); PyAPI_FUNC(int) PyRun_InteractiveLoopFlags(FILE *, const char *, PyCompilerFlags *); -PyAPI_FUNC(PyTypeObject *) PyParser_ASTFromString(const char *, const char *, +PyAPI_FUNC(PyObject *) PyParser_ASTFromString(const char *, const char *, int, PyCompilerFlags *flags); -PyAPI_FUNC(PyTypeObject *) PyParser_ASTFromFile(FILE *, const char *, int, +PyAPI_FUNC(PyObject *) PyParser_ASTFromFile(FILE *, const char *, int, char *, char *, PyCompilerFlags *, int *); #define PyParser_SimpleParseString(S, B) \ Modified: python/branches/ast-objects/Include/symtable.h ============================================================================== --- python/branches/ast-objects/Include/symtable.h (original) +++ python/branches/ast-objects/Include/symtable.h Sun Feb 5 02:54:29 2006 @@ -52,7 +52,7 @@ PySTEntry_New(struct symtable *, PyObject *name, _Py_block_ty, void *, int); PyAPI_FUNC(int) PyST_GetScope(PySTEntryObject *, PyObject *); -PyAPI_FUNC(struct symtable *) PySymtable_Build(PyTypeObject *, const char *, +PyAPI_FUNC(struct symtable *) PySymtable_Build(PyObject *, const char *, PyFutureFeatures *); PyAPI_FUNC(PySTEntryObject *) PySymtable_Lookup(struct symtable *, void *); Modified: python/branches/ast-objects/Parser/asdl_c.py ============================================================================== --- python/branches/ast-objects/Parser/asdl_c.py (original) +++ python/branches/ast-objects/Parser/asdl_c.py Sun Feb 5 02:54:29 2006 @@ -258,6 +258,8 @@ emit("%s = PyList_New(0);" % f.name, 2) emit("Py_INCREF(%s);" % f.name, 1) emit("result->%s = %s;" % (f.name, f.name), 1) + if str(name)[0].isupper(): # HACK ! 
+ emit("result->_base._kind = %s_kind;" % name, 1) for argtype, argname, opt in attrs: if argtype == "PyObject*": emit("Py_INCREF(%s);" % argname, 1) @@ -308,7 +310,7 @@ depth = 1 def emit(s): self.emit(s, depth) - emit("if (obj->%s != Py_None) /* empty */;" % f.name) + emit("if (obj->%s == Py_None) /* empty */;" % f.name) check = self.check(f.type) emit("else if (!%s(obj->%s)) {" % (check, f.name)) emit(' failed_check("%s", "%s", obj->%s);' % (f.name, f.type, f.name)) Modified: python/branches/ast-objects/Python/Python-ast.c ============================================================================== --- python/branches/ast-objects/Python/Python-ast.c (original) +++ python/branches/ast-objects/Python/Python-ast.c Sun Feb 5 02:54:29 2006 @@ -183,6 +183,7 @@ body = PyList_New(0); Py_INCREF(body); result->body = body; + result->_base._kind = Module_kind; return (PyObject*)result; } @@ -268,6 +269,7 @@ body = PyList_New(0); Py_INCREF(body); result->body = body; + result->_base._kind = Interactive_kind; return (PyObject*)result; } @@ -351,6 +353,7 @@ return NULL; Py_INCREF(body); result->body = body; + result->_base._kind = Expression_kind; return (PyObject*)result; } @@ -427,6 +430,7 @@ body = PyList_New(0); Py_INCREF(body); result->body = body; + result->_base._kind = Suite_kind; return (PyObject*)result; } @@ -620,6 +624,7 @@ decorators = PyList_New(0); Py_INCREF(decorators); result->decorators = decorators; + result->_base._kind = FunctionDef_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -736,6 +741,7 @@ body = PyList_New(0); Py_INCREF(body); result->body = body; + result->_base._kind = ClassDef_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -843,6 +849,7 @@ } Py_INCREF(value); result->value = value; + result->_base._kind = Return_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -903,7 +910,7 @@ Return_validate(PyObject *_obj) { struct _Return *obj = (struct _Return*)_obj; - if (obj->value != Py_None) /* empty */; + if (obj->value == Py_None) /* empty */; else if (!expr_Check(obj->value)) { failed_check("value", "expr", obj->value); return -1; @@ -923,6 +930,7 @@ targets = PyList_New(0); Py_INCREF(targets); result->targets = targets; + result->_base._kind = Delete_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -1012,6 +1020,7 @@ result->targets = targets; Py_INCREF(value); result->value = value; + result->_base._kind = Assign_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -1106,6 +1115,7 @@ result->op = op; Py_INCREF(value); result->value = value; + result->_base._kind = AugAssign_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -1201,6 +1211,7 @@ result->values = values; Py_INCREF(nl); result->nl = nl; + result->_base._kind = Print_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -1264,7 +1275,7 @@ { struct _Print *obj = (struct _Print*)_obj; int i; - if (obj->dest != Py_None) /* empty */; + if (obj->dest == Py_None) /* empty */; else if (!expr_Check(obj->dest)) { failed_check("dest", "expr", obj->dest); return -1; @@ -1310,6 +1321,7 @@ orelse = PyList_New(0); Py_INCREF(orelse); result->orelse = orelse; + result->_base._kind = For_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -1426,6 +1438,7 @@ orelse = PyList_New(0); Py_INCREF(orelse); result->orelse = orelse; + result->_base._kind = While_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -1537,6 +1550,7 @@ orelse = PyList_New(0); Py_INCREF(orelse); 
result->orelse = orelse; + result->_base._kind = If_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -1656,6 +1670,7 @@ } Py_INCREF(tback); result->tback = tback; + result->_base._kind = Raise_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -1718,21 +1733,21 @@ Raise_validate(PyObject *_obj) { struct _Raise *obj = (struct _Raise*)_obj; - if (obj->type != Py_None) /* empty */; + if (obj->type == Py_None) /* empty */; else if (!expr_Check(obj->type)) { failed_check("type", "expr", obj->type); return -1; } else if (expr_validate(obj->type) < 0) return -1; - if (obj->inst != Py_None) /* empty */; + if (obj->inst == Py_None) /* empty */; else if (!expr_Check(obj->inst)) { failed_check("inst", "expr", obj->inst); return -1; } else if (expr_validate(obj->inst) < 0) return -1; - if (obj->tback != Py_None) /* empty */; + if (obj->tback == Py_None) /* empty */; else if (!expr_Check(obj->tback)) { failed_check("tback", "expr", obj->tback); return -1; @@ -1761,6 +1776,7 @@ orelse = PyList_New(0); Py_INCREF(orelse); result->orelse = orelse; + result->_base._kind = TryExcept_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -1880,6 +1896,7 @@ finalbody = PyList_New(0); Py_INCREF(finalbody); result->finalbody = finalbody; + result->_base._kind = TryFinally_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -1984,6 +2001,7 @@ } Py_INCREF(msg); result->msg = msg; + result->_base._kind = Assert_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -2049,7 +2067,7 @@ failed_check("test", "expr", obj->test); return -1; } - if (obj->msg != Py_None) /* empty */; + if (obj->msg == Py_None) /* empty */; else if (!expr_Check(obj->msg)) { failed_check("msg", "expr", obj->msg); return -1; @@ -2069,6 +2087,7 @@ names = PyList_New(0); Py_INCREF(names); result->names = names; + result->_base._kind = Import_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -2158,6 +2177,7 @@ names = PyList_New(0); Py_INCREF(names); result->names = names; + result->_base._kind = ImportFrom_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -2260,6 +2280,7 @@ } Py_INCREF(locals); result->locals = locals; + result->_base._kind = Exec_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -2326,14 +2347,14 @@ failed_check("body", "expr", obj->body); return -1; } - if (obj->globals != Py_None) /* empty */; + if (obj->globals == Py_None) /* empty */; else if (!expr_Check(obj->globals)) { failed_check("globals", "expr", obj->globals); return -1; } else if (expr_validate(obj->globals) < 0) return -1; - if (obj->locals != Py_None) /* empty */; + if (obj->locals == Py_None) /* empty */; else if (!expr_Check(obj->locals)) { failed_check("locals", "expr", obj->locals); return -1; @@ -2353,6 +2374,7 @@ names = PyList_New(0); Py_INCREF(names); result->names = names; + result->_base._kind = Global_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -2436,6 +2458,7 @@ return NULL; Py_INCREF(value); result->value = value; + result->_base._kind = Expr_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -2509,6 +2532,7 @@ struct _Pass *result = PyObject_New(struct _Pass, &Py_Pass_Type); if (result == NULL) return NULL; + result->_base._kind = Pass_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -2576,6 +2600,7 @@ struct _Break *result = PyObject_New(struct _Break, &Py_Break_Type); if (result == NULL) return NULL; + result->_base._kind = Break_kind; result->_base.lineno = 
lineno; return (PyObject*)result; } @@ -2643,6 +2668,7 @@ struct _Continue *result = PyObject_New(struct _Continue, &Py_Continue_Type); if (result == NULL) return NULL; + result->_base._kind = Continue_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -2807,6 +2833,7 @@ values = PyList_New(0); Py_INCREF(values); result->values = values; + result->_base._kind = BoolOp_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -2901,6 +2928,7 @@ result->op = op; Py_INCREF(right); result->right = right; + result->_base._kind = BinOp_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -2988,6 +3016,7 @@ result->op = op; Py_INCREF(operand); result->operand = operand; + result->_base._kind = UnaryOp_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -3070,6 +3099,7 @@ result->args = args; Py_INCREF(body); result->body = body; + result->_base._kind = Lambda_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -3156,6 +3186,7 @@ values = PyList_New(0); Py_INCREF(values); result->values = values; + result->_base._kind = Dict_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -3258,6 +3289,7 @@ generators = PyList_New(0); Py_INCREF(generators); result->generators = generators; + result->_base._kind = ListComp_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -3353,6 +3385,7 @@ generators = PyList_New(0); Py_INCREF(generators); result->generators = generators; + result->_base._kind = GeneratorExp_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -3448,6 +3481,7 @@ } Py_INCREF(value); result->value = value; + result->_base._kind = Yield_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -3508,7 +3542,7 @@ Yield_validate(PyObject *_obj) { struct _Yield *obj = (struct _Yield*)_obj; - if (obj->value != Py_None) /* empty */; + if (obj->value == Py_None) /* empty */; else if (!expr_Check(obj->value)) { failed_check("value", "expr", obj->value); return -1; @@ -3534,6 +3568,7 @@ comparators = PyList_New(0); Py_INCREF(comparators); result->comparators = comparators; + result->_base._kind = Compare_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -3658,6 +3693,7 @@ } Py_INCREF(kwargs); result->kwargs = kwargs; + result->_base._kind = Call_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -3752,14 +3788,14 @@ if (keyword_validate(PyList_GET_ITEM(obj->keywords, i)) < 0) return -1; } - if (obj->starargs != Py_None) /* empty */; + if (obj->starargs == Py_None) /* empty */; else if (!expr_Check(obj->starargs)) { failed_check("starargs", "expr", obj->starargs); return -1; } else if (expr_validate(obj->starargs) < 0) return -1; - if (obj->kwargs != Py_None) /* empty */; + if (obj->kwargs == Py_None) /* empty */; else if (!expr_Check(obj->kwargs)) { failed_check("kwargs", "expr", obj->kwargs); return -1; @@ -3777,6 +3813,7 @@ return NULL; Py_INCREF(value); result->value = value; + result->_base._kind = Repr_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -3852,6 +3889,7 @@ return NULL; Py_INCREF(n); result->n = n; + result->_base._kind = Num_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -3927,6 +3965,7 @@ return NULL; Py_INCREF(s); result->s = s; + result->_base._kind = Str_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -4006,6 +4045,7 @@ result->attr = attr; Py_INCREF(ctx); result->ctx = ctx; + result->_base._kind = Attribute_kind; result->_base.lineno = lineno; return 
(PyObject*)result; } @@ -4095,6 +4135,7 @@ result->slice = slice; Py_INCREF(ctx); result->ctx = ctx; + result->_base._kind = Subscript_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -4182,6 +4223,7 @@ result->id = id; Py_INCREF(ctx); result->ctx = ctx; + result->_base._kind = Name_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -4266,6 +4308,7 @@ result->elts = elts; Py_INCREF(ctx); result->ctx = ctx; + result->_base._kind = List_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -4359,6 +4402,7 @@ result->elts = elts; Py_INCREF(ctx); result->ctx = ctx; + result->_base._kind = Tuple_kind; result->_base.lineno = lineno; return (PyObject*)result; } @@ -4513,6 +4557,7 @@ struct _Load *result = PyObject_New(struct _Load, &Py_Load_Type); if (result == NULL) return NULL; + result->_base._kind = Load_kind; return (PyObject*)result; } @@ -4579,6 +4624,7 @@ struct _Store *result = PyObject_New(struct _Store, &Py_Store_Type); if (result == NULL) return NULL; + result->_base._kind = Store_kind; return (PyObject*)result; } @@ -4645,6 +4691,7 @@ struct _Del *result = PyObject_New(struct _Del, &Py_Del_Type); if (result == NULL) return NULL; + result->_base._kind = Del_kind; return (PyObject*)result; } @@ -4711,6 +4758,7 @@ struct _AugLoad *result = PyObject_New(struct _AugLoad, &Py_AugLoad_Type); if (result == NULL) return NULL; + result->_base._kind = AugLoad_kind; return (PyObject*)result; } @@ -4777,6 +4825,7 @@ struct _AugStore *result = PyObject_New(struct _AugStore, &Py_AugStore_Type); if (result == NULL) return NULL; + result->_base._kind = AugStore_kind; return (PyObject*)result; } @@ -4843,6 +4892,7 @@ struct _Param *result = PyObject_New(struct _Param, &Py_Param_Type); if (result == NULL) return NULL; + result->_base._kind = Param_kind; return (PyObject*)result; } @@ -4972,6 +5022,7 @@ struct _Ellipsis *result = PyObject_New(struct _Ellipsis, &Py_Ellipsis_Type); if (result == NULL) return NULL; + result->_base._kind = Ellipsis_kind; return (PyObject*)result; } @@ -5056,6 +5107,7 @@ } Py_INCREF(step); result->step = step; + result->_base._kind = Slice_kind; return (PyObject*)result; } @@ -5117,21 +5169,21 @@ Slice_validate(PyObject *_obj) { struct _Slice *obj = (struct _Slice*)_obj; - if (obj->lower != Py_None) /* empty */; + if (obj->lower == Py_None) /* empty */; else if (!expr_Check(obj->lower)) { failed_check("lower", "expr", obj->lower); return -1; } else if (expr_validate(obj->lower) < 0) return -1; - if (obj->upper != Py_None) /* empty */; + if (obj->upper == Py_None) /* empty */; else if (!expr_Check(obj->upper)) { failed_check("upper", "expr", obj->upper); return -1; } else if (expr_validate(obj->upper) < 0) return -1; - if (obj->step != Py_None) /* empty */; + if (obj->step == Py_None) /* empty */; else if (!expr_Check(obj->step)) { failed_check("step", "expr", obj->step); return -1; @@ -5151,6 +5203,7 @@ dims = PyList_New(0); Py_INCREF(dims); result->dims = dims; + result->_base._kind = ExtSlice_kind; return (PyObject*)result; } @@ -5235,6 +5288,7 @@ return NULL; Py_INCREF(value); result->value = value; + result->_base._kind = Index_kind; return (PyObject*)result; } @@ -5366,6 +5420,7 @@ struct _And *result = PyObject_New(struct _And, &Py_And_Type); if (result == NULL) return NULL; + result->_base._kind = And_kind; return (PyObject*)result; } @@ -5432,6 +5487,7 @@ struct _Or *result = PyObject_New(struct _Or, &Py_Or_Type); if (result == NULL) return NULL; + result->_base._kind = Or_kind; return (PyObject*)result; } @@ -5577,6 
+5633,7 @@ struct _Add *result = PyObject_New(struct _Add, &Py_Add_Type); if (result == NULL) return NULL; + result->_base._kind = Add_kind; return (PyObject*)result; } @@ -5643,6 +5700,7 @@ struct _Sub *result = PyObject_New(struct _Sub, &Py_Sub_Type); if (result == NULL) return NULL; + result->_base._kind = Sub_kind; return (PyObject*)result; } @@ -5709,6 +5767,7 @@ struct _Mult *result = PyObject_New(struct _Mult, &Py_Mult_Type); if (result == NULL) return NULL; + result->_base._kind = Mult_kind; return (PyObject*)result; } @@ -5775,6 +5834,7 @@ struct _Div *result = PyObject_New(struct _Div, &Py_Div_Type); if (result == NULL) return NULL; + result->_base._kind = Div_kind; return (PyObject*)result; } @@ -5841,6 +5901,7 @@ struct _Mod *result = PyObject_New(struct _Mod, &Py_Mod_Type); if (result == NULL) return NULL; + result->_base._kind = Mod_kind; return (PyObject*)result; } @@ -5907,6 +5968,7 @@ struct _Pow *result = PyObject_New(struct _Pow, &Py_Pow_Type); if (result == NULL) return NULL; + result->_base._kind = Pow_kind; return (PyObject*)result; } @@ -5973,6 +6035,7 @@ struct _LShift *result = PyObject_New(struct _LShift, &Py_LShift_Type); if (result == NULL) return NULL; + result->_base._kind = LShift_kind; return (PyObject*)result; } @@ -6039,6 +6102,7 @@ struct _RShift *result = PyObject_New(struct _RShift, &Py_RShift_Type); if (result == NULL) return NULL; + result->_base._kind = RShift_kind; return (PyObject*)result; } @@ -6105,6 +6169,7 @@ struct _BitOr *result = PyObject_New(struct _BitOr, &Py_BitOr_Type); if (result == NULL) return NULL; + result->_base._kind = BitOr_kind; return (PyObject*)result; } @@ -6171,6 +6236,7 @@ struct _BitXor *result = PyObject_New(struct _BitXor, &Py_BitXor_Type); if (result == NULL) return NULL; + result->_base._kind = BitXor_kind; return (PyObject*)result; } @@ -6237,6 +6303,7 @@ struct _BitAnd *result = PyObject_New(struct _BitAnd, &Py_BitAnd_Type); if (result == NULL) return NULL; + result->_base._kind = BitAnd_kind; return (PyObject*)result; } @@ -6303,6 +6370,7 @@ struct _FloorDiv *result = PyObject_New(struct _FloorDiv, &Py_FloorDiv_Type); if (result == NULL) return NULL; + result->_base._kind = FloorDiv_kind; return (PyObject*)result; } @@ -6432,6 +6500,7 @@ struct _Invert *result = PyObject_New(struct _Invert, &Py_Invert_Type); if (result == NULL) return NULL; + result->_base._kind = Invert_kind; return (PyObject*)result; } @@ -6498,6 +6567,7 @@ struct _Not *result = PyObject_New(struct _Not, &Py_Not_Type); if (result == NULL) return NULL; + result->_base._kind = Not_kind; return (PyObject*)result; } @@ -6564,6 +6634,7 @@ struct _UAdd *result = PyObject_New(struct _UAdd, &Py_UAdd_Type); if (result == NULL) return NULL; + result->_base._kind = UAdd_kind; return (PyObject*)result; } @@ -6630,6 +6701,7 @@ struct _USub *result = PyObject_New(struct _USub, &Py_USub_Type); if (result == NULL) return NULL; + result->_base._kind = USub_kind; return (PyObject*)result; } @@ -6771,6 +6843,7 @@ struct _Eq *result = PyObject_New(struct _Eq, &Py_Eq_Type); if (result == NULL) return NULL; + result->_base._kind = Eq_kind; return (PyObject*)result; } @@ -6837,6 +6910,7 @@ struct _NotEq *result = PyObject_New(struct _NotEq, &Py_NotEq_Type); if (result == NULL) return NULL; + result->_base._kind = NotEq_kind; return (PyObject*)result; } @@ -6903,6 +6977,7 @@ struct _Lt *result = PyObject_New(struct _Lt, &Py_Lt_Type); if (result == NULL) return NULL; + result->_base._kind = Lt_kind; return (PyObject*)result; } @@ -6969,6 +7044,7 @@ struct _LtE *result = 
PyObject_New(struct _LtE, &Py_LtE_Type); if (result == NULL) return NULL; + result->_base._kind = LtE_kind; return (PyObject*)result; } @@ -7035,6 +7111,7 @@ struct _Gt *result = PyObject_New(struct _Gt, &Py_Gt_Type); if (result == NULL) return NULL; + result->_base._kind = Gt_kind; return (PyObject*)result; } @@ -7101,6 +7178,7 @@ struct _GtE *result = PyObject_New(struct _GtE, &Py_GtE_Type); if (result == NULL) return NULL; + result->_base._kind = GtE_kind; return (PyObject*)result; } @@ -7167,6 +7245,7 @@ struct _Is *result = PyObject_New(struct _Is, &Py_Is_Type); if (result == NULL) return NULL; + result->_base._kind = Is_kind; return (PyObject*)result; } @@ -7233,6 +7312,7 @@ struct _IsNot *result = PyObject_New(struct _IsNot, &Py_IsNot_Type); if (result == NULL) return NULL; + result->_base._kind = IsNot_kind; return (PyObject*)result; } @@ -7299,6 +7379,7 @@ struct _In *result = PyObject_New(struct _In, &Py_In_Type); if (result == NULL) return NULL; + result->_base._kind = In_kind; return (PyObject*)result; } @@ -7365,6 +7446,7 @@ struct _NotIn *result = PyObject_New(struct _NotIn, &Py_NotIn_Type); if (result == NULL) return NULL; + result->_base._kind = NotIn_kind; return (PyObject*)result; } @@ -7608,14 +7690,14 @@ { struct _excepthandler *obj = (struct _excepthandler*)_obj; int i; - if (obj->type != Py_None) /* empty */; + if (obj->type == Py_None) /* empty */; else if (!expr_Check(obj->type)) { failed_check("type", "expr", obj->type); return -1; } else if (expr_validate(obj->type) < 0) return -1; - if (obj->name != Py_None) /* empty */; + if (obj->name == Py_None) /* empty */; else if (!expr_Check(obj->name)) { failed_check("name", "expr", obj->name); return -1; @@ -7739,12 +7821,12 @@ if (expr_validate(PyList_GET_ITEM(obj->args, i)) < 0) return -1; } - if (obj->vararg != Py_None) /* empty */; + if (obj->vararg == Py_None) /* empty */; else if (!PyString_Check(obj->vararg)) { failed_check("vararg", "identifier", obj->vararg); return -1; } - if (obj->kwarg != Py_None) /* empty */; + if (obj->kwarg == Py_None) /* empty */; else if (!PyString_Check(obj->kwarg)) { failed_check("kwarg", "identifier", obj->kwarg); return -1; @@ -7924,7 +8006,7 @@ failed_check("name", "identifier", obj->name); return -1; } - if (obj->asname != Py_None) /* empty */; + if (obj->asname == Py_None) /* empty */; else if (!PyString_Check(obj->asname)) { failed_check("asname", "identifier", obj->asname); return -1; Modified: python/branches/ast-objects/Python/ast.c ============================================================================== --- python/branches/ast-objects/Python/ast.c (original) +++ python/branches/ast-objects/Python/ast.c Sun Feb 5 02:54:29 2006 @@ -21,50 +21,27 @@ */ /* - Note: - - You should rarely need to use the asdl_seq_free() in this file. - If you use asdl_seq_free(), you will leak any objects held in the seq. - If there is an appropriate asdl_*_seq_free() function, use it. - If there isn't an asdl_*_seq_free() function for you, you will - need to loop over the data in the sequence and free it. + Instructions: - asdl_seq* seq; - int i; +It's all pretty mechanic. All functions returning PyObject* give you +a new reference, which you have to release, both in success and error +cases. All references which you receive as parameters and wish to +keep, you have to duplicate. + +Move all local variables of PyObject* to the beginning of the function, +and make sure they are all NULL-initialized. Arrange to have a single +exit point from each function, reachable through goto's. 
Make sure +the value actually returned is stored in a variable named "result". +Make sure all other PyObject* variables are DECREFed at the end of +the function. Pay particular attention to loops: if a variable is +overwritten in the beginning of the loop, it needs to be DECREF'ed +around the end of the loop. - for (i = 0; i < asdl_seq_LEN(seq); i++) - free_***(asdl_seq_GET(seq, i)); - asdl_seq_free(seq); / * ok * / - - Almost all of the ast functions return a seq of expr, so you should - use asdl_expr_seq_free(). The exception is ast_for_suite() which - returns a seq of stmt's, so use asdl_stmt_seq_free() to free it. - - If asdl_seq_free is appropriate, you should mark it with an ok comment. - - There are still many memory problems in this file even though - it runs clean in valgrind, save one problem that may have existed - before the AST. - - Any code which does something like this: - - return ASTconstruct(local, LINENO(n)); - - will leak memory. The problem is if ASTconstruct (e.g., TryFinally) - cannot allocate memory, local will be leaked. - - There was discussion on python-dev to replace the entire allocation - scheme in this file with arenas. Basically rather than allocate - memory in little blocks with malloc(), we allocate one big honking - hunk and deref everything into this block. We would still need - another block or technique to handle the PyObject*s. - - http://mail.python.org/pipermail/python-dev/2005-November/058138.html */ /* Data structure used internally */ struct compiling { - char *c_encoding; /* source encoding */ + char *c_encoding; /* source encoding */ }; static PyObject *seq_for_testlist(struct compiling *, const node *); @@ -83,7 +60,7 @@ static PyObject *parsestrplus(struct compiling *, const node *n); #ifndef LINENO -#define LINENO(n) ((n)->n_lineno) +#define LINENO(n) ((n)->n_lineno) #endif #define NEW_IDENTIFIER(n) PyString_InternFromString(STR(n)) @@ -104,7 +81,7 @@ { PyObject *u = Py_BuildValue("zi", errstr, LINENO(n)); if (!u) - return 0; + return 0; PyErr_SetObject(PyExc_SyntaxError, u); Py_DECREF(u); return 0; @@ -118,32 +95,32 @@ assert(PyErr_Occurred()); if (!PyErr_ExceptionMatches(PyExc_SyntaxError)) - return; + return; PyErr_Fetch(&type, &value, &tback); errstr = PyTuple_GetItem(value, 0); if (!errstr) - return; + return; Py_INCREF(errstr); lineno = PyInt_AsLong(PyTuple_GetItem(value, 1)); if (lineno == -1) - return; + return; Py_DECREF(value); loc = PyErr_ProgramText(filename, lineno); if (!loc) { - Py_INCREF(Py_None); - loc = Py_None; + Py_INCREF(Py_None); + loc = Py_None; } tmp = Py_BuildValue("(ziOO)", filename, lineno, Py_None, loc); Py_DECREF(loc); if (!tmp) - return; + return; value = Py_BuildValue("(OO)", errstr, tmp); Py_DECREF(errstr); Py_DECREF(tmp); if (!value) - return; + return; PyErr_Restore(type, value, tback); } @@ -234,9 +211,9 @@ switch (TYPE(n)) { case file_input: stmts = PyList_New(num_stmts(n)); - pos = 0; + pos = 0; if (!stmts) - goto error; + goto error; for (i = 0; i < NCH(n) - 1; i++) { ch = CHILD(n, i); if (TYPE(ch) == NEWLINE) @@ -260,39 +237,39 @@ } } } - assert(pos==PyList_GET_SIZE(stmts)); + assert(pos==PyList_GET_SIZE(stmts)); result = Module(stmts); - goto success; + goto success; case eval_input: { /* XXX Why not gen_for here? 
*/ testlist_ast = ast_for_testlist(&c, CHILD(n, 0)); if (!testlist_ast) goto error; result = Expression(testlist_ast); - goto success; + goto success; } case single_input: if (TYPE(CHILD(n, 0)) == NEWLINE) { stmts = PyList_New(1); if (!stmts) - goto error; - s = Pass(n->n_lineno); - if (!s) - goto error; - STEAL_ITEM(stmts, 0, s); - result = Interactive(stmts); - goto success; + goto error; + s = Pass(n->n_lineno); + if (!s) + goto error; + STEAL_ITEM(stmts, 0, s); + result = Interactive(stmts); + goto success; } else { n = CHILD(n, 0); num = num_stmts(n); stmts = PyList_New(num); if (!stmts) - goto error; + goto error; if (num == 1) { - s = ast_for_stmt(&c, n); - if (!s) - goto error; + s = ast_for_stmt(&c, n); + if (!s) + goto error; STEAL_ITEM(stmts, 0, s); } else { @@ -308,8 +285,8 @@ } } - result = Interactive(stmts); - goto success; + result = Interactive(stmts); + goto success; } default: goto error; @@ -318,6 +295,7 @@ Py_XDECREF(stmts); Py_XDECREF(s); Py_XDECREF(testlist_ast); + if (result && PyAST_Validate(result) == -1) return NULL; return result; error: Py_XDECREF(stmts); @@ -357,7 +335,7 @@ case PERCENT: return Mod(); default: - PyErr_BadInternalCall(); + PyErr_BadInternalCall(); return NULL; } } @@ -382,40 +360,40 @@ #define SET_CTX(x) Py_DECREF(x); Py_INCREF(ctx); x = ctx switch (e->_kind) { case Attribute_kind: - if (Store_Check(ctx) && - !strcmp(PyString_AS_STRING(Attribute_attr(e)), "None")) { - return ast_error(n, "assignment to None"); - } - SET_CTX(Attribute_ctx(e)); - break; + if (Store_Check(ctx) && + !strcmp(PyString_AS_STRING(Attribute_attr(e)), "None")) { + return ast_error(n, "assignment to None"); + } + SET_CTX(Attribute_ctx(e)); + break; case Subscript_kind: - SET_CTX(Subscript_ctx(e)); - break; + SET_CTX(Subscript_ctx(e)); + break; case Name_kind: - if (Store_Check(ctx) && - !strcmp(PyString_AS_STRING(Name_id(e)), "None")) { - return ast_error(n, "assignment to None"); - } - SET_CTX(Name_ctx(e)); - break; + if (Store_Check(ctx) && + !strcmp(PyString_AS_STRING(Name_id(e)), "None")) { + return ast_error(n, "assignment to None"); + } + SET_CTX(Name_ctx(e)); + break; case List_kind: - SET_CTX(List_ctx(e)); - s = List_elts(e); - break; + SET_CTX(List_ctx(e)); + s = List_elts(e); + break; case Tuple_kind: if (PyList_GET_SIZE(Tuple_elts(e)) == 0) return ast_error(n, "can't assign to ()"); - SET_CTX(Tuple_ctx(e)); - s = Tuple_elts(e); - break; + SET_CTX(Tuple_ctx(e)); + s = Tuple_elts(e); + break; case Call_kind: - if (Store_Check(ctx)) - return ast_error(n, "can't assign to function call"); - else if (Del_Check(ctx)) - return ast_error(n, "can't delete function call"); - else - return ast_error(n, "unexpected operation on function call"); - break; + if (Store_Check(ctx)) + return ast_error(n, "can't assign to function call"); + else if (Del_Check(ctx)) + return ast_error(n, "can't delete function call"); + else + return ast_error(n, "unexpected operation on function call"); + break; case BinOp_kind: return ast_error(n, "can't assign to operator"); case GeneratorExp_kind: @@ -423,25 +401,25 @@ "not possible"); case Num_kind: case Str_kind: - return ast_error(n, "can't assign to literal"); + return ast_error(n, "can't assign to literal"); default: { - char buf[300]; - PyOS_snprintf(buf, sizeof(buf), - "unexpected expression in assignment %d (line %d)", - e->_kind, e->lineno); - return ast_error(n, buf); + char buf[300]; + PyOS_snprintf(buf, sizeof(buf), + "unexpected expression in assignment %d (line %d)", + e->_kind, e->lineno); + return ast_error(n, buf); } } /* If 
the LHS is a list or tuple, we need to set the assignment context for all the tuple elements. */ if (s) { - int i; + int i; - for (i = 0; i < PyList_GET_SIZE(s); i++) { - if (!set_context(PyList_GET_ITEM(s, i), ctx, n)) - return 0; - } + for (i = 0; i < PyList_GET_SIZE(s); i++) { + if (!set_context(PyList_GET_ITEM(s, i), ctx, n)) + return 0; + } } return 1; #undef SET_CTX @@ -493,13 +471,13 @@ */ REQ(n, comp_op); if (NCH(n) == 1) { - n = CHILD(n, 0); - switch (TYPE(n)) { + n = CHILD(n, 0); + switch (TYPE(n)) { case LESS: return Lt(); case GREATER: return Gt(); - case EQEQUAL: /* == */ + case EQEQUAL: /* == */ return Eq(); case LESSEQUAL: return LtE(); @@ -516,11 +494,11 @@ PyErr_Format(PyExc_SystemError, "invalid comp_op: %s", STR(n)); return 0; - } + } } else if (NCH(n) == 2) { - /* handle "not in" and "is not" */ - switch (TYPE(CHILD(n, 0))) { + /* handle "not in" and "is not" */ + switch (TYPE(CHILD(n, 0))) { case NAME: if (strcmp(STR(CHILD(n, 1)), "in") == 0) return NotIn(); @@ -530,7 +508,7 @@ PyErr_Format(PyExc_SystemError, "invalid comp_op: %s %s", STR(CHILD(n, 0)), STR(CHILD(n, 1))); return 0; - } + } } PyErr_Format(PyExc_SystemError, "invalid comp_op: has %d children", NCH(n)); @@ -546,10 +524,10 @@ PyObject *expression; int i; assert(TYPE(n) == testlist - || TYPE(n) == listmaker - || TYPE(n) == testlist_gexp - || TYPE(n) == testlist_safe - ); + || TYPE(n) == listmaker + || TYPE(n) == testlist_gexp + || TYPE(n) == testlist_safe + ); seq = PyList_New((NCH(n) + 1) / 2); if (!seq) @@ -560,10 +538,10 @@ expression = ast_for_expr(c, CHILD(n, i)); if (!expression) { - goto error; + goto error; } - assert(i / 2 < seq->size); + /* assert(i / 2 < seq->size); */ // ?? PyList_SET_ITEM(seq, i / 2, expression); } result = seq; @@ -583,26 +561,26 @@ PyObject *store = NULL; PyObject *arg = NULL; if (!args) - goto error; + goto error; store = Store(); if (!store) - goto error; + goto error; REQ(n, fplist); for (i = 0; i < len; i++) { const node *child = CHILD(CHILD(n, 2*i), 0); if (TYPE(child) == NAME) { - if (!strcmp(STR(child), "None")) { - ast_error(child, "assignment to None"); - goto error; - } + if (!strcmp(STR(child), "None")) { + ast_error(child, "assignment to None"); + goto error; + } arg = Name(NEW_IDENTIFIER(child), store, LINENO(child)); - } + } else arg = compiler_complex_args(CHILD(CHILD(n, 2*i), 1)); - if (!set_context(arg, store, n)) - goto error; + if (!set_context(arg, store, n)) + goto error; PyList_SET_ITEM(args, i, arg); } @@ -612,6 +590,7 @@ Py_XDECREF(args); Py_XDECREF(arg); Py_XDECREF(store); + if (result && PyAST_Validate(result) == -1) return NULL; return result; } @@ -641,24 +620,24 @@ node *ch; if (TYPE(n) == parameters) { - if (NCH(n) == 2) /* () as argument list */ - return arguments(NULL, NULL, NULL, NULL); - n = CHILD(n, 1); + if (NCH(n) == 2) /* () as argument list */ + return arguments(NULL, NULL, NULL, NULL); + n = CHILD(n, 1); } REQ(n, varargslist); /* first count the number of normal args & defaults */ for (i = 0; i < NCH(n); i++) { - ch = CHILD(n, i); - if (TYPE(ch) == fpdef) { - n_args++; - } - if (TYPE(ch) == EQUAL) - n_defaults++; + ch = CHILD(n, i); + if (TYPE(ch) == fpdef) { + n_args++; + } + if (TYPE(ch) == EQUAL) + n_defaults++; } args = (n_args ? PyList_New(n_args) : NULL); if (!args && n_args) - goto error; + goto error; defaults = (n_defaults ? 
PyList_New(n_defaults) : NULL); if (!defaults && n_defaults) goto error; @@ -668,67 +647,67 @@ */ i = 0; while (i < NCH(n)) { - ch = CHILD(n, i); - switch (TYPE(ch)) { + ch = CHILD(n, i); + switch (TYPE(ch)) { case fpdef: /* XXX Need to worry about checking if TYPE(CHILD(n, i+1)) is anything other than EQUAL or a comma? */ /* XXX Should NCH(n) check be made a separate check? */ if (i + 1 < NCH(n) && TYPE(CHILD(n, i + 1)) == EQUAL) { - e = ast_for_expr(c, CHILD(n, i + 2)); - if (!e) - goto error; + e = ast_for_expr(c, CHILD(n, i + 2)); + if (!e) + goto error; STEAL_ITEM(defaults, defno++, e); i += 2; - found_default = 1; + found_default = 1; + } + else if (found_default) { + ast_error(n, + "non-default argument follows default argument"); + goto error; } - else if (found_default) { - ast_error(n, - "non-default argument follows default argument"); - goto error; - } if (NCH(ch) == 3) { - e = compiler_complex_args(CHILD(ch, 1)); - if (!e) - goto error; + e = compiler_complex_args(CHILD(ch, 1)); + if (!e) + goto error; STEAL_ITEM(args, argno++, e); - } + } else if (TYPE(CHILD(ch, 0)) == NAME) { - if (!strcmp(STR(CHILD(ch, 0)), "None")) { - ast_error(CHILD(ch, 0), "assignment to None"); - goto error; - } - id = NEW_IDENTIFIER(CHILD(ch, 0)); - if (!id) goto error; - if (!param) param = Param(); - if (!param) goto error; + if (!strcmp(STR(CHILD(ch, 0)), "None")) { + ast_error(CHILD(ch, 0), "assignment to None"); + goto error; + } + id = NEW_IDENTIFIER(CHILD(ch, 0)); + if (!id) goto error; + if (!param) param = Param(); + if (!param) goto error; e = Name(id, param, LINENO(ch)); if (!e) goto error; STEAL_ITEM(args, argno++, e); - - } + + } i += 2; /* the name and the comma */ break; case STAR: - if (!strcmp(STR(CHILD(n, i+1)), "None")) { - ast_error(CHILD(n, i+1), "assignment to None"); - goto error; - } + if (!strcmp(STR(CHILD(n, i+1)), "None")) { + ast_error(CHILD(n, i+1), "assignment to None"); + goto error; + } vararg = NEW_IDENTIFIER(CHILD(n, i+1)); - if (!vararg) - goto error; + if (!vararg) + goto error; i += 3; break; case DOUBLESTAR: - if (!strcmp(STR(CHILD(n, i+1)), "None")) { - ast_error(CHILD(n, i+1), "assignment to None"); - goto error; - } + if (!strcmp(STR(CHILD(n, i+1)), "None")) { + ast_error(CHILD(n, i+1), "assignment to None"); + goto error; + } kwarg = NEW_IDENTIFIER(CHILD(n, i+1)); - if (!kwarg) - goto error; + if (!kwarg) + goto error; i += 3; break; default: @@ -736,7 +715,7 @@ "unexpected node in varargslist: %d @ %d", TYPE(ch), i); goto error; - } + } } result = arguments(args, vararg, kwarg, defaults); @@ -748,6 +727,7 @@ Py_XDECREF(e); Py_XDECREF(id); Py_XDECREF(param); + if (result && PyAST_Validate(result) == -1) return NULL; return result; } @@ -768,21 +748,21 @@ goto error; load = Load(); if (!load) - goto error; + goto error; e = Name(id, load, LINENO(n)); if (!result) - goto error; + goto error; id = NULL; for (i = 2; i < NCH(n); i+=2) { id = NEW_IDENTIFIER(CHILD(n, i)); - if (!id) - goto error; - attrib = Attribute(e, id, load, LINENO(CHILD(n, i))); - if (!attrib) - goto error; - e = attrib; - attrib = NULL; + if (!id) + goto error; + attrib = Attribute(e, id, load, LINENO(CHILD(n, i))); + if (!attrib) + goto error; + e = attrib; + attrib = NULL; } result = e; e = NULL; @@ -792,7 +772,8 @@ Py_XDECREF(e); Py_XDECREF(attrib); Py_XDECREF(load); - return NULL; + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* @@ -806,28 +787,28 @@ REQ(n, decorator); if ((NCH(n) < 3 && NCH(n) != 5 && NCH(n) != 6) - || TYPE(CHILD(n, 0)) != AT 
|| TYPE(RCHILD(n, -1)) != NEWLINE) { - ast_error(n, "Invalid decorator node"); - goto error; + || TYPE(CHILD(n, 0)) != AT || TYPE(RCHILD(n, -1)) != NEWLINE) { + ast_error(n, "Invalid decorator node"); + goto error; } name_expr = ast_for_dotted_name(c, CHILD(n, 1)); if (!name_expr) - goto error; - + goto error; + if (NCH(n) == 3) { /* No arguments */ - d = name_expr; - name_expr = NULL; + d = name_expr; + name_expr = NULL; } else if (NCH(n) == 5) { /* Call with no arguments */ - d = Call(name_expr, NULL, NULL, NULL, NULL, LINENO(n)); - if (!d) - goto error; + d = Call(name_expr, NULL, NULL, NULL, NULL, LINENO(n)); + if (!d) + goto error; } else { - d = ast_for_call(c, CHILD(n, 3), name_expr); - if (!d) - goto error; + d = ast_for_call(c, CHILD(n, 3), name_expr); + if (!d) + goto error; } result = d; @@ -836,6 +817,7 @@ error: Py_XDECREF(name_expr); Py_XDECREF(d); + if (result && PyAST_Validate(result) == -1) return NULL; return result; } @@ -852,12 +834,12 @@ decorator_seq = PyList_New(NCH(n)); if (!decorator_seq) goto error; - + for (i = 0; i < NCH(n); i++) { - d = ast_for_decorator(c, CHILD(n, i)); - if (!d) - goto error; - STEAL_ITEM(decorator_seq, i, d); + d = ast_for_decorator(c, CHILD(n, i)); + if (!d) + goto error; + STEAL_ITEM(decorator_seq, i, d); } result = decorator_seq; decorator_seq = NULL; @@ -881,28 +863,28 @@ REQ(n, funcdef); if (NCH(n) == 6) { /* decorators are present */ - decorator_seq = ast_for_decorators(c, CHILD(n, 0)); - if (!decorator_seq) - goto error; - name_i = 2; + decorator_seq = ast_for_decorators(c, CHILD(n, 0)); + if (!decorator_seq) + goto error; + name_i = 2; } else { - name_i = 1; + name_i = 1; } name = NEW_IDENTIFIER(CHILD(n, name_i)); if (!name) - goto error; + goto error; else if (!strcmp(STR(CHILD(n, name_i)), "None")) { - ast_error(CHILD(n, name_i), "assignment to None"); - goto error; + ast_error(CHILD(n, name_i), "assignment to None"); + goto error; } args = ast_for_arguments(c, CHILD(n, name_i + 1)); if (!args) - goto error; + goto error; body = ast_for_suite(c, CHILD(n, name_i + 3)); if (!body) - goto error; + goto error; result = FunctionDef(name, args, body, decorator_seq, LINENO(n)); @@ -911,6 +893,7 @@ Py_XDECREF(decorator_seq); Py_XDECREF(args); Py_XDECREF(name); + if (result && PyAST_Validate(result) == -1) return NULL; return result; } @@ -943,6 +926,7 @@ error: Py_XDECREF(args); Py_XDECREF(expression); + if (result && PyAST_Validate(result) == -1) return NULL; return result; } @@ -961,14 +945,14 @@ n_fors++; REQ(ch, list_for); if (NCH(ch) == 5) - ch = CHILD(ch, 4); + ch = CHILD(ch, 4); else - return n_fors; + return n_fors; count_list_iter: REQ(ch, list_iter); ch = CHILD(ch, 0); if (TYPE(ch) == list_for) - goto count_list_for; + goto count_list_for; else if (TYPE(ch) == list_if) { if (NCH(ch) == 3) { ch = CHILD(ch, 2); @@ -997,12 +981,12 @@ count_list_iter: REQ(n, list_iter); if (TYPE(CHILD(n, 0)) == list_for) - return n_ifs; + return n_ifs; n = CHILD(n, 0); REQ(n, list_if); n_ifs++; if (NCH(n) == 2) - return n_ifs; + return n_ifs; n = CHILD(n, 2); goto count_list_iter; } @@ -1016,7 +1000,7 @@ list_if: 'if' test [list_iter] testlist_safe: test [(',' test)+ [',']] */ - PyObject *result; + PyObject *result = NULL; PyObject *elt = NULL; PyObject *listcomps = NULL; PyObject *t = NULL; @@ -1041,77 +1025,77 @@ listcomps = PyList_New(n_fors); if (!listcomps) - goto error; + goto error; ch = CHILD(n, 1); for (i = 0; i < n_fors; i++) { - /* each variable should be NULL each round */ - assert(lc == NULL); - assert(t == NULL); - assert(expression == 
NULL); - assert(ifs == NULL); - - REQ(ch, list_for); - - if (!store) store = Store(); - if (!store) goto error; - t = ast_for_exprlist(c, CHILD(ch, 1), store); + /* each variable should be NULL each round */ + assert(lc == NULL); + assert(t == NULL); + assert(expression == NULL); + assert(ifs == NULL); + + REQ(ch, list_for); + + if (!store) store = Store(); + if (!store) goto error; + t = ast_for_exprlist(c, CHILD(ch, 1), store); if (!t) - goto error; + goto error; expression = ast_for_testlist(c, CHILD(ch, 3)); if (!expression) - goto error; + goto error; - if (PyList_GET_SIZE(t) == 1) { - lc = comprehension(PyList_GET_ITEM(t, 0), expression, NULL); - if (!lc) - goto error; - } - else { - tmp = Tuple(t, store, LINENO(ch)); - if (!t) - goto error; - lc = comprehension(tmp, expression, NULL); - if (!lc) - goto error; - Py_RELEASE(tmp); - } - Py_RELEASE(t); - Py_RELEASE(expression); + if (PyList_GET_SIZE(t) == 1) { + lc = comprehension(PyList_GET_ITEM(t, 0), expression, NULL); + if (!lc) + goto error; + } + else { + tmp = Tuple(t, store, LINENO(ch)); + if (!t) + goto error; + lc = comprehension(tmp, expression, NULL); + if (!lc) + goto error; + Py_RELEASE(tmp); + } + Py_RELEASE(t); + Py_RELEASE(expression); - if (NCH(ch) == 5) { - int j, n_ifs; + if (NCH(ch) == 5) { + int j, n_ifs; - ch = CHILD(ch, 4); - n_ifs = count_list_ifs(ch); + ch = CHILD(ch, 4); + n_ifs = count_list_ifs(ch); if (n_ifs == -1) - goto error; + goto error; + + ifs = PyList_New(n_ifs); + if (!ifs) + goto error; - ifs = PyList_New(n_ifs); - if (!ifs) - goto error; - - for (j = 0; j < n_ifs; j++) { - REQ(ch, list_iter); - - ch = CHILD(ch, 0); - REQ(ch, list_if); - - t = ast_for_expr(c, CHILD(ch, 1)); - if (!t) - goto error; - STEAL_ITEM(ifs, j, t); - if (NCH(ch) == 3) - ch = CHILD(ch, 2); - } - /* on exit, must guarantee that ch is a list_for */ - if (TYPE(ch) == list_iter) - ch = CHILD(ch, 0); - Py_DECREF(comprehension_ifs(lc)); - comprehension_ifs(lc) = ifs; - ifs = NULL; - } - STEAL_ITEM(listcomps, i, lc); + for (j = 0; j < n_ifs; j++) { + REQ(ch, list_iter); + + ch = CHILD(ch, 0); + REQ(ch, list_if); + + t = ast_for_expr(c, CHILD(ch, 1)); + if (!t) + goto error; + STEAL_ITEM(ifs, j, t); + if (NCH(ch) == 3) + ch = CHILD(ch, 2); + } + /* on exit, must guarantee that ch is a list_for */ + if (TYPE(ch) == list_iter) + ch = CHILD(ch, 0); + Py_DECREF(comprehension_ifs(lc)); + comprehension_ifs(lc) = ifs; + ifs = NULL; + } + STEAL_ITEM(listcomps, i, lc); } result = ListComp(elt, listcomps, LINENO(n)); @@ -1124,6 +1108,7 @@ Py_XDECREF(store); Py_XDECREF(ifs); Py_XDECREF(tmp); + if (result && PyAST_Validate(result) == -1) return NULL; return result; } @@ -1136,35 +1121,35 @@ static int count_gen_fors(const node *n) { - int n_fors = 0; - node *ch = CHILD(n, 1); + int n_fors = 0; + node *ch = CHILD(n, 1); count_gen_for: - n_fors++; - REQ(ch, gen_for); - if (NCH(ch) == 5) - ch = CHILD(ch, 4); - else - return n_fors; + n_fors++; + REQ(ch, gen_for); + if (NCH(ch) == 5) + ch = CHILD(ch, 4); + else + return n_fors; count_gen_iter: - REQ(ch, gen_iter); - ch = CHILD(ch, 0); - if (TYPE(ch) == gen_for) - goto count_gen_for; - else if (TYPE(ch) == gen_if) { - if (NCH(ch) == 3) { - ch = CHILD(ch, 2); - goto count_gen_iter; - } - else - return n_fors; - } - else { - /* Should never be reached */ - PyErr_SetString(PyExc_SystemError, - "logic error in count_gen_fors"); - return -1; - } + REQ(ch, gen_iter); + ch = CHILD(ch, 0); + if (TYPE(ch) == gen_for) + goto count_gen_for; + else if (TYPE(ch) == gen_if) { + if (NCH(ch) == 3) { + ch = CHILD(ch, 
2); + goto count_gen_iter; + } + else + return n_fors; + } + else { + /* Should never be reached */ + PyErr_SetString(PyExc_SystemError, + "logic error in count_gen_fors"); + return -1; + } } /* Count the number of 'if' statements in a generator expression. @@ -1175,26 +1160,26 @@ static int count_gen_ifs(const node *n) { - int n_ifs = 0; + int n_ifs = 0; - while (1) { - REQ(n, gen_iter); - if (TYPE(CHILD(n, 0)) == gen_for) - return n_ifs; - n = CHILD(n, 0); - REQ(n, gen_if); - n_ifs++; - if (NCH(n) == 2) - return n_ifs; - n = CHILD(n, 2); - } + while (1) { + REQ(n, gen_iter); + if (TYPE(CHILD(n, 0)) == gen_for) + return n_ifs; + n = CHILD(n, 0); + REQ(n, gen_if); + n_ifs++; + if (NCH(n) == 2) + return n_ifs; + n = CHILD(n, 2); + } } static PyObject* ast_for_genexp(struct compiling *c, const node *n) { /* testlist_gexp: test ( gen_for | (',' test)* [','] ) - argument: [test '='] test [gen_for] # Really [keyword '='] test */ + argument: [test '='] test [gen_for] # Really [keyword '='] test */ PyObject *result = NULL; PyObject *elt = NULL; PyObject *genexps = NULL; @@ -1211,7 +1196,7 @@ elt = ast_for_expr(c, CHILD(n, 0)); if (!elt) - goto error; + goto error; n_fors = count_gen_fors(n); if (n_fors == -1) @@ -1219,45 +1204,45 @@ genexps = PyList_New(n_fors); if (!genexps) - goto error; + goto error; store = Store(); if (!store) - goto error; + goto error; ch = CHILD(n, 1); for (i = 0; i < n_fors; i++) { assert(ge == NULL); - assert(t == NULL); - assert(expression == NULL); + assert(t == NULL); + assert(expression == NULL); REQ(ch, gen_for); t = ast_for_exprlist(c, CHILD(ch, 1), store); if (!t) - goto error; + goto error; expression = ast_for_expr(c, CHILD(ch, 3)); if (!expression) - goto error; + goto error; if (PyList_GET_SIZE(t) == 1) { ge = comprehension(PyList_GET_ITEM(t, 0), expression, NULL); - } + } else { - tmp = Tuple(t, store, LINENO(ch)); - if (!tmp) - goto error; + tmp = Tuple(t, store, LINENO(ch)); + if (!tmp) + goto error; ge = comprehension(tmp, expression, NULL); - if (!ge) - goto error; - Py_RELEASE(tmp); - } - - if (!ge) - goto error; - Py_RELEASE(t); - Py_RELEASE(expression); + if (!ge) + goto error; + Py_RELEASE(tmp); + } + + if (!ge) + goto error; + Py_RELEASE(t); + Py_RELEASE(expression); if (NCH(ch) == 5) { int j, n_ifs; @@ -1266,11 +1251,11 @@ ch = CHILD(ch, 4); n_ifs = count_gen_ifs(ch); if (n_ifs == -1) - goto error; + goto error; ifs = PyList_New(n_ifs); if (!ifs) - goto error; + goto error; for (j = 0; j < n_ifs; j++) { REQ(ch, gen_iter); @@ -1279,7 +1264,7 @@ expression = ast_for_expr(c, CHILD(ch, 1)); if (!expression) - goto error; + goto error; STEAL_ITEM(ifs, j, expression); if (NCH(ch) == 3) ch = CHILD(ch, 2); @@ -1287,9 +1272,9 @@ /* on exit, must guarantee that ch is a gen_for */ if (TYPE(ch) == gen_iter) ch = CHILD(ch, 0); - Py_DECREF(comprehension_ifs(ge)); - Py_INCREF(ifs); - comprehension_ifs(ge) = ifs; + Py_DECREF(comprehension_ifs(ge)); + Py_INCREF(ifs); + comprehension_ifs(ge) = ifs; } STEAL_ITEM(genexps, i, ge); } @@ -1301,6 +1286,7 @@ Py_XDECREF(ge); Py_XDECREF(t); Py_XDECREF(expression); + if (result && PyAST_Validate(result) == -1) return NULL; return result; } @@ -1319,119 +1305,120 @@ switch (TYPE(ch)) { case NAME: { - /* All names start in Load context, but may later be - changed. */ - PyObject *tmp = Load(); - if (!tmp) - goto error; - result = Name(NEW_IDENTIFIER(ch), tmp, LINENO(n)); - break; + /* All names start in Load context, but may later be + changed. 
*/ + PyObject *tmp = Load(); + if (!tmp) + goto error; + result = Name(NEW_IDENTIFIER(ch), tmp, LINENO(n)); + break; } case STRING: { - PyObject *str = parsestrplus(c, n); - - if (!str) - goto error; - - result = Str(str, LINENO(n)); - break; + PyObject *str = parsestrplus(c, n); + + if (!str) + goto error; + + result = Str(str, LINENO(n)); + break; } case NUMBER: { - tmp = parsenumber(STR(ch)); - - if (!tmp) - goto error; - - result = Num(tmp, LINENO(n)); - break; + tmp = parsenumber(STR(ch)); + + if (!tmp) + goto error; + + result = Num(tmp, LINENO(n)); + break; } case LPAR: {/* some parenthesized expressions */ - ch = CHILD(n, 1); - - if (TYPE(ch) == RPAR) { - tmp = Load(); - if (!tmp) - goto error; - result = Tuple(NULL, tmp, LINENO(n)); - } - else if (TYPE(ch) == yield_expr) - result = ast_for_expr(c, ch); - else if ((NCH(ch) > 1) && (TYPE(CHILD(ch, 1)) == gen_for)) - result = ast_for_genexp(c, ch); - else - result = ast_for_testlist_gexp(c, ch); - break; + ch = CHILD(n, 1); + + if (TYPE(ch) == RPAR) { + tmp = Load(); + if (!tmp) + goto error; + result = Tuple(NULL, tmp, LINENO(n)); + } + else if (TYPE(ch) == yield_expr) + result = ast_for_expr(c, ch); + else if ((NCH(ch) > 1) && (TYPE(CHILD(ch, 1)) == gen_for)) + result = ast_for_genexp(c, ch); + else + result = ast_for_testlist_gexp(c, ch); + break; } case LSQB: /* list (or list comprehension) */ - tmp = Load(); - if (!tmp) - goto error; - ch = CHILD(n, 1); - - if (TYPE(ch) == RSQB) - result = List(NULL, tmp, LINENO(n)); - else { - REQ(ch, listmaker); - if (NCH(ch) == 1 || TYPE(CHILD(ch, 1)) == COMMA) { - elts = seq_for_testlist(c, ch); - - if (!elts) - return NULL; - - result = List(elts, tmp, LINENO(n)); - } - else - result = ast_for_listcomp(c, ch); - } - break; + tmp = Load(); + if (!tmp) + goto error; + ch = CHILD(n, 1); + + if (TYPE(ch) == RSQB) + result = List(NULL, tmp, LINENO(n)); + else { + REQ(ch, listmaker); + if (NCH(ch) == 1 || TYPE(CHILD(ch, 1)) == COMMA) { + elts = seq_for_testlist(c, ch); + + if (!elts) + return NULL; + + result = List(elts, tmp, LINENO(n)); + } + else + result = ast_for_listcomp(c, ch); + } + break; case LBRACE: { - /* dictmaker: test ':' test (',' test ':' test)* [','] */ - int i, size; - - ch = CHILD(n, 1); - size = (NCH(ch) + 1) / 4; /* +1 in case no trailing comma */ - keys = PyList_New(size); - if (!keys) - goto error; - - values = PyList_New(size); - if (!values) - goto error; - - for (i = 0; i < NCH(ch); i += 4) { - - tmp = ast_for_expr(c, CHILD(ch, i)); - if (!tmp) - goto error; - - STEAL_ITEM(keys, i / 4, tmp); - - tmp = ast_for_expr(c, CHILD(ch, i + 2)); - if (!tmp) - goto error; - - STEAL_ITEM(values, i / 4, tmp); - } - result = Dict(keys, values, LINENO(n)); - break; + /* dictmaker: test ':' test (',' test ':' test)* [','] */ + int i, size; + + ch = CHILD(n, 1); + size = (NCH(ch) + 1) / 4; /* +1 in case no trailing comma */ + keys = PyList_New(size); + if (!keys) + goto error; + + values = PyList_New(size); + if (!values) + goto error; + + for (i = 0; i < NCH(ch); i += 4) { + + tmp = ast_for_expr(c, CHILD(ch, i)); + if (!tmp) + goto error; + + STEAL_ITEM(keys, i / 4, tmp); + + tmp = ast_for_expr(c, CHILD(ch, i + 2)); + if (!tmp) + goto error; + + STEAL_ITEM(values, i / 4, tmp); + } + result = Dict(keys, values, LINENO(n)); + break; } case BACKQUOTE: { /* repr */ - tmp = ast_for_testlist(c, CHILD(n, 1)); - - if (!tmp) - goto error; - - result = Repr(tmp, LINENO(n)); - break; + tmp = ast_for_testlist(c, CHILD(n, 1)); + + if (!tmp) + goto error; + + result = Repr(tmp, LINENO(n)); + break; 
} default: - PyErr_Format(PyExc_SystemError, "unhandled atom %d", TYPE(ch)); + PyErr_Format(PyExc_SystemError, "unhandled atom %d", TYPE(ch)); } error: Py_XDECREF(tmp); Py_XDECREF(elts); Py_XDECREF(keys); Py_XDECREF(values); + if (result && PyAST_Validate(result) == -1) return NULL; return result; } @@ -1452,7 +1439,7 @@ */ ch = CHILD(n, 0); if (TYPE(ch) == DOT) - return Ellipsis(); + return Ellipsis(); if (NCH(n) == 1 && TYPE(ch) == test) { /* 'step' variable hold no significance in terms of being used over @@ -1461,32 +1448,32 @@ if (!step) goto error; - result = Index(step); - goto success; + result = Index(step); + goto success; } if (TYPE(ch) == test) { - lower = ast_for_expr(c, ch); + lower = ast_for_expr(c, ch); if (!lower) goto error; } /* If there's an upper bound it's in the second or third position. */ if (TYPE(ch) == COLON) { - if (NCH(n) > 1) { - node *n2 = CHILD(n, 1); + if (NCH(n) > 1) { + node *n2 = CHILD(n, 1); - if (TYPE(n2) == test) { - upper = ast_for_expr(c, n2); + if (TYPE(n2) == test) { + upper = ast_for_expr(c, n2); if (!upper) goto error; } - } + } } else if (NCH(n) > 2) { - node *n2 = CHILD(n, 2); + node *n2 = CHILD(n, 2); - if (TYPE(n2) == test) { - upper = ast_for_expr(c, n2); + if (TYPE(n2) == test) { + upper = ast_for_expr(c, n2); if (!upper) goto error; } @@ -1494,14 +1481,14 @@ ch = CHILD(n, NCH(n) - 1); if (TYPE(ch) == sliceop) { - if (NCH(ch) == 1) + if (NCH(ch) == 1) /* XXX: If only 1 child, then should just be a colon. Should we just skip assigning and just get to the return? */ - ch = CHILD(ch, 0); - else - ch = CHILD(ch, 1); - if (TYPE(ch) == test) { - step = ast_for_expr(c, ch); + ch = CHILD(ch, 0); + else + ch = CHILD(ch, 1); + if (TYPE(ch) == test) { + step = ast_for_expr(c, ch); if (!step) goto error; } @@ -1513,24 +1500,25 @@ Py_XDECREF(lower); Py_XDECREF(upper); Py_XDECREF(step); + if (result && PyAST_Validate(result) == -1) return NULL; return result; } static PyObject* ast_for_binop(struct compiling *c, const node *n) { - /* Must account for a sequence of expressions. - How should A op B op C by represented? - BinOp(BinOp(A, op, B), op, C). - */ - - PyObject *result = NULL; - int i, nops; - PyObject *expr1 = NULL; - PyObject *expr2 = NULL; - PyObject *tmp_result = NULL; - PyObject *tmp1 = NULL; - PyObject *tmp2 = NULL; + /* Must account for a sequence of expressions. + How should A op B op C by represented? + BinOp(BinOp(A, op, B), op, C). 
+ */ + + PyObject *result = NULL; + int i, nops; + PyObject *expr1 = NULL; + PyObject *expr2 = NULL; + PyObject *tmp_result = NULL; + PyObject *tmp1 = NULL; + PyObject *tmp2 = NULL; PyObject *operator = NULL; expr1 = ast_for_expr(c, CHILD(n, 0)); @@ -1545,15 +1533,15 @@ if (!operator) return NULL; - tmp_result = BinOp(expr1, operator, expr2, LINENO(n)); - if (!tmp_result) + tmp_result = BinOp(expr1, operator, expr2, LINENO(n)); + if (!tmp_result) return NULL; - nops = (NCH(n) - 1) / 2; - for (i = 1; i < nops; i++) { - const node* next_oper = CHILD(n, i * 2 + 1); + nops = (NCH(n) - 1) / 2; + for (i = 1; i < nops; i++) { + const node* next_oper = CHILD(n, i * 2 + 1); - operator = get_operator(next_oper); + operator = get_operator(next_oper); if (!operator) goto error; @@ -1562,29 +1550,31 @@ goto error; tmp2 = BinOp(tmp_result, operator, tmp1, - LINENO(next_oper)); - if (!tmp_result) - goto error; - tmp_result = tmp2; - tmp2 = NULL; - Py_RELEASE(tmp1); - } - result = tmp_result; - tmp_result = NULL; - error: - Py_XDECREF(expr1); - Py_XDECREF(expr2); - Py_XDECREF(operator); - Py_XDECREF(tmp_result); - Py_XDECREF(tmp1); - Py_XDECREF(tmp2); - return NULL; + LINENO(next_oper)); + if (!tmp_result) + goto error; + tmp_result = tmp2; + tmp2 = NULL; + Py_RELEASE(tmp1); + } + result = tmp_result; + tmp_result = NULL; + error: + Py_XDECREF(expr1); + Py_XDECREF(expr2); + Py_XDECREF(operator); + Py_XDECREF(tmp_result); + Py_XDECREF(tmp1); + Py_XDECREF(tmp2); + return result; } static PyObject* -ast_for_trailer(struct compiling *c, const node *n, expr_ty left_expr) +ast_for_trailer(struct compiling *c, const node *n, PyObject *left_expr) { /* trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME */ + PyObject *slc = NULL; + PyObject *slices = NULL; PyObject *result = NULL; PyObject *e = NULL; REQ(n, trailer); @@ -1598,45 +1588,44 @@ REQ(CHILD(n, 2), RSQB); n = CHILD(n, 1); if (NCH(n) <= 2) { - slice_ty slc = ast_for_slice(c, CHILD(n, 0)); + slc = ast_for_slice(c, CHILD(n, 0)); if (!slc) - return NULL; - e = Subscript(left_expr, slc, Load, LINENO(n)); + goto error; + e = Subscript(left_expr, slc, Load(), LINENO(n)); if (!e) { - free_slice(slc); - return NULL; + goto error; } } else { int j; - slice_ty slc; - asdl_seq *slices = asdl_seq_new((NCH(n) + 1) / 2); + slices = PyList_New((NCH(n) + 1) / 2); if (!slices) - return NULL; + goto error; for (j = 0; j < NCH(n); j += 2) { slc = ast_for_slice(c, CHILD(n, j)); if (!slc) { - for (j = j / 2; j >= 0; j--) - free_slice(asdl_seq_GET(slices, j)); - asdl_seq_free(slices); /* ok */ - return NULL; + goto error; } - asdl_seq_SET(slices, j / 2, slc); + STEAL_ITEM(slices, j / 2, slc); } - e = Subscript(left_expr, ExtSlice(slices), Load, LINENO(n)); + e = Subscript(left_expr, ExtSlice(slices), Load(), LINENO(n)); if (!e) { - for (j = 0; j < asdl_seq_LEN(slices); j++) - free_slice(asdl_seq_GET(slices, j)); - asdl_seq_free(slices); /* ok */ - return NULL; + goto error; } } } else { assert(TYPE(CHILD(n, 0)) == DOT); - e = Attribute(left_expr, NEW_IDENTIFIER(CHILD(n, 1)), Load, LINENO(n)); + e = Attribute(left_expr, NEW_IDENTIFIER(CHILD(n, 1)), Load(), LINENO(n)); } - return e; + result = e; + e = NULL; + error: + Py_XDECREF(slc); + Py_XDECREF(slices); + Py_XDECREF(e); + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* @@ -1644,41 +1633,49 @@ { /* power: atom trailer* ('**' factor)* */ + PyObject *f = NULL; PyObject *result = NULL; + PyObject *tmp = NULL; + PyObject *e = NULL; int i; - PyObject *e = NULL; PyObject *tmp = 
NULL; REQ(n, power); e = ast_for_atom(c, CHILD(n, 0)); if (!e) - return NULL; + goto error; if (NCH(n) == 1) - return e; + return e; // BAD BAD for (i = 1; i < NCH(n); i++) { node *ch = CHILD(n, i); if (TYPE(ch) != trailer) break; tmp = ast_for_trailer(c, ch, e); if (!tmp) { - free_expr(e); - return NULL; + goto error; } + // Py_XDECREF(e); // UNCOMMENT e = tmp; + tmp = NULL; } if (TYPE(CHILD(n, NCH(n) - 1)) == factor) { - expr_ty f = ast_for_expr(c, CHILD(n, NCH(n) - 1)); + f = ast_for_expr(c, CHILD(n, NCH(n) - 1)); if (!f) { - free_expr(e); - return NULL; + goto error; } - tmp = BinOp(e, Pow, f, LINENO(n)); + tmp = BinOp(e, Pow(), f, LINENO(n)); if (!tmp) { - free_expr(f); - free_expr(e); - return NULL; + goto error; } e = tmp; + tmp = NULL; } - return e; + result = e; + e = NULL; + error: + Py_XDECREF(f); + Py_XDECREF(e); + Py_XDECREF(tmp); + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } /* Do not name a variable 'expr'! Will cause a compile error. @@ -1704,33 +1701,39 @@ PyObject *result = NULL; PyObject *seq = NULL; + PyObject *e = NULL; + PyObject *expression = NULL; + PyObject *ops = NULL; + PyObject *cmps = NULL; + PyObject *exp = NULL; + PyObject *operator = NULL; int i; loop: switch (TYPE(n)) { case test: if (TYPE(CHILD(n, 0)) == lambdef) - return ast_for_lambdef(c, CHILD(n, 0)); + result = ast_for_lambdef(c, CHILD(n, 0)); /* Fall through to and_test */ case and_test: if (NCH(n) == 1) { n = CHILD(n, 0); goto loop; } - seq = asdl_seq_new((NCH(n) + 1) / 2); + seq = PyList_New((NCH(n) + 1) / 2); if (!seq) - return NULL; + goto error; for (i = 0; i < NCH(n); i += 2) { - expr_ty e = ast_for_expr(c, CHILD(n, i)); + e = ast_for_expr(c, CHILD(n, i)); if (!e) - return NULL; - asdl_seq_SET(seq, i / 2, e); + goto error; + STEAL_ITEM(seq, i / 2, e); } if (!strcmp(STR(CHILD(n, 1)), "and")) - return BoolOp(And, seq, LINENO(n)); + result = BoolOp(And(), seq, LINENO(n)); else { assert(!strcmp(STR(CHILD(n, 1)), "or")); - return BoolOp(Or, seq, LINENO(n)); + result = BoolOp(Or(), seq, LINENO(n)); } break; case not_test: @@ -1739,12 +1742,13 @@ goto loop; } else { - expr_ty expression = ast_for_expr(c, CHILD(n, 1)); + expression = ast_for_expr(c, CHILD(n, 1)); if (!expression) - return NULL; + goto error; - return UnaryOp(Not, expression, LINENO(n)); + result = UnaryOp(Not(), expression, LINENO(n)); } + break; case comparison: if (NCH(n) == 1) { n = CHILD(n, 0); @@ -1753,43 +1757,33 @@ else { PyObject *expression = NULL; PyObject *ops = NULL; PyObject *cmps = NULL; - ops = asdl_seq_new(NCH(n) / 2); + ops = PyList_New(NCH(n) / 2); if (!ops) - return NULL; - cmps = asdl_seq_new(NCH(n) / 2); + goto error; + cmps = PyList_New(NCH(n) / 2); if (!cmps) { - asdl_seq_free(ops); /* ok */ - return NULL; + goto error; } for (i = 1; i < NCH(n); i += 2) { - /* XXX cmpop_ty is just an enum */ - cmpop_ty operator; - operator = ast_for_comp_op(CHILD(n, i)); if (!operator) { - asdl_expr_seq_free(ops); - asdl_expr_seq_free(cmps); - return NULL; - } + goto error; + } expression = ast_for_expr(c, CHILD(n, i + 1)); if (!expression) { - asdl_expr_seq_free(ops); - asdl_expr_seq_free(cmps); - return NULL; - } + goto error; + } - asdl_seq_SET(ops, i / 2, (void *)operator); - asdl_seq_SET(cmps, i / 2, expression); + STEAL_ITEM(ops, i / 2, operator); + STEAL_ITEM(cmps, i / 2, expression); } expression = ast_for_expr(c, CHILD(n, 0)); if (!expression) { - asdl_expr_seq_free(ops); - asdl_expr_seq_free(cmps); - return NULL; - } + goto error; + } - return Compare(expression, ops, cmps, LINENO(n)); + 
result = Compare(expression, ops, cmps, LINENO(n)); } break; @@ -1807,19 +1801,18 @@ n = CHILD(n, 0); goto loop; } - return ast_for_binop(c, n); + result = ast_for_binop(c, n); + break; case yield_expr: { - expr_ty exp = NULL; - if (NCH(n) == 2) { - exp = ast_for_testlist(c, CHILD(n, 1)); - if (!exp) - return NULL; - } - return Yield(exp, LINENO(n)); - } + if (NCH(n) == 2) { + exp = ast_for_testlist(c, CHILD(n, 1)); + if (!exp) + goto error; + } + result = Yield(exp, LINENO(n)); + break; + } case factor: { - PyObject *expression = NULL; - if (NCH(n) == 1) { n = CHILD(n, 0); goto loop; @@ -1827,44 +1820,60 @@ expression = ast_for_expr(c, CHILD(n, 1)); if (!expression) - return NULL; + goto error; switch (TYPE(CHILD(n, 0))) { case PLUS: - return UnaryOp(UAdd, expression, LINENO(n)); + result = UnaryOp(UAdd(), expression, LINENO(n)); + break; case MINUS: - return UnaryOp(USub, expression, LINENO(n)); + result = UnaryOp(USub(), expression, LINENO(n)); + break; case TILDE: - return UnaryOp(Invert, expression, LINENO(n)); + result = UnaryOp(Invert(), expression, LINENO(n)); + break; } PyErr_Format(PyExc_SystemError, "unhandled factor: %d", - TYPE(CHILD(n, 0))); + TYPE(CHILD(n, 0))); break; } case power: - return ast_for_power(c, n); + result = ast_for_power(c, n); + break; default: PyErr_Format(PyExc_SystemError, "unhandled expr: %d", TYPE(n)); - return NULL; + goto error; } - /* should never get here */ - return NULL; + error: + Py_XDECREF(seq); + Py_XDECREF(e); + Py_XDECREF(expression); + Py_XDECREF(ops); + Py_XDECREF(cmps); + Py_XDECREF(exp); + Py_XDECREF(operator); + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* -ast_for_call(struct compiling *c, const node *n, expr_ty func) +ast_for_call(struct compiling *c, const node *n, PyObject *func) { /* arglist: (argument ',')* (argument [',']| '*' test [',' '**' test] | '**' test) - argument: [test '='] test [gen_for] # Really [keyword '='] test + argument: [test '='] test [gen_for] # Really [keyword '='] test */ PyObject *result = NULL; int i, nargs, nkeywords, ngens; - asdl_seq *args = NULL; - asdl_seq *keywords = NULL; - expr_ty vararg = NULL, kwarg = NULL; + PyObject *args = NULL; + PyObject *keywords = NULL; + PyObject *vararg = NULL; + PyObject *kwarg = NULL; + PyObject *e = NULL; + PyObject *kw = NULL; + PyObject *key = NULL; REQ(n, arglist); @@ -1872,57 +1881,53 @@ nkeywords = 0; ngens = 0; for (i = 0; i < NCH(n); i++) { - node *ch = CHILD(n, i); - if (TYPE(ch) == argument) { - if (NCH(ch) == 1) - nargs++; - else if (TYPE(CHILD(ch, 1)) == gen_for) - ngens++; + node *ch = CHILD(n, i); + if (TYPE(ch) == argument) { + if (NCH(ch) == 1) + nargs++; + else if (TYPE(CHILD(ch, 1)) == gen_for) + ngens++; else - nkeywords++; - } + nkeywords++; + } } if (ngens > 1 || (ngens && (nargs || nkeywords))) { ast_error(n, "Generator expression must be parenthesised " - "if not sole argument"); - return NULL; + "if not sole argument"); + goto error; } if (nargs + nkeywords + ngens > 255) { ast_error(n, "more than 255 arguments"); - return NULL; + goto error; } - args = asdl_seq_new(nargs + ngens); + args = PyList_New(nargs + ngens); if (!args) goto error; - keywords = asdl_seq_new(nkeywords); + keywords = PyList_New(nkeywords); if (!keywords) goto error; nargs = 0; nkeywords = 0; for (i = 0; i < NCH(n); i++) { - node *ch = CHILD(n, i); - if (TYPE(ch) == argument) { - PyObject *e = NULL; - if (NCH(ch) == 1) { - e = ast_for_expr(c, CHILD(ch, 0)); + node *ch = CHILD(n, i); + if (TYPE(ch) == argument) { + if (NCH(ch) 
== 1) { + e = ast_for_expr(c, CHILD(ch, 0)); if (!e) goto error; - asdl_seq_SET(args, nargs++, e); - } - else if (TYPE(CHILD(ch, 1)) == gen_for) { - e = ast_for_genexp(c, ch); + STEAL_ITEM(args, nargs++, e); + } + else if (TYPE(CHILD(ch, 1)) == gen_for) { + e = ast_for_genexp(c, ch); if (!e) goto error; - asdl_seq_SET(args, nargs++, e); + STEAL_ITEM(args, nargs++, e); } - else { - keyword_ty kw; - identifier key; - - /* CHILD(ch, 0) is test, but must be an identifier? */ - e = ast_for_expr(c, CHILD(ch, 0)); + else { + /* CHILD(ch, 0) is test, but must be an identifier? */ + e = ast_for_expr(c, CHILD(ch, 0)); if (!e) goto error; /* f(lambda x: x[0] = 3) ends up getting parsed with @@ -1930,47 +1935,38 @@ * SF bug 132313 points out that complaining about a keyword * then is very confusing. */ - if (e->kind == Lambda_kind) { + if (expr_kind(e) == Lambda_kind) { ast_error(CHILD(ch, 0), "lambda cannot contain assignment"); goto error; - } else if (e->kind != Name_kind) { + } else if (expr_kind(e) != Name_kind) { ast_error(CHILD(ch, 0), "keyword can't be an expression"); goto error; } - key = Name_id(e); - free(e); /* XXX: is free correct here? */ - e = ast_for_expr(c, CHILD(ch, 2)); + key = Name_id(e); + e = ast_for_expr(c, CHILD(ch, 2)); if (!e) goto error; - kw = keyword(key, e); + kw = keyword(key, e); if (!kw) goto error; - asdl_seq_SET(keywords, nkeywords++, kw); - } - } - else if (TYPE(ch) == STAR) { - vararg = ast_for_expr(c, CHILD(n, i+1)); - i++; - } - else if (TYPE(ch) == DOUBLESTAR) { - kwarg = ast_for_expr(c, CHILD(n, i+1)); - i++; - } - } - - return Call(func, args, keywords, vararg, kwarg, LINENO(n)); - - error: - free_expr(vararg); - free_expr(kwarg); - if (args) - asdl_expr_seq_free(args); - if (keywords) { - for (i = 0; i < asdl_seq_LEN(keywords); i++) - free_keyword(asdl_seq_GET(keywords, i)); - asdl_seq_free(keywords); /* ok */ - } - return NULL; + STEAL_ITEM(keywords, nkeywords++, kw); + } + } + else if (TYPE(ch) == STAR) { + vararg = ast_for_expr(c, CHILD(n, i+1)); + i++; + } + else if (TYPE(ch) == DOUBLESTAR) { + kwarg = ast_for_expr(c, CHILD(n, i+1)); + i++; + } + } + + result = Call(func, args, keywords, vararg, kwarg, LINENO(n)); + + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* @@ -1992,13 +1988,16 @@ TYPE(n) == testlist1); } if (NCH(n) == 1) - return ast_for_expr(c, CHILD(n, 0)); + result = ast_for_expr(c, CHILD(n, 0)); else { - asdl_seq *tmp = seq_for_testlist(c, n); + PyObject *tmp = seq_for_testlist(c, n); if (!tmp) - return NULL; - return Tuple(tmp, Load, LINENO(n)); + goto error; + result = Tuple(tmp, Load(), LINENO(n)); } + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* @@ -2009,147 +2008,146 @@ PyObject *result = NULL; assert(TYPE(n) == testlist_gexp || TYPE(n) == argument); if (NCH(n) > 1 && TYPE(CHILD(n, 1)) == gen_for) { - return ast_for_genexp(c, n); + result = ast_for_genexp(c, n); } else - return ast_for_testlist(c, n); + result = ast_for_testlist(c, n); + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } /* like ast_for_testlist() but returns a sequence */ -static asdl_seq* +static PyObject* ast_for_class_bases(struct compiling *c, const node* n) { /* testlist: test (',' test)* [','] */ PyObject *result = NULL; + PyObject *base = NULL; + PyObject *bases = NULL; assert(NCH(n) > 0); REQ(n, testlist); if (NCH(n) == 1) { - PyObject *base = NULL; - asdl_seq *bases = asdl_seq_new(1); + bases = PyList_New(1); if (!bases) - 
return NULL; + goto error; base = ast_for_expr(c, CHILD(n, 0)); if (!base) { - asdl_seq_free(bases); /* ok */ - return NULL; + goto error; } - asdl_seq_SET(bases, 0, base); - return bases; + STEAL_ITEM(bases, 0, base); + result = bases; } else { - return seq_for_testlist(c, n); + result = seq_for_testlist(c, n); } + error: + return result; } static PyObject* ast_for_expr_stmt(struct compiling *c, const node *n) { PyObject *result = NULL; + PyObject *e = NULL; + PyObject *expr1 = NULL; + PyObject *expr2 = NULL; + PyObject *operator = NULL; + PyObject *targets = NULL; + PyObject *expression = NULL; REQ(n, expr_stmt); /* expr_stmt: testlist (augassign (yield_expr|testlist) | ('=' (yield_expr|testlist))*) testlist: test (',' test)* [','] augassign: '+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' - | '<<=' | '>>=' | '**=' | '//=' + | '<<=' | '>>=' | '**=' | '//=' test: ... here starts the operator precendence dance */ if (NCH(n) == 1) { - expr_ty e = ast_for_testlist(c, CHILD(n, 0)); + e = ast_for_testlist(c, CHILD(n, 0)); if (!e) - return NULL; + goto error; - return Expr(e, LINENO(n)); + result = Expr(e, LINENO(n)); } else if (TYPE(CHILD(n, 1)) == augassign) { - PyObject *expr1 = NULL; PyObject *expr2 = NULL; - operator_ty operator; - node *ch = CHILD(n, 0); - - if (TYPE(ch) == testlist) - expr1 = ast_for_testlist(c, ch); - else - expr1 = Yield(ast_for_expr(c, CHILD(ch, 0)), LINENO(ch)); + node *ch = CHILD(n, 0); + + if (TYPE(ch) == testlist) + expr1 = ast_for_testlist(c, ch); + else + expr1 = Yield(ast_for_expr(c, CHILD(ch, 0)), LINENO(ch)); if (!expr1) - return NULL; - if (expr1->kind == GeneratorExp_kind) { - free_expr(expr1); - ast_error(ch, "augmented assignment to generator " - "expression not possible"); - return NULL; - } - if (expr1->kind == Name_kind) { - char *var_name = PyString_AS_STRING(expr1->v.Name.id); - if (var_name[0] == 'N' && !strcmp(var_name, "None")) { - free_expr(expr1); - ast_error(ch, "assignment to None"); - return NULL; - } - } - - ch = CHILD(n, 2); - if (TYPE(ch) == testlist) - expr2 = ast_for_testlist(c, ch); - else - expr2 = Yield(ast_for_expr(c, ch), LINENO(ch)); + goto error; + if (expr_kind(expr1) == GeneratorExp_kind) { + ast_error(ch, "augmented assignment to generator " + "expression not possible"); + goto error; + } + if (expr_kind(expr1) == Name_kind) { + char *var_name = PyString_AS_STRING(Name_id(expr1)); + if (var_name[0] == 'N' && !strcmp(var_name, "None")) { + ast_error(ch, "assignment to None"); + goto error; + } + } + + ch = CHILD(n, 2); + if (TYPE(ch) == testlist) + expr2 = ast_for_testlist(c, ch); + else + expr2 = Yield(ast_for_expr(c, ch), LINENO(ch)); if (!expr2) { - free_expr(expr1); - return NULL; + goto error; } operator = ast_for_augassign(CHILD(n, 1)); if (!operator) { - free_expr(expr1); - free_expr(expr2); - return NULL; + goto error; } - return AugAssign(expr1, operator, expr2, LINENO(n)); + result = AugAssign(expr1, operator, expr2, LINENO(n)); } else { - int i; - PyObject *targets = NULL; - node *value; - PyObject *expression = NULL; - - /* a normal assignment */ - REQ(CHILD(n, 1), EQUAL); - targets = asdl_seq_new(NCH(n) / 2); - if (!targets) - return NULL; - for (i = 0; i < NCH(n) - 2; i += 2) { - PyObject *e = NULL; - node *ch = CHILD(n, i); - if (TYPE(ch) == yield_expr) { - ast_error(ch, "assignment to yield expression not possible"); - goto error; - } - e = ast_for_testlist(c, ch); - - /* set context to assign */ - if (!e) - goto error; - - if (!set_context(e, Store, CHILD(n, i))) { - free_expr(e); - goto error; - } - - 
asdl_seq_SET(targets, i / 2, e); - } - value = CHILD(n, NCH(n) - 1); - if (TYPE(value) == testlist) - expression = ast_for_testlist(c, value); - else - expression = ast_for_expr(c, value); - if (!expression) - goto error; - return Assign(targets, expression, LINENO(n)); - error: - asdl_expr_seq_free(targets); + int i; + node *value; + + /* a normal assignment */ + REQ(CHILD(n, 1), EQUAL); + targets = PyList_New(NCH(n) / 2); + if (!targets) + goto error; + for (i = 0; i < NCH(n) - 2; i += 2) { + node *ch = CHILD(n, i); + if (TYPE(ch) == yield_expr) { + ast_error(ch, "assignment to yield expression not possible"); + goto error; + } + e = ast_for_testlist(c, ch); + + /* set context to assign */ + if (!e) + goto error; + + if (!set_context(e, Store(), CHILD(n, i))) { + goto error; + } + + STEAL_ITEM(targets, i / 2, e); + } + value = CHILD(n, NCH(n) - 1); + if (TYPE(value) == testlist) + expression = ast_for_testlist(c, value); + else + expression = ast_for_expr(c, value); + if (!expression) + goto error; + result = Assign(targets, expression, LINENO(n)); } - return NULL; + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* @@ -2159,62 +2157,65 @@ | '>>' test [ (',' test)+ [','] ] ) */ PyObject *result = NULL; - expr_ty dest = NULL, expression; + PyObject *dest = NULL, *expression; PyObject *seq = NULL; - int nl; + PyObject *nl = NULL; int i, start = 1; + int pos = 0; REQ(n, print_stmt); if (NCH(n) >= 2 && TYPE(CHILD(n, 1)) == RIGHTSHIFT) { - dest = ast_for_expr(c, CHILD(n, 2)); + dest = ast_for_expr(c, CHILD(n, 2)); if (!dest) - return NULL; - start = 4; + goto error; + start = 4; } - seq = asdl_seq_new((NCH(n) + 1 - start) / 2); + seq = PyList_New((NCH(n) + 1 - start) / 2); if (!seq) - return NULL; + goto error; for (i = start; i < NCH(n); i += 2) { expression = ast_for_expr(c, CHILD(n, i)); if (!expression) { - free_expr(dest); - asdl_expr_seq_free(seq); - return NULL; - } + goto error; + } - asdl_seq_APPEND(seq, expression); + STEAL_ITEM(seq, pos++, expression); } - nl = (TYPE(CHILD(n, NCH(n) - 1)) == COMMA) ? false : true; - return Print(dest, seq, nl, LINENO(n)); + assert(pos==PyList_GET_SIZE(seq)); + nl = (TYPE(CHILD(n, NCH(n) - 1)) == COMMA) ? Py_False : Py_True; + result = Print(dest, seq, nl, LINENO(n)); + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } -static asdl_seq * +static PyObject * ast_for_exprlist(struct compiling *c, const node *n, PyObject* context) { + PyObject *result = NULL; PyObject *seq = NULL; int i; PyObject *e = NULL; REQ(n, exprlist); - seq = asdl_seq_new((NCH(n) + 1) / 2); + seq = PyList_New((NCH(n) + 1) / 2); if (!seq) - return NULL; + goto error; for (i = 0; i < NCH(n); i += 2) { - e = ast_for_expr(c, CHILD(n, i)); - if (!e) - goto error; - asdl_seq_SET(seq, i / 2, e); - if (context) { - if (!set_context(e, context, CHILD(n, i))) - goto error; + e = ast_for_expr(c, CHILD(n, i)); + if (!e) + goto error; + PyList_SET_ITEM(seq,i/2,e); // e=NULL; // ??? 
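/* Note (descriptive comment, not part of the committed patch): PyList_SET_ITEM
   steals the reference, so seq owns e from this point on.  e cannot simply be
   cleared to NULL here because set_context() on the next lines still reads it;
   and once the list owns the reference, the error path must not release e a
   second time. */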
+ if (context) { + if (!set_context(e, context, CHILD(n, i))) + goto error; } } - return seq; + result = seq; error: - asdl_expr_seq_free(seq); - return NULL; + return result; } static PyObject* @@ -2226,10 +2227,13 @@ /* del_stmt: 'del' exprlist */ REQ(n, del_stmt); - expr_list = ast_for_exprlist(c, CHILD(n, 1), Del); + expr_list = ast_for_exprlist(c, CHILD(n, 1), Del()); if (!expr_list) - return NULL; - return Delete(expr_list, LINENO(n)); + goto error; + result = Delete(expr_list, LINENO(n)); + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* @@ -2246,71 +2250,80 @@ raise_stmt: 'raise' [test [',' test [',' test]]] */ PyObject *result = NULL; + PyObject *exp = NULL; + PyObject *expression = NULL; + PyObject *expr1 = NULL; + PyObject *expr2 = NULL; + PyObject *expr3 = NULL; node *ch; REQ(n, flow_stmt); ch = CHILD(n, 0); switch (TYPE(ch)) { case break_stmt: - return Break(LINENO(n)); + result = Break(LINENO(n)); + break; case continue_stmt: - return Continue(LINENO(n)); + result = Continue(LINENO(n)); + break; case yield_stmt: { /* will reduce to yield_expr */ - expr_ty exp = ast_for_expr(c, CHILD(ch, 0)); - if (!exp) - return NULL; - return Expr(exp, LINENO(n)); + exp = ast_for_expr(c, CHILD(ch, 0)); + if (!exp) + goto error; + result = Expr(exp, LINENO(n)); + break; } case return_stmt: if (NCH(ch) == 1) - return Return(NULL, LINENO(n)); + result = Return(NULL, LINENO(n)); else { - expr_ty expression = ast_for_testlist(c, CHILD(ch, 1)); + expression = ast_for_testlist(c, CHILD(ch, 1)); if (!expression) - return NULL; - return Return(expression, LINENO(n)); + goto error; + result = Return(expression, LINENO(n)); } + break; case raise_stmt: if (NCH(ch) == 1) - return Raise(NULL, NULL, NULL, LINENO(n)); + result = Raise(NULL, NULL, NULL, LINENO(n)); else if (NCH(ch) == 2) { - expr_ty expression = ast_for_expr(c, CHILD(ch, 1)); + expression = ast_for_expr(c, CHILD(ch, 1)); if (!expression) - return NULL; - return Raise(expression, NULL, NULL, LINENO(n)); + goto error; + result = Raise(expression, NULL, NULL, LINENO(n)); } else if (NCH(ch) == 4) { - PyObject *expr1 = NULL; PyObject *expr2 = NULL; - expr1 = ast_for_expr(c, CHILD(ch, 1)); if (!expr1) - return NULL; + goto error; expr2 = ast_for_expr(c, CHILD(ch, 3)); if (!expr2) - return NULL; + goto error; - return Raise(expr1, expr2, NULL, LINENO(n)); + result = Raise(expr1, expr2, NULL, LINENO(n)); } else if (NCH(ch) == 6) { - PyObject *expr1 = NULL; PyObject *expr2 = NULL; PyObject *expr3 = NULL; - expr1 = ast_for_expr(c, CHILD(ch, 1)); if (!expr1) - return NULL; + goto error; expr2 = ast_for_expr(c, CHILD(ch, 3)); if (!expr2) - return NULL; + goto error; expr3 = ast_for_expr(c, CHILD(ch, 5)); if (!expr3) - return NULL; + goto error; - return Raise(expr1, expr2, expr3, LINENO(n)); + result = Raise(expr1, expr2, expr3, LINENO(n)); } + break; default: PyErr_Format(PyExc_SystemError, "unexpected flow_stmt: %d", TYPE(ch)); - return NULL; + goto error; } + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* @@ -2322,15 +2335,14 @@ dotted_name: NAME ('.' 
NAME)* */ PyObject *result = NULL; + PyObject *a = NULL; loop: switch (TYPE(n)) { case import_as_name: if (NCH(n) == 3) - return alias(NEW_IDENTIFIER(CHILD(n, 0)), - NEW_IDENTIFIER(CHILD(n, 2))); + result = alias(NEW_IDENTIFIER(CHILD(n, 0)), NEW_IDENTIFIER(CHILD(n, 2))); else - return alias(NEW_IDENTIFIER(CHILD(n, 0)), - NULL); + result = alias(NEW_IDENTIFIER(CHILD(n, 0)), NULL); break; case dotted_as_name: if (NCH(n) == 1) { @@ -2338,15 +2350,15 @@ goto loop; } else { - alias_ty a = alias_for_import_name(CHILD(n, 0)); - assert(!a->asname); - a->asname = NEW_IDENTIFIER(CHILD(n, 2)); - return a; + a = alias_for_import_name(CHILD(n, 0)); + assert(!alias_asname(a)); + alias_asname(a) = NEW_IDENTIFIER(CHILD(n, 2)); + result = a; } break; case dotted_name: if (NCH(n) == 1) - return alias(NEW_IDENTIFIER(CHILD(n, 0)), NULL); + result = alias(NEW_IDENTIFIER(CHILD(n, 0)), NULL); else { /* Create a string of the form "a.b.c" */ int i, len; @@ -2360,10 +2372,10 @@ len--; /* the last name doesn't have a dot */ str = PyString_FromStringAndSize(NULL, len); if (!str) - return NULL; + goto error; s = PyString_AS_STRING(str); if (!s) - return NULL; + goto error; for (i = 0; i < NCH(n); i += 2) { char *sch = STR(CHILD(n, i)); strcpy(s, STR(CHILD(n, i))); @@ -2373,17 +2385,20 @@ --s; *s = '\0'; PyString_InternInPlace(&str); - return alias(str, NULL); + result = alias(str, NULL); } break; case STAR: - return alias(PyString_InternFromString("*"), NULL); + result = alias(PyString_InternFromString("*"), NULL); + break; default: PyErr_Format(PyExc_SystemError, "unexpected import name: %d", TYPE(n)); - return NULL; + goto error; } - return NULL; + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* @@ -2399,33 +2414,35 @@ PyObject *result = NULL; int i; PyObject *aliases = NULL; + PyObject *mod = NULL; + PyObject *import = NULL; + PyObject *import_alias = NULL; REQ(n, import_stmt); n = CHILD(n, 0); if (STR(CHILD(n, 0))[0] == 'i') { /* import */ n = CHILD(n, 1); - REQ(n, dotted_as_names); - aliases = asdl_seq_new((NCH(n) + 1) / 2); - if (!aliases) - return NULL; - for (i = 0; i < NCH(n); i += 2) { - alias_ty import_alias = alias_for_import_name(CHILD(n, i)); + REQ(n, dotted_as_names); + aliases = PyList_New((NCH(n) + 1) / 2); + if (!aliases) + goto error; + for (i = 0; i < NCH(n); i += 2) { + import_alias = alias_for_import_name(CHILD(n, i)); if (!import_alias) { - asdl_alias_seq_free(aliases); - return NULL; + goto error; } - asdl_seq_SET(aliases, i / 2, import_alias); + STEAL_ITEM(aliases, i / 2, import_alias); } - return Import(aliases, LINENO(n)); + result = Import(aliases, LINENO(n)); } else if (STR(CHILD(n, 0))[0] == 'f') { /* from */ - stmt_ty import; int n_children; const char *from_modules; - int lineno = LINENO(n); - alias_ty mod = alias_for_import_name(CHILD(n, 1)); - if (!mod) - return NULL; + int lineno = LINENO(n); + int pos = 0; + mod = alias_for_import_name(CHILD(n, 1)); + if (!mod) + goto error; /* XXX this needs to be cleaned up */ @@ -2434,10 +2451,9 @@ n = CHILD(n, 3); /* from ... import x, y, z */ if (NCH(n) % 2 == 0) { /* it ends with a comma, not valid but the parser allows it */ - free_alias(mod); ast_error(n, "trailing comma not allowed without" " surrounding parentheses"); - return NULL; + goto error; } } else if (from_modules[0] == '*') { @@ -2446,50 +2462,47 @@ else if (from_modules[0] == '(') n = CHILD(n, 4); /* from ... 
import (x, y, z) */ else { - /* XXX: don't we need to call ast_error(n, "..."); */ - free_alias(mod); - return NULL; - } + /* XXX: don't we need to call ast_error(n, "..."); */ + goto error; + } n_children = NCH(n); if (from_modules && from_modules[0] == '*') n_children = 1; - aliases = asdl_seq_new((n_children + 1) / 2); - if (!aliases) { - free_alias(mod); - return NULL; - } + aliases = PyList_New((n_children + 1) / 2); + if (!aliases) { + goto error; + } /* handle "from ... import *" special b/c there's no children */ if (from_modules && from_modules[0] == '*') { - alias_ty import_alias = alias_for_import_name(n); + import_alias = alias_for_import_name(n); if (!import_alias) { - asdl_alias_seq_free(aliases); - free_alias(mod); - return NULL; + goto error; } - asdl_seq_APPEND(aliases, import_alias); + PyList_SET_ITEM(aliases, pos++, import_alias); } - for (i = 0; i < NCH(n); i += 2) { - alias_ty import_alias = alias_for_import_name(CHILD(n, i)); + for (i = 0; i < NCH(n); i += 2) { + import_alias = alias_for_import_name(CHILD(n, i)); if (!import_alias) { - asdl_alias_seq_free(aliases); - free_alias(mod); - return NULL; + goto error; } - asdl_seq_APPEND(aliases, import_alias); + PyList_SET_ITEM(aliases, pos++, import_alias); } - Py_INCREF(mod->name); - import = ImportFrom(mod->name, aliases, lineno); - free_alias(mod); - return import; - } - PyErr_Format(PyExc_SystemError, - "unknown import statement: starts with command '%s'", - STR(CHILD(n, 0))); - return NULL; + assert(pos == PyList_GET_SIZE(aliases)); + Py_INCREF(alias_name(mod)); + import = ImportFrom(alias_name(mod), aliases, lineno); + result = import; + } + else + PyErr_Format(PyExc_SystemError, + "unknown import statement: starts with command '%s'", + STR(CHILD(n, 0))); + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* @@ -2497,56 +2510,60 @@ { /* global_stmt: 'global' NAME (',' NAME)* */ PyObject *result = NULL; - identifier name; + PyObject *name = NULL; PyObject *s = NULL; int i; REQ(n, global_stmt); - s = asdl_seq_new(NCH(n) / 2); + s = PyList_New(NCH(n) / 2); if (!s) - return NULL; + goto error; for (i = 1; i < NCH(n); i += 2) { - name = NEW_IDENTIFIER(CHILD(n, i)); - if (!name) { - for (i = i / 2; i > 0; i--) - Py_XDECREF((identifier) asdl_seq_GET(s, i)); - asdl_seq_free(s); /* ok */ - return NULL; - } - asdl_seq_SET(s, i / 2, name); + name = NEW_IDENTIFIER(CHILD(n, i)); + if (!name) { + goto error; + } + STEAL_ITEM(s, i / 2, name); } - return Global(s, LINENO(n)); + result = Global(s, LINENO(n)); + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* ast_for_exec_stmt(struct compiling *c, const node *n) { - expr_ty expr1, globals = NULL, locals = NULL; + PyObject *result = NULL; + PyObject *expr1 = NULL, *globals = NULL, *locals = NULL; int n_children = NCH(n); if (n_children != 2 && n_children != 4 && n_children != 6) { PyErr_Format(PyExc_SystemError, "poorly formed 'exec' statement: %d parts to statement", n_children); - return NULL; + goto error; } /* exec_stmt: 'exec' expr ['in' test [',' test]] */ REQ(n, exec_stmt); expr1 = ast_for_expr(c, CHILD(n, 1)); if (!expr1) - return NULL; + goto error; if (n_children >= 4) { globals = ast_for_expr(c, CHILD(n, 3)); if (!globals) - return NULL; + goto error; } if (n_children == 6) { locals = ast_for_expr(c, CHILD(n, 5)); if (!locals) - return NULL; + goto error; } - return Exec(expr1, globals, locals, LINENO(n)); + result = Exec(expr1, globals, locals, LINENO(n)); + error: + if 
(result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* @@ -2554,94 +2571,97 @@ { /* assert_stmt: 'assert' test [',' test] */ PyObject *result = NULL; + PyObject *expression = NULL; + PyObject *expr1 = NULL; + PyObject *expr2 = NULL; REQ(n, assert_stmt); if (NCH(n) == 2) { - expr_ty expression = ast_for_expr(c, CHILD(n, 1)); + expression = ast_for_expr(c, CHILD(n, 1)); if (!expression) - return NULL; - return Assert(expression, NULL, LINENO(n)); + goto error; + result = Assert(expression, NULL, LINENO(n)); } else if (NCH(n) == 4) { - PyObject *expr1 = NULL; PyObject *expr2 = NULL; - expr1 = ast_for_expr(c, CHILD(n, 1)); if (!expr1) - return NULL; + goto error; expr2 = ast_for_expr(c, CHILD(n, 3)); if (!expr2) - return NULL; + goto error; - return Assert(expr1, expr2, LINENO(n)); + result = Assert(expr1, expr2, LINENO(n)); } - PyErr_Format(PyExc_SystemError, - "improper number of parts to 'assert' statement: %d", - NCH(n)); - return NULL; + else + PyErr_Format(PyExc_SystemError, + "improper number of parts to 'assert' statement: %d", + NCH(n)); + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } -static asdl_seq * +static PyObject * ast_for_suite(struct compiling *c, const node *n) { /* suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT */ - asdl_seq *seq = NULL; - stmt_ty s; + PyObject *result = NULL; + PyObject *seq = NULL; + PyObject *s = NULL; int i, total, num, end, pos = 0; node *ch; REQ(n, suite); total = num_stmts(n); - seq = asdl_seq_new(total); + seq = PyList_New(total); if (!seq) - return NULL; + goto error; if (TYPE(CHILD(n, 0)) == simple_stmt) { - n = CHILD(n, 0); - /* simple_stmt always ends with a NEWLINE, - and may have a trailing SEMI - */ - end = NCH(n) - 1; - if (TYPE(CHILD(n, end - 1)) == SEMI) - end--; + n = CHILD(n, 0); + /* simple_stmt always ends with a NEWLINE, + and may have a trailing SEMI + */ + end = NCH(n) - 1; + if (TYPE(CHILD(n, end - 1)) == SEMI) + end--; /* loop by 2 to skip semi-colons */ - for (i = 0; i < end; i += 2) { - ch = CHILD(n, i); - s = ast_for_stmt(c, ch); - if (!s) - goto error; - asdl_seq_SET(seq, pos++, s); - } + for (i = 0; i < end; i += 2) { + ch = CHILD(n, i); + s = ast_for_stmt(c, ch); + if (!s) + goto error; + STEAL_ITEM(seq, pos++, s); + } } else { - for (i = 2; i < (NCH(n) - 1); i++) { - ch = CHILD(n, i); - REQ(ch, stmt); - num = num_stmts(ch); - if (num == 1) { - /* small_stmt or compound_stmt with only one child */ - s = ast_for_stmt(c, ch); - if (!s) - goto error; - asdl_seq_SET(seq, pos++, s); - } - else { - int j; - ch = CHILD(ch, 0); - REQ(ch, simple_stmt); - for (j = 0; j < NCH(ch); j += 2) { - s = ast_for_stmt(c, CHILD(ch, j)); - if (!s) - goto error; - asdl_seq_SET(seq, pos++, s); - } - } - } + for (i = 2; i < (NCH(n) - 1); i++) { + ch = CHILD(n, i); + REQ(ch, stmt); + num = num_stmts(ch); + if (num == 1) { + /* small_stmt or compound_stmt with only one child */ + s = ast_for_stmt(c, ch); + if (!s) + goto error; + STEAL_ITEM(seq, pos++, s); + } + else { + int j; + ch = CHILD(ch, 0); + REQ(ch, simple_stmt); + for (j = 0; j < NCH(ch); j += 2) { + s = ast_for_stmt(c, CHILD(ch, j)); + if (!s) + goto error; + STEAL_ITEM(seq, pos++, s); + } + } + } } - assert(pos == seq->size); - return seq; + assert(pos == PyList_GET_SIZE(seq)); + result = seq; error: - if (seq) - asdl_stmt_seq_free(seq); - return NULL; + return result; } static PyObject* @@ -2651,135 +2671,116 @@ ['else' ':' suite] */ PyObject *result = NULL; + PyObject *expression = NULL; + PyObject *suite_seq = 
NULL; + PyObject *seq1 = NULL; + PyObject *seq2 = NULL; + PyObject *orelse = NULL; + PyObject *new = NULL; char *s; REQ(n, if_stmt); if (NCH(n) == 4) { - PyObject *expression = NULL; - PyObject *suite_seq = NULL; - expression = ast_for_expr(c, CHILD(n, 1)); if (!expression) - return NULL; + goto error; suite_seq = ast_for_suite(c, CHILD(n, 3)); if (!suite_seq) { - free_expr(expression); - return NULL; - } + goto error; + } - return If(expression, suite_seq, NULL, LINENO(n)); - } - s = STR(CHILD(n, 4)); - /* s[2], the third character in the string, will be - 's' for el_s_e, or - 'i' for el_i_f - */ - if (s[2] == 's') { - PyObject *expression = NULL; - PyObject *seq1 = NULL; PyObject *seq2 = NULL; - - expression = ast_for_expr(c, CHILD(n, 1)); - if (!expression) - return NULL; - seq1 = ast_for_suite(c, CHILD(n, 3)); - if (!seq1) { - free_expr(expression); - return NULL; - } - seq2 = ast_for_suite(c, CHILD(n, 6)); - if (!seq2) { - asdl_stmt_seq_free(seq1); - free_expr(expression); - return NULL; - } - - return If(expression, seq1, seq2, LINENO(n)); + result = If(expression, suite_seq, NULL, LINENO(n)); } - else if (s[2] == 'i') { - int i, n_elif, has_else = 0; - asdl_seq *orelse = NULL; - n_elif = NCH(n) - 4; - /* must reference the child n_elif+1 since 'else' token is third, - not fourth, child from the end. */ - if (TYPE(CHILD(n, (n_elif + 1))) == NAME - && STR(CHILD(n, (n_elif + 1)))[2] == 's') { - has_else = 1; - n_elif -= 3; - } - n_elif /= 4; - - if (has_else) { - PyObject *expression = NULL; - PyObject *seq1 = NULL; PyObject *seq2 = NULL; - - orelse = asdl_seq_new(1); - if (!orelse) - return NULL; - expression = ast_for_expr(c, CHILD(n, NCH(n) - 6)); - if (!expression) { - asdl_seq_free(orelse); /* ok */ - return NULL; - } - seq1 = ast_for_suite(c, CHILD(n, NCH(n) - 4)); + else { + s = STR(CHILD(n, 4)); + /* s[2], the third character in the string, will be + 's' for el_s_e, or + 'i' for el_i_f + */ + if (s[2] == 's') { + expression = ast_for_expr(c, CHILD(n, 1)); + if (!expression) + goto error; + seq1 = ast_for_suite(c, CHILD(n, 3)); if (!seq1) { - free_expr(expression); - asdl_seq_free(orelse); /* ok */ - return NULL; + goto error; } - seq2 = ast_for_suite(c, CHILD(n, NCH(n) - 1)); + seq2 = ast_for_suite(c, CHILD(n, 6)); if (!seq2) { - free_expr(expression); - asdl_stmt_seq_free(seq1); - asdl_seq_free(orelse); /* ok */ - return NULL; + goto error; } - - asdl_seq_SET(orelse, 0, If(expression, seq1, seq2, - LINENO(CHILD(n, NCH(n) - 6)))); - /* the just-created orelse handled the last elif */ - n_elif--; - } - else - orelse = NULL; - - for (i = 0; i < n_elif; i++) { - int off = 5 + (n_elif - i - 1) * 4; - PyObject *expression = NULL; - PyObject *suite_seq = NULL; - asdl_seq *new = asdl_seq_new(1); - if (!new) { - asdl_stmt_seq_free(orelse); - return NULL; - } - expression = ast_for_expr(c, CHILD(n, off)); - if (!expression) { - asdl_stmt_seq_free(orelse); - asdl_seq_free(new); /* ok */ - return NULL; + + result = If(expression, seq1, seq2, LINENO(n)); + } + else if (s[2] == 'i') { + int i, n_elif, has_else = 0; + n_elif = NCH(n) - 4; + /* must reference the child n_elif+1 since 'else' token is third, + not fourth, child from the end. 
*/ + if (TYPE(CHILD(n, (n_elif + 1))) == NAME + && STR(CHILD(n, (n_elif + 1)))[2] == 's') { + has_else = 1; + n_elif -= 3; } - suite_seq = ast_for_suite(c, CHILD(n, off + 2)); - if (!suite_seq) { - asdl_stmt_seq_free(orelse); - free_expr(expression); - asdl_seq_free(new); /* ok */ - return NULL; + n_elif /= 4; + + if (has_else) { + + orelse = PyList_New(1); + if (!orelse) + goto error; + expression = ast_for_expr(c, CHILD(n, NCH(n) - 6)); + if (!expression) { + goto error; + } + seq1 = ast_for_suite(c, CHILD(n, NCH(n) - 4)); + if (!seq1) { + goto error; + } + seq2 = ast_for_suite(c, CHILD(n, NCH(n) - 1)); + if (!seq2) { + goto error; + } + + PyList_SET_ITEM(orelse, 0, If(expression, seq1, seq2, LINENO(CHILD(n, NCH(n) - 6)))); + /* the just-created orelse handled the last elif */ + n_elif--; } - - asdl_seq_SET(new, 0, - If(expression, suite_seq, orelse, - LINENO(CHILD(n, off)))); - orelse = new; - } - return If(ast_for_expr(c, CHILD(n, 1)), - ast_for_suite(c, CHILD(n, 3)), - orelse, LINENO(n)); - } - else { - PyErr_Format(PyExc_SystemError, - "unexpected token in 'if' statement: %s", s); - return NULL; + else + orelse = NULL; + + for (i = 0; i < n_elif; i++) { + int off = 5 + (n_elif - i - 1) * 4; + new = PyList_New(1); + if (!new) { + goto error; + } + expression = ast_for_expr(c, CHILD(n, off)); + if (!expression) { + goto error; + } + suite_seq = ast_for_suite(c, CHILD(n, off + 2)); + if (!suite_seq) { + goto error; + } + + PyList_SET_ITEM(new, 0, If(expression, suite_seq, orelse, LINENO(CHILD(n, off)))); + orelse = new; + } + result = If(ast_for_expr(c, CHILD(n, 1)), + ast_for_suite(c, CHILD(n, 3)), + orelse, LINENO(n)); + } + else { + PyErr_Format(PyExc_SystemError, + "unexpected token in 'if' statement: %s", s); + goto error; + } } + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* @@ -2787,94 +2788,91 @@ { /* while_stmt: 'while' test ':' suite ['else' ':' suite] */ PyObject *result = NULL; + PyObject *expression = NULL; + PyObject *suite_seq = NULL; + PyObject *seq1 = NULL; + PyObject *seq2 = NULL; REQ(n, while_stmt); if (NCH(n) == 4) { - PyObject *expression = NULL; - PyObject *suite_seq = NULL; expression = ast_for_expr(c, CHILD(n, 1)); if (!expression) - return NULL; + goto error; suite_seq = ast_for_suite(c, CHILD(n, 3)); if (!suite_seq) { - free_expr(expression); - return NULL; - } - return While(expression, suite_seq, NULL, LINENO(n)); + goto error; + } + result = While(expression, suite_seq, NULL, LINENO(n)); } else if (NCH(n) == 7) { - PyObject *expression = NULL; - PyObject *seq1 = NULL; PyObject *seq2 = NULL; expression = ast_for_expr(c, CHILD(n, 1)); if (!expression) - return NULL; + goto error; seq1 = ast_for_suite(c, CHILD(n, 3)); if (!seq1) { - free_expr(expression); - return NULL; - } + goto error; + } seq2 = ast_for_suite(c, CHILD(n, 6)); if (!seq2) { - asdl_stmt_seq_free(seq1); - free_expr(expression); - return NULL; - } + goto error; + } - return While(expression, seq1, seq2, LINENO(n)); + result = While(expression, seq1, seq2, LINENO(n)); } else { PyErr_Format(PyExc_SystemError, "wrong number of tokens for 'while' statement: %d", NCH(n)); - return NULL; + goto error; } + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* ast_for_for_stmt(struct compiling *c, const node *n) { PyObject *result = NULL; - asdl_seq *_target = NULL, *seq = NULL, *suite_seq = NULL; + PyObject *_target = NULL; + PyObject *seq = NULL; + PyObject *suite_seq = NULL; PyObject *expression = 
NULL; PyObject *target = NULL; /* for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite] */ REQ(n, for_stmt); if (NCH(n) == 9) { - seq = ast_for_suite(c, CHILD(n, 8)); + seq = ast_for_suite(c, CHILD(n, 8)); if (!seq) - return NULL; + goto error; } - _target = ast_for_exprlist(c, CHILD(n, 1), Store); + _target = ast_for_exprlist(c, CHILD(n, 1), Store()); if (!_target) { - asdl_stmt_seq_free(seq); - return NULL; + goto error; } - if (asdl_seq_LEN(_target) == 1) { - target = asdl_seq_GET(_target, 0); - asdl_seq_free(_target); /* ok */ + if (PyList_GET_SIZE(_target) == 1) { + target = PyList_GET_ITEM(_target, 0); } else - target = Tuple(_target, Store, LINENO(n)); + target = Tuple(_target, Store(), LINENO(n)); expression = ast_for_testlist(c, CHILD(n, 3)); if (!expression) { - free_expr(target); - asdl_stmt_seq_free(seq); - return NULL; + goto error; } suite_seq = ast_for_suite(c, CHILD(n, 5)); if (!suite_seq) { - free_expr(target); - free_expr(expression); - asdl_stmt_seq_free(seq); - return NULL; + goto error; } - return For(target, expression, suite_seq, seq, LINENO(n)); + result = For(target, expression, suite_seq, seq, LINENO(n)); + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* @@ -2882,137 +2880,127 @@ { /* except_clause: 'except' [test [',' test]] */ PyObject *result = NULL; + PyObject *suite_seq = NULL; + PyObject *expression = NULL; + PyObject *e = NULL; REQ(exc, except_clause); REQ(body, suite); if (NCH(exc) == 1) { - asdl_seq *suite_seq = ast_for_suite(c, body); + suite_seq = ast_for_suite(c, body); if (!suite_seq) - return NULL; + goto error; - return excepthandler(NULL, NULL, suite_seq); + result = excepthandler(NULL, NULL, suite_seq); } else if (NCH(exc) == 2) { - PyObject *expression = NULL; - PyObject *suite_seq = NULL; expression = ast_for_expr(c, CHILD(exc, 1)); if (!expression) - return NULL; + goto error; suite_seq = ast_for_suite(c, body); if (!suite_seq) { - free_expr(expression); - return NULL; - } + goto error; + } - return excepthandler(expression, NULL, suite_seq); + result = excepthandler(expression, NULL, suite_seq); } else if (NCH(exc) == 4) { - PyObject *suite_seq = NULL; - PyObject *expression = NULL; - expr_ty e = ast_for_expr(c, CHILD(exc, 3)); - if (!e) - return NULL; - if (!set_context(e, Store, CHILD(exc, 3))) { - free_expr(e); - return NULL; - } + e = ast_for_expr(c, CHILD(exc, 3)); + if (!e) + goto error; + if (!set_context(e, Store(), CHILD(exc, 3))) { + goto error; + } expression = ast_for_expr(c, CHILD(exc, 1)); if (!expression) { - free_expr(e); - return NULL; - } + goto error; + } suite_seq = ast_for_suite(c, body); if (!suite_seq) { - free_expr(expression); - free_expr(e); - return NULL; - } + goto error; + } - return excepthandler(expression, e, suite_seq); + result = excepthandler(expression, e, suite_seq); } else { PyErr_Format(PyExc_SystemError, "wrong number of children for 'except' clause: %d", NCH(exc)); - return NULL; + goto error; } + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* ast_for_try_stmt(struct compiling *c, const node *n) { + PyObject *e = NULL; + PyObject *s1 = NULL; + PyObject *s2 = NULL; + PyObject *suite_seq1 = NULL; + PyObject *suite_seq2 = NULL; + PyObject *handlers = NULL; PyObject *result = NULL; REQ(n, try_stmt); if (TYPE(CHILD(n, 3)) == NAME) {/* must be 'finally' */ - /* try_stmt: 'try' ':' suite 'finally' ':' suite) */ - PyObject *s1 = NULL; PyObject *s2 = NULL; + /* try_stmt: 'try' ':' suite 
'finally' ':' suite) */ s1 = ast_for_suite(c, CHILD(n, 2)); if (!s1) - return NULL; + goto error; s2 = ast_for_suite(c, CHILD(n, 5)); if (!s2) { - asdl_stmt_seq_free(s1); - return NULL; - } + goto error; + } - return TryFinally(s1, s2, LINENO(n)); + result = TryFinally(s1, s2, LINENO(n)); } else if (TYPE(CHILD(n, 3)) == except_clause) { - /* try_stmt: ('try' ':' suite (except_clause ':' suite)+ + /* try_stmt: ('try' ':' suite (except_clause ':' suite)+ ['else' ':' suite] - */ - PyObject *suite_seq1 = NULL; PyObject *suite_seq2 = NULL; - PyObject *handlers = NULL; - int i, has_else = 0, n_except = NCH(n) - 3; - if (TYPE(CHILD(n, NCH(n) - 3)) == NAME) { - has_else = 1; - n_except -= 3; - } - n_except /= 3; - handlers = asdl_seq_new(n_except); - if (!handlers) - return NULL; - for (i = 0; i < n_except; i++) { - excepthandler_ty e = ast_for_except_clause(c, - CHILD(n, 3 + i * 3), - CHILD(n, 5 + i * 3)); + */ + int i, has_else = 0, n_except = NCH(n) - 3; + if (TYPE(CHILD(n, NCH(n) - 3)) == NAME) { + has_else = 1; + n_except -= 3; + } + n_except /= 3; + handlers = PyList_New(n_except); + if (!handlers) + goto error; + for (i = 0; i < n_except; i++) { + e = ast_for_except_clause(c, CHILD(n, 3 + i * 3), CHILD(n, 5 + i * 3)); if (!e) { - for ( ; i >= 0; i--) - free_excepthandler(asdl_seq_GET(handlers, i)); - asdl_seq_free(handlers); /* ok */ - return NULL; - } - asdl_seq_SET(handlers, i, e); + goto error; + } + STEAL_ITEM(handlers, i, e); } suite_seq1 = ast_for_suite(c, CHILD(n, 2)); if (!suite_seq1) { - for (i = 0; i < asdl_seq_LEN(handlers); i++) - free_excepthandler(asdl_seq_GET(handlers, i)); - asdl_seq_free(handlers); /* ok */ - return NULL; - } + goto error; + } if (has_else) { suite_seq2 = ast_for_suite(c, CHILD(n, NCH(n) - 1)); if (!suite_seq2) { - for (i = 0; i < asdl_seq_LEN(handlers); i++) - free_excepthandler(asdl_seq_GET(handlers, i)); - asdl_seq_free(handlers); /* ok */ - asdl_stmt_seq_free(suite_seq1); - return NULL; - } + goto error; + } } else suite_seq2 = NULL; - return TryExcept(suite_seq1, handlers, suite_seq2, LINENO(n)); + result = TryExcept(suite_seq1, handlers, suite_seq2, LINENO(n)); } else { ast_error(n, "malformed 'try' statement"); - return NULL; + goto error; } + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* @@ -3025,35 +3013,38 @@ REQ(n, classdef); if (!strcmp(STR(CHILD(n, 1)), "None")) { - ast_error(n, "assignment to None"); - return NULL; + ast_error(n, "assignment to None"); + goto error; } if (NCH(n) == 4) { s = ast_for_suite(c, CHILD(n, 3)); if (!s) - return NULL; - return ClassDef(NEW_IDENTIFIER(CHILD(n, 1)), NULL, s, LINENO(n)); + goto error; + result = ClassDef(NEW_IDENTIFIER(CHILD(n, 1)), NULL, s, LINENO(n)); } /* check for empty base list */ - if (TYPE(CHILD(n,3)) == RPAR) { - s = ast_for_suite(c, CHILD(n,5)); - if (!s) - return NULL; - return ClassDef(NEW_IDENTIFIER(CHILD(n, 1)), NULL, s, LINENO(n)); + else if (TYPE(CHILD(n,3)) == RPAR) { + s = ast_for_suite(c, CHILD(n,5)); + if (!s) + goto error; + result = ClassDef(NEW_IDENTIFIER(CHILD(n, 1)), NULL, s, LINENO(n)); } - - /* else handle the base class list */ - bases = ast_for_class_bases(c, CHILD(n, 3)); - if (!bases) - return NULL; - - s = ast_for_suite(c, CHILD(n, 6)); - if (!s) { - asdl_expr_seq_free(bases); - return NULL; + else { + /* else handle the base class list */ + bases = ast_for_class_bases(c, CHILD(n, 3)); + if (!bases) + goto error; + + s = ast_for_suite(c, CHILD(n, 6)); + if (!s) { + goto error; + } + result = 
ClassDef(NEW_IDENTIFIER(CHILD(n, 1)), bases, s, LINENO(n)); } - return ClassDef(NEW_IDENTIFIER(CHILD(n, 1)), bases, s, LINENO(n)); + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } static PyObject* @@ -3061,211 +3052,231 @@ { PyObject *result = NULL; if (TYPE(n) == stmt) { - assert(NCH(n) == 1); - n = CHILD(n, 0); + assert(NCH(n) == 1); + n = CHILD(n, 0); } if (TYPE(n) == simple_stmt) { - assert(num_stmts(n) == 1); - n = CHILD(n, 0); + assert(num_stmts(n) == 1); + n = CHILD(n, 0); } if (TYPE(n) == small_stmt) { - REQ(n, small_stmt); - n = CHILD(n, 0); - /* small_stmt: expr_stmt | print_stmt | del_stmt | pass_stmt - | flow_stmt | import_stmt | global_stmt | exec_stmt + REQ(n, small_stmt); + n = CHILD(n, 0); + /* small_stmt: expr_stmt | print_stmt | del_stmt | pass_stmt + | flow_stmt | import_stmt | global_stmt | exec_stmt | assert_stmt - */ - switch (TYPE(n)) { + */ + switch (TYPE(n)) { case expr_stmt: - return ast_for_expr_stmt(c, n); + result = ast_for_expr_stmt(c, n); + break; case print_stmt: - return ast_for_print_stmt(c, n); + result = ast_for_print_stmt(c, n); + break; case del_stmt: - return ast_for_del_stmt(c, n); + result = ast_for_del_stmt(c, n); + break; case pass_stmt: - return Pass(LINENO(n)); + result = Pass(LINENO(n)); + break; case flow_stmt: - return ast_for_flow_stmt(c, n); + result = ast_for_flow_stmt(c, n); + break; case import_stmt: - return ast_for_import_stmt(c, n); + result = ast_for_import_stmt(c, n); + break; case global_stmt: - return ast_for_global_stmt(c, n); + result = ast_for_global_stmt(c, n); + break; case exec_stmt: - return ast_for_exec_stmt(c, n); + result = ast_for_exec_stmt(c, n); + break; case assert_stmt: - return ast_for_assert_stmt(c, n); + result = ast_for_assert_stmt(c, n); + break; default: PyErr_Format(PyExc_SystemError, "unhandled small_stmt: TYPE=%d NCH=%d\n", TYPE(n), NCH(n)); - return NULL; + goto error; } } else { /* compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt - | funcdef | classdef - */ - node *ch = CHILD(n, 0); - REQ(n, compound_stmt); - switch (TYPE(ch)) { + | funcdef | classdef + */ + node *ch = CHILD(n, 0); + REQ(n, compound_stmt); + switch (TYPE(ch)) { case if_stmt: - return ast_for_if_stmt(c, ch); + result = ast_for_if_stmt(c, ch); + break; case while_stmt: - return ast_for_while_stmt(c, ch); + result = ast_for_while_stmt(c, ch); + break; case for_stmt: - return ast_for_for_stmt(c, ch); + result = ast_for_for_stmt(c, ch); + break; case try_stmt: - return ast_for_try_stmt(c, ch); + result = ast_for_try_stmt(c, ch); + break; case funcdef: - return ast_for_funcdef(c, ch); + result = ast_for_funcdef(c, ch); + break; case classdef: - return ast_for_classdef(c, ch); + result = ast_for_classdef(c, ch); + break; default: PyErr_Format(PyExc_SystemError, "unhandled small_stmt: TYPE=%d NCH=%d\n", TYPE(n), NCH(n)); - return NULL; - } + goto error; + } } + error: + if (result && PyAST_Validate(result) == -1) return NULL; + return result; } +/* ------------ ALL GOOD BELOW ----------------------- */ + static PyObject * parsenumber(const char *s) { - PyObject *result = NULL; - const char *end; - long x; - double dx; + PyObject *result = NULL; + const char *end; + long x; + double dx; #ifndef WITHOUT_COMPLEX - Py_complex c; - int imflag; + Py_complex c; + int imflag; #endif - errno = 0; - end = s + strlen(s) - 1; + errno = 0; + end = s + strlen(s) - 1; #ifndef WITHOUT_COMPLEX - imflag = *end == 'j' || *end == 'J'; + imflag = *end == 'j' || *end == 'J'; #endif - if (*end == 'l' || *end == 'L') - 
return PyLong_FromString((char *)s, (char **)0, 0); - if (s[0] == '0') { - x = (long) PyOS_strtoul((char *)s, (char **)&end, 0); - if (x < 0 && errno == 0) { - return PyLong_FromString((char *)s, - (char **)0, - 0); - } - } - else - x = PyOS_strtol((char *)s, (char **)&end, 0); - if (*end == '\0') { - if (errno != 0) - return PyLong_FromString((char *)s, (char **)0, 0); - return PyInt_FromLong(x); - } - /* XXX Huge floats may silently fail */ + if (*end == 'l' || *end == 'L') + return PyLong_FromString((char *)s, (char **)0, 0); + if (s[0] == '0') { + x = (long) PyOS_strtoul((char *)s, (char **)&end, 0); + if (x < 0 && errno == 0) { + return PyLong_FromString((char *)s, + (char **)0, + 0); + } + } + else + x = PyOS_strtol((char *)s, (char **)&end, 0); + if (*end == '\0') { + if (errno != 0) + return PyLong_FromString((char *)s, (char **)0, 0); + return PyInt_FromLong(x); + } + /* XXX Huge floats may silently fail */ #ifndef WITHOUT_COMPLEX - if (imflag) { - c.real = 0.; - PyFPE_START_PROTECT("atof", return 0) - c.imag = atof(s); - PyFPE_END_PROTECT(c) - return PyComplex_FromCComplex(c); - } - else + if (imflag) { + c.real = 0.; + PyFPE_START_PROTECT("atof", return 0) + c.imag = atof(s); + PyFPE_END_PROTECT(c) + return PyComplex_FromCComplex(c); + } + else #endif - { - PyFPE_START_PROTECT("atof", return 0) - dx = atof(s); - PyFPE_END_PROTECT(dx) - return PyFloat_FromDouble(dx); - } + { + PyFPE_START_PROTECT("atof", return 0) + dx = atof(s); + PyFPE_END_PROTECT(dx) + return PyFloat_FromDouble(dx); + } } static PyObject * decode_utf8(const char **sPtr, const char *end, char* encoding) { - PyObject *result = NULL; + PyObject *result = NULL; #ifndef Py_USING_UNICODE - Py_FatalError("decode_utf8 should not be called in this build."); - return NULL; + Py_FatalError("decode_utf8 should not be called in this build."); + goto error; #else - PyObject *u, *v; - char *s, *t; - t = s = (char *)*sPtr; - /* while (s < end && *s != '\\') s++; */ /* inefficient for u".." */ - while (s < end && (*s & 0x80)) s++; - *sPtr = s; - u = PyUnicode_DecodeUTF8(t, s - t, NULL); - if (u == NULL) - return NULL; - v = PyUnicode_AsEncodedString(u, encoding, NULL); - Py_DECREF(u); - return v; + PyObject *u, *v; + char *s, *t; + t = s = (char *)*sPtr; + /* while (s < end && *s != '\\') s++; */ /* inefficient for u".." 
*/ + while (s < end && (*s & 0x80)) s++; + *sPtr = s; + u = PyUnicode_DecodeUTF8(t, s - t, NULL); + if (u == NULL) + return NULL; + v = PyUnicode_AsEncodedString(u, encoding, NULL); + Py_DECREF(u); + return v; #endif } static PyObject * decode_unicode(const char *s, size_t len, int rawmode, const char *encoding) { - PyObject *result = NULL; - PyObject *v, *u; - char *buf; - char *p; - const char *end; - if (encoding == NULL) { - buf = (char *)s; - u = NULL; - } else if (strcmp(encoding, "iso-8859-1") == 0) { - buf = (char *)s; - u = NULL; - } else { - /* "\XX" may become "\u005c\uHHLL" (12 bytes) */ - u = PyString_FromStringAndSize((char *)NULL, len * 4); - if (u == NULL) - return NULL; - p = buf = PyString_AsString(u); - end = s + len; - while (s < end) { - if (*s == '\\') { - *p++ = *s++; - if (*s & 0x80) { - strcpy(p, "u005c"); - p += 5; - } - } - if (*s & 0x80) { /* XXX inefficient */ - PyObject *w; - char *r; - int rn, i; - w = decode_utf8(&s, end, "utf-16-be"); - if (w == NULL) { - Py_DECREF(u); - return NULL; - } - r = PyString_AsString(w); - rn = PyString_Size(w); - assert(rn % 2 == 0); - for (i = 0; i < rn; i += 2) { - sprintf(p, "\\u%02x%02x", - r[i + 0] & 0xFF, - r[i + 1] & 0xFF); - p += 6; - } - Py_DECREF(w); - } else { - *p++ = *s++; - } - } - len = p - buf; - s = buf; - } - if (rawmode) - v = PyUnicode_DecodeRawUnicodeEscape(s, len, NULL); - else - v = PyUnicode_DecodeUnicodeEscape(s, len, NULL); - Py_XDECREF(u); - return v; + PyObject *result = NULL; + PyObject *v, *u; + char *buf; + char *p; + const char *end; + if (encoding == NULL) { + buf = (char *)s; + u = NULL; + } else if (strcmp(encoding, "iso-8859-1") == 0) { + buf = (char *)s; + u = NULL; + } else { + /* "\XX" may become "\u005c\uHHLL" (12 bytes) */ + u = PyString_FromStringAndSize((char *)NULL, len * 4); + if (u == NULL) + return NULL; + p = buf = PyString_AsString(u); + end = s + len; + while (s < end) { + if (*s == '\\') { + *p++ = *s++; + if (*s & 0x80) { + strcpy(p, "u005c"); + p += 5; + } + } + if (*s & 0x80) { /* XXX inefficient */ + PyObject *w; + char *r; + int rn, i; + w = decode_utf8(&s, end, "utf-16-be"); + if (w == NULL) { + Py_DECREF(u); + return NULL; + } + r = PyString_AsString(w); + rn = PyString_Size(w); + assert(rn % 2 == 0); + for (i = 0; i < rn; i += 2) { + sprintf(p, "\\u%02x%02x", + r[i + 0] & 0xFF, + r[i + 1] & 0xFF); + p += 6; + } + Py_DECREF(w); + } else { + *p++ = *s++; + } + } + len = p - buf; + s = buf; + } + if (rawmode) + v = PyUnicode_DecodeRawUnicodeEscape(s, len, NULL); + else + v = PyUnicode_DecodeUnicodeEscape(s, len, NULL); + Py_XDECREF(u); + return v; } /* s is a Python string literal, including the bracketing quote characters, @@ -3275,77 +3286,77 @@ static PyObject * parsestr(const char *s, const char *encoding) { - PyObject *result = NULL; - PyObject *v; - size_t len; - int quote = *s; - int rawmode = 0; - int need_encoding; - int unicode = 0; - - if (isalpha(quote) || quote == '_') { - if (quote == 'u' || quote == 'U') { - quote = *++s; - unicode = 1; - } - if (quote == 'r' || quote == 'R') { - quote = *++s; - rawmode = 1; - } - } - if (quote != '\'' && quote != '\"') { - PyErr_BadInternalCall(); - return NULL; - } - s++; - len = strlen(s); - if (len > INT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "string to parse is too long"); - return NULL; - } - if (s[--len] != quote) { - PyErr_BadInternalCall(); - return NULL; - } - if (len >= 4 && s[0] == quote && s[1] == quote) { - s += 2; - len -= 2; - if (s[--len] != quote || s[--len] != quote) { - PyErr_BadInternalCall(); - 
return NULL; - } - } + PyObject *result = NULL; + PyObject *v; + size_t len; + int quote = *s; + int rawmode = 0; + int need_encoding; + int unicode = 0; + + if (isalpha(quote) || quote == '_') { + if (quote == 'u' || quote == 'U') { + quote = *++s; + unicode = 1; + } + if (quote == 'r' || quote == 'R') { + quote = *++s; + rawmode = 1; + } + } + if (quote != '\'' && quote != '\"') { + PyErr_BadInternalCall(); + return NULL; + } + s++; + len = strlen(s); + if (len > INT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "string to parse is too long"); + return NULL; + } + if (s[--len] != quote) { + PyErr_BadInternalCall(); + return NULL; + } + if (len >= 4 && s[0] == quote && s[1] == quote) { + s += 2; + len -= 2; + if (s[--len] != quote || s[--len] != quote) { + PyErr_BadInternalCall(); + return NULL; + } + } #ifdef Py_USING_UNICODE - if (unicode || Py_UnicodeFlag) { - return decode_unicode(s, len, rawmode, encoding); - } + if (unicode || Py_UnicodeFlag) { + return decode_unicode(s, len, rawmode, encoding); + } #endif - need_encoding = (encoding != NULL && - strcmp(encoding, "utf-8") != 0 && - strcmp(encoding, "iso-8859-1") != 0); - if (rawmode || strchr(s, '\\') == NULL) { - if (need_encoding) { + need_encoding = (encoding != NULL && + strcmp(encoding, "utf-8") != 0 && + strcmp(encoding, "iso-8859-1") != 0); + if (rawmode || strchr(s, '\\') == NULL) { + if (need_encoding) { #ifndef Py_USING_UNICODE - /* This should not happen - we never see any other - encoding. */ - Py_FatalError("cannot deal with encodings in this build."); + /* This should not happen - we never see any other + encoding. */ + Py_FatalError("cannot deal with encodings in this build."); #else - PyObject* u = PyUnicode_DecodeUTF8(s, len, NULL); - if (u == NULL) - return NULL; - v = PyUnicode_AsEncodedString(u, encoding, NULL); - Py_DECREF(u); - return v; + PyObject* u = PyUnicode_DecodeUTF8(s, len, NULL); + if (u == NULL) + return NULL; + v = PyUnicode_AsEncodedString(u, encoding, NULL); + Py_DECREF(u); + return v; #endif - } else { - return PyString_FromStringAndSize(s, len); - } - } - - v = PyString_DecodeEscape(s, len, NULL, unicode, - need_encoding ? encoding : NULL); - return v; + } else { + return PyString_FromStringAndSize(s, len); + } + } + + v = PyString_DecodeEscape(s, len, NULL, unicode, + need_encoding ? encoding : NULL); + return v; } /* Build a Python string object out of a STRING atom. 
This takes care of @@ -3355,38 +3366,38 @@ static PyObject * parsestrplus(struct compiling *c, const node *n) { - PyObject *result = NULL; - PyObject *v; - int i; - REQ(CHILD(n, 0), STRING); - if ((v = parsestr(STR(CHILD(n, 0)), c->c_encoding)) != NULL) { - /* String literal concatenation */ - for (i = 1; i < NCH(n); i++) { - PyObject *s; - s = parsestr(STR(CHILD(n, i)), c->c_encoding); - if (s == NULL) - goto onError; - if (PyString_Check(v) && PyString_Check(s)) { - PyString_ConcatAndDel(&v, s); - if (v == NULL) - goto onError; - } + PyObject *result = NULL; + PyObject *v; + int i; + REQ(CHILD(n, 0), STRING); + if ((v = parsestr(STR(CHILD(n, 0)), c->c_encoding)) != NULL) { + /* String literal concatenation */ + for (i = 1; i < NCH(n); i++) { + PyObject *s; + s = parsestr(STR(CHILD(n, i)), c->c_encoding); + if (s == NULL) + goto onError; + if (PyString_Check(v) && PyString_Check(s)) { + PyString_ConcatAndDel(&v, s); + if (v == NULL) + goto onError; + } #ifdef Py_USING_UNICODE - else { - PyObject *temp; - temp = PyUnicode_Concat(v, s); - Py_DECREF(s); - if (temp == NULL) - goto onError; - Py_DECREF(v); - v = temp; - } + else { + PyObject *temp; + temp = PyUnicode_Concat(v, s); + Py_DECREF(s); + if (temp == NULL) + goto onError; + Py_DECREF(v); + v = temp; + } #endif - } - } - return v; + } + } + return v; onError: - Py_XDECREF(v); - return NULL; + Py_XDECREF(v); + return NULL; } Modified: python/branches/ast-objects/Python/compile.c ============================================================================== --- python/branches/ast-objects/Python/compile.c (original) +++ python/branches/ast-objects/Python/compile.c Sun Feb 5 02:54:29 2006 @@ -161,7 +161,7 @@ int a_lineno_off; /* bytecode offset of last lineno */ }; -static int compiler_enter_scope(struct compiler *, identifier, void *, int); +static int compiler_enter_scope(struct compiler *, PyObject *, void *, int); static void compiler_free(struct compiler *); static basicblock *compiler_new_block(struct compiler *); static int compiler_next_instr(struct compiler *, basicblock *); @@ -172,23 +172,22 @@ static void compiler_use_block(struct compiler *, basicblock *); static basicblock *compiler_use_new_block(struct compiler *); static int compiler_error(struct compiler *, const char *); -static int compiler_nameop(struct compiler *, identifier, expr_context_ty); +static int compiler_nameop(struct compiler *, PyObject *, PyObject *); -static PyCodeObject *compiler_mod(struct compiler *, mod_ty); -static int compiler_visit_stmt(struct compiler *, stmt_ty); -static int compiler_visit_keyword(struct compiler *, keyword_ty); -static int compiler_visit_expr(struct compiler *, expr_ty); -static int compiler_augassign(struct compiler *, stmt_ty); -static int compiler_visit_slice(struct compiler *, slice_ty, - expr_context_ty); +static PyCodeObject *compiler_mod(struct compiler *, PyObject *); +static int compiler_visit_stmt(struct compiler *, PyObject *); +static int compiler_visit_keyword(struct compiler *, PyObject *); +static int compiler_visit_expr(struct compiler *, PyObject *); +static int compiler_augassign(struct compiler *, PyObject *); +static int compiler_visit_slice(struct compiler *, PyObject *, PyObject *); static int compiler_push_fblock(struct compiler *, enum fblocktype, basicblock *); static void compiler_pop_fblock(struct compiler *, enum fblocktype, basicblock *); -static int inplace_binop(struct compiler *, operator_ty); -static int expr_constant(expr_ty e); +static int inplace_binop(struct compiler *, PyObject *); +static 
int expr_constant(PyObject * e); static PyCodeObject *assemble(struct compiler *, int addNone); static PyObject *__doc__; @@ -243,7 +242,7 @@ } PyCodeObject * -PyAST_Compile(mod_ty mod, const char *filename, PyCompilerFlags *flags) +PyAST_Compile(PyObject *mod, const char *filename, PyCompilerFlags *flags) { struct compiler c; PyCodeObject *co = NULL; @@ -293,11 +292,10 @@ PyNode_Compile(struct _node *n, const char *filename) { PyCodeObject *co; - mod_ty mod = PyAST_FromNode(n, NULL, filename); + PyObject *mod = PyAST_FromNode(n, NULL, filename); if (!mod) return NULL; co = PyAST_Compile(mod, filename, NULL); - free_mod(mod); return co; } @@ -547,7 +545,7 @@ static int fold_unaryops_on_constants(unsigned char *codestr, PyObject *consts) { - PyObject *newconst=NULL, *v; + PyObject *newconst = NULL, *v; int len_consts, opcode; /* Pre-conditions */ @@ -787,8 +785,8 @@ (opcode == BUILD_LIST && codestr[i+3]==COMPARE_OP && ISBASICBLOCK(blocks, h, 3*(j+2)) && - (GETARG(codestr,i+3)==6 || - GETARG(codestr,i+3)==7))) && + (GETARG(codestr,i+3) == 6 || + GETARG(codestr,i+3) == 7))) && tuple_of_constants(&codestr[h], j, consts)) { assert(codestr[i] == LOAD_CONST); cumlc = 1; @@ -1064,7 +1062,7 @@ } static int -compiler_enter_scope(struct compiler *c, identifier name, void *key, +compiler_enter_scope(struct compiler *c, PyObject *name, void *key, int lineno) { struct compiler_unit *u; @@ -1672,9 +1670,9 @@ #define VISIT_SEQ(C, TYPE, SEQ) { \ int i; \ - asdl_seq *seq = (SEQ); /* avoid variable capture */ \ - for (i = 0; i < asdl_seq_LEN(seq); i++) { \ - TYPE ## _ty elt = asdl_seq_GET(seq, i); \ + PyObject *seq = (SEQ); /* avoid variable capture */ \ + for (i = 0; i < PyList_GET_SIZE(seq); i++) { \ + /*TYPE ## _ty*/ PyObject *elt = PyList_GET_ITEM(seq, i); \ if (!compiler_visit_ ## TYPE((C), elt)) \ return 0; \ } \ @@ -1682,9 +1680,9 @@ #define VISIT_SEQ_IN_SCOPE(C, TYPE, SEQ) { \ int i; \ - asdl_seq *seq = (SEQ); /* avoid variable capture */ \ - for (i = 0; i < asdl_seq_LEN(seq); i++) { \ - TYPE ## _ty elt = asdl_seq_GET(seq, i); \ + PyObject *seq = (SEQ); /* avoid variable capture */ \ + for (i = 0; i < PyList_GET_SIZE(seq); i++) { \ + /*TYPE ## _ty*/PyObject *elt = PyList_GET_ITEM(seq, i); \ if (!compiler_visit_ ## TYPE((C), elt)) { \ compiler_exit_scope(c); \ return 0; \ @@ -1693,37 +1691,37 @@ } static int -compiler_isdocstring(stmt_ty s) +compiler_isdocstring(PyObject *s) { - if (s->kind != Expr_kind) + if (stmt_kind(s) != Expr_kind) return 0; - return s->v.Expr.value->kind == Str_kind; + return expr_kind(Expr_value(s)) == Str_kind; } /* Compile a sequence of statements, checking for a docstring. 
*/ static int -compiler_body(struct compiler *c, asdl_seq *stmts) +compiler_body(struct compiler *c, PyObject *stmts) { int i = 0; - stmt_ty st; + PyObject *st; - if (!asdl_seq_LEN(stmts)) + if (!PyList_GET_SIZE(stmts)) return 1; - st = asdl_seq_GET(stmts, 0); + st = PyList_GET_ITEM(stmts, 0); if (compiler_isdocstring(st)) { i = 1; - VISIT(c, expr, st->v.Expr.value); - if (!compiler_nameop(c, __doc__, Store)) + VISIT(c, expr, Expr_value(st)); + if (!compiler_nameop(c, __doc__, Store())) return 0; } - for (; i < asdl_seq_LEN(stmts); i++) - VISIT(c, stmt, asdl_seq_GET(stmts, i)); + for (; i < PyList_GET_SIZE(stmts); i++) + VISIT(c, stmt, PyList_GET_ITEM(stmts, i)); return 1; } static PyCodeObject * -compiler_mod(struct compiler *c, mod_ty mod) +compiler_mod(struct compiler *c, PyObject *mod) { PyCodeObject *co; int addNone = 1; @@ -1735,19 +1733,19 @@ } if (!compiler_enter_scope(c, module, mod, 1)) return NULL; - switch (mod->kind) { + switch (mod_kind(mod)) { case Module_kind: - if (!compiler_body(c, mod->v.Module.body)) { + if (!compiler_body(c, Module_body(mod))) { compiler_exit_scope(c); return 0; } break; case Interactive_kind: c->c_interactive = 1; - VISIT_SEQ_IN_SCOPE(c, stmt, mod->v.Interactive.body); + VISIT_SEQ_IN_SCOPE(c, stmt, Interactive_body(mod)); break; case Expression_kind: - VISIT_IN_SCOPE(c, expr, mod->v.Expression.body); + VISIT_IN_SCOPE(c, expr, Expression_body(mod)); addNone = 0; break; case Suite_kind: @@ -1757,7 +1755,7 @@ default: PyErr_Format(PyExc_SystemError, "module kind %d should not be possible", - mod->kind); + mod_kind(mod)); return 0; } co = assemble(c, addNone); @@ -1853,33 +1851,33 @@ } static int -compiler_decorators(struct compiler *c, asdl_seq* decos) +compiler_decorators(struct compiler *c, PyObject* decos) { int i; if (!decos) return 1; - for (i = 0; i < asdl_seq_LEN(decos); i++) { - VISIT(c, expr, asdl_seq_GET(decos, i)); + for (i = 0; i < PyList_GET_SIZE(decos); i++) { + VISIT(c, expr, PyList_GET_ITEM(decos, i)); } return 1; } static int -compiler_arguments(struct compiler *c, arguments_ty args) +compiler_arguments(struct compiler *c, PyObject *args) { int i; - int n = asdl_seq_LEN(args->args); + int n = PyList_GET_SIZE(arguments_args(args)); /* Correctly handle nested argument lists */ for (i = 0; i < n; i++) { - expr_ty arg = asdl_seq_GET(args->args, i); - if (arg->kind == Tuple_kind) { + PyObject *arg = PyList_GET_ITEM(arguments_args(args), i); + if (expr_kind(arg) == Tuple_kind) { PyObject *id = PyString_FromFormat(".%d", i); if (id == NULL) { return 0; } - if (!compiler_nameop(c, id, Load)) { + if (!compiler_nameop(c, id, Load())) { Py_DECREF(id); return 0; } @@ -1891,29 +1889,30 @@ } static int -compiler_function(struct compiler *c, stmt_ty s) +compiler_function(struct compiler *c, PyObject *s) { PyCodeObject *co; PyObject *first_const = Py_None; - arguments_ty args = s->v.FunctionDef.args; - asdl_seq* decos = s->v.FunctionDef.decorators; - stmt_ty st; + PyObject *args = FunctionDef_args(s); + PyObject *decos = FunctionDef_decorators(s); + PyObject *st; int i, n, docstring; - assert(s->kind == FunctionDef_kind); + assert(stmt_kind(s) == FunctionDef_kind); if (!compiler_decorators(c, decos)) return 0; - if (args->defaults) - VISIT_SEQ(c, expr, args->defaults); - if (!compiler_enter_scope(c, s->v.FunctionDef.name, (void *)s, - s->lineno)) + /* if (arguments_defaults(args)) */ + VISIT_SEQ(c, expr, arguments_defaults(args)); + /* s->lineno)) */ + if (!compiler_enter_scope(c, FunctionDef_name(s), (void *)s, + ((struct _stmt*)s)->lineno )) return 0; 
- st = asdl_seq_GET(s->v.FunctionDef.body, 0); + st = PyList_GET_ITEM(FunctionDef_body(s), 0); docstring = compiler_isdocstring(st); if (docstring) - first_const = st->v.Expr.value->v.Str.s; + first_const = Str_s(Expr_value(st)); if (compiler_add_o(c, c->u->u_consts, first_const) < 0) { compiler_exit_scope(c); return 0; @@ -1922,13 +1921,13 @@ /* unpack nested arguments */ compiler_arguments(c, args); - c->u->u_argcount = asdl_seq_LEN(args->args); - n = asdl_seq_LEN(s->v.FunctionDef.body); + c->u->u_argcount = PyList_GET_SIZE(arguments_args(args)); + n = PyList_GET_SIZE(FunctionDef_body(s)); /* if there was a docstring, we need to skip the first statement */ for (i = docstring; i < n; i++) { - stmt_ty s2 = asdl_seq_GET(s->v.FunctionDef.body, i); - if (i == 0 && s2->kind == Expr_kind && - s2->v.Expr.value->kind == Str_kind) + PyObject *s2 = PyList_GET_ITEM(FunctionDef_body(s), i); + if (i == 0 && stmt_kind(s2) == Expr_kind && + expr_kind(Expr_value(s2)) == Str_kind) continue; VISIT_IN_SCOPE(c, stmt, s2); } @@ -1937,36 +1936,36 @@ if (co == NULL) return 0; - compiler_make_closure(c, co, asdl_seq_LEN(args->defaults)); + compiler_make_closure(c, co, PyList_GET_SIZE(arguments_defaults(args))); Py_DECREF(co); - for (i = 0; i < asdl_seq_LEN(decos); i++) { + for (i = 0; i < PyList_GET_SIZE(decos); i++) { ADDOP_I(c, CALL_FUNCTION, 1); } - return compiler_nameop(c, s->v.FunctionDef.name, Store); + return compiler_nameop(c, FunctionDef_name(s), Store()); } static int -compiler_class(struct compiler *c, stmt_ty s) +compiler_class(struct compiler *c, PyObject *s) { int n; PyCodeObject *co; PyObject *str; /* push class name on stack, needed by BUILD_CLASS */ - ADDOP_O(c, LOAD_CONST, s->v.ClassDef.name, consts); + ADDOP_O(c, LOAD_CONST, ClassDef_name(s), consts); /* push the tuple of base classes on the stack */ - n = asdl_seq_LEN(s->v.ClassDef.bases); + n = PyList_GET_SIZE(ClassDef_bases(s)); if (n > 0) - VISIT_SEQ(c, expr, s->v.ClassDef.bases); + VISIT_SEQ(c, expr, ClassDef_bases(s)); ADDOP_I(c, BUILD_TUPLE, n); - if (!compiler_enter_scope(c, s->v.ClassDef.name, (void *)s, - s->lineno)) + if (!compiler_enter_scope(c, ClassDef_name(s), (void *)s, + ((struct _stmt*)s)->lineno)) return 0; - c->u->u_private = s->v.ClassDef.name; + c->u->u_private = ClassDef_name(s); Py_INCREF(c->u->u_private); str = PyString_InternFromString("__name__"); - if (!str || !compiler_nameop(c, str, Load)) { + if (!str || !compiler_nameop(c, str, Load())) { Py_XDECREF(str); compiler_exit_scope(c); return 0; @@ -1974,14 +1973,14 @@ Py_DECREF(str); str = PyString_InternFromString("__module__"); - if (!str || !compiler_nameop(c, str, Store)) { + if (!str || !compiler_nameop(c, str, Store())) { Py_XDECREF(str); compiler_exit_scope(c); return 0; } Py_DECREF(str); - if (!compiler_body(c, s->v.ClassDef.body)) { + if (!compiler_body(c, ClassDef_body(s))) { compiler_exit_scope(c); return 0; } @@ -1998,18 +1997,18 @@ ADDOP_I(c, CALL_FUNCTION, 0); ADDOP(c, BUILD_CLASS); - if (!compiler_nameop(c, s->v.ClassDef.name, Store)) + if (!compiler_nameop(c, ClassDef_name(s), Store())) return 0; return 1; } static int -compiler_lambda(struct compiler *c, expr_ty e) +compiler_lambda(struct compiler *c, PyObject *e) { PyCodeObject *co; - static identifier name; - arguments_ty args = e->v.Lambda.args; - assert(e->kind == Lambda_kind); + static PyObject *name; + PyObject *args = Lambda_args(e); + assert(expr_kind(e) == Lambda_kind); if (!name) { name = PyString_InternFromString(""); @@ -2017,43 +2016,43 @@ return 0; } - if (args->defaults) - VISIT_SEQ(c, 
expr, args->defaults); - if (!compiler_enter_scope(c, name, (void *)e, e->lineno)) + if (arguments_defaults(args)) + VISIT_SEQ(c, expr, arguments_defaults(args)); + if (!compiler_enter_scope(c, name, (void *)e, ((struct _expr*)e)->lineno)) return 0; /* unpack nested arguments */ compiler_arguments(c, args); - c->u->u_argcount = asdl_seq_LEN(args->args); - VISIT_IN_SCOPE(c, expr, e->v.Lambda.body); + c->u->u_argcount = PyList_GET_SIZE(arguments_args(args)); + VISIT_IN_SCOPE(c, expr, Lambda_body(e)); ADDOP_IN_SCOPE(c, RETURN_VALUE); co = assemble(c, 1); compiler_exit_scope(c); if (co == NULL) return 0; - compiler_make_closure(c, co, asdl_seq_LEN(args->defaults)); + compiler_make_closure(c, co, PyList_GET_SIZE(arguments_defaults(args))); Py_DECREF(co); return 1; } static int -compiler_print(struct compiler *c, stmt_ty s) +compiler_print(struct compiler *c, PyObject *s) { int i, n; int dest; - assert(s->kind == Print_kind); - n = asdl_seq_LEN(s->v.Print.values); + assert(stmt_kind(s) == Print_kind); + n = PyList_GET_SIZE(Print_values(s)); dest = 0; - if (s->v.Print.dest) { - VISIT(c, expr, s->v.Print.dest); + if (Print_dest(s) != Py_None) { + VISIT(c, expr, Print_dest(s)); dest = 1; } for (i = 0; i < n; i++) { - expr_ty e = (expr_ty)asdl_seq_GET(s->v.Print.values, i); + PyObject *e = PyList_GET_ITEM(Print_values(s), i); if (dest) { ADDOP(c, DUP_TOP); VISIT(c, expr, e); @@ -2065,7 +2064,7 @@ ADDOP(c, PRINT_ITEM); } } - if (s->v.Print.nl) { + if (Print_nl(s) == Py_True) { if (dest) ADDOP(c, PRINT_NEWLINE_TO) else @@ -2077,32 +2076,32 @@ } static int -compiler_if(struct compiler *c, stmt_ty s) +compiler_if(struct compiler *c, PyObject *s) { basicblock *end, *next; - assert(s->kind == If_kind); + assert(stmt_kind(s) == If_kind); end = compiler_new_block(c); if (end == NULL) return 0; next = compiler_new_block(c); if (next == NULL) return 0; - VISIT(c, expr, s->v.If.test); + VISIT(c, expr, If_test(s)); ADDOP_JREL(c, JUMP_IF_FALSE, next); ADDOP(c, POP_TOP); - VISIT_SEQ(c, stmt, s->v.If.body); + VISIT_SEQ(c, stmt, If_body(s)); ADDOP_JREL(c, JUMP_FORWARD, end); compiler_use_next_block(c, next); ADDOP(c, POP_TOP); - if (s->v.If.orelse) - VISIT_SEQ(c, stmt, s->v.If.orelse); + /* if (If_orelse(s)) */ + VISIT_SEQ(c, stmt, If_orelse(s)); compiler_use_next_block(c, end); return 1; } static int -compiler_for(struct compiler *c, stmt_ty s) +compiler_for(struct compiler *c, PyObject *s) { basicblock *start, *cleanup, *end; @@ -2114,26 +2113,26 @@ ADDOP_JREL(c, SETUP_LOOP, end); if (!compiler_push_fblock(c, LOOP, start)) return 0; - VISIT(c, expr, s->v.For.iter); + VISIT(c, expr, For_iter(s)); ADDOP(c, GET_ITER); compiler_use_next_block(c, start); ADDOP_JREL(c, FOR_ITER, cleanup); - VISIT(c, expr, s->v.For.target); - VISIT_SEQ(c, stmt, s->v.For.body); + VISIT(c, expr, For_target(s)); + VISIT_SEQ(c, stmt, For_body(s)); ADDOP_JABS(c, JUMP_ABSOLUTE, start); compiler_use_next_block(c, cleanup); ADDOP(c, POP_BLOCK); compiler_pop_fblock(c, LOOP, start); - VISIT_SEQ(c, stmt, s->v.For.orelse); + VISIT_SEQ(c, stmt, For_orelse(s)); compiler_use_next_block(c, end); return 1; } static int -compiler_while(struct compiler *c, stmt_ty s) +compiler_while(struct compiler *c, PyObject *s) { basicblock *loop, *orelse, *end, *anchor = NULL; - int constant = expr_constant(s->v.While.test); + int constant = expr_constant(While_test(s)); if (constant == 0) return 1; @@ -2146,24 +2145,20 @@ } if (loop == NULL || end == NULL) return 0; - if (s->v.While.orelse) { - orelse = compiler_new_block(c); - if (orelse == NULL) - return 0; - } - 
else - orelse = NULL; + orelse = compiler_new_block(c); + if (orelse == NULL) + return 0; ADDOP_JREL(c, SETUP_LOOP, end); compiler_use_next_block(c, loop); if (!compiler_push_fblock(c, LOOP, loop)) return 0; if (constant == -1) { - VISIT(c, expr, s->v.While.test); + VISIT(c, expr, While_test(s)); ADDOP_JREL(c, JUMP_IF_FALSE, anchor); ADDOP(c, POP_TOP); } - VISIT_SEQ(c, stmt, s->v.While.body); + VISIT_SEQ(c, stmt, While_body(s)); ADDOP_JABS(c, JUMP_ABSOLUTE, loop); /* XXX should the two POP instructions be in a separate block @@ -2177,7 +2172,7 @@ } compiler_pop_fblock(c, LOOP, loop); if (orelse != NULL) - VISIT_SEQ(c, stmt, s->v.While.orelse); + VISIT_SEQ(c, stmt, While_orelse(s)); compiler_use_next_block(c, end); return 1; @@ -2246,7 +2241,7 @@ */ static int -compiler_try_finally(struct compiler *c, stmt_ty s) +compiler_try_finally(struct compiler *c, PyObject *s) { basicblock *body, *end; body = compiler_new_block(c); @@ -2258,7 +2253,7 @@ compiler_use_next_block(c, body); if (!compiler_push_fblock(c, FINALLY_TRY, body)) return 0; - VISIT_SEQ(c, stmt, s->v.TryFinally.body); + VISIT_SEQ(c, stmt, TryFinally_body(s)); ADDOP(c, POP_BLOCK); compiler_pop_fblock(c, FINALLY_TRY, body); @@ -2266,7 +2261,7 @@ compiler_use_next_block(c, end); if (!compiler_push_fblock(c, FINALLY_END, end)) return 0; - VISIT_SEQ(c, stmt, s->v.TryFinally.finalbody); + VISIT_SEQ(c, stmt, TryFinally_finalbody(s)); ADDOP(c, END_FINALLY); compiler_pop_fblock(c, FINALLY_END, end); @@ -2308,7 +2303,7 @@ Of course, parts are not generated if Vi or Ei is not present. */ static int -compiler_try_except(struct compiler *c, stmt_ty s) +compiler_try_except(struct compiler *c, PyObject *s) { basicblock *body, *orelse, *except, *end; int i, n; @@ -2323,50 +2318,50 @@ compiler_use_next_block(c, body); if (!compiler_push_fblock(c, EXCEPT, body)) return 0; - VISIT_SEQ(c, stmt, s->v.TryExcept.body); + VISIT_SEQ(c, stmt, TryExcept_body(s)); ADDOP(c, POP_BLOCK); compiler_pop_fblock(c, EXCEPT, body); ADDOP_JREL(c, JUMP_FORWARD, orelse); - n = asdl_seq_LEN(s->v.TryExcept.handlers); + n = PyList_GET_SIZE(TryExcept_handlers(s)); compiler_use_next_block(c, except); for (i = 0; i < n; i++) { - excepthandler_ty handler = asdl_seq_GET( - s->v.TryExcept.handlers, i); - if (!handler->type && i < n-1) + PyObject *handler = PyList_GET_ITEM( + TryExcept_handlers(s), i); + if (!excepthandler_type(handler)&& i < n-1) return compiler_error(c, "default 'except:' must be last"); except = compiler_new_block(c); if (except == NULL) return 0; - if (handler->type) { + if (excepthandler_type(handler)) { ADDOP(c, DUP_TOP); - VISIT(c, expr, handler->type); + VISIT(c, expr, excepthandler_type(handler)); ADDOP_I(c, COMPARE_OP, PyCmp_EXC_MATCH); ADDOP_JREL(c, JUMP_IF_FALSE, except); ADDOP(c, POP_TOP); } ADDOP(c, POP_TOP); - if (handler->name) { - VISIT(c, expr, handler->name); + if (excepthandler_name(handler)) { + VISIT(c, expr, excepthandler_name(handler)); } else { ADDOP(c, POP_TOP); } ADDOP(c, POP_TOP); - VISIT_SEQ(c, stmt, handler->body); + VISIT_SEQ(c, stmt, excepthandler_body(handler)); ADDOP_JREL(c, JUMP_FORWARD, end); compiler_use_next_block(c, except); - if (handler->type) + if (excepthandler_type(handler)) ADDOP(c, POP_TOP); } ADDOP(c, END_FINALLY); compiler_use_next_block(c, orelse); - VISIT_SEQ(c, stmt, s->v.TryExcept.orelse); + VISIT_SEQ(c, stmt, TryExcept_orelse(s)); compiler_use_next_block(c, end); return 1; } static int -compiler_import_as(struct compiler *c, identifier name, identifier asname) +compiler_import_as(struct compiler *c, PyObject 
*name, PyObject *asname) { /* The IMPORT_NAME opcode was already generated. This function merely needs to bind the result to a name. @@ -2392,11 +2387,11 @@ src = dot + 1; } } - return compiler_nameop(c, asname, Store); + return compiler_nameop(c, asname, Store()); } static int -compiler_import(struct compiler *c, stmt_ty s) +compiler_import(struct compiler *c, PyObject *s) { /* The Import node stores a module name like a.b.c as a single string. This is convenient for all cases except @@ -2405,27 +2400,27 @@ module names. XXX Perhaps change the representation to make this case simpler? */ - int i, n = asdl_seq_LEN(s->v.Import.names); + int i, n = PyList_GET_SIZE(Import_names(s)); for (i = 0; i < n; i++) { - alias_ty alias = asdl_seq_GET(s->v.Import.names, i); + PyObject *alias = PyList_GET_ITEM(Import_names(s), i); int r; ADDOP_O(c, LOAD_CONST, Py_None, consts); - ADDOP_NAME(c, IMPORT_NAME, alias->name, names); + ADDOP_NAME(c, IMPORT_NAME, alias_name(alias), names); - if (alias->asname) { - r = compiler_import_as(c, alias->name, alias->asname); + if (alias_asname(alias) != Py_None) { + r = compiler_import_as(c, alias_name(alias), alias_asname(alias)); if (!r) return r; } else { - identifier tmp = alias->name; - const char *base = PyString_AS_STRING(alias->name); + PyObject *tmp = alias_name(alias); + const char *base = PyString_AS_STRING(alias_name(alias)); char *dot = strchr(base, '.'); if (dot) tmp = PyString_FromStringAndSize(base, dot - base); - r = compiler_nameop(c, tmp, Store); + r = compiler_nameop(c, tmp, Store()); if (dot) { Py_DECREF(tmp); } @@ -2437,9 +2432,9 @@ } static int -compiler_from_import(struct compiler *c, stmt_ty s) +compiler_from_import(struct compiler *c, PyObject *s) { - int i, n = asdl_seq_LEN(s->v.ImportFrom.names); + int i, n = PyList_GET_SIZE(ImportFrom_names(s)); PyObject *names = PyTuple_New(n); if (!names) @@ -2447,13 +2442,13 @@ /* build up the names */ for (i = 0; i < n; i++) { - alias_ty alias = asdl_seq_GET(s->v.ImportFrom.names, i); - Py_INCREF(alias->name); - PyTuple_SET_ITEM(names, i, alias->name); + PyObject *alias = PyList_GET_ITEM(ImportFrom_names(s), i); + Py_INCREF(alias_name(alias)); + PyTuple_SET_ITEM(names, i, alias_name(alias)); } - if (s->lineno > c->c_future->ff_lineno) { - if (!strcmp(PyString_AS_STRING(s->v.ImportFrom.module), + if (((struct _stmt*)s)->lineno > c->c_future->ff_lineno) { + if (!strcmp(PyString_AS_STRING(ImportFrom_module(s)), "__future__")) { Py_DECREF(names); return compiler_error(c, @@ -2465,23 +2460,23 @@ ADDOP_O(c, LOAD_CONST, names, consts); Py_DECREF(names); - ADDOP_NAME(c, IMPORT_NAME, s->v.ImportFrom.module, names); + ADDOP_NAME(c, IMPORT_NAME, ImportFrom_module(s), names); for (i = 0; i < n; i++) { - alias_ty alias = asdl_seq_GET(s->v.ImportFrom.names, i); - identifier store_name; + PyObject *alias = PyList_GET_ITEM(ImportFrom_names(s), i); + PyObject *store_name; - if (i == 0 && *PyString_AS_STRING(alias->name) == '*') { + if (i == 0 && *PyString_AS_STRING(alias_name(alias)) == '*') { assert(n == 1); ADDOP(c, IMPORT_STAR); return 1; } - ADDOP_NAME(c, IMPORT_FROM, alias->name, names); - store_name = alias->name; - if (alias->asname) - store_name = alias->asname; + ADDOP_NAME(c, IMPORT_FROM, alias_name(alias), names); + store_name = alias_name(alias); + if (alias_asname(alias) != Py_None) + store_name = alias_asname(alias); - if (!compiler_nameop(c, store_name, Store)) { + if (!compiler_nameop(c, store_name, Store())) { Py_DECREF(names); return 0; } @@ -2492,7 +2487,7 @@ } static int -compiler_assert(struct 
compiler *c, stmt_ty s) +compiler_assert(struct compiler *c, PyObject *s) { static PyObject *assertion_error = NULL; basicblock *end; @@ -2504,15 +2499,15 @@ if (assertion_error == NULL) return 0; } - VISIT(c, expr, s->v.Assert.test); + VISIT(c, expr, Assert_test(s)); end = compiler_new_block(c); if (end == NULL) return 0; ADDOP_JREL(c, JUMP_IF_TRUE, end); ADDOP(c, POP_TOP); ADDOP_O(c, LOAD_GLOBAL, assertion_error, names); - if (s->v.Assert.msg) { - VISIT(c, expr, s->v.Assert.msg); + if (Assert_msg(s) != Py_None) { + VISIT(c, expr, Assert_msg(s)); ADDOP_I(c, RAISE_VARARGS, 2); } else { @@ -2524,13 +2519,13 @@ } static int -compiler_visit_stmt(struct compiler *c, stmt_ty s) +compiler_visit_stmt(struct compiler *c, PyObject *s) { int i, n; - c->u->u_lineno = s->lineno; + c->u->u_lineno = ((struct _stmt*)s)->lineno; c->u->u_lineno_set = 0; - switch (s->kind) { + switch (stmt_kind(s)) { case FunctionDef_kind: return compiler_function(c, s); case ClassDef_kind: @@ -2538,28 +2533,28 @@ case Return_kind: if (c->u->u_ste->ste_type != FunctionBlock) return compiler_error(c, "'return' outside function"); - if (s->v.Return.value) { + if (Return_value(s) != Py_None) { if (c->u->u_ste->ste_generator) { return compiler_error(c, "'return' with argument inside generator"); } - VISIT(c, expr, s->v.Return.value); + VISIT(c, expr, Return_value(s)); } else ADDOP_O(c, LOAD_CONST, Py_None, consts); ADDOP(c, RETURN_VALUE); break; case Delete_kind: - VISIT_SEQ(c, expr, s->v.Delete.targets) + VISIT_SEQ(c, expr, Delete_targets(s)) break; case Assign_kind: - n = asdl_seq_LEN(s->v.Assign.targets); - VISIT(c, expr, s->v.Assign.value); + n = PyList_GET_SIZE(Assign_targets(s)); + VISIT(c, expr, Assign_value(s)); for (i = 0; i < n; i++) { if (i < n - 1) ADDOP(c, DUP_TOP); VISIT(c, expr, - (expr_ty)asdl_seq_GET(s->v.Assign.targets, i)); + PyList_GET_ITEM(Assign_targets(s), i)); } break; case AugAssign_kind: @@ -2574,14 +2569,14 @@ return compiler_if(c, s); case Raise_kind: n = 0; - if (s->v.Raise.type) { - VISIT(c, expr, s->v.Raise.type); + if (Raise_type(s) != Py_None) { + VISIT(c, expr, Raise_type(s)); n++; - if (s->v.Raise.inst) { - VISIT(c, expr, s->v.Raise.inst); + if (Raise_inst(s) != Py_None) { + VISIT(c, expr, Raise_inst(s)); n++; - if (s->v.Raise.tback) { - VISIT(c, expr, s->v.Raise.tback); + if (Raise_tback(s) != Py_None) { + VISIT(c, expr, Raise_tback(s)); n++; } } @@ -2599,11 +2594,11 @@ case ImportFrom_kind: return compiler_from_import(c, s); case Exec_kind: - VISIT(c, expr, s->v.Exec.body); - if (s->v.Exec.globals) { - VISIT(c, expr, s->v.Exec.globals); - if (s->v.Exec.locals) { - VISIT(c, expr, s->v.Exec.locals); + VISIT(c, expr, Exec_body(s)); + if (Exec_globals(s) != Py_None) { + VISIT(c, expr, Exec_globals(s)); + if (Exec_locals(s) != Py_None) { + VISIT(c, expr, Exec_locals(s)); } else { ADDOP(c, DUP_TOP); } @@ -2616,7 +2611,7 @@ case Global_kind: break; case Expr_kind: - VISIT(c, expr, s->v.Expr.value); + VISIT(c, expr, Expr_value(s)); if (c->c_interactive && c->c_nestlevel <= 1) { ADDOP(c, PRINT_EXPR); } @@ -2638,124 +2633,124 @@ } static int -unaryop(unaryop_ty op) +unaryop(PyObject *op) { - switch (op) { - case Invert: + switch (unaryop_kind(op)) { + case Invert_kind: return UNARY_INVERT; - case Not: + case Not_kind: return UNARY_NOT; - case UAdd: + case UAdd_kind: return UNARY_POSITIVE; - case USub: + case USub_kind: return UNARY_NEGATIVE; } return 0; } static int -binop(struct compiler *c, operator_ty op) +binop(struct compiler *c, PyObject *op) { - switch (op) { - case Add: + switch 
(operator_kind(op)) { + case Add_kind: return BINARY_ADD; - case Sub: + case Sub_kind: return BINARY_SUBTRACT; - case Mult: + case Mult_kind: return BINARY_MULTIPLY; - case Div: + case Div_kind: if (c->c_flags && c->c_flags->cf_flags & CO_FUTURE_DIVISION) return BINARY_TRUE_DIVIDE; else return BINARY_DIVIDE; - case Mod: + case Mod_kind: return BINARY_MODULO; - case Pow: + case Pow_kind: return BINARY_POWER; - case LShift: + case LShift_kind: return BINARY_LSHIFT; - case RShift: + case RShift_kind: return BINARY_RSHIFT; - case BitOr: + case BitOr_kind: return BINARY_OR; - case BitXor: + case BitXor_kind: return BINARY_XOR; - case BitAnd: + case BitAnd_kind: return BINARY_AND; - case FloorDiv: + case FloorDiv_kind: return BINARY_FLOOR_DIVIDE; } return 0; } static int -cmpop(cmpop_ty op) +cmpop(PyObject *op) { - switch (op) { - case Eq: + switch (cmpop_kind(op)) { + case Eq_kind: return PyCmp_EQ; - case NotEq: + case NotEq_kind: return PyCmp_NE; - case Lt: + case Lt_kind: return PyCmp_LT; - case LtE: + case LtE_kind: return PyCmp_LE; - case Gt: + case Gt_kind: return PyCmp_GT; - case GtE: + case GtE_kind: return PyCmp_GE; - case Is: + case Is_kind: return PyCmp_IS; - case IsNot: + case IsNot_kind: return PyCmp_IS_NOT; - case In: + case In_kind: return PyCmp_IN; - case NotIn: + case NotIn_kind: return PyCmp_NOT_IN; } return PyCmp_BAD; } static int -inplace_binop(struct compiler *c, operator_ty op) +inplace_binop(struct compiler *c, PyObject *op) { - switch (op) { - case Add: + switch (operator_kind(op)) { + case Add_kind: return INPLACE_ADD; - case Sub: + case Sub_kind: return INPLACE_SUBTRACT; - case Mult: + case Mult_kind: return INPLACE_MULTIPLY; - case Div: + case Div_kind: if (c->c_flags && c->c_flags->cf_flags & CO_FUTURE_DIVISION) return INPLACE_TRUE_DIVIDE; else return INPLACE_DIVIDE; - case Mod: + case Mod_kind: return INPLACE_MODULO; - case Pow: + case Pow_kind: return INPLACE_POWER; - case LShift: + case LShift_kind: return INPLACE_LSHIFT; - case RShift: + case RShift_kind: return INPLACE_RSHIFT; - case BitOr: + case BitOr_kind: return INPLACE_OR; - case BitXor: + case BitXor_kind: return INPLACE_XOR; - case BitAnd: + case BitAnd_kind: return INPLACE_AND; - case FloorDiv: + case FloorDiv_kind: return INPLACE_FLOOR_DIVIDE; } PyErr_Format(PyExc_SystemError, "inplace binary op %d should not be possible", - op); + operator_kind(op)); return 0; } static int -compiler_nameop(struct compiler *c, identifier name, expr_context_ty ctx) +compiler_nameop(struct compiler *c, PyObject *name, PyObject *ctx) { int op, scope, arg; enum { OP_FAST, OP_GLOBAL, OP_DEREF, OP_NAME } optype; @@ -2765,7 +2760,9 @@ /* XXX AugStore isn't used anywhere! */ /* First check for assignment to __debug__. Param? 
*/ - if ((ctx == Store || ctx == AugStore || ctx == Del) + if ((expr_context_kind(ctx) == Store_kind + || expr_context_kind(ctx) == AugStore_kind + || expr_context_kind(ctx) == Del_kind) && !strcmp(PyString_AS_STRING(name), "__debug__")) { return compiler_error(c, "can not assign to __debug__"); } @@ -2805,34 +2802,34 @@ switch (optype) { case OP_DEREF: - switch (ctx) { - case Load: op = LOAD_DEREF; break; - case Store: op = STORE_DEREF; break; - case AugLoad: - case AugStore: + switch (expr_context_kind(ctx)) { + case Load_kind: op = LOAD_DEREF; break; + case Store_kind: op = STORE_DEREF; break; + case AugLoad_kind: + case AugStore_kind: break; - case Del: + case Del_kind: PyErr_Format(PyExc_SyntaxError, "can not delete variable '%s' referenced " "in nested scope", PyString_AS_STRING(name)); Py_DECREF(mangled); return 0; - case Param: + case Param_kind: PyErr_SetString(PyExc_SystemError, "param invalid for deref variable"); return 0; } break; case OP_FAST: - switch (ctx) { - case Load: op = LOAD_FAST; break; - case Store: op = STORE_FAST; break; - case Del: op = DELETE_FAST; break; - case AugLoad: - case AugStore: + switch (expr_context_kind(ctx)) { + case Load_kind: op = LOAD_FAST; break; + case Store_kind: op = STORE_FAST; break; + case Del_kind: op = DELETE_FAST; break; + case AugLoad_kind: + case AugStore_kind: break; - case Param: + case Param_kind: PyErr_SetString(PyExc_SystemError, "param invalid for local variable"); return 0; @@ -2841,28 +2838,28 @@ Py_DECREF(mangled); return 1; case OP_GLOBAL: - switch (ctx) { - case Load: op = LOAD_GLOBAL; break; - case Store: op = STORE_GLOBAL; break; - case Del: op = DELETE_GLOBAL; break; - case AugLoad: - case AugStore: + switch (expr_context_kind(ctx)) { + case Load_kind: op = LOAD_GLOBAL; break; + case Store_kind: op = STORE_GLOBAL; break; + case Del_kind: op = DELETE_GLOBAL; break; + case AugLoad_kind: + case AugStore_kind: break; - case Param: + case Param_kind: PyErr_SetString(PyExc_SystemError, "param invalid for global variable"); return 0; } break; case OP_NAME: - switch (ctx) { - case Load: op = LOAD_NAME; break; - case Store: op = STORE_NAME; break; - case Del: op = DELETE_NAME; break; - case AugLoad: - case AugStore: + switch (expr_context_kind(ctx)) { + case Load_kind: op = LOAD_NAME; break; + case Store_kind: op = STORE_NAME; break; + case Del_kind: op = DELETE_NAME; break; + case AugLoad_kind: + case AugStore_kind: break; - case Param: + case Param_kind: PyErr_SetString(PyExc_SystemError, "param invalid for name variable"); return 0; @@ -2879,92 +2876,92 @@ } static int -compiler_boolop(struct compiler *c, expr_ty e) +compiler_boolop(struct compiler *c, PyObject *e) { basicblock *end; int jumpi, i, n; - asdl_seq *s; + PyObject *s; - assert(e->kind == BoolOp_kind); - if (e->v.BoolOp.op == And) + assert(expr_kind(e) == BoolOp_kind); + if (boolop_kind(BoolOp_op(e)) == And_kind) jumpi = JUMP_IF_FALSE; else jumpi = JUMP_IF_TRUE; end = compiler_new_block(c); if (end < 0) return 0; - s = e->v.BoolOp.values; - n = asdl_seq_LEN(s) - 1; + s = BoolOp_values(e); + n = PyList_GET_SIZE(s) - 1; for (i = 0; i < n; ++i) { - VISIT(c, expr, asdl_seq_GET(s, i)); + VISIT(c, expr, PyList_GET_ITEM(s, i)); ADDOP_JREL(c, jumpi, end); ADDOP(c, POP_TOP) } - VISIT(c, expr, asdl_seq_GET(s, n)); + VISIT(c, expr, PyList_GET_ITEM(s, n)); compiler_use_next_block(c, end); return 1; } static int -compiler_list(struct compiler *c, expr_ty e) +compiler_list(struct compiler *c, PyObject *e) { - int n = asdl_seq_LEN(e->v.List.elts); - if (e->v.List.ctx == Store) { + 
int n = PyList_GET_SIZE(List_elts(e)); + if (expr_context_kind(List_ctx(e)) == Store_kind) { ADDOP_I(c, UNPACK_SEQUENCE, n); } - VISIT_SEQ(c, expr, e->v.List.elts); - if (e->v.List.ctx == Load) { + VISIT_SEQ(c, expr, List_elts(e)); + if (expr_context_kind(List_ctx(e)) == Load_kind) { ADDOP_I(c, BUILD_LIST, n); } return 1; } static int -compiler_tuple(struct compiler *c, expr_ty e) +compiler_tuple(struct compiler *c, PyObject *e) { - int n = asdl_seq_LEN(e->v.Tuple.elts); - if (e->v.Tuple.ctx == Store) { + int n = PyList_GET_SIZE(Tuple_elts(e)); + if (expr_context_kind(Tuple_ctx(e)) == Store_kind) { ADDOP_I(c, UNPACK_SEQUENCE, n); } - VISIT_SEQ(c, expr, e->v.Tuple.elts); - if (e->v.Tuple.ctx == Load) { + VISIT_SEQ(c, expr, Tuple_elts(e)); + if (expr_context_kind(Tuple_ctx(e)) == Load_kind) { ADDOP_I(c, BUILD_TUPLE, n); } return 1; } static int -compiler_compare(struct compiler *c, expr_ty e) +compiler_compare(struct compiler *c, PyObject *e) { int i, n; basicblock *cleanup = NULL; /* XXX the logic can be cleaned up for 1 or multiple comparisons */ - VISIT(c, expr, e->v.Compare.left); - n = asdl_seq_LEN(e->v.Compare.ops); + VISIT(c, expr, Compare_left(e)); + n = PyList_GET_SIZE(Compare_ops(e)); assert(n > 0); if (n > 1) { cleanup = compiler_new_block(c); if (cleanup == NULL) return 0; - VISIT(c, expr, asdl_seq_GET(e->v.Compare.comparators, 0)); + VISIT(c, expr, PyList_GET_ITEM(Compare_comparators(e), 0)); } for (i = 1; i < n; i++) { ADDOP(c, DUP_TOP); ADDOP(c, ROT_THREE); /* XXX We're casting a void* to cmpop_ty in the next stmt. */ ADDOP_I(c, COMPARE_OP, - cmpop((cmpop_ty)asdl_seq_GET(e->v.Compare.ops, i - 1))); + cmpop(PyList_GET_ITEM(Compare_ops(e), i - 1))); ADDOP_JREL(c, JUMP_IF_FALSE, cleanup); NEXT_BLOCK(c); ADDOP(c, POP_TOP); if (i < (n - 1)) - VISIT(c, expr, asdl_seq_GET(e->v.Compare.comparators, i)); + VISIT(c, expr, PyList_GET_ITEM(Compare_comparators(e), i)); } - VISIT(c, expr, asdl_seq_GET(e->v.Compare.comparators, n - 1)); + VISIT(c, expr, PyList_GET_ITEM(Compare_comparators(e), n - 1)); ADDOP_I(c, COMPARE_OP, /* XXX We're casting a void* to cmpop_ty in the next stmt. 
*/ - cmpop((cmpop_ty)asdl_seq_GET(e->v.Compare.ops, n - 1))); + cmpop(PyList_GET_ITEM(Compare_ops(e), n - 1))); if (n > 1) { basicblock *end = compiler_new_block(c); if (end == NULL) @@ -2979,23 +2976,21 @@ } static int -compiler_call(struct compiler *c, expr_ty e) +compiler_call(struct compiler *c, PyObject *e) { int n, code = 0; - VISIT(c, expr, e->v.Call.func); - n = asdl_seq_LEN(e->v.Call.args); - VISIT_SEQ(c, expr, e->v.Call.args); - if (e->v.Call.keywords) { - VISIT_SEQ(c, keyword, e->v.Call.keywords); - n |= asdl_seq_LEN(e->v.Call.keywords) << 8; - } - if (e->v.Call.starargs) { - VISIT(c, expr, e->v.Call.starargs); + VISIT(c, expr, Call_func(e)); + n = PyList_GET_SIZE(Call_args(e)); + VISIT_SEQ(c, expr, Call_args(e)); + VISIT_SEQ(c, keyword, Call_keywords(e)); + n |= PyList_GET_SIZE(Call_keywords(e)) << 8; + if (Call_starargs(e) != Py_None) { + VISIT(c, expr, Call_starargs(e)); code |= 1; } - if (e->v.Call.kwargs) { - VISIT(c, expr, e->v.Call.kwargs); + if (Call_kwargs(e) != Py_None) { + VISIT(c, expr, Call_kwargs(e)); code |= 2; } switch (code) { @@ -3017,13 +3012,13 @@ static int compiler_listcomp_generator(struct compiler *c, PyObject *tmpname, - asdl_seq *generators, int gen_index, - expr_ty elt) + PyObject *generators, int gen_index, + PyObject *elt) { /* generate code for the iterator, then each of the ifs, and then write to the element */ - comprehension_ty l; + PyObject *l; basicblock *start, *anchor, *skip, *if_cleanup; int i, n; @@ -3036,32 +3031,32 @@ anchor == NULL) return 0; - l = asdl_seq_GET(generators, gen_index); - VISIT(c, expr, l->iter); + l = PyList_GET_ITEM(generators, gen_index); + VISIT(c, expr, comprehension_iter(l)); ADDOP(c, GET_ITER); compiler_use_next_block(c, start); ADDOP_JREL(c, FOR_ITER, anchor); NEXT_BLOCK(c); - VISIT(c, expr, l->target); + VISIT(c, expr, comprehension_target(l)); /* XXX this needs to be cleaned up...a lot! 
*/ - n = asdl_seq_LEN(l->ifs); + n = PyList_GET_SIZE(comprehension_ifs(l)); for (i = 0; i < n; i++) { - expr_ty e = asdl_seq_GET(l->ifs, i); + PyObject *e = PyList_GET_ITEM(comprehension_ifs(l), i); VISIT(c, expr, e); ADDOP_JREL(c, JUMP_IF_FALSE, if_cleanup); NEXT_BLOCK(c); ADDOP(c, POP_TOP); } - if (++gen_index < asdl_seq_LEN(generators)) + if (++gen_index < PyList_GET_SIZE(generators)) if (!compiler_listcomp_generator(c, tmpname, generators, gen_index, elt)) return 0; /* only append after the last for generator */ - if (gen_index >= asdl_seq_LEN(generators)) { - if (!compiler_nameop(c, tmpname, Load)) + if (gen_index >= PyList_GET_SIZE(generators)) { + if (!compiler_nameop(c, tmpname, Load())) return 0; VISIT(c, expr, elt); ADDOP_I(c, CALL_FUNCTION, 1); @@ -3079,22 +3074,22 @@ compiler_use_next_block(c, anchor); /* delete the append method added to locals */ if (gen_index == 1) - if (!compiler_nameop(c, tmpname, Del)) + if (!compiler_nameop(c, tmpname, Del())) return 0; return 1; } static int -compiler_listcomp(struct compiler *c, expr_ty e) +compiler_listcomp(struct compiler *c, PyObject *e) { char tmpname[256]; - identifier tmp; + PyObject *tmp; int rc = 0; - static identifier append; - asdl_seq *generators = e->v.ListComp.generators; + static PyObject *append; + PyObject *generators = ListComp_generators(e); - assert(e->kind == ListComp_kind); + assert(expr_kind(e) == ListComp_kind); if (!append) { append = PyString_InternFromString("append"); if (!append) @@ -3107,22 +3102,22 @@ ADDOP_I(c, BUILD_LIST, 0); ADDOP(c, DUP_TOP); ADDOP_O(c, LOAD_ATTR, append, names); - if (compiler_nameop(c, tmp, Store)) + if (compiler_nameop(c, tmp, Store())) rc = compiler_listcomp_generator(c, tmp, generators, 0, - e->v.ListComp.elt); + ListComp_elt(e)); Py_DECREF(tmp); return rc; } static int compiler_genexp_generator(struct compiler *c, - asdl_seq *generators, int gen_index, - expr_ty elt) + PyObject *generators, int gen_index, + PyObject *elt) { /* generate code for the iterator, then each of the ifs, and then write to the element */ - comprehension_ty ge; + PyObject *ge; basicblock *start, *anchor, *skip, *if_cleanup, *end; int i, n; @@ -3136,7 +3131,7 @@ anchor == NULL || end == NULL) return 0; - ge = asdl_seq_GET(generators, gen_index); + ge = PyList_GET_ITEM(generators, gen_index); ADDOP_JREL(c, SETUP_LOOP, end); if (!compiler_push_fblock(c, LOOP, start)) return 0; @@ -3148,30 +3143,30 @@ } else { /* Sub-iter - calculate on the fly */ - VISIT(c, expr, ge->iter); + VISIT(c, expr, comprehension_iter(ge)); ADDOP(c, GET_ITER); } compiler_use_next_block(c, start); ADDOP_JREL(c, FOR_ITER, anchor); NEXT_BLOCK(c); - VISIT(c, expr, ge->target); + VISIT(c, expr, comprehension_target(ge)); /* XXX this needs to be cleaned up...a lot! 
*/ - n = asdl_seq_LEN(ge->ifs); + n = PyList_GET_SIZE(comprehension_ifs(ge)); for (i = 0; i < n; i++) { - expr_ty e = asdl_seq_GET(ge->ifs, i); + PyObject *e = PyList_GET_ITEM(comprehension_ifs(ge), i); VISIT(c, expr, e); ADDOP_JREL(c, JUMP_IF_FALSE, if_cleanup); NEXT_BLOCK(c); ADDOP(c, POP_TOP); } - if (++gen_index < asdl_seq_LEN(generators)) + if (++gen_index < PyList_GET_SIZE(generators)) if (!compiler_genexp_generator(c, generators, gen_index, elt)) return 0; /* only append after the last 'for' generator */ - if (gen_index >= asdl_seq_LEN(generators)) { + if (gen_index >= PyList_GET_SIZE(generators)) { VISIT(c, expr, elt); ADDOP(c, YIELD_VALUE); ADDOP(c, POP_TOP); @@ -3195,13 +3190,11 @@ } static int -compiler_genexp(struct compiler *c, expr_ty e) +compiler_genexp(struct compiler *c, PyObject *e) { - static identifier name; + static PyObject *name; PyCodeObject *co; - expr_ty outermost_iter = ((comprehension_ty) - (asdl_seq_GET(e->v.GeneratorExp.generators, - 0)))->iter; + PyObject *outermost_iter = comprehension_iter(PyList_GET_ITEM(GeneratorExp_generators(e), 0)); if (!name) { name = PyString_FromString(""); @@ -3209,10 +3202,10 @@ return 0; } - if (!compiler_enter_scope(c, name, (void *)e, e->lineno)) + if (!compiler_enter_scope(c, name, (void *)e, ((struct _expr*)e)->lineno)) return 0; - compiler_genexp_generator(c, e->v.GeneratorExp.generators, 0, - e->v.GeneratorExp.elt); + compiler_genexp_generator(c, GeneratorExp_generators(e), 0, + GeneratorExp_elt(e)); co = assemble(c, 1); compiler_exit_scope(c); if (co == NULL) @@ -3229,10 +3222,10 @@ } static int -compiler_visit_keyword(struct compiler *c, keyword_ty k) +compiler_visit_keyword(struct compiler *c, PyObject *k) { - ADDOP_O(c, LOAD_CONST, k->arg, consts); - VISIT(c, expr, k->value); + ADDOP_O(c, LOAD_CONST, keyword_arg(k), consts); + VISIT(c, expr, keyword_value(k)); return 1; } @@ -3243,52 +3236,52 @@ */ static int -expr_constant(expr_ty e) +expr_constant(PyObject *e) { - switch (e->kind) { + switch (expr_kind(e)) { case Num_kind: - return PyObject_IsTrue(e->v.Num.n); + return PyObject_IsTrue(Num_n(e)); case Str_kind: - return PyObject_IsTrue(e->v.Str.s); + return PyObject_IsTrue(Str_s(e)); default: return -1; } } static int -compiler_visit_expr(struct compiler *c, expr_ty e) +compiler_visit_expr(struct compiler *c, PyObject *e) { int i, n; - if (e->lineno > c->u->u_lineno) { - c->u->u_lineno = e->lineno; + if (((struct _expr*)e)->lineno > c->u->u_lineno) { + c->u->u_lineno = ((struct _expr*)e)->lineno; c->u->u_lineno_set = 0; } - switch (e->kind) { + switch (expr_kind(e)) { case BoolOp_kind: return compiler_boolop(c, e); case BinOp_kind: - VISIT(c, expr, e->v.BinOp.left); - VISIT(c, expr, e->v.BinOp.right); - ADDOP(c, binop(c, e->v.BinOp.op)); + VISIT(c, expr, BinOp_left(e)); + VISIT(c, expr, BinOp_right(e)); + ADDOP(c, binop(c, BinOp_op(e))); break; case UnaryOp_kind: - VISIT(c, expr, e->v.UnaryOp.operand); - ADDOP(c, unaryop(e->v.UnaryOp.op)); + VISIT(c, expr, UnaryOp_operand(e)); + ADDOP(c, unaryop(UnaryOp_op(e))); break; case Lambda_kind: return compiler_lambda(c, e); case Dict_kind: /* XXX get rid of arg? */ ADDOP_I(c, BUILD_MAP, 0); - n = asdl_seq_LEN(e->v.Dict.values); + n = PyList_GET_SIZE(Dict_values(e)); /* We must arrange things just right for STORE_SUBSCR. 
It wants the stack to look like (value) (dict) (key) */ for (i = 0; i < n; i++) { ADDOP(c, DUP_TOP); - VISIT(c, expr, asdl_seq_GET(e->v.Dict.values, i)); + VISIT(c, expr, PyList_GET_ITEM(Dict_values(e), i)); ADDOP(c, ROT_TWO); - VISIT(c, expr, asdl_seq_GET(e->v.Dict.keys, i)); + VISIT(c, expr, PyList_GET_ITEM(Dict_keys(e), i)); ADDOP(c, STORE_SUBSCR); } break; @@ -3307,8 +3300,8 @@ "block with a 'finally' clause"); } */ - if (e->v.Yield.value) { - VISIT(c, expr, e->v.Yield.value); + if (Yield_value(e) != Py_None) { + VISIT(c, expr, Yield_value(e)); } else { ADDOP_O(c, LOAD_CONST, Py_None, consts); @@ -3320,70 +3313,70 @@ case Call_kind: return compiler_call(c, e); case Repr_kind: - VISIT(c, expr, e->v.Repr.value); + VISIT(c, expr, Repr_value(e)); ADDOP(c, UNARY_CONVERT); break; case Num_kind: - ADDOP_O(c, LOAD_CONST, e->v.Num.n, consts); + ADDOP_O(c, LOAD_CONST, Num_n(e), consts); break; case Str_kind: - ADDOP_O(c, LOAD_CONST, e->v.Str.s, consts); + ADDOP_O(c, LOAD_CONST, Str_s(e), consts); break; /* The following exprs can be assignment targets. */ case Attribute_kind: - if (e->v.Attribute.ctx != AugStore) - VISIT(c, expr, e->v.Attribute.value); - switch (e->v.Attribute.ctx) { - case AugLoad: + if (expr_context_kind(Attribute_ctx(e)) != AugStore_kind) + VISIT(c, expr, Attribute_value(e)); + switch (expr_context_kind(Attribute_ctx(e))) { + case AugLoad_kind: ADDOP(c, DUP_TOP); /* Fall through to load */ - case Load: - ADDOP_NAME(c, LOAD_ATTR, e->v.Attribute.attr, names); + case Load_kind: + ADDOP_NAME(c, LOAD_ATTR, Attribute_attr(e), names); break; - case AugStore: + case AugStore_kind: ADDOP(c, ROT_TWO); /* Fall through to save */ - case Store: - ADDOP_NAME(c, STORE_ATTR, e->v.Attribute.attr, names); + case Store_kind: + ADDOP_NAME(c, STORE_ATTR, Attribute_attr(e), names); break; - case Del: - ADDOP_NAME(c, DELETE_ATTR, e->v.Attribute.attr, names); + case Del_kind: + ADDOP_NAME(c, DELETE_ATTR, Attribute_attr(e), names); break; - case Param: + case Param_kind: PyErr_SetString(PyExc_SystemError, "param invalid in attribute expression"); return 0; } break; case Subscript_kind: - switch (e->v.Subscript.ctx) { - case AugLoad: - VISIT(c, expr, e->v.Subscript.value); - VISIT_SLICE(c, e->v.Subscript.slice, AugLoad); - break; - case Load: - VISIT(c, expr, e->v.Subscript.value); - VISIT_SLICE(c, e->v.Subscript.slice, Load); - break; - case AugStore: - VISIT_SLICE(c, e->v.Subscript.slice, AugStore); - break; - case Store: - VISIT(c, expr, e->v.Subscript.value); - VISIT_SLICE(c, e->v.Subscript.slice, Store); - break; - case Del: - VISIT(c, expr, e->v.Subscript.value); - VISIT_SLICE(c, e->v.Subscript.slice, Del); + switch (expr_context_kind(Subscript_ctx(e))) { + case AugLoad_kind: + VISIT(c, expr, Subscript_value(e)); + VISIT_SLICE(c, Subscript_slice(e), AugLoad()); /* make a PyObject ?? 
*/ + break; + case Load_kind: + VISIT(c, expr, Subscript_value(e)); + VISIT_SLICE(c, Subscript_slice(e), Load()); + break; + case AugStore_kind: + VISIT_SLICE(c, Subscript_slice(e), AugStore()); + break; + case Store_kind: + VISIT(c, expr, Subscript_value(e)); + VISIT_SLICE(c, Subscript_slice(e), Store()); + break; + case Del_kind: + VISIT(c, expr, Subscript_value(e)); + VISIT_SLICE(c, Subscript_slice(e), Del()); break; - case Param: + case Param_kind: PyErr_SetString(PyExc_SystemError, "param invalid in subscript expression"); return 0; } break; case Name_kind: - return compiler_nameop(c, e->v.Name.id, e->v.Name.ctx); + return compiler_nameop(c, Name_id(e), Name_ctx(e)); /* child nodes of List and Tuple will have expr_context set */ case List_kind: return compiler_list(c, e); @@ -3394,43 +3387,41 @@ } static int -compiler_augassign(struct compiler *c, stmt_ty s) +compiler_augassign(struct compiler *c, PyObject *s) { - expr_ty e = s->v.AugAssign.target; - expr_ty auge; + PyObject *e = AugAssign_target(s); + PyObject *auge; - assert(s->kind == AugAssign_kind); + assert(stmt_kind(s) == AugAssign_kind); - switch (e->kind) { + switch (expr_kind(e)) { case Attribute_kind: - auge = Attribute(e->v.Attribute.value, e->v.Attribute.attr, - AugLoad, e->lineno); + auge = Attribute(Attribute_value(e), Attribute_attr(e), + AugLoad(), ((struct _expr*)e)->lineno); if (auge == NULL) return 0; VISIT(c, expr, auge); - VISIT(c, expr, s->v.AugAssign.value); - ADDOP(c, inplace_binop(c, s->v.AugAssign.op)); - auge->v.Attribute.ctx = AugStore; + VISIT(c, expr, AugAssign_value(s)); + ADDOP(c, inplace_binop(c, AugAssign_op(s))); + Attribute_ctx (auge)= AugStore(); VISIT(c, expr, auge); - free(auge); break; case Subscript_kind: - auge = Subscript(e->v.Subscript.value, e->v.Subscript.slice, - AugLoad, e->lineno); + auge = Subscript(Subscript_value(e), Subscript_slice(e), + AugLoad(), ((struct _expr*)e)->lineno); if (auge == NULL) return 0; VISIT(c, expr, auge); - VISIT(c, expr, s->v.AugAssign.value); - ADDOP(c, inplace_binop(c, s->v.AugAssign.op)); - auge->v.Subscript.ctx = AugStore; + VISIT(c, expr, AugAssign_value(s)); + ADDOP(c, inplace_binop(c, AugAssign_op(s))); + Subscript_ctx (auge)= AugStore(); VISIT(c, expr, auge); - free(auge); break; case Name_kind: - VISIT(c, expr, s->v.AugAssign.target); - VISIT(c, expr, s->v.AugAssign.value); - ADDOP(c, inplace_binop(c, s->v.AugAssign.op)); - return compiler_nameop(c, e->v.Name.id, Store); + VISIT(c, expr, AugAssign_target(s)); + VISIT(c, expr, AugAssign_value(s)); + ADDOP(c, inplace_binop(c, AugAssign_op(s))); + return compiler_nameop(c, Name_id(e), Store()); default: fprintf(stderr, "invalid node type for augmented assignment\n"); @@ -3493,27 +3484,27 @@ static int compiler_handle_subscr(struct compiler *c, const char *kind, - expr_context_ty ctx) + PyObject *ctx) { int op = 0; /* XXX this code is duplicated */ - switch (ctx) { - case AugLoad: /* fall through to Load */ - case Load: op = BINARY_SUBSCR; break; - case AugStore:/* fall through to Store */ - case Store: op = STORE_SUBSCR; break; - case Del: op = DELETE_SUBSCR; break; - case Param: + switch (expr_context_kind(ctx)) { + case AugLoad_kind: /* fall through to Load */ + case Load_kind: op = BINARY_SUBSCR; break; + case AugStore_kind:/* fall through to Store */ + case Store_kind: op = STORE_SUBSCR; break; + case Del_kind: op = DELETE_SUBSCR; break; + case Param_kind: fprintf(stderr, "invalid %s kind %d in subscript\n", - kind, ctx); + kind, expr_context_kind(ctx)); return 0; } - if (ctx == AugLoad) { + if 
(expr_context_kind(ctx) == AugLoad_kind) { ADDOP_I(c, DUP_TOPX, 2); } - else if (ctx == AugStore) { + else if (expr_context_kind(ctx) == AugStore_kind) { ADDOP(c, ROT_THREE); } ADDOP(c, op); @@ -3521,61 +3512,61 @@ } static int -compiler_slice(struct compiler *c, slice_ty s, expr_context_ty ctx) +compiler_slice(struct compiler *c, PyObject *s, PyObject *ctx) { int n = 2; - assert(s->kind == Slice_kind); + assert(slice_kind(s) == Slice_kind); /* only handles the cases where BUILD_SLICE is emitted */ - if (s->v.Slice.lower) { - VISIT(c, expr, s->v.Slice.lower); + if (Slice_lower(s) != Py_None) { + VISIT(c, expr, Slice_lower(s)); } else { ADDOP_O(c, LOAD_CONST, Py_None, consts); } - if (s->v.Slice.upper) { - VISIT(c, expr, s->v.Slice.upper); + if (Slice_upper(s) != Py_None) { + VISIT(c, expr, Slice_upper(s)); } else { ADDOP_O(c, LOAD_CONST, Py_None, consts); } - if (s->v.Slice.step) { + if (Slice_step(s) != Py_None) { n++; - VISIT(c, expr, s->v.Slice.step); + VISIT(c, expr, Slice_step(s)); } ADDOP_I(c, BUILD_SLICE, n); return 1; } static int -compiler_simple_slice(struct compiler *c, slice_ty s, expr_context_ty ctx) +compiler_simple_slice(struct compiler *c, PyObject *s, PyObject *ctx) { int op = 0, slice_offset = 0, stack_count = 0; - assert(s->v.Slice.step == NULL); - if (s->v.Slice.lower) { + assert(Slice_step(s) == NULL); + if (Slice_lower(s) != Py_None) { slice_offset++; stack_count++; - if (ctx != AugStore) - VISIT(c, expr, s->v.Slice.lower); + if (expr_context_kind(ctx) != AugStore_kind) + VISIT(c, expr, Slice_lower(s)); } - if (s->v.Slice.upper) { + if (Slice_upper(s) != Py_None) { slice_offset += 2; stack_count++; - if (ctx != AugStore) - VISIT(c, expr, s->v.Slice.upper); + if (expr_context_kind(ctx) != AugStore_kind) + VISIT(c, expr, Slice_upper(s)); } - if (ctx == AugLoad) { + if (expr_context_kind(ctx) == AugLoad_kind) { switch (stack_count) { case 0: ADDOP(c, DUP_TOP); break; case 1: ADDOP_I(c, DUP_TOPX, 2); break; case 2: ADDOP_I(c, DUP_TOPX, 3); break; } } - else if (ctx == AugStore) { + else if (expr_context_kind(ctx) == AugStore_kind) { switch (stack_count) { case 0: ADDOP(c, ROT_TWO); break; case 1: ADDOP(c, ROT_THREE); break; @@ -3583,13 +3574,13 @@ } } - switch (ctx) { - case AugLoad: /* fall through to Load */ - case Load: op = SLICE; break; - case AugStore:/* fall through to Store */ - case Store: op = STORE_SLICE; break; - case Del: op = DELETE_SLICE; break; - case Param: + switch (expr_context_kind(ctx)) { + case AugLoad_kind: /* fall through to Load */ + case Load_kind: op = SLICE; break; + case AugStore_kind:/* fall through to Store */ + case Store_kind: op = STORE_SLICE; break; + case Del_kind: op = DELETE_SLICE; break; + case Param_kind: PyErr_SetString(PyExc_SystemError, "param invalid in simple slice"); return 0; @@ -3600,10 +3591,10 @@ } static int -compiler_visit_nested_slice(struct compiler *c, slice_ty s, - expr_context_ty ctx) +compiler_visit_nested_slice(struct compiler *c, PyObject *s, + PyObject *ctx) { - switch (s->kind) { + switch (slice_kind(s)) { case Ellipsis_kind: ADDOP_O(c, LOAD_CONST, Py_Ellipsis, consts); break; @@ -3611,7 +3602,7 @@ return compiler_slice(c, s, ctx); break; case Index_kind: - VISIT(c, expr, s->v.Index.value); + VISIT(c, expr, Index_value(s)); break; case ExtSlice_kind: PyErr_SetString(PyExc_SystemError, @@ -3623,28 +3614,28 @@ static int -compiler_visit_slice(struct compiler *c, slice_ty s, expr_context_ty ctx) +compiler_visit_slice(struct compiler *c, PyObject *s, PyObject *ctx) { - switch (s->kind) { + switch (slice_kind(s)) { 
case Ellipsis_kind: ADDOP_O(c, LOAD_CONST, Py_Ellipsis, consts); break; case Slice_kind: - if (!s->v.Slice.step) + if (Slice_step(s) == Py_None) return compiler_simple_slice(c, s, ctx); if (!compiler_slice(c, s, ctx)) return 0; - if (ctx == AugLoad) { + if (expr_context_kind(ctx) == AugLoad_kind) { ADDOP_I(c, DUP_TOPX, 2); } - else if (ctx == AugStore) { + else if (expr_context_kind(ctx) == AugStore_kind) { ADDOP(c, ROT_THREE); } return compiler_handle_subscr(c, "slice", ctx); case ExtSlice_kind: { - int i, n = asdl_seq_LEN(s->v.ExtSlice.dims); + int i, n = PyList_GET_SIZE(ExtSlice_dims(s)); for (i = 0; i < n; i++) { - slice_ty sub = asdl_seq_GET(s->v.ExtSlice.dims, i); + PyObject *sub = PyList_GET_ITEM(ExtSlice_dims(s), i); if (!compiler_visit_nested_slice(c, sub, ctx)) return 0; } @@ -3652,8 +3643,8 @@ return compiler_handle_subscr(c, "extended slice", ctx); } case Index_kind: - if (ctx != AugStore) - VISIT(c, expr, s->v.Index.value); + if (expr_context_kind(ctx) != AugStore_kind) + VISIT(c, expr, Index_value(s)); return compiler_handle_subscr(c, "index", ctx); } return 1; Modified: python/branches/ast-objects/Python/future.c ============================================================================== --- python/branches/ast-objects/Python/future.c (original) +++ python/branches/ast-objects/Python/future.c Sun Feb 5 02:54:29 2006 @@ -10,17 +10,17 @@ #define UNDEFINED_FUTURE_FEATURE "future feature %.100s is not defined" static int -future_check_features(PyFutureFeatures *ff, stmt_ty s, const char *filename) +future_check_features(PyFutureFeatures *ff, PyObject *s, const char *filename) { int i; - asdl_seq *names; + PyObject *names; - assert(s->kind == ImportFrom_kind); + assert(stmt_kind(s) == ImportFrom_kind); - names = s->v.ImportFrom.names; - for (i = 0; i < asdl_seq_LEN(names); i++) { - alias_ty name = asdl_seq_GET(names, i); - const char *feature = PyString_AsString(name->name); + names = ImportFrom_names(s); + for (i = 0; i < PyList_GET_SIZE(names); i++) { + PyObject *name = PyList_GET_ITEM(names, i); + const char *feature = PyString_AsString(alias_name(name)); if (!feature) return 0; if (strcmp(feature, FUTURE_NESTED_SCOPES) == 0) { @@ -32,12 +32,12 @@ } else if (strcmp(feature, "braces") == 0) { PyErr_SetString(PyExc_SyntaxError, "not a chance"); - PyErr_SyntaxLocation(filename, s->lineno); + PyErr_SyntaxLocation(filename, ((struct _stmt*)s)->lineno); return 0; } else { PyErr_Format(PyExc_SyntaxError, UNDEFINED_FUTURE_FEATURE, feature); - PyErr_SyntaxLocation(filename, s->lineno); + PyErr_SyntaxLocation(filename, ((struct _stmt*)s)->lineno); return 0; } } @@ -45,7 +45,7 @@ } static int -future_parse(PyFutureFeatures *ff, PyTypeObject *mod, const char *filename) +future_parse(PyFutureFeatures *ff, PyObject *mod, const char *filename) { int i, found_docstring = 0, done = 0, prev_line = 0; @@ -56,7 +56,7 @@ return 0; } - if (!(mod->kind == Module_kind || mod->kind == Interactive_kind)) + if (!(mod_kind(mod) == Module_kind || mod_kind(mod) == Interactive_kind)) return 1; /* A subsequent pass will detect future imports that don't @@ -68,12 +68,12 @@ */ - for (i = 0; i < asdl_seq_LEN(mod->v.Module.body); i++) { - stmt_ty s = asdl_seq_GET(mod->v.Module.body, i); + for (i = 0; i < PyList_GET_SIZE(Module_body(mod)); i++) { + PyObject *s = PyList_GET_ITEM(Module_body(mod), i); - if (done && s->lineno > prev_line) + if (done && ((struct _stmt*)s)->lineno > prev_line) return 1; - prev_line = s->lineno; + prev_line = ((struct _stmt*)s)->lineno; /* The tests below will return from this 
function unless it is still possible to find a future statement. The only things @@ -81,25 +81,25 @@ statement and a doc string. */ - if (s->kind == ImportFrom_kind) { - if (s->v.ImportFrom.module == future) { + if (stmt_kind(s) == ImportFrom_kind) { + if (ImportFrom_module (s)== future) { if (done) { PyErr_SetString(PyExc_SyntaxError, ERR_LATE_FUTURE); PyErr_SyntaxLocation(filename, - s->lineno); + ((struct _stmt*)s)->lineno); return 0; } if (!future_check_features(ff, s, filename)) return 0; - ff->ff_lineno = s->lineno; + ff->ff_lineno = ((struct _stmt*)s)->lineno; } else done = 1; } - else if (s->kind == Expr_kind && !found_docstring) { - expr_ty e = s->v.Expr.value; - if (e->kind != Str_kind) + else if (stmt_kind(s) == Expr_kind && !found_docstring) { + PyObject *e = Expr_value(s); + if (stmt_kind(e) != Str_kind) done = 1; else found_docstring = 1; @@ -112,7 +112,7 @@ PyFutureFeatures * -PyFuture_FromAST(PyTypeObject *mod, const char *filename) +PyFuture_FromAST(PyObject *mod, const char *filename) { PyFutureFeatures *ff; Modified: python/branches/ast-objects/Python/import.c ============================================================================== --- python/branches/ast-objects/Python/import.c (original) +++ python/branches/ast-objects/Python/import.c Sun Feb 5 02:54:29 2006 @@ -772,7 +772,7 @@ parse_source_module(const char *pathname, FILE *fp) { PyCodeObject *co = NULL; - PyTypeObject *mod; + PyObject *mod; mod = PyParser_ASTFromFile(fp, pathname, Py_file_input, 0, 0, 0, NULL); Modified: python/branches/ast-objects/Python/pythonrun.c ============================================================================== --- python/branches/ast-objects/Python/pythonrun.c (original) +++ python/branches/ast-objects/Python/pythonrun.c Sun Feb 5 02:54:29 2006 @@ -35,9 +35,9 @@ /* Forward */ static void initmain(void); static void initsite(void); -static PyObject *run_err_mod(PyTypeObject*, const char *, PyObject *, PyObject *, +static PyObject *run_err_mod(PyObject*, const char *, PyObject *, PyObject *, PyCompilerFlags *); -static PyObject *run_mod(PyTypeObject*, const char *, PyObject *, PyObject *, +static PyObject *run_mod(PyObject*, const char *, PyObject *, PyObject *, PyCompilerFlags *); static PyObject *run_pyc_file(FILE *, const char *, PyObject *, PyObject *, PyCompilerFlags *); @@ -213,6 +213,8 @@ _PyImportHooks_Init(); + init_ast(); + if (install_sigs) initsigs(); /* Signal handling stuff, including initintr() */ @@ -225,9 +227,9 @@ _PyGILState_Init(interp, tstate); #endif /* WITH_THREAD */ - warnings_module = PyImport_ImportModule("warnings"); - if (!warnings_module) - PyErr_Clear(); + warnings_module = PyImport_ImportModule("warnings"); + if (!warnings_module) + PyErr_Clear(); #if defined(Py_USING_UNICODE) && defined(HAVE_LANGINFO_H) && defined(CODESET) /* On Unix, set the file system encoding according to the @@ -240,7 +242,7 @@ setlocale(LC_CTYPE, ""); codeset = nl_langinfo(CODESET); if (codeset && *codeset) { - PyObject *enc = PyCodec_Encoder(codeset); + PyObject *enc = PyCodec_Encoder(codeset); if (enc) { codeset = strdup(codeset); Py_DECREF(enc); @@ -696,7 +698,7 @@ PyRun_InteractiveOneFlags(FILE *fp, const char *filename, PyCompilerFlags *flags) { PyObject *m, *d, *v, *w; - PyTypeObject *mod; + PyObject *mod; char *ps1 = "", *ps2 = ""; int errcode = 0; @@ -1155,7 +1157,7 @@ PyObject *locals, PyCompilerFlags *flags) { PyObject *ret; - PyTypeObject *mod = PyParser_ASTFromString(str, "", start, flags); + PyObject *mod = PyParser_ASTFromString(str, "", start, flags); ret = 
run_err_mod(mod, "", globals, locals, flags); Py_DECREF(mod); return ret; @@ -1166,7 +1168,7 @@ PyObject *locals, int closeit, PyCompilerFlags *flags) { PyObject *ret; - PyTypeObject *mod = PyParser_ASTFromFile(fp, filename, start, 0, 0, + PyObject *mod = PyParser_ASTFromFile(fp, filename, start, 0, 0, flags, NULL); if (mod == NULL) return NULL; @@ -1178,7 +1180,7 @@ } static PyObject * -run_err_mod(PyTypeObject *mod, const char *filename, PyObject *globals, +run_err_mod(PyObject *mod, const char *filename, PyObject *globals, PyObject *locals, PyCompilerFlags *flags) { if (mod == NULL) @@ -1187,8 +1189,8 @@ } static PyObject * -run_mod(PyTypeObject *mod, const char *filename, PyObject *globals, PyObject *locals, - PyCompilerFlags *flags) +run_mod(PyObject *mod, const char *filename, PyObject *globals, PyObject *locals, + PyCompilerFlags *flags) { PyCodeObject *co; PyObject *v; @@ -1236,7 +1238,7 @@ Py_CompileStringFlags(const char *str, const char *filename, int start, PyCompilerFlags *flags) { - PyTypeObject *mod; + PyObject *mod; PyCodeObject *co; mod = PyParser_ASTFromString(str, filename, start, flags); if (mod == NULL) @@ -1249,7 +1251,7 @@ struct symtable * Py_SymtableString(const char *str, const char *filename, int start) { - PyTypeObject *mod; + PyObject *mod; struct symtable *st; mod = PyParser_ASTFromString(str, filename, start, NULL); @@ -1261,12 +1263,12 @@ } /* Preferred access to parser is through AST. */ -PyTypeObject * +PyObject * PyParser_ASTFromString(const char *s, const char *filename, int start, PyCompilerFlags *flags) { node *n; - PyTypeObject *mod; + PyObject *mod; perrdetail err; n = PyParser_ParseStringFlagsFilename(s, filename, &_PyParser_Grammar, start, &err, @@ -1282,12 +1284,12 @@ } } -PyTypeObject * +PyObject * PyParser_ASTFromFile(FILE *fp, const char *filename, int start, char *ps1, char *ps2, PyCompilerFlags *flags, int *errcode) { node *n; - PyTypeObject *mod; + PyObject *mod; perrdetail err; n = PyParser_ParseFileFlags(fp, filename, &_PyParser_Grammar, start, ps1, ps2, &err, PARSER_FLAGS(flags)); Modified: python/branches/ast-objects/Python/symtable.c ============================================================================== --- python/branches/ast-objects/Python/symtable.c (original) +++ python/branches/ast-objects/Python/symtable.c Sun Feb 5 02:54:29 2006 @@ -153,24 +153,24 @@ static int symtable_analyze(struct symtable *st); static int symtable_warn(struct symtable *st, char *msg); -static int symtable_enter_block(struct symtable *st, identifier name, +static int symtable_enter_block(struct symtable *st, PyObject *name, _Py_block_ty block, void *ast, int lineno); static int symtable_exit_block(struct symtable *st, void *ast); -static int symtable_visit_stmt(struct symtable *st, stmt_ty s); -static int symtable_visit_expr(struct symtable *st, expr_ty s); -static int symtable_visit_genexp(struct symtable *st, expr_ty s); -static int symtable_visit_arguments(struct symtable *st, arguments_ty); -static int symtable_visit_excepthandler(struct symtable *st, excepthandler_ty); -static int symtable_visit_alias(struct symtable *st, alias_ty); -static int symtable_visit_comprehension(struct symtable *st, comprehension_ty); -static int symtable_visit_keyword(struct symtable *st, keyword_ty); -static int symtable_visit_slice(struct symtable *st, slice_ty); -static int symtable_visit_params(struct symtable *st, asdl_seq *args, int top); -static int symtable_visit_params_nested(struct symtable *st, asdl_seq *args); +static int symtable_visit_stmt(struct 
symtable *st, PyObject *s); +static int symtable_visit_expr(struct symtable *st, PyObject *s); +static int symtable_visit_genexp(struct symtable *st, PyObject *s); +static int symtable_visit_arguments(struct symtable *st, PyObject *); +static int symtable_visit_excepthandler(struct symtable *st, PyObject *); +static int symtable_visit_alias(struct symtable *st, PyObject *); +static int symtable_visit_comprehension(struct symtable *st, PyObject *); +static int symtable_visit_keyword(struct symtable *st, PyObject *); +static int symtable_visit_slice(struct symtable *st, PyObject *); +static int symtable_visit_params(struct symtable *st, PyObject *args, int top); +static int symtable_visit_params_nested(struct symtable *st, PyObject *args); static int symtable_implicit_arg(struct symtable *st, int pos); -static identifier top = NULL, lambda = NULL, genexpr = NULL; +static PyObject *top = NULL, *lambda = NULL, *genexpr = NULL; #define GET_IDENTIFIER(VAR) \ ((VAR) ? (VAR) : ((VAR) = PyString_InternFromString(# VAR))) @@ -204,10 +204,10 @@ } struct symtable * -PySymtable_Build(PyTypeObject *mod, const char *filename, PyFutureFeatures *future) +PySymtable_Build(PyObject *mod, const char *filename, PyFutureFeatures *future) { struct symtable *st = symtable_new(); - asdl_seq *seq; + PyObject *seq; int i; if (st == NULL) @@ -219,21 +219,21 @@ st->st_top = st->st_cur; st->st_cur->ste_unoptimized = OPT_TOPLEVEL; /* Any other top-level initialization? */ - switch (mod->kind) { + switch (mod_kind(mod)) { case Module_kind: - seq = mod->v.Module.body; - for (i = 0; i < asdl_seq_LEN(seq); i++) - if (!symtable_visit_stmt(st, asdl_seq_GET(seq, i))) + seq = Module_body(mod); + for (i = 0; i < PyList_GET_SIZE(seq); i++) + if (!symtable_visit_stmt(st, PyList_GET_ITEM(seq, i))) goto error; break; case Expression_kind: - if (!symtable_visit_expr(st, mod->v.Expression.body)) + if (!symtable_visit_expr(st, Expression_body(mod))) goto error; break; case Interactive_kind: - seq = mod->v.Interactive.body; - for (i = 0; i < asdl_seq_LEN(seq); i++) - if (!symtable_visit_stmt(st, asdl_seq_GET(seq, i))) + seq = Interactive_body(mod); + for (i = 0; i < PyList_GET_SIZE(seq); i++) + if (!symtable_visit_stmt(st, PyList_GET_ITEM(seq, i))) goto error; break; case Suite_kind: @@ -723,7 +723,7 @@ } static int -symtable_enter_block(struct symtable *st, identifier name, _Py_block_ty block, +symtable_enter_block(struct symtable *st, PyObject *name, _Py_block_ty block, void *ast, int lineno) { PySTEntryObject *prev = NULL; @@ -829,21 +829,23 @@ useful if the first node in the sequence requires special treatment. 
*/ -#define VISIT(ST, TYPE, V) \ +#define VISIT(ST, TYPE, V) {\ if (!symtable_visit_ ## TYPE((ST), (V))) \ - return 0; + return 0;\ +} -#define VISIT_IN_BLOCK(ST, TYPE, V, S) \ +#define VISIT_IN_BLOCK(ST, TYPE, V, S) {\ if (!symtable_visit_ ## TYPE((ST), (V))) { \ symtable_exit_block((ST), (S)); \ return 0; \ - } + }\ +} #define VISIT_SEQ(ST, TYPE, SEQ) { \ int i; \ - asdl_seq *seq = (SEQ); /* avoid variable capture */ \ - for (i = 0; i < asdl_seq_LEN(seq); i++) { \ - TYPE ## _ty elt = asdl_seq_GET(seq, i); \ + PyObject *seq = (SEQ); /* avoid variable capture */ \ + for (i = 0; i < PyList_GET_SIZE(seq); i++) { \ + PyObject *elt = PyList_GET_ITEM(seq, i); \ if (!symtable_visit_ ## TYPE((ST), elt)) \ return 0; \ } \ @@ -851,9 +853,9 @@ #define VISIT_SEQ_IN_BLOCK(ST, TYPE, SEQ, S) { \ int i; \ - asdl_seq *seq = (SEQ); /* avoid variable capture */ \ - for (i = 0; i < asdl_seq_LEN(seq); i++) { \ - TYPE ## _ty elt = asdl_seq_GET(seq, i); \ + PyObject *seq = (SEQ); /* avoid variable capture */ \ + for (i = 0; i < PyList_GET_SIZE(seq); i++) { \ + PyObject *elt = PyList_GET_ITEM(seq, i); \ if (!symtable_visit_ ## TYPE((ST), elt)) { \ symtable_exit_block((ST), (S)); \ return 0; \ @@ -863,9 +865,9 @@ #define VISIT_SEQ_TAIL(ST, TYPE, SEQ, START) { \ int i; \ - asdl_seq *seq = (SEQ); /* avoid variable capture */ \ - for (i = (START); i < asdl_seq_LEN(seq); i++) { \ - TYPE ## _ty elt = asdl_seq_GET(seq, i); \ + PyObject *seq = (SEQ); /* avoid variable capture */ \ + for (i = (START); i < PyList_GET_SIZE(seq); i++) { \ + PyObject *elt = PyList_GET_ITEM(seq, i); \ if (!symtable_visit_ ## TYPE((ST), elt)) \ return 0; \ } \ @@ -873,9 +875,9 @@ #define VISIT_SEQ_TAIL_IN_BLOCK(ST, TYPE, SEQ, START, S) { \ int i; \ - asdl_seq *seq = (SEQ); /* avoid variable capture */ \ - for (i = (START); i < asdl_seq_LEN(seq); i++) { \ - TYPE ## _ty elt = asdl_seq_GET(seq, i); \ + PyObject *seq = (SEQ); /* avoid variable capture */ \ + for (i = (START); i < PyList_GET_SIZE(seq); i++) { \ + PyObject *elt = PyList_GET_ITEM(seq, i); \ if (!symtable_visit_ ## TYPE((ST), elt)) { \ symtable_exit_block((ST), (S)); \ return 0; \ @@ -884,136 +886,136 @@ } static int -symtable_visit_stmt(struct symtable *st, stmt_ty s) +symtable_visit_stmt(struct symtable *st, PyObject *s) { - switch (s->kind) { + switch (stmt_kind(s)) { case FunctionDef_kind: - if (!symtable_add_def(st, s->v.FunctionDef.name, DEF_LOCAL)) + if (!symtable_add_def(st, FunctionDef_name(s), DEF_LOCAL)) return 0; - if (s->v.FunctionDef.args->defaults) - VISIT_SEQ(st, expr, s->v.FunctionDef.args->defaults); - if (s->v.FunctionDef.decorators) - VISIT_SEQ(st, expr, s->v.FunctionDef.decorators); - if (!symtable_enter_block(st, s->v.FunctionDef.name, - FunctionBlock, (void *)s, s->lineno)) + if (arguments_defaults(FunctionDef_args(s))) + VISIT_SEQ(st, expr, arguments_defaults(FunctionDef_args(s))); + if (FunctionDef_decorators(s)) + VISIT_SEQ(st, expr, FunctionDef_decorators(s)); + if (!symtable_enter_block(st, FunctionDef_name(s), + FunctionBlock, (void *)s, ((struct _stmt*)s)->lineno)) return 0; - VISIT_IN_BLOCK(st, arguments, s->v.FunctionDef.args, s); - VISIT_SEQ_IN_BLOCK(st, stmt, s->v.FunctionDef.body, s); + VISIT_IN_BLOCK(st, arguments, FunctionDef_args(s), s); + VISIT_SEQ_IN_BLOCK(st, stmt, FunctionDef_body(s), s); if (!symtable_exit_block(st, s)) return 0; break; case ClassDef_kind: { PyObject *tmp; - if (!symtable_add_def(st, s->v.ClassDef.name, DEF_LOCAL)) + if (!symtable_add_def(st, ClassDef_name(s), DEF_LOCAL)) return 0; - VISIT_SEQ(st, expr, s->v.ClassDef.bases); 
- if (!symtable_enter_block(st, s->v.ClassDef.name, ClassBlock, - (void *)s, s->lineno)) + VISIT_SEQ(st, expr, ClassDef_bases(s)); + if (!symtable_enter_block(st, ClassDef_name(s), ClassBlock, + (void *)s, ((struct _stmt*)s)->lineno)) return 0; tmp = st->st_private; - st->st_private = s->v.ClassDef.name; - VISIT_SEQ_IN_BLOCK(st, stmt, s->v.ClassDef.body, s); + st->st_private = ClassDef_name(s); + VISIT_SEQ_IN_BLOCK(st, stmt, ClassDef_body(s), s); st->st_private = tmp; if (!symtable_exit_block(st, s)) return 0; break; } case Return_kind: - if (s->v.Return.value) - VISIT(st, expr, s->v.Return.value); + if (Return_value(s) != Py_None) + VISIT(st, expr, Return_value(s)); break; case Delete_kind: - VISIT_SEQ(st, expr, s->v.Delete.targets); + VISIT_SEQ(st, expr, Delete_targets(s)); break; case Assign_kind: - VISIT_SEQ(st, expr, s->v.Assign.targets); - VISIT(st, expr, s->v.Assign.value); + VISIT_SEQ(st, expr, Assign_targets(s)); + VISIT(st, expr, Assign_value(s)); break; case AugAssign_kind: - VISIT(st, expr, s->v.AugAssign.target); - VISIT(st, expr, s->v.AugAssign.value); + VISIT(st, expr, AugAssign_target(s)); + VISIT(st, expr, AugAssign_value(s)); break; case Print_kind: - if (s->v.Print.dest) - VISIT(st, expr, s->v.Print.dest); - VISIT_SEQ(st, expr, s->v.Print.values); + if (Print_dest(s) != Py_None) + VISIT(st, expr, Print_dest(s)); + VISIT_SEQ(st, expr, Print_values(s)); break; case For_kind: - VISIT(st, expr, s->v.For.target); - VISIT(st, expr, s->v.For.iter); - VISIT_SEQ(st, stmt, s->v.For.body); - if (s->v.For.orelse) - VISIT_SEQ(st, stmt, s->v.For.orelse); + VISIT(st, expr, For_target(s)); + VISIT(st, expr, For_iter(s)); + VISIT_SEQ(st, stmt, For_body(s)); + /* if (For_orelse(s)) */ + VISIT_SEQ(st, stmt, For_orelse(s)); break; case While_kind: - VISIT(st, expr, s->v.While.test); - VISIT_SEQ(st, stmt, s->v.While.body); - if (s->v.While.orelse) - VISIT_SEQ(st, stmt, s->v.While.orelse); + VISIT(st, expr, While_test(s)); + VISIT_SEQ(st, stmt, While_body(s)); + /* if (While_orelse(s)) */ + VISIT_SEQ(st, stmt, While_orelse(s)); break; case If_kind: /* XXX if 0: and lookup_yield() hacks */ - VISIT(st, expr, s->v.If.test); - VISIT_SEQ(st, stmt, s->v.If.body); - if (s->v.If.orelse) - VISIT_SEQ(st, stmt, s->v.If.orelse); + VISIT(st, expr, If_test(s)); + VISIT_SEQ(st, stmt, If_body(s)); + /* if (If_orelse(s)) */ + VISIT_SEQ(st, stmt, If_orelse(s)); break; case Raise_kind: - if (s->v.Raise.type) { - VISIT(st, expr, s->v.Raise.type); - if (s->v.Raise.inst) { - VISIT(st, expr, s->v.Raise.inst); - if (s->v.Raise.tback) - VISIT(st, expr, s->v.Raise.tback); + if (Raise_type(s) != Py_None) { + VISIT(st, expr, Raise_type(s)); + if (Raise_inst(s) != Py_None) { + VISIT(st, expr, Raise_inst(s)); + if (Raise_tback(s) != Py_None) + VISIT(st, expr, Raise_tback(s)); } } break; case TryExcept_kind: - VISIT_SEQ(st, stmt, s->v.TryExcept.body); - VISIT_SEQ(st, stmt, s->v.TryExcept.orelse); - VISIT_SEQ(st, excepthandler, s->v.TryExcept.handlers); + VISIT_SEQ(st, stmt, TryExcept_body(s)); + VISIT_SEQ(st, stmt, TryExcept_orelse(s)); + VISIT_SEQ(st, excepthandler, TryExcept_handlers(s)); break; case TryFinally_kind: - VISIT_SEQ(st, stmt, s->v.TryFinally.body); - VISIT_SEQ(st, stmt, s->v.TryFinally.finalbody); + VISIT_SEQ(st, stmt, TryFinally_body(s)); + VISIT_SEQ(st, stmt, TryFinally_finalbody(s)); break; case Assert_kind: - VISIT(st, expr, s->v.Assert.test); - if (s->v.Assert.msg) - VISIT(st, expr, s->v.Assert.msg); + VISIT(st, expr, Assert_test(s)); + if (Assert_msg(s) != Py_None) + VISIT(st, expr, Assert_msg(s)); 
break; case Import_kind: - VISIT_SEQ(st, alias, s->v.Import.names); + VISIT_SEQ(st, alias, Import_names(s)); /* XXX Don't have the lineno available inside visit_alias */ if (st->st_cur->ste_unoptimized && !st->st_cur->ste_opt_lineno) - st->st_cur->ste_opt_lineno = s->lineno; + st->st_cur->ste_opt_lineno = ((struct _stmt*)s)->lineno; break; case ImportFrom_kind: - VISIT_SEQ(st, alias, s->v.ImportFrom.names); + VISIT_SEQ(st, alias, ImportFrom_names(s)); /* XXX Don't have the lineno available inside visit_alias */ if (st->st_cur->ste_unoptimized && !st->st_cur->ste_opt_lineno) - st->st_cur->ste_opt_lineno = s->lineno; + st->st_cur->ste_opt_lineno = ((struct _stmt*)s)->lineno; break; case Exec_kind: - VISIT(st, expr, s->v.Exec.body); + VISIT(st, expr, Exec_body(s)); if (!st->st_cur->ste_opt_lineno) - st->st_cur->ste_opt_lineno = s->lineno; - if (s->v.Exec.globals) { + st->st_cur->ste_opt_lineno = ((struct _stmt*)s)->lineno; + if (Exec_globals(s) != Py_None) { st->st_cur->ste_unoptimized |= OPT_EXEC; - VISIT(st, expr, s->v.Exec.globals); - if (s->v.Exec.locals) - VISIT(st, expr, s->v.Exec.locals); + VISIT(st, expr, Exec_globals(s)); + if (Exec_locals(s) != Py_None) + VISIT(st, expr, Exec_locals(s)); } else { st->st_cur->ste_unoptimized |= OPT_BARE_EXEC; } break; case Global_kind: { int i; - asdl_seq *seq = s->v.Global.names; - for (i = 0; i < asdl_seq_LEN(seq); i++) { - identifier name = asdl_seq_GET(seq, i); + PyObject *seq = Global_names(s); + for (i = 0; i < PyList_GET_SIZE(seq); i++) { + PyObject *name = PyList_GET_ITEM(seq, i); char *c_name = PyString_AS_STRING(name); int cur = symtable_lookup(st, name); if (cur < 0) @@ -1037,7 +1039,7 @@ break; } case Expr_kind: - VISIT(st, expr, s->v.Expr.value); + VISIT(st, expr, Expr_value(s)); break; case Pass_kind: case Break_kind: @@ -1049,41 +1051,41 @@ } static int -symtable_visit_expr(struct symtable *st, expr_ty e) +symtable_visit_expr(struct symtable *st, PyObject *e) { - switch (e->kind) { + switch (expr_kind(e)) { case BoolOp_kind: - VISIT_SEQ(st, expr, e->v.BoolOp.values); + VISIT_SEQ(st, expr, BoolOp_values(e)); break; case BinOp_kind: - VISIT(st, expr, e->v.BinOp.left); - VISIT(st, expr, e->v.BinOp.right); + VISIT(st, expr, BinOp_left(e)); + VISIT(st, expr, BinOp_right(e)); break; case UnaryOp_kind: - VISIT(st, expr, e->v.UnaryOp.operand); + VISIT(st, expr, UnaryOp_operand(e)); break; case Lambda_kind: { if (!symtable_add_def(st, GET_IDENTIFIER(lambda), DEF_LOCAL)) return 0; - if (e->v.Lambda.args->defaults) - VISIT_SEQ(st, expr, e->v.Lambda.args->defaults); + if (arguments_defaults(Lambda_args(e))) + VISIT_SEQ(st, expr, arguments_defaults(Lambda_args(e))); /* XXX how to get line numbers for expressions */ if (!symtable_enter_block(st, GET_IDENTIFIER(lambda), FunctionBlock, (void *)e, 0)) return 0; - VISIT_IN_BLOCK(st, arguments, e->v.Lambda.args, (void*)e); - VISIT_IN_BLOCK(st, expr, e->v.Lambda.body, (void*)e); + VISIT_IN_BLOCK(st, arguments, Lambda_args(e), (void*)e); + VISIT_IN_BLOCK(st, expr, Lambda_body(e), (void*)e); if (!symtable_exit_block(st, (void *)e)) return 0; break; } case Dict_kind: - VISIT_SEQ(st, expr, e->v.Dict.keys); - VISIT_SEQ(st, expr, e->v.Dict.values); + VISIT_SEQ(st, expr, Dict_keys(e)); + VISIT_SEQ(st, expr, Dict_values(e)); break; case ListComp_kind: { char tmpname[256]; - identifier tmp; + PyObject *tmp; PyOS_snprintf(tmpname, sizeof(tmpname), "_[%d]", ++st->st_cur->ste_tmpname); @@ -1091,8 +1093,8 @@ if (!symtable_add_def(st, tmp, DEF_LOCAL)) return 0; Py_DECREF(tmp); - VISIT(st, expr, e->v.ListComp.elt); - 
VISIT_SEQ(st, comprehension, e->v.ListComp.generators); + VISIT(st, expr, ListComp_elt(e)); + VISIT_SEQ(st, comprehension, ListComp_generators(e)); break; } case GeneratorExp_kind: { @@ -1102,25 +1104,25 @@ break; } case Yield_kind: - if (e->v.Yield.value) - VISIT(st, expr, e->v.Yield.value); + if (Yield_value(e) != Py_None) + VISIT(st, expr, Yield_value(e)); st->st_cur->ste_generator = 1; break; case Compare_kind: - VISIT(st, expr, e->v.Compare.left); - VISIT_SEQ(st, expr, e->v.Compare.comparators); + VISIT(st, expr, Compare_left(e)); + VISIT_SEQ(st, expr, Compare_comparators(e)); break; case Call_kind: - VISIT(st, expr, e->v.Call.func); - VISIT_SEQ(st, expr, e->v.Call.args); - VISIT_SEQ(st, keyword, e->v.Call.keywords); - if (e->v.Call.starargs) - VISIT(st, expr, e->v.Call.starargs); - if (e->v.Call.kwargs) - VISIT(st, expr, e->v.Call.kwargs); + VISIT(st, expr, Call_func(e)); + VISIT_SEQ(st, expr, Call_args(e)); + VISIT_SEQ(st, keyword, Call_keywords(e)); + if (Call_starargs(e) != Py_None) + VISIT(st, expr, Call_starargs(e)); + if (Call_kwargs(e) != Py_None) + VISIT(st, expr, Call_kwargs(e)); break; case Repr_kind: - VISIT(st, expr, e->v.Repr.value); + VISIT(st, expr, Repr_value(e)); break; case Num_kind: case Str_kind: @@ -1128,23 +1130,23 @@ break; /* The following exprs can be assignment targets. */ case Attribute_kind: - VISIT(st, expr, e->v.Attribute.value); + VISIT(st, expr, Attribute_value(e)); break; case Subscript_kind: - VISIT(st, expr, e->v.Subscript.value); - VISIT(st, slice, e->v.Subscript.slice); + VISIT(st, expr, Subscript_value(e)); + VISIT(st, slice, Subscript_slice(e)); break; case Name_kind: - if (!symtable_add_def(st, e->v.Name.id, - e->v.Name.ctx == Load ? USE : DEF_LOCAL)) + if (!symtable_add_def(st, Name_id(e), + expr_context_kind(Name_ctx(e)) == Load_kind ? 
USE : DEF_LOCAL)) return 0; break; /* child nodes of List and Tuple will have expr_context set */ case List_kind: - VISIT_SEQ(st, expr, e->v.List.elts); + VISIT_SEQ(st, expr, List_elts(e)); break; case Tuple_kind: - VISIT_SEQ(st, expr, e->v.Tuple.elts); + VISIT_SEQ(st, expr, Tuple_elts(e)); break; } return 1; @@ -1165,21 +1167,21 @@ } static int -symtable_visit_params(struct symtable *st, asdl_seq *args, int toplevel) +symtable_visit_params(struct symtable *st, PyObject *args, int toplevel) { int i, complex = 0; /* go through all the toplevel arguments first */ - for (i = 0; i < asdl_seq_LEN(args); i++) { - expr_ty arg = asdl_seq_GET(args, i); - if (arg->kind == Name_kind) { - assert(arg->v.Name.ctx == Param || - (arg->v.Name.ctx == Store && !toplevel)); - if (!symtable_add_def(st, arg->v.Name.id, DEF_PARAM)) + for (i = 0; i < PyList_GET_SIZE(args); i++) { + PyObject *arg = PyList_GET_ITEM(args, i); + if (expr_kind(arg) == Name_kind) { + assert(Name_ctx(arg) == Param || + (Name_ctx(arg) == Store && !toplevel)); + if (!symtable_add_def(st, Name_id(arg), DEF_PARAM)) return 0; } - else if (arg->kind == Tuple_kind) { - assert(arg->v.Tuple.ctx == Store); + else if (expr_kind(arg) == Tuple_kind) { + assert(Tuple_ctx(arg) == Store); complex = 1; if (toplevel) { if (!symtable_implicit_arg(st, i)) @@ -1204,13 +1206,13 @@ } static int -symtable_visit_params_nested(struct symtable *st, asdl_seq *args) +symtable_visit_params_nested(struct symtable *st, PyObject *args) { int i; - for (i = 0; i < asdl_seq_LEN(args); i++) { - expr_ty arg = asdl_seq_GET(args, i); - if (arg->kind == Tuple_kind && - !symtable_visit_params(st, arg->v.Tuple.elts, 0)) + for (i = 0; i < PyList_GET_SIZE(args); i++) { + PyObject *arg = PyList_GET_ITEM(args, i); + if (expr_kind(arg) == Tuple_kind && + !symtable_visit_params(st, Tuple_elts(arg), 0)) return 0; } @@ -1218,50 +1220,52 @@ } static int -symtable_visit_arguments(struct symtable *st, arguments_ty a) +symtable_visit_arguments(struct symtable *st, PyObject *a) { /* skip default arguments inside function block XXX should ast be different? 
*/ - if (a->args && !symtable_visit_params(st, a->args, 1)) + /* if (arguments_args(a) && !symtable_visit_params(st, arguments_args(a), 1)) */ + if (!symtable_visit_params(st, arguments_args(a), 1)) return 0; - if (a->vararg) { - if (!symtable_add_def(st, a->vararg, DEF_PARAM)) + if (arguments_vararg(a) != Py_None) { + if (!symtable_add_def(st, arguments_vararg(a), DEF_PARAM)) return 0; st->st_cur->ste_varargs = 1; } - if (a->kwarg) { - if (!symtable_add_def(st, a->kwarg, DEF_PARAM)) + if (arguments_kwarg(a) != Py_None) { + if (!symtable_add_def(st, arguments_kwarg(a), DEF_PARAM)) return 0; st->st_cur->ste_varkeywords = 1; } - if (a->args && !symtable_visit_params_nested(st, a->args)) + /* if (arguments_args(a) && !symtable_visit_params_nested(st, arguments_args(a))) */ + if (!symtable_visit_params_nested(st, arguments_args(a))) return 0; return 1; } static int -symtable_visit_excepthandler(struct symtable *st, excepthandler_ty eh) +symtable_visit_excepthandler(struct symtable *st, PyObject *eh) { - if (eh->type) - VISIT(st, expr, eh->type); - if (eh->name) - VISIT(st, expr, eh->name); - VISIT_SEQ(st, stmt, eh->body); + if (excepthandler_type(eh) != Py_None) + VISIT(st, expr, excepthandler_type(eh)); + if (excepthandler_name(eh) != Py_None) + VISIT(st, expr, excepthandler_name(eh)); + VISIT_SEQ(st, stmt, excepthandler_body(eh)); return 1; } static int -symtable_visit_alias(struct symtable *st, alias_ty a) +symtable_visit_alias(struct symtable *st, PyObject *a) { /* Compute store_name, the name actually bound by the import - operation. It is diferent than a->name when a->name is a + operation. It is diferent than alias_name(a) when alias_name(a) is a dotted package name (e.g. spam.eggs) */ PyObject *store_name; - PyObject *name = (a->asname == NULL) ? a->name : a->asname; + PyObject *name = (alias_asname(a) == NULL) ? 
alias_name(a) : alias_asname(a); const char *base = PyString_AS_STRING(name); char *dot = strchr(base, '.'); if (dot) @@ -1291,40 +1295,40 @@ static int -symtable_visit_comprehension(struct symtable *st, comprehension_ty lc) +symtable_visit_comprehension(struct symtable *st, PyObject *lc) { - VISIT(st, expr, lc->target); - VISIT(st, expr, lc->iter); - VISIT_SEQ(st, expr, lc->ifs); + VISIT(st, expr, comprehension_target(lc)); + VISIT(st, expr, comprehension_iter(lc)); + VISIT_SEQ(st, expr, comprehension_ifs(lc)); return 1; } static int -symtable_visit_keyword(struct symtable *st, keyword_ty k) +symtable_visit_keyword(struct symtable *st, PyObject *k) { - VISIT(st, expr, k->value); + VISIT(st, expr, keyword_value(k)); return 1; } static int -symtable_visit_slice(struct symtable *st, slice_ty s) +symtable_visit_slice(struct symtable *st, PyObject *s) { - switch (s->kind) { + switch (slice_kind(s)) { case Slice_kind: - if (s->v.Slice.lower) - VISIT(st, expr, s->v.Slice.lower) - if (s->v.Slice.upper) - VISIT(st, expr, s->v.Slice.upper) - if (s->v.Slice.step) - VISIT(st, expr, s->v.Slice.step) + if (Slice_lower(s) != Py_None) + VISIT(st, expr, Slice_lower(s)) + if (Slice_upper(s) != Py_None) + VISIT(st, expr, Slice_upper(s)) + if (Slice_step(s) != Py_None) + VISIT(st, expr, Slice_step(s)) break; case ExtSlice_kind: - VISIT_SEQ(st, slice, s->v.ExtSlice.dims) + VISIT_SEQ(st, slice, ExtSlice_dims(s)) break; case Index_kind: - VISIT(st, expr, s->v.Index.value) + VISIT(st, expr, Index_value(s)) break; case Ellipsis_kind: break; @@ -1333,12 +1337,12 @@ } static int -symtable_visit_genexp(struct symtable *st, expr_ty e) +symtable_visit_genexp(struct symtable *st, PyObject *e) { - comprehension_ty outermost = ((comprehension_ty) - (asdl_seq_GET(e->v.GeneratorExp.generators, 0))); + PyObject *outermost = ((PyObject *) + (PyList_GET_ITEM(GeneratorExp_generators(e), 0))); /* Outermost iterator is evaluated in current scope */ - VISIT(st, expr, outermost->iter); + VISIT(st, expr, comprehension_iter(outermost)); /* Create generator scope for the rest */ if (!symtable_enter_block(st, GET_IDENTIFIER(genexpr), FunctionBlock, (void *)e, 0)) { @@ -1350,11 +1354,11 @@ symtable_exit_block(st, (void *)e); return 0; } - VISIT_IN_BLOCK(st, expr, outermost->target, (void*)e); - VISIT_SEQ_IN_BLOCK(st, expr, outermost->ifs, (void*)e); + VISIT_IN_BLOCK(st, expr, comprehension_target(outermost), (void*)e); + VISIT_SEQ_IN_BLOCK(st, expr, comprehension_ifs(outermost), (void*)e); VISIT_SEQ_TAIL_IN_BLOCK(st, comprehension, - e->v.GeneratorExp.generators, 1, (void*)e); - VISIT_IN_BLOCK(st, expr, e->v.GeneratorExp.elt, (void*)e); + GeneratorExp_generators(e), 1, (void*)e); + VISIT_IN_BLOCK(st, expr, GeneratorExp_elt(e), (void*)e); if (!symtable_exit_block(st, (void *)e)) return 0; return 1; From python-checkins at python.org Sun Feb 5 03:07:20 2006 From: python-checkins at python.org (neal.norwitz) Date: Sun, 5 Feb 2006 03:07:20 +0100 (CET) Subject: [Python-checkins] r42242 - python/trunk/Python/ast.c Message-ID: <20060205020720.634D71E4005@bag.python.org> Author: neal.norwitz Date: Sun Feb 5 03:07:19 2006 New Revision: 42242 Modified: python/trunk/Python/ast.c Log: Use C-style comment Modified: python/trunk/Python/ast.c ============================================================================== --- python/trunk/Python/ast.c (original) +++ python/trunk/Python/ast.c Sun Feb 5 03:07:19 2006 @@ -1808,7 +1808,7 @@ if (!expr1) return NULL; - // TODO(jhylton): Figure out why set_context() can't be used here. 
+ /* TODO(jhylton): Figure out why set_context() can't be used here. */ switch (expr1->kind) { case GeneratorExp_kind: ast_error(ch, "augmented assignment to generator " From python-checkins at python.org Sun Feb 5 04:36:57 2006 From: python-checkins at python.org (neal.norwitz) Date: Sun, 5 Feb 2006 04:36:57 +0100 (CET) Subject: [Python-checkins] r42243 - python/trunk/Modules/mmapmodule.c Message-ID: <20060205033657.9F3A01E4005@bag.python.org> Author: neal.norwitz Date: Sun Feb 5 04:36:57 2006 New Revision: 42243 Modified: python/trunk/Modules/mmapmodule.c Log: Whitespace normalization Modified: python/trunk/Modules/mmapmodule.c ============================================================================== --- python/trunk/Modules/mmapmodule.c (original) +++ python/trunk/Modules/mmapmodule.c Sun Feb 5 04:36:57 2006 @@ -877,10 +877,10 @@ return NULL; if ((access != (int)ACCESS_DEFAULT) && - ((flags != MAP_SHARED) || ( prot != (PROT_WRITE | PROT_READ)))) + ((flags != MAP_SHARED) || (prot != (PROT_WRITE | PROT_READ)))) return PyErr_Format(PyExc_ValueError, "mmap can't specify both access and flags, prot."); - switch((access_mode)access) { + switch ((access_mode)access) { case ACCESS_READ: flags = MAP_SHARED; prot = PROT_READ; From python-checkins at python.org Sun Feb 5 06:45:48 2006 From: python-checkins at python.org (neal.norwitz) Date: Sun, 5 Feb 2006 06:45:48 +0100 (CET) Subject: [Python-checkins] r42244 - in python/trunk: Doc/lib/libmmap.tex Lib/test/output/test_mmap Lib/test/test_mmap.py Misc/NEWS Modules/mmapmodule.c Message-ID: <20060205054548.A9C981E4005@bag.python.org> Author: neal.norwitz Date: Sun Feb 5 06:45:43 2006 New Revision: 42244 Modified: python/trunk/Doc/lib/libmmap.tex python/trunk/Lib/test/output/test_mmap python/trunk/Lib/test/test_mmap.py python/trunk/Misc/NEWS python/trunk/Modules/mmapmodule.c Log: Patch #1407135, bug #1424041, make mmap.mmap(-1, length) work the same on both Unix (SVR4 and BSD) and Windows. Restores behaviour of passing -1 for anonymous memory on Unix. Use MAP_ANONYMOUS instead of _ANON since the latter is deprecated according to Linux (gentoo) man pages. Should we continue to allow mmap.mmap(0, length) to work on Windows? 0 is a valid fd. Will backport bugfix portions. Modified: python/trunk/Doc/lib/libmmap.tex ============================================================================== --- python/trunk/Doc/lib/libmmap.tex (original) +++ python/trunk/Doc/lib/libmmap.tex Sun Feb 5 06:45:43 2006 @@ -37,7 +37,8 @@ exception. Assignment to an \constant{ACCESS_WRITE} memory map affects both memory and the underlying file. Assignment to an \constant{ACCESS_COPY} memory map affects memory but does not update -the underlying file. +the underlying file. \versionchanged[To map anonymous memory, +-1 should be passed as the fileno along with the length]{2.5} \begin{funcdesc}{mmap}{fileno, length\optional{, tagname\optional{, access}}} \strong{(Windows version)} Maps \var{length} bytes from the file Modified: python/trunk/Lib/test/output/test_mmap ============================================================================== --- python/trunk/Lib/test/output/test_mmap (original) +++ python/trunk/Lib/test/output/test_mmap Sun Feb 5 06:45:43 2006 @@ -34,4 +34,5 @@ Try opening a bad file descriptor... Ensuring that passing 0 as map length sets map size to current file size. Ensuring that passing 0 as map length sets map size to current file size. + anonymous mmap.mmap(-1, PAGESIZE)... 
Test passed Modified: python/trunk/Lib/test/test_mmap.py ============================================================================== --- python/trunk/Lib/test/test_mmap.py (original) +++ python/trunk/Lib/test/test_mmap.py Sun Feb 5 06:45:43 2006 @@ -283,7 +283,7 @@ print ' Try opening a bad file descriptor...' try: - mmap.mmap(-1, 4096) + mmap.mmap(-2, 4096) except mmap.error: pass else: @@ -380,6 +380,16 @@ finally: os.unlink(TESTFN) - print ' Test passed' +def test_anon(): + print " anonymous mmap.mmap(-1, PAGESIZE)..." + m = mmap.mmap(-1, PAGESIZE) + for x in xrange(PAGESIZE): + verify(m[x] == '\0', "anonymously mmap'ed contents should be zero") + + for x in xrange(PAGESIZE): + m[x] = ch = chr(x & 255) + vereq(m[x], ch) test_both() +test_anon() +print ' Test passed' Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Sun Feb 5 06:45:43 2006 @@ -216,6 +216,10 @@ Extension Modules ----------------- +- Patch #1407135, bug #1424041: harmonize mmap behavior of anonymous memory. + mmap.mmap(-1, size) now returns anonymous memory in both Unix and Windows. + mmap.mmap(0, size) should not be used on Windows for anonymous memory. + - Patch #1422385: The nis module now supports access to domains other than the system default domain. Modified: python/trunk/Modules/mmapmodule.c ============================================================================== --- python/trunk/Modules/mmapmodule.c (original) +++ python/trunk/Modules/mmapmodule.c Sun Feb 5 06:45:43 2006 @@ -54,6 +54,11 @@ #include #include +/* maybe define MAP_ANON in terms of MAP_ANONYMOUS */ +#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON) +# define MAP_ANONYMOUS MAP_ANON +#endif + static PyObject *mmap_module_error; typedef enum @@ -863,6 +868,7 @@ PyObject *map_size_obj = NULL; int map_size; int fd, flags = MAP_SHARED, prot = PROT_WRITE | PROT_READ; + int devzero = -1; int access = (int)ACCESS_DEFAULT; static const char *keywords[] = {"fileno", "length", "flags", "prot", @@ -921,15 +927,41 @@ m_obj->data = NULL; m_obj->size = (size_t) map_size; m_obj->pos = (size_t) 0; - m_obj->fd = dup(fd); - if (m_obj->fd == -1) { - Py_DECREF(m_obj); - PyErr_SetFromErrno(mmap_module_error); - return NULL; + if (fd == -1) { + m_obj->fd = -1; + /* Assume the caller wants to map anonymous memory. + This is the same behaviour as Windows. mmap.mmap(-1, size) + on both Windows and Unix map anonymous memory. + */ +#ifdef MAP_ANONYMOUS + /* BSD way to map anonymous memory */ + flags |= MAP_ANONYMOUS; +#else + /* SVR4 method to map anonymous memory is to open /dev/zero */ + fd = devzero = open("/dev/zero", O_RDWR); + if (devzero == -1) { + Py_DECREF(m_obj); + PyErr_SetFromErrno(mmap_module_error); + return NULL; + } +#endif + } else { + m_obj->fd = dup(fd); + if (m_obj->fd == -1) { + Py_DECREF(m_obj); + PyErr_SetFromErrno(mmap_module_error); + return NULL; + } } + m_obj->data = mmap(NULL, map_size, prot, flags, fd, 0); + + if (devzero != -1) { + close(devzero); + } + if (m_obj->data == (char *)-1) { m_obj->data = NULL; Py_DECREF(m_obj); @@ -986,8 +1018,15 @@ if (map_size < 0) return NULL; - /* if an actual filename has been specified */ - if (fileno != 0) { + /* assume -1 and 0 both mean invalid filedescriptor + to 'anonymously' map memory. + XXX: fileno == 0 is a valid fd, but was accepted prior to 2.5. + XXX: Should this code be added? 
+ if (fileno == 0) + PyErr_Warn(PyExc_DeprecationWarning, + "don't use 0 for anonymous memory"); + */ + if (fileno != -1 && fileno != 0) { fh = (HANDLE)_get_osfhandle(fileno); if (fh==(HANDLE)-1) { PyErr_SetFromErrno(mmap_module_error); @@ -1123,10 +1162,10 @@ PyDict_SetItemString (dict, "MAP_EXECUTABLE", PyInt_FromLong(MAP_EXECUTABLE) ); #endif -#ifdef MAP_ANON - PyDict_SetItemString (dict, "MAP_ANON", PyInt_FromLong(MAP_ANON) ); +#ifdef MAP_ANONYMOUS + PyDict_SetItemString (dict, "MAP_ANON", PyInt_FromLong(MAP_ANONYMOUS) ); PyDict_SetItemString (dict, "MAP_ANONYMOUS", - PyInt_FromLong(MAP_ANON) ); + PyInt_FromLong(MAP_ANONYMOUS) ); #endif PyDict_SetItemString (dict, "PAGESIZE", From python-checkins at python.org Sun Feb 5 07:00:56 2006 From: python-checkins at python.org (neal.norwitz) Date: Sun, 5 Feb 2006 07:00:56 +0100 (CET) Subject: [Python-checkins] r42245 - in python/branches/release24-maint: Misc/NEWS Modules/mmapmodule.c Message-ID: <20060205060056.BA1981E4005@bag.python.org> Author: neal.norwitz Date: Sun Feb 5 07:00:54 2006 New Revision: 42245 Modified: python/branches/release24-maint/Misc/NEWS python/branches/release24-maint/Modules/mmapmodule.c Log: Backport: Patch #1407135, bug #1424041: mmap.mmap(-1, size, ...) can return anonymous memory again on Unix. Modified: python/branches/release24-maint/Misc/NEWS ============================================================================== --- python/branches/release24-maint/Misc/NEWS (original) +++ python/branches/release24-maint/Misc/NEWS Sun Feb 5 07:00:54 2006 @@ -45,6 +45,9 @@ Extension Modules ----------------- +- Patch #1407135, bug #1424041: mmap.mmap(-1, size, ...) can return + anonymous memory again on Unix. + - Bug #1215432: in bsddb DB.associate() would crash when a DBError was supposed to be raised. Modified: python/branches/release24-maint/Modules/mmapmodule.c ============================================================================== --- python/branches/release24-maint/Modules/mmapmodule.c (original) +++ python/branches/release24-maint/Modules/mmapmodule.c Sun Feb 5 07:00:54 2006 @@ -917,12 +917,17 @@ m_obj->data = NULL; m_obj->size = (size_t) map_size; m_obj->pos = (size_t) 0; - m_obj->fd = dup(fd); - if (m_obj->fd == -1) { - Py_DECREF(m_obj); - PyErr_SetFromErrno(mmap_module_error); - return NULL; + if (fd == -1) { + m_obj->fd = -1; + } else { + m_obj->fd = dup(fd); + if (m_obj->fd == -1) { + Py_DECREF(m_obj); + PyErr_SetFromErrno(mmap_module_error); + return NULL; + } } + m_obj->data = mmap(NULL, map_size, prot, flags, fd, 0); From python-checkins at python.org Sun Feb 5 09:21:09 2006 From: python-checkins at python.org (neal.norwitz) Date: Sun, 5 Feb 2006 09:21:09 +0100 (CET) Subject: [Python-checkins] r42246 - in python/trunk/Lib: logging/__init__.py test/test_logging.py Message-ID: <20060205082109.1E31E1E4005@bag.python.org> Author: neal.norwitz Date: Sun Feb 5 09:21:08 2006 New Revision: 42246 Modified: python/trunk/Lib/logging/__init__.py python/trunk/Lib/test/test_logging.py Log: Get test_logging to not hang when running under regrtest.py -R :: Not sure why/how _handlers/_handlerList is out of sync. This could indicate a deeper problem. In test_logging, the only absolutely necessary change to get working was tcpserver.abort = 1. But we don't want to wait infinitely to join the threads, so give a 2.0 second timeout. There doesn't appear to be a need for a local abort variable in serve_until_stopped, so just use the instance member. Note the problem is only on HEAD, not in 2.4. 
Modified: python/trunk/Lib/logging/__init__.py ============================================================================== --- python/trunk/Lib/logging/__init__.py (original) +++ python/trunk/Lib/logging/__init__.py Sun Feb 5 09:21:08 2006 @@ -671,7 +671,8 @@ #get the module data lock, as we're updating a shared structure. _acquireLock() try: #unlikely to raise an exception, but you never know... - del _handlers[self] + if _handlers.has_key(self): + del _handlers[self] _handlerList.remove(self) finally: _releaseLock() Modified: python/trunk/Lib/test/test_logging.py ============================================================================== --- python/trunk/Lib/test/test_logging.py (original) +++ python/trunk/Lib/test/test_logging.py Sun Feb 5 09:21:08 2006 @@ -99,14 +99,12 @@ self.timeout = 1 def serve_until_stopped(self): - abort = 0 - while not abort: + while not self.abort: rd, wr, ex = select.select([self.socket.fileno()], [], [], self.timeout) if rd: self.handle_request() - abort = self.abort #notify the main thread that we're about to exit socketDataProcessed.set() # close the listen socket @@ -620,8 +618,10 @@ finally: #wait for TCP receiver to terminate socketDataProcessed.wait() + # ensure the server dies + tcpserver.abort = 1 for thread in threads: - thread.join() + thread.join(2.0) banner("logrecv output", "begin") sys.stdout.write(sockOut.getvalue()) sockOut.close() From python-checkins at python.org Sun Feb 5 11:59:53 2006 From: python-checkins at python.org (neal.norwitz) Date: Sun, 5 Feb 2006 11:59:53 +0100 (CET) Subject: [Python-checkins] r42246 - in python/trunk/Lib: logging/__init__.py test/test_logging.py Message-ID: <20060205105953.455671E4003@bag.python.org> Author: neal.norwitz Date: Sun Feb 5 09:21:08 2006 New Revision: 42246 Modified: python/trunk/Lib/logging/__init__.py python/trunk/Lib/test/test_logging.py Log: Get test_logging to not hang when running under regrtest.py -R :: Not sure why/how _handlers/_handlerList is out of sync. This could indicate a deeper problem. In test_logging, the only absolutely necessary change to get working was tcpserver.abort = 1. But we don't want to wait infinitely to join the threads, so give a 2.0 second timeout. There doesn't appear to be a need for a local abort variable in serve_until_stopped, so just use the instance member. Note the problem is only on HEAD, not in 2.4. Modified: python/trunk/Lib/logging/__init__.py ============================================================================== --- python/trunk/Lib/logging/__init__.py (original) +++ python/trunk/Lib/logging/__init__.py Sun Feb 5 09:21:08 2006 @@ -671,7 +671,8 @@ #get the module data lock, as we're updating a shared structure. _acquireLock() try: #unlikely to raise an exception, but you never know... 
- del _handlers[self] + if _handlers.has_key(self): + del _handlers[self] _handlerList.remove(self) finally: _releaseLock() Modified: python/trunk/Lib/test/test_logging.py ============================================================================== --- python/trunk/Lib/test/test_logging.py (original) +++ python/trunk/Lib/test/test_logging.py Sun Feb 5 09:21:08 2006 @@ -99,14 +99,12 @@ self.timeout = 1 def serve_until_stopped(self): - abort = 0 - while not abort: + while not self.abort: rd, wr, ex = select.select([self.socket.fileno()], [], [], self.timeout) if rd: self.handle_request() - abort = self.abort #notify the main thread that we're about to exit socketDataProcessed.set() # close the listen socket @@ -620,8 +618,10 @@ finally: #wait for TCP receiver to terminate socketDataProcessed.wait() + # ensure the server dies + tcpserver.abort = 1 for thread in threads: - thread.join() + thread.join(2.0) banner("logrecv output", "begin") sys.stdout.write(sockOut.getvalue()) sockOut.close() From python-checkins at python.org Sun Feb 5 12:04:31 2006 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 5 Feb 2006 12:04:31 +0100 (CET) Subject: [Python-checkins] r41639 - projects projects/martin.v.loewis Message-ID: <20060205110431.C4C041E4003@bag.python.org> Author: martin.v.loewis Date: Sun Dec 11 18:19:31 2005 New Revision: 41639 Added: projects/ projects/martin.v.loewis Log: Setup ssh key management. Added: projects/martin.v.loewis ============================================================================== --- (empty file) +++ projects/martin.v.loewis Sun Dec 11 18:19:31 2005 @@ -0,0 +1 @@ +ssh-dss martin at mira From python-checkins at python.org Sun Feb 5 12:05:21 2006 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 5 Feb 2006 12:05:21 +0100 (CET) Subject: [Python-checkins] r41639 - projects projects/martin.v.loewis Message-ID: <20060205110521.8FAAC1E4003@bag.python.org> Author: martin.v.loewis Date: Sun Dec 11 18:19:31 2005 New Revision: 41639 Added: projects/ projects/martin.v.loewis Log: Setup ssh key management. 
From python-checkins at python.org Sun Feb 5 12:05:54 2006 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 5 Feb 2006 12:05:54 +0100 (CET) Subject: [Python-checkins] r42247 - sshkeys/martin.v.loewis Message-ID: <20060205110554.0F7921E4007@bag.python.org> Author: martin.v.loewis Date: Sun Feb 5 12:05:52 2006 New Revision: 42247 Modified: sshkeys/martin.v.loewis Log: Dummy commit Modified: sshkeys/martin.v.loewis ============================================================================== --- sshkeys/martin.v.loewis (original) +++ sshkeys/martin.v.loewis Sun Feb 5 12:05:52 2006 @@ -1 +1 @@ -ssh-dss martin at mira +ssh-dss martin at v.loewis.de From python-checkins at python.org Sun Feb 5 18:09:42 2006 From: python-checkins at python.org (martin.v.loewis) Date: Sun, 5 Feb 2006 18:09:42 +0100 (CET) Subject: [Python-checkins] r42250 - in python/trunk: Lib/zipfile.py Misc/NEWS Message-ID: <20060205170942.635B31E4003@bag.python.org> Author: martin.v.loewis Date: Sun Feb 5 18:09:41 2006 New Revision: 42250 Modified: python/trunk/Lib/zipfile.py python/trunk/Misc/NEWS Log: Patch #1412872: zipfile: use correct system type on unixy systems. Modified: python/trunk/Lib/zipfile.py ============================================================================== --- python/trunk/Lib/zipfile.py (original) +++ python/trunk/Lib/zipfile.py Sun Feb 5 18:09:41 2006 @@ -1,6 +1,6 @@ "Read and write ZIP files." -import struct, os, time +import struct, os, time, sys import binascii try: @@ -131,7 +131,11 @@ self.compress_type = ZIP_STORED # Type of compression for the file self.comment = "" # Comment for each file self.extra = "" # ZIP extra data - self.create_system = 0 # System which created ZIP archive + if sys.platform == 'win32': + self.create_system = 0 # System which created ZIP archive + else: + # Assume everything else is unix-y + self.create_system = 3 # System which created ZIP archive self.create_version = 20 # Version which created ZIP archive self.extract_version = 20 # Version needed to extract archive self.reserved = 0 # Must be zero Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Sun Feb 5 18:09:41 2006 @@ -358,6 +358,9 @@ Library ------- +- Patch #1412872: zipfile now sets the creator system to 3 (Unix) + unless the system is Win32. + - Patch #1349118: urllib now supports user:pass@ style proxy specifications, raises IOErrors when proxies for unsupported protocols are defined, and uses the https proxy on https redirections.
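A minimal sketch of what the patched default means for callers, assuming only the zipfile change shown above (the member name is hypothetical):

    import sys
    from zipfile import ZipInfo

    info = ZipInfo("hello.txt")   # hypothetical member name
    # ZipInfo.create_system: 0 means FAT/Windows, 3 means Unix.  With this patch
    # the default follows the platform that writes the archive instead of always 0.
    if sys.platform == 'win32':
        assert info.create_system == 0
    else:
        assert info.create_system == 3

Unzip tools interpret the external attributes (for example Unix permission bits) based on this "made by" system code, which is why the default matters.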
From python-checkins at python.org Tue Feb 7 07:58:54 2006 From: python-checkins at python.org (neal.norwitz) Date: Tue, 7 Feb 2006 07:58:54 +0100 (CET) Subject: [Python-checkins] r42251 - python/trunk/Modules/_elementtree.c Message-ID: <20060207065854.BF4081E4004@bag.python.org> Author: neal.norwitz Date: Tue Feb 7 07:58:52 2006 New Revision: 42251 Modified: python/trunk/Modules/_elementtree.c Log: Check return results for errors Modified: python/trunk/Modules/_elementtree.c ============================================================================== --- python/trunk/Modules/_elementtree.c (original) +++ python/trunk/Modules/_elementtree.c Tue Feb 7 07:58:52 2006 @@ -905,6 +905,9 @@ } args = PyTuple_New(2); + if (args == NULL) + return NULL; + Py_INCREF(self); PyTuple_SET_ITEM(args, 0, (PyObject*) self); Py_INCREF(tag); PyTuple_SET_ITEM(args, 1, (PyObject*) tag); @@ -2596,6 +2599,8 @@ /* python glue code */ g = PyDict_New(); + if (g == NULL) + return; PyDict_SetItemString(g, "__builtins__", PyEval_GetBuiltins()); From python-checkins at python.org Tue Feb 7 07:59:24 2006 From: python-checkins at python.org (neal.norwitz) Date: Tue, 7 Feb 2006 07:59:24 +0100 (CET) Subject: [Python-checkins] r42252 - python/trunk/Modules/_ssl.c Message-ID: <20060207065924.078381E4004@bag.python.org> Author: neal.norwitz Date: Tue Feb 7 07:59:20 2006 New Revision: 42252 Modified: python/trunk/Modules/_ssl.c Log: Fix indentation (whitespace only). Modified: python/trunk/Modules/_ssl.c ============================================================================== --- python/trunk/Modules/_ssl.c (original) +++ python/trunk/Modules/_ssl.c Tue Feb 7 07:59:20 2006 @@ -266,7 +266,7 @@ } else { sockstate = SOCKET_OPERATION_OK; } - if (sockstate == SOCKET_HAS_TIMED_OUT) { + if (sockstate == SOCKET_HAS_TIMED_OUT) { PyErr_SetString(PySSLErrorObject, "The connect operation timed out"); goto fail; } else if (sockstate == SOCKET_HAS_BEEN_CLOSED) { @@ -426,7 +426,7 @@ } else { sockstate = SOCKET_OPERATION_OK; } - if (sockstate == SOCKET_HAS_TIMED_OUT) { + if (sockstate == SOCKET_HAS_TIMED_OUT) { PyErr_SetString(PySSLErrorObject, "The write operation timed out"); return NULL; } else if (sockstate == SOCKET_HAS_BEEN_CLOSED) { @@ -485,7 +485,7 @@ } else { sockstate = SOCKET_OPERATION_OK; } - if (sockstate == SOCKET_HAS_TIMED_OUT) { + if (sockstate == SOCKET_HAS_TIMED_OUT) { PyErr_SetString(PySSLErrorObject, "The read operation timed out"); Py_DECREF(buf); return NULL; From python-checkins at python.org Tue Feb 7 08:04:51 2006 From: python-checkins at python.org (neal.norwitz) Date: Tue, 7 Feb 2006 08:04:51 +0100 (CET) Subject: [Python-checkins] r42253 - in python/trunk: Misc/NEWS Modules/_ssl.c Modules/socketmodule.c Message-ID: <20060207070451.269DE1E4004@bag.python.org> Author: neal.norwitz Date: Tue Feb 7 08:04:46 2006 New Revision: 42253 Modified: python/trunk/Misc/NEWS python/trunk/Modules/_ssl.c python/trunk/Modules/socketmodule.c Log: Bug #876637, prevent stack corruption when socket descriptor is larger than FD_SETSIZE. This can only be acheived with ulimit -n SOME_NUMBER_BIGGER_THAN_FD_SETSIZE which is typically only available to root. Since this wouldn't normally be run in a test (ie, run as root), it doesn't seem too worthwhile to add a normal test. The bug report has one version of a test. I've written another. Not sure what the best thing to do is. Do the check before calling internal_select() because we can't set an error in between Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS. 
This seemed the clearest solution, ie handle before calling internal_select() rather than inside. Plus there is at least one place outside of internal_select() that needed to be handled. Will backport. Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Tue Feb 7 08:04:46 2006 @@ -216,6 +216,9 @@ Extension Modules ----------------- +- Bug #876637, prevent stack corruption when socket descriptor + is larger than FD_SETSIZE. + - Patch #1407135, bug #1424041: harmonize mmap behavior of anonymous memory. mmap.mmap(-1, size) now returns anonymous memory in both Unix and Windows. mmap.mmap(0, size) should not be used on Windows for anonymous memory. Modified: python/trunk/Modules/_ssl.c ============================================================================== --- python/trunk/Modules/_ssl.c (original) +++ python/trunk/Modules/_ssl.c Tue Feb 7 08:04:46 2006 @@ -74,6 +74,7 @@ SOCKET_IS_BLOCKING, SOCKET_HAS_TIMED_OUT, SOCKET_HAS_BEEN_CLOSED, + SOCKET_INVALID, SOCKET_OPERATION_OK } timeout_state; @@ -272,6 +273,9 @@ } else if (sockstate == SOCKET_HAS_BEEN_CLOSED) { PyErr_SetString(PySSLErrorObject, "Underlying socket has been closed."); goto fail; + } else if (sockstate == SOCKET_INVALID) { + PyErr_SetString(PySSLErrorObject, "Underlying socket too large for select()."); + goto fail; } else if (sockstate == SOCKET_IS_NONBLOCKING) { break; } @@ -372,6 +376,10 @@ if (s->sock_fd < 0) return SOCKET_HAS_BEEN_CLOSED; + /* Guard against socket too large for select*/ + if (s->sock_fd >= FD_SETSIZE) + return SOCKET_INVALID; + /* Construct the arguments to select */ tv.tv_sec = (int)s->sock_timeout; tv.tv_usec = (int)((s->sock_timeout - tv.tv_sec) * 1e6); @@ -409,6 +417,9 @@ } else if (sockstate == SOCKET_HAS_BEEN_CLOSED) { PyErr_SetString(PySSLErrorObject, "Underlying socket has been closed."); return NULL; + } else if (sockstate == SOCKET_INVALID) { + PyErr_SetString(PySSLErrorObject, "Underlying socket too large for select()."); + return NULL; } do { err = 0; @@ -467,6 +478,9 @@ PyErr_SetString(PySSLErrorObject, "The read operation timed out"); Py_DECREF(buf); return NULL; + } else if (sockstate == SOCKET_INVALID) { + PyErr_SetString(PySSLErrorObject, "Underlying socket too large for select()."); + return NULL; } do { err = 0; Modified: python/trunk/Modules/socketmodule.c ============================================================================== --- python/trunk/Modules/socketmodule.c (original) +++ python/trunk/Modules/socketmodule.c Tue Feb 7 08:04:46 2006 @@ -395,6 +395,16 @@ there has to be a circular reference. */ static PyTypeObject sock_type; +/* Can we call select() with this socket without a buffer overrun? */ +#define IS_SELECTABLE(s) ((s)->sock_fd < FD_SETSIZE) + +static PyObject* +select_error(void) +{ + PyErr_SetString(socket_error, "unable to select on socket"); + return NULL; +} + /* Convenience function to raise an error according to errno and return a NULL pointer from a function. */ @@ -1408,6 +1418,9 @@ newfd = -1; #endif + if (!IS_SELECTABLE(s)) + return select_error(); + Py_BEGIN_ALLOW_THREADS timeout = internal_select(s, 0); if (!timeout) @@ -1736,7 +1749,8 @@ #ifdef MS_WINDOWS if (s->sock_timeout > 0.0) { - if (res < 0 && WSAGetLastError() == WSAEWOULDBLOCK) { + if (res < 0 && WSAGetLastError() == WSAEWOULDBLOCK && + IS_SELECTABLE(s)) { /* This is a mess. 
Best solution: trust select */ fd_set fds; fd_set fds_exc; @@ -1781,7 +1795,7 @@ #else if (s->sock_timeout > 0.0) { - if (res < 0 && errno == EINPROGRESS) { + if (res < 0 && errno == EINPROGRESS && IS_SELECTABLE(s)) { timeout = internal_select(s, 1); res = connect(s->sock_fd, addr, addrlen); if (res < 0 && errno == EISCONN) @@ -2084,6 +2098,9 @@ if (buf == NULL) return NULL; + if (!IS_SELECTABLE(s)) + return select_error(); + #ifndef __VMS Py_BEGIN_ALLOW_THREADS timeout = internal_select(s, 0); @@ -2177,6 +2194,9 @@ if (buf == NULL) return NULL; + if (!IS_SELECTABLE(s)) + return select_error(); + Py_BEGIN_ALLOW_THREADS memset(&addrbuf, 0, addrlen); timeout = internal_select(s, 0); @@ -2238,6 +2258,9 @@ if (!PyArg_ParseTuple(args, "s#|i:send", &buf, &len, &flags)) return NULL; + if (!IS_SELECTABLE(s)) + return select_error(); + #ifndef __VMS Py_BEGIN_ALLOW_THREADS timeout = internal_select(s, 1); @@ -2303,6 +2326,9 @@ if (!PyArg_ParseTuple(args, "s#|i:sendall", &buf, &len, &flags)) return NULL; + if (!IS_SELECTABLE(s)) + return select_error(); + Py_BEGIN_ALLOW_THREADS do { timeout = internal_select(s, 1); @@ -2357,6 +2383,9 @@ if (!getsockaddrarg(s, addro, &addr, &addrlen)) return NULL; + if (!IS_SELECTABLE(s)) + return select_error(); + Py_BEGIN_ALLOW_THREADS timeout = internal_select(s, 1); if (!timeout) From martin at v.loewis.de Tue Feb 7 08:11:38 2006 From: martin at v.loewis.de ("Martin v. Löwis") Date: Tue, 07 Feb 2006 08:11:38 +0100 Subject: [Python-checkins] r42251 - python/trunk/Modules/_elementtree.c In-Reply-To: <20060207065854.BF4081E4004@bag.python.org> References: <20060207065854.BF4081E4004@bag.python.org> Message-ID: <43E8482A.3040303@v.loewis.de> neal.norwitz wrote: > Author: neal.norwitz > Date: Tue Feb 7 07:58:52 2006 > New Revision: 42251 > > Modified: > python/trunk/Modules/_elementtree.c > Log: > Check return results for errors Such a change shouldn't be committed to the Python tree. Instead, a patch should be created at SF, and assigned to Fredrik Lundh, who will then integrate it into the elementtree repository, release it, and merge it into the Python tree. Regards, Martin From nnorwitz at gmail.com Tue Feb 7 08:16:31 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Mon, 6 Feb 2006 23:16:31 -0800 Subject: [Python-checkins] r42251 - python/trunk/Modules/_elementtree.c In-Reply-To: <43E8482A.3040303@v.loewis.de> References: <20060207065854.BF4081E4004@bag.python.org> <43E8482A.3040303@v.loewis.de> Message-ID: On 2/6/06, "Martin v. Löwis" wrote: > > Modified: > > python/trunk/Modules/_elementtree.c > > Such a change shouldn't be committed to the Python tree. > Instead, a patch should be created at SF, and assigned to > Fredrik Lundh, who will then integrate it into the elementtree > repository, release it, and merge it into the Python tree. Thanks for reminding me. I asked /f what he wants me to do.
n From python-checkins at python.org Tue Feb 7 08:17:39 2006 From: python-checkins at python.org (neal.norwitz) Date: Tue, 7 Feb 2006 08:17:39 +0100 (CET) Subject: [Python-checkins] r42254 - in python/branches/release24-maint: Misc/NEWS Modules/_ssl.c Modules/socketmodule.c Message-ID: <20060207071739.920331E4007@bag.python.org> Author: neal.norwitz Date: Tue Feb 7 08:17:37 2006 New Revision: 42254 Modified: python/branches/release24-maint/Misc/NEWS python/branches/release24-maint/Modules/_ssl.c python/branches/release24-maint/Modules/socketmodule.c Log: Backport: Bug #876637, prevent stack corruption when socket descriptor is larger than FD_SETSIZE. This can only be acheived with ulimit -n SOME_NUMBER_BIGGER_THAN_FD_SETSIZE which is typically only available to root. Since this wouldn't normally be run in a test (ie, run as root), it doesn't seem too worthwhile to add a normal test. The bug report has one version of a test. I've written another. Not sure what the best thing to do is. Do the check before calling internal_select() because we can't set an error in between Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS. This seemed the clearest solution. Modified: python/branches/release24-maint/Misc/NEWS ============================================================================== --- python/branches/release24-maint/Misc/NEWS (original) +++ python/branches/release24-maint/Misc/NEWS Tue Feb 7 08:17:37 2006 @@ -45,6 +45,9 @@ Extension Modules ----------------- +- Bug #876637, prevent stack corruption when socket descriptor + is larger than FD_SETSIZE. + - Patch #1407135, bug #1424041: mmap.mmap(-1, size, ...) can return anonymous memory again on Unix. Modified: python/branches/release24-maint/Modules/_ssl.c ============================================================================== --- python/branches/release24-maint/Modules/_ssl.c (original) +++ python/branches/release24-maint/Modules/_ssl.c Tue Feb 7 08:17:37 2006 @@ -74,6 +74,7 @@ SOCKET_IS_BLOCKING, SOCKET_HAS_TIMED_OUT, SOCKET_HAS_BEEN_CLOSED, + SOCKET_INVALID, SOCKET_OPERATION_OK } timeout_state; @@ -272,6 +273,9 @@ } else if (sockstate == SOCKET_HAS_BEEN_CLOSED) { PyErr_SetString(PySSLErrorObject, "Underlying socket has been closed."); goto fail; + } else if (sockstate == SOCKET_INVALID) { + PyErr_SetString(PySSLErrorObject, "Underlying socket too large for select()."); + goto fail; } else if (sockstate == SOCKET_IS_NONBLOCKING) { break; } @@ -372,6 +376,10 @@ if (s->sock_fd < 0) return SOCKET_HAS_BEEN_CLOSED; + /* Guard against socket too large for select*/ + if (s->sock_fd >= FD_SETSIZE) + return SOCKET_INVALID; + /* Construct the arguments to select */ tv.tv_sec = (int)s->sock_timeout; tv.tv_usec = (int)((s->sock_timeout - tv.tv_sec) * 1e6); @@ -409,6 +417,9 @@ } else if (sockstate == SOCKET_HAS_BEEN_CLOSED) { PyErr_SetString(PySSLErrorObject, "Underlying socket has been closed."); return NULL; + } else if (sockstate == SOCKET_INVALID) { + PyErr_SetString(PySSLErrorObject, "Underlying socket too large for select()."); + return NULL; } do { err = 0; @@ -467,6 +478,9 @@ PyErr_SetString(PySSLErrorObject, "The read operation timed out"); Py_DECREF(buf); return NULL; + } else if (sockstate == SOCKET_INVALID) { + PyErr_SetString(PySSLErrorObject, "Underlying socket too large for select()."); + return NULL; } do { err = 0; Modified: python/branches/release24-maint/Modules/socketmodule.c ============================================================================== --- python/branches/release24-maint/Modules/socketmodule.c (original) +++ 
python/branches/release24-maint/Modules/socketmodule.c Tue Feb 7 08:17:37 2006 @@ -390,6 +390,16 @@ there has to be a circular reference. */ static PyTypeObject sock_type; +/* Can we call select() with this socket without a buffer overrun? */ +#define IS_SELECTABLE(s) ((s)->sock_fd < FD_SETSIZE) + +static PyObject* +select_error(void) +{ + PyErr_SetString(socket_error, "unable to select on socket"); + return NULL; +} + /* Convenience function to raise an error according to errno and return a NULL pointer from a function. */ @@ -1362,6 +1372,9 @@ newfd = -1; #endif + if (!IS_SELECTABLE(s)) + return select_error(); + Py_BEGIN_ALLOW_THREADS timeout = internal_select(s, 0); if (!timeout) @@ -1690,7 +1703,8 @@ #ifdef MS_WINDOWS if (s->sock_timeout > 0.0) { - if (res < 0 && WSAGetLastError() == WSAEWOULDBLOCK) { + if (res < 0 && WSAGetLastError() == WSAEWOULDBLOCK && + IS_SELECTABLE(s)) { /* This is a mess. Best solution: trust select */ fd_set fds; fd_set fds_exc; @@ -1735,7 +1749,7 @@ #else if (s->sock_timeout > 0.0) { - if (res < 0 && errno == EINPROGRESS) { + if (res < 0 && errno == EINPROGRESS && IS_SELECTABLE(s)) { timeout = internal_select(s, 1); res = connect(s->sock_fd, addr, addrlen); if (res < 0 && errno == EISCONN) @@ -2038,6 +2052,9 @@ if (buf == NULL) return NULL; + if (!IS_SELECTABLE(s)) + return select_error(); + #ifndef __VMS Py_BEGIN_ALLOW_THREADS timeout = internal_select(s, 0); @@ -2131,6 +2148,9 @@ if (buf == NULL) return NULL; + if (!IS_SELECTABLE(s)) + return select_error(); + Py_BEGIN_ALLOW_THREADS memset(&addrbuf, 0, addrlen); timeout = internal_select(s, 0); @@ -2192,6 +2212,9 @@ if (!PyArg_ParseTuple(args, "s#|i:send", &buf, &len, &flags)) return NULL; + if (!IS_SELECTABLE(s)) + return select_error(); + #ifndef __VMS Py_BEGIN_ALLOW_THREADS timeout = internal_select(s, 1); @@ -2257,6 +2280,9 @@ if (!PyArg_ParseTuple(args, "s#|i:sendall", &buf, &len, &flags)) return NULL; + if (!IS_SELECTABLE(s)) + return select_error(); + Py_BEGIN_ALLOW_THREADS do { timeout = internal_select(s, 1); @@ -2311,6 +2337,9 @@ if (!getsockaddrarg(s, addro, &addr, &addrlen)) return NULL; + if (!IS_SELECTABLE(s)) + return select_error(); + Py_BEGIN_ALLOW_THREADS timeout = internal_select(s, 1); if (!timeout) From python-checkins at python.org Tue Feb 7 08:23:27 2006 From: python-checkins at python.org (neal.norwitz) Date: Tue, 7 Feb 2006 08:23:27 +0100 (CET) Subject: [Python-checkins] r42255 - python/trunk/Doc/lib/libimaplib.tex Message-ID: <20060207072327.D3B8C1E4004@bag.python.org> Author: neal.norwitz Date: Tue Feb 7 08:23:26 2006 New Revision: 42255 Modified: python/trunk/Doc/lib/libimaplib.tex Log: Add versionadded note for new methods Modified: python/trunk/Doc/lib/libimaplib.tex ============================================================================== --- python/trunk/Doc/lib/libimaplib.tex (original) +++ python/trunk/Doc/lib/libimaplib.tex Tue Feb 7 08:23:26 2006 @@ -225,6 +225,7 @@ \begin{methoddesc}{getannotation}{mailbox, entry, attribute} Retrieve the specified \samp{ANNOTATION}s for \var{mailbox}. The method is non-standard, but is supported by the \samp{Cyrus} server. +\versionadded{2.5} \end{methoddesc} \begin{methoddesc}{getquota}{root} @@ -364,6 +365,7 @@ \begin{methoddesc}{setannotation}{mailbox, entry, attribute\optional{, ...}} Set \samp{ANNOTATION}s for \var{mailbox}. The method is non-standard, but is supported by the \samp{Cyrus} server. 
+\versionadded{2.5} \end{methoddesc} \begin{methoddesc}{setquota}{root, limits} From python-checkins at python.org Tue Feb 7 14:36:51 2006 From: python-checkins at python.org (phillip.eby) Date: Tue, 7 Feb 2006 14:36:51 +0100 (CET) Subject: [Python-checkins] r42256 - sandbox/trunk/setuptools/pkg_resources.py Message-ID: <20060207133651.AE0C21E4014@bag.python.org> Author: phillip.eby Date: Tue Feb 7 14:36:50 2006 New Revision: 42256 Modified: sandbox/trunk/setuptools/pkg_resources.py Log: Implement more Mac OS X version handling stuff requested by Bob Ippolito. Modified: sandbox/trunk/setuptools/pkg_resources.py ============================================================================== --- sandbox/trunk/setuptools/pkg_resources.py (original) +++ sandbox/trunk/setuptools/pkg_resources.py Tue Feb 7 14:36:50 2006 @@ -18,26 +18,26 @@ from os import utime, rename, unlink # capture these to bypass sandboxing from os import open as os_open +def _get_max_platform(plat): + """Return this platform's maximum compatible version. + distutils.util.get_platform() normally reports the minimum version + of Mac OS X that would be required to *use* extensions produced by + distutils. But what we want when checking compatibility is to know the + version of Mac OS X that we are *running*. To allow usage of packages that + explicitly require a newer version of Mac OS X, we must also know the + current version of the OS. - - - - - - - - - - - - - - - - - - + If this condition occurs for any other platform with a version in its + platform strings, this function should be extended accordingly. + """ + m = macosVersionString.match(plat) + if m is not None and sys.platform == "darwin": + try: + plat = 'macosx-%s-%s' % ('.'.join(_macosx_vers()[:2]), m.group(3)) + except ValueError: + pass # not Mac OS X + return plat __all__ = [ # Basic resource access and distribution/entry point discovery @@ -167,10 +167,12 @@ Returns true if either platform is ``None``, or the platforms are equal. - XXX Needs compatibility checks for Linux and Mac OS X. + XXX Needs compatibility checks for Linux and other unixy OSes. """ if provided is None or required is None or provided==required: return True # easy case + provided = _get_max_platform(provided) + if provided==required: return True # Mac OS X special cases reqMac = macosVersionString.match(required) @@ -194,7 +196,6 @@ # "use the macosx designation instead of darwin.", # category=DeprecationWarning) return True - return False # egg isn't macosx or legacy darwin # are they the same major version and machine type? @@ -202,7 +203,6 @@ provMac.group(3) != reqMac.group(3): return False - # is the required OS major update >= the provided one? if int(provMac.group(2)) > int(reqMac.group(2)): return False From python-checkins at python.org Tue Feb 7 14:44:48 2006 From: python-checkins at python.org (vinay.sajip) Date: Tue, 7 Feb 2006 14:44:48 +0100 (CET) Subject: [Python-checkins] r42257 - python/trunk/Lib/test/test_logging.py Message-ID: <20060207134448.C7CAD1E400A@bag.python.org> Author: vinay.sajip Date: Tue Feb 7 14:44:48 2006 New Revision: 42257 Modified: python/trunk/Lib/test/test_logging.py Log: Saved and restored logging._handlerList at the same time as saving/restoring logging._handlers. 
Modified: python/trunk/Lib/test/test_logging.py ============================================================================== --- python/trunk/Lib/test/test_logging.py (original) +++ python/trunk/Lib/test/test_logging.py Tue Feb 7 14:44:48 2006 @@ -467,6 +467,7 @@ sys.stdout.write('config%d: ' % i) loggerDict = logging.getLogger().manager.loggerDict saved_handlers = logging._handlers.copy() + saved_handler_list = logging._handlerList[:] saved_loggers = loggerDict.copy() try: fn = tempfile.mktemp(".ini") @@ -484,6 +485,7 @@ finally: logging._handlers.clear() logging._handlers.update(saved_handlers) + logging._handlerList = saved_handler_list loggerDict = logging.getLogger().manager.loggerDict loggerDict.clear() loggerDict.update(saved_loggers) @@ -526,6 +528,7 @@ def test5(): loggerDict = logging.getLogger().manager.loggerDict saved_handlers = logging._handlers.copy() + saved_handler_list = logging._handlerList[:] saved_loggers = loggerDict.copy() try: fn = tempfile.mktemp(".ini") @@ -541,6 +544,7 @@ finally: logging._handlers.clear() logging._handlers.update(saved_handlers) + logging._handlerList = saved_handler_list loggerDict = logging.getLogger().manager.loggerDict loggerDict.clear() loggerDict.update(saved_loggers) From python-checkins at python.org Tue Feb 7 14:55:55 2006 From: python-checkins at python.org (vinay.sajip) Date: Tue, 7 Feb 2006 14:55:55 +0100 (CET) Subject: [Python-checkins] r42258 - python/trunk/Lib/logging/__init__.py Message-ID: <20060207135555.E67651E47DA@bag.python.org> Author: vinay.sajip Date: Tue Feb 7 14:55:52 2006 New Revision: 42258 Modified: python/trunk/Lib/logging/__init__.py Log: Removed defensive test in Handler.close Modified: python/trunk/Lib/logging/__init__.py ============================================================================== --- python/trunk/Lib/logging/__init__.py (original) +++ python/trunk/Lib/logging/__init__.py Tue Feb 7 14:55:52 2006 @@ -41,8 +41,8 @@ __author__ = "Vinay Sajip " __status__ = "beta" -__version__ = "0.4.9.7" -__date__ = "07 October 2005" +__version__ = "0.4.9.9" +__date__ = "06 February 2006" #--------------------------------------------------------------------------- # Miscellaneous module data @@ -671,8 +671,7 @@ #get the module data lock, as we're updating a shared structure. _acquireLock() try: #unlikely to raise an exception, but you never know... - if _handlers.has_key(self): - del _handlers[self] + del _handlers[self] _handlerList.remove(self) finally: _releaseLock() From jimjjewett at gmail.com Tue Feb 7 16:36:52 2006 From: jimjjewett at gmail.com (Jim Jewett) Date: Tue, 7 Feb 2006 10:36:52 -0500 Subject: [Python-checkins] r42254 - in python/branches/release24-maint: Misc/NEWS Modules/_ssl.c Modules/socketmodule.c In-Reply-To: <20060207071739.920331E4007@bag.python.org> References: <20060207071739.920331E4007@bag.python.org> Message-ID: Is there any other way that a file descriptor could be invalid? I keep wanting to see SOCKET_TOO_LARGE instead of SOCKET_INVALID since that is all it checks. Also, if I am understanding correctly, the problem isn't with the size of the socket, it is with the total number of file descriptors, and this socket just happened to get one outside the valid set (which is presumably numbered sequentially). Maybe SOCKET_INVALID_FD ? (Also, would a negative number or float or something cause the same problems, except that they presumably get weeded out earlier?) 
-jJ On 2/7/06, neal.norwitz wrote: > Author: neal.norwitz > Date: Tue Feb 7 08:17:37 2006 > New Revision: 42254 > > Modified: > python/branches/release24-maint/Misc/NEWS > python/branches/release24-maint/Modules/_ssl.c > python/branches/release24-maint/Modules/socketmodule.c > Log: > Backport: > Bug #876637, prevent stack corruption when socket descriptor > is larger than FD_SETSIZE. > > This can only be acheived with ulimit -n SOME_NUMBER_BIGGER_THAN_FD_SETSIZE > which is typically only available to root. Since this wouldn't normally > be run in a test (ie, run as root), it doesn't seem too worthwhile to > add a normal test. The bug report has one version of a test. I've > written another. Not sure what the best thing to do is. > > Do the check before calling internal_select() because we can't set > an error in between Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS. > This seemed the clearest solution.
From python-checkins at python.org Tue Feb 7 17:40:58 2006 From: python-checkins at python.org (phillip.eby) Date: Tue, 7 Feb 2006 17:40:58 +0100 (CET) Subject: [Python-checkins] r42259 - sandbox/trunk/setuptools/pkg_resources.py sandbox/trunk/setuptools/pkg_resources.txt Message-ID: <20060207164058.839F61E4014@bag.python.org> Author: phillip.eby Date: Tue Feb 7 17:40:55 2006 New Revision: 42259 Modified: sandbox/trunk/setuptools/pkg_resources.py sandbox/trunk/setuptools/pkg_resources.txt Log: Added ``Distribution.clone()`` method, and keyword argument support to other ``Distribution`` constructors. Added the ``DEVELOP_DIST`` precedence, and automatically assign it to eggs using ``.egg-info`` format.
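A rough sketch of how the pieces added here fit together, using only the constructor and classmethod behaviour shown in this patch (the egg filename is hypothetical):

    from pkg_resources import Distribution, EGG_DIST, DEVELOP_DIST

    # from_filename()/from_location() now forward extra keyword arguments
    # to the Distribution() constructor.
    dist = Distribution.from_filename("Foo-1.2-py2.4.egg", precedence=EGG_DIST)

    # clone() copies every attribute unless it is overridden by a keyword
    # argument, so this yields an equivalent distribution re-marked as a
    # development install.
    dev = dist.clone(precedence=DEVELOP_DIST)
    assert dev.project_name == "Foo" and dev.version == "1.2"
    assert dev.precedence == DEVELOP_DIST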
Modified: sandbox/trunk/setuptools/pkg_resources.py ============================================================================== --- sandbox/trunk/setuptools/pkg_resources.py (original) +++ sandbox/trunk/setuptools/pkg_resources.py Tue Feb 7 17:40:55 2006 @@ -67,7 +67,7 @@ 'ensure_directory', 'normalize_path', # Distribution "precedence" constants - 'EGG_DIST', 'BINARY_DIST', 'SOURCE_DIST', 'CHECKOUT_DIST', + 'EGG_DIST', 'BINARY_DIST', 'SOURCE_DIST', 'CHECKOUT_DIST', 'DEVELOP_DIST', # "Provider" interfaces, implementations, and registration/lookup APIs 'IMetadataProvider', 'IResourceProvider', 'FileMetadata', @@ -94,11 +94,11 @@ _provider_factories = {} PY_MAJOR = sys.version[:3] - EGG_DIST = 3 BINARY_DIST = 2 SOURCE_DIST = 1 CHECKOUT_DIST = 0 +DEVELOP_DIST = -1 def register_loader_type(loader_type, provider_factory): """Register `provider_factory` to make providers for `loader_type` @@ -1378,8 +1378,9 @@ metadata = PathMetadata(path_item, fullpath) else: metadata = FileMetadata(fullpath) - yield Distribution.from_location(path_item,entry,metadata) - + yield Distribution.from_location( + path_item,entry,metadata,precedence=DEVELOP_DIST + ) elif not only and lower.endswith('.egg'): for dist in find_distributions(os.path.join(path_item, entry)): yield dist @@ -1391,7 +1392,6 @@ register_finder(ImpWrapper,find_on_path) - _namespace_handlers = {} _namespace_packages = {} @@ -1736,7 +1736,7 @@ self._provider = metadata or empty_provider #@classmethod - def from_location(cls,location,basename,metadata=None): + def from_location(cls,location,basename,metadata=None,**kw): project_name, version, py_version, platform = [None]*4 basename, ext = os.path.splitext(basename) if ext.lower() in (".egg",".egg-info"): @@ -1747,7 +1747,7 @@ ) return cls( location, metadata, project_name=project_name, version=version, - py_version=py_version, platform=platform + py_version=py_version, platform=platform, **kw ) from_location = classmethod(from_location) @@ -1852,7 +1852,6 @@ if self.platform: filename += '-'+self.platform - return filename def __repr__(self): @@ -1874,9 +1873,10 @@ return getattr(self._provider, attr) #@classmethod - def from_filename(cls,filename,metadata=None): + def from_filename(cls,filename,metadata=None, **kw): return cls.from_location( - _normalize_cached(filename), os.path.basename(filename), metadata + _normalize_cached(filename), os.path.basename(filename), metadata, + **kw ) from_filename = classmethod(from_filename) @@ -1953,6 +1953,19 @@ return False return True + def clone(self,**kw): + """Copy this distribution, substituting in any changed keyword args""" + for attr in ( + 'project_name', 'version', 'py_version', 'platform', 'location', + 'precedence' + ): + kw.setdefault(attr, getattr(self,attr,None)) + kw.setdefault('metadata', self._provider) + return self.__class__(**kw) + + + + def issue_warning(*args,**kw): level = 1 g = globals() @@ -1966,6 +1979,34 @@ from warnings import warn warn(stacklevel = level+1, *args, **kw) + + + + + + + + + + + + + + + + + + + + + + + + + + + + def parse_requirements(strs): """Yield ``Requirement`` objects for each specification in `strs` Modified: sandbox/trunk/setuptools/pkg_resources.txt ============================================================================== --- sandbox/trunk/setuptools/pkg_resources.txt (original) +++ sandbox/trunk/setuptools/pkg_resources.txt Tue Feb 7 17:40:55 2006 @@ -754,20 +754,23 @@ implement both the `IResourceProvider`_ and `IMetadataProvider Methods`_ by delegating them to the `metadata` object. 
-``Distribution.from_location(location, basename, metadata=None)`` (classmethod) +``Distribution.from_location(location, basename, metadata=None, **kw)`` (classmethod) Create a distribution for `location`, which must be a string such as a URL, filename, or other string that might be used on ``sys.path``. `basename` is a string naming the distribution, like ``Foo-1.2-py2.4.egg``. If `basename` ends with ``.egg``, then the project's name, version, python version and platform are extracted from the filename and used to set those - properties of the created distribution. + properties of the created distribution. Any additional keyword arguments + are forwarded to the ``Distribution()`` constructor. -``Distribution.from_filename(filename, metadata=None)`` (classmethod) +``Distribution.from_filename(filename, metadata=None**kw)`` (classmethod) Create a distribution by parsing a local filename. This is a shorter way of saying ``Distribution.from_location(normalize_path(filename), os.path.basename(filename), metadata)``. In other words, it creates a distribution whose location is the normalize form of the filename, parsing - name and version information from the base portion of the filename. + name and version information from the base portion of the filename. Any + additional keyword arguments are forwarded to the ``Distribution()`` + constructor. ``Distribution(location,metadata,project_name,version,py_version,platform,precedence)`` Create a distribution by setting its properties. All arguments are @@ -834,10 +837,12 @@ ``parsed_version``. The default precedence is ``pkg_resources.EGG_DIST``, which is the highest (i.e. most preferred) precedence. The full list of predefined precedences, from most preferred to least preferred, is: - ``EGG_DIST``, ``BINARY_DIST``, ``SOURCE_DIST``, and ``CHECKOUT_DIST``. - Normally, precedences other than ``EGG_DIST`` are used only by the - ``setuptools.package_index`` module, when sorting distributions found in a - package index to determine their suitability for installation. + ``EGG_DIST``, ``BINARY_DIST``, ``SOURCE_DIST``, ``CHECKOUT_DIST``, and + ``DEVELOP_DIST``. Normally, precedences other than ``EGG_DIST`` are used + only by the ``setuptools.package_index`` module, when sorting distributions + found in a package index to determine their suitability for installation. + "System" and "Development" eggs (i.e., ones that use the ``.egg-info`` + format), however, are automatically given a precedence of ``DEVELOP_DIST``. @@ -871,6 +876,11 @@ of "extras" defined by the distribution, and the list returned will then include any dependencies needed to support the named "extras". +``clone(**kw)`` + Create a copy of the distribution. Any supplied keyword arguments override + the corresponding argument to the ``Distribution()`` constructor, allowing + you to change some of the copied distribution's attributes. + ``egg_name()`` Return what this distribution's standard filename should be, not including the ".egg" extension. For example, a distribution for project "Foo" @@ -1524,6 +1534,12 @@ versions for safe use in constructing egg filenames from a Distribution object's metadata. + * Added ``Distribution.clone()`` method, and keyword argument support to other + ``Distribution`` constructors. + + * Added the ``DEVELOP_DIST`` precedence, and automatically assign it to + eggs using ``.egg-info`` format. + 0.6a9 * Don't raise an error when an invalid (unfinished) distribution is found unless absolutely necessary. 
Warn about skipping invalid/unfinished eggs From python-checkins at python.org Tue Feb 7 17:43:42 2006 From: python-checkins at python.org (phillip.eby) Date: Tue, 7 Feb 2006 17:43:42 +0100 (CET) Subject: [Python-checkins] r42260 - in sandbox/trunk/setuptools: EasyInstall.txt setuptools/command/easy_install.py setuptools/package_index.py Message-ID: <20060207164342.74C9F1E4009@bag.python.org> Author: phillip.eby Date: Tue Feb 7 17:43:41 2006 New Revision: 42260 Modified: sandbox/trunk/setuptools/EasyInstall.txt sandbox/trunk/setuptools/setuptools/command/easy_install.py sandbox/trunk/setuptools/setuptools/package_index.py Log: The ``--always-copy`` option now skips "system" and "development" eggs since they can't be reliably copied. Note that this may cause EasyInstall to choose an older version of a package than what you expected, or it may cause downloading and installation of a fresh version of what's already installed. Modified: sandbox/trunk/setuptools/EasyInstall.txt ============================================================================== --- sandbox/trunk/setuptools/EasyInstall.txt (original) +++ sandbox/trunk/setuptools/EasyInstall.txt Tue Feb 7 17:43:41 2006 @@ -589,6 +589,14 @@ from other sys.path directories to the installation directory, unless you explicitly gave the distribution's filename on the command line. + Note that as of 0.6a10, using this option excludes "system" and + "development" eggs from consideration because they can't be reliably + copied. This may cause EasyInstall to choose an older version of a package + than what you expected, or it may cause downloading and installation of a + fresh copy of something that's already installed. You will see warning + messages for any eggs that EasyInstall skips, before it falls back to an + older version or attempts to download a fresh copy. + ``--find-links=URL, -f URL`` (Option renamed in 0.4a2) Scan the specified "download pages" for direct links to downloadable eggs or source distributions. Any usable packages will be downloaded if they @@ -982,6 +990,11 @@ linking to them (e.g. from within their own PyPI page or download links page). + * The ``--always-copy`` option now skips "system" and "development" eggs since + they can't be reliably copied. Note that this may cause EasyInstall to + choose an older version of a package than what you expected, or it may cause + downloading and installation of a fresh version of what's already installed. + 0.6a9 * Fixed ``.pth`` file processing picking up nested eggs (i.e. ones inside "baskets") when they weren't explicitly listed in the ``.pth`` file. 
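A sketch of the API distinction behind this behaviour, using the method and flag names from the diff that follows (the requirement and temporary directory are hypothetical):

    from pkg_resources import Requirement
    from setuptools.package_index import PackageIndex

    index = PackageIndex()
    req = Requirement.parse("SomePackage>=1.0")   # hypothetical project

    # --always-copy makes easy_install pass develop_ok=False, so candidates in
    # .egg-info (development/system) format are skipped with a warning and a
    # copyable egg or source distribution is fetched instead.
    dist = index.fetch_distribution(req, "/tmp", develop_ok=False)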
Modified: sandbox/trunk/setuptools/setuptools/command/easy_install.py ============================================================================== --- sandbox/trunk/setuptools/setuptools/command/easy_install.py (original) +++ sandbox/trunk/setuptools/setuptools/command/easy_install.py Tue Feb 7 17:43:41 2006 @@ -305,27 +305,27 @@ spec = parse_requirement_arg(spec) self.check_editable(spec) - download = self.package_index.fetch( - spec, tmpdir, self.upgrade, self.editable + dist = self.package_index.fetch_distribution( + spec, tmpdir, self.upgrade, self.editable, not self.always_copy ) - if download is None: - raise DistutilsError( - "Could not find distribution for %r" % spec - ) - - return self.install_item(spec, download, tmpdir, deps) + if dist is None: + msg = "Could not find suitable distribution for %r" % spec + if self.always_copy: + msg+=" (--always-copy skips system and development eggs)" + raise DistutilsError(msg) + elif dist.precedence==DEVELOP_DIST: + # .egg-info dists don't need installing, just process deps + self.process_distribution(spec, dist, deps, "Using") + return dist + else: + return self.install_item(spec, dist.location, tmpdir, deps) finally: if os.path.exists(tmpdir): rmtree(tmpdir) - - - - - def install_item(self, spec, download, tmpdir, deps, install_needed=False): # Installation is also needed if file in tmpdir or is not an egg Modified: sandbox/trunk/setuptools/setuptools/package_index.py ============================================================================== --- sandbox/trunk/setuptools/setuptools/package_index.py (original) +++ sandbox/trunk/setuptools/setuptools/package_index.py Tue Feb 7 17:43:41 2006 @@ -337,12 +337,12 @@ automatically created alongside the downloaded file. If `spec` is a ``Requirement`` object or a string containing a - project/version requirement spec, this method is equivalent to - the ``fetch()`` method. If `spec` is a local, existing file or - directory name, it is simply returned unchanged. If `spec` is a URL, - it is downloaded to a subpath of `tmpdir`, and the local filename is - returned. Various errors may be raised if a problem occurs during - downloading. + project/version requirement spec, this method returns the location of + a matching distribution (possibly after downloading it to `tmpdir`). + If `spec` is a locally existing file or directory name, it is simply + returned unchanged. If `spec` is a URL, it is downloaded to a subpath + of `tmpdir`, and the local filename is returned. Various errors may be + raised if a problem occurs during downloading. """ if not isinstance(spec,Requirement): scheme = URL_SCHEME(spec) @@ -364,31 +364,49 @@ "Not a URL, existing file, or requirement spec: %r" % (spec,) ) - return self.fetch(spec, tmpdir) + return getattr(self.fetch_distribution(spec, tmpdir),'location',None) - def fetch(self, requirement, tmpdir, force_scan=False, source=False): - """Obtain a file suitable for fulfilling `requirement` + def fetch_distribution(self, + requirement, tmpdir, force_scan=False, source=False, develop_ok=False + ): + """Obtain a distribution suitable for fulfilling `requirement` `requirement` must be a ``pkg_resources.Requirement`` instance. If necessary, or if the `force_scan` flag is set, the requirement is searched for in the (online) package index as well as the locally installed packages. If a distribution matching `requirement` is found, - the return value is the same as if you had called the ``download()`` - method with the matching distribution's URL. 
If no matching - distribution is found, returns ``None``. + the returned distribution's ``location`` is the value you would have + gotten from calling the ``download()`` method with the matching + distribution's URL or filename. If no matching distribution is found, + ``None`` is returned. If the `source` flag is set, only source distributions and source - checkout links will be considered. + checkout links will be considered. Unless the `develop_ok` flag is + set, development and system eggs (i.e., those using the ``.egg-info`` + format) will be ignored. """ + # process a Requirement self.info("Searching for %s", requirement) + skipped = {} def find(req): + # Find a matching distribution; may be called more than once + for dist in self[req.key]: + + if dist.precedence==DEVELOP_DIST and not develop_ok: + if dist not in skipped: + self.warn("Skipping development or system egg: %s",dist) + skipped[dist] = 1 + continue + if dist in req and (dist.precedence<=SOURCE_DIST or not source): self.info("Best match: %s", dist) - return self.download(dist.location, tmpdir) + return dist.clone( + location=self.download(dist.location, tmpdir) + ) if force_scan: self.find_packages(requirement) @@ -407,6 +425,29 @@ ) return dist + def fetch(self, requirement, tmpdir, force_scan=False, source=False): + """Obtain a file suitable for fulfilling `requirement` + + DEPRECATED; use the ``fetch_distribution()`` method now instead. For + backward compatibility, this routine is identical but returns the + ``location`` of the downloaded distribution instead of a distribution + object. + """ + dist = self.fetch_dist(requirement,tmpdir,force_scan,source) + if dist is not None: + return dist.location + return None + + + + + + + + + + + def gen_setup(self, filename, fragment, tmpdir): match = EGG_FRAGMENT.match(fragment); #import pdb; pdb.set_trace() From python-checkins at python.org Tue Feb 7 23:28:10 2006 From: python-checkins at python.org (jack.jansen) Date: Tue, 7 Feb 2006 23:28:10 +0100 (CET) Subject: [Python-checkins] r42261 - python/trunk/Tools/bgen/bgen/bgenObjectDefinition.py Message-ID: <20060207222810.36E931E4004@bag.python.org> Author: jack.jansen Date: Tue Feb 7 23:28:09 2006 New Revision: 42261 Modified: python/trunk/Tools/bgen/bgen/bgenObjectDefinition.py Log: Fixed an oversight and a misunderstanding of PEP253: - Call tp_dealloc on the static baseclass, not dynamic (which leads to infinite loops with more than one baseclass) - Call tp_new and tp_init on baseclasses (overridable) -This line, and those below, will be ignored-- M bgen/bgenObjectDefinition.py Modified: python/trunk/Tools/bgen/bgen/bgenObjectDefinition.py ============================================================================== --- python/trunk/Tools/bgen/bgen/bgenObjectDefinition.py (original) +++ python/trunk/Tools/bgen/bgen/bgenObjectDefinition.py Tue Feb 7 23:28:09 2006 @@ -141,7 +141,7 @@ OutLbrace() self.outputCleanupStructMembers() if self.basetype: - Output("self->ob_type->tp_base->tp_dealloc((PyObject *)self);") + Output("%s.tp_dealloc((PyObject *)self);", self.basetype) elif hasattr(self, 'output_tp_free'): # This is a new-style object with tp_free slot Output("self->ob_type->tp_free((PyObject *)self);") @@ -382,12 +382,20 @@ def outputHook_tp_free(self): Output("%s_tp_free, /* tp_free */", self.prefix) + def output_tp_initBody_basecall(self): + if self.basetype: + Output("if (%s.tp_init)", self.basetype) + OutLbrace() + Output("if ( (*%s.tp_init)(_self, _args, _kwds) < 0) return -1;", self.basetype) + OutRbrace() + 
output_tp_initBody = None def output_tp_init(self): if self.output_tp_initBody: Output("static int %s_tp_init(PyObject *_self, PyObject *_args, PyObject *_kwds)", self.prefix) OutLbrace() + self.output_tp_initBody_basecall() self.output_tp_initBody() OutRbrace() else: @@ -414,7 +422,17 @@ Output() Output("if (!PyArg_ParseTupleAndKeywords(_args, _kwds, \"O&\", kw, %s_Convert, &itself)) return NULL;", self.prefix); - Output("if ((_self = type->tp_alloc(type, 0)) == NULL) return NULL;") + if self.basetype: + Output("if (%s.tp_new)", self.basetype) + OutLbrace() + Output("if ( (*%s.tp_init)(_self, _args, _kwds) == NULL) return NULL;", self.basetype) + Dedent() + Output("} else {") + Indent() + Output("if ((_self = type->tp_alloc(type, 0)) == NULL) return NULL;") + OutRbrace() + else: + Output("if ((_self = type->tp_alloc(type, 0)) == NULL) return NULL;") Output("((%s *)_self)->ob_itself = itself;", self.objecttype) Output("return _self;") From python-checkins at python.org Wed Feb 8 06:46:56 2006 From: python-checkins at python.org (phillip.eby) Date: Wed, 8 Feb 2006 06:46:56 +0100 (CET) Subject: [Python-checkins] r42262 - in sandbox/trunk/setuptools: EasyInstall.txt setuptools.txt setuptools/command/easy_install.py setuptools/package_index.py Message-ID: <20060208054656.048AC1E4002@bag.python.org> Author: phillip.eby Date: Wed Feb 8 06:46:54 2006 New Revision: 42262 Modified: sandbox/trunk/setuptools/EasyInstall.txt sandbox/trunk/setuptools/setuptools.txt sandbox/trunk/setuptools/setuptools/command/easy_install.py sandbox/trunk/setuptools/setuptools/package_index.py Log: The ``--find-links`` option previously scanned all supplied URLs and directories as early as possible, but now only directories and direct archive links are scanned immediately. URLs are not retrieved unless a package search was already going to go online due to a package not being available locally, or due to the use of the ``--update`` or ``-U`` option. Also, fixed the ``develop`` command ignoring ``--find-links``. Modified: sandbox/trunk/setuptools/EasyInstall.txt ============================================================================== --- sandbox/trunk/setuptools/EasyInstall.txt (original) +++ sandbox/trunk/setuptools/EasyInstall.txt Wed Feb 8 06:46:54 2006 @@ -545,14 +545,14 @@ file(s)) you must also use ``require()`` to enable packages at runtime. ``--upgrade, -U`` (New in 0.5a4) - By default, EasyInstall only searches the Python Package Index if a - project/version requirement can't be met by distributions already installed + By default, EasyInstall only searches online if a project/version + requirement can't be met by distributions already installed on sys.path or the installation directory. However, if you supply the ``--upgrade`` or ``-U`` flag, EasyInstall will always check the package - index before selecting a version to install. In this way, you can force - EasyInstall to use the latest available version of any package it installs - (subject to any version requirements that might exclude such later - versions). + index and ``--find-links`` URLs before selecting a version to install. In + this way, you can force EasyInstall to use the latest available version of + any package it installs (subject to any version requirements that might + exclude such later versions). ``--install-dir=DIR, -d DIR`` Set the installation directory. It is up to you to ensure that this @@ -597,29 +597,41 @@ messages for any eggs that EasyInstall skips, before it falls back to an older version or attempts to download a fresh copy. 
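A rough sketch of the new deferral behaviour, using the add_find_links()/prescan() names from the package_index.py diff later in this change (the URLs and directory are hypothetical):

    from setuptools.package_index import PackageIndex

    index = PackageIndex()
    index.add_find_links([
        "dist/",                                  # local directory: scanned immediately
        "http://example.com/Foo-1.0-py2.4.egg",   # direct package link: scanned immediately
        "http://example.com/downloads.html",      # indirect page: queued in index.to_scan
    ])
    # Only when a search has to go online anyway (a requirement cannot be met
    # locally, or --upgrade/-U was given) does obtain() call prescan(), which
    # retrieves and scans the queued download pages for additional links.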
-``--find-links=URL, -f URL`` (Option renamed in 0.4a2) - Scan the specified "download pages" for direct links to downloadable eggs - or source distributions. Any usable packages will be downloaded if they - are required by a command line argument. For example, this:: +``--find-links=URLS_OR_FILENAMES, -f URLS_OR_FILENAMES`` + Scan the specified "download pages" or directories for direct links to eggs + or other distributions. Any existing file or directory names or direct + download URLs are immediately added to EasyInstall's search cache, and any + indirect URLs (ones that don't point to eggs or other recognized archive + formats) are added to a list of additional places to search for download + links. As soon as EasyInstall has to go online to find a package (either + because it doesn't exist locally, or because ``--upgrade`` or ``-U`` was + used), the specified URLs will be downloaded and scanned for additional + direct links. + + Eggs and archives found by way of ``--find-links`` are only downloaded if + they are needed to meet a requirement specified on the command line; links + to unneeded packages are ignored. - easy_install -f http://peak.telecommunity.com/dist PyProtocols - - will download and install the latest version of PyProtocols linked from - the PEAK downloads page, but ignore the other download links on that page. If all requested packages can be found using links on the specified - download pages, the Python Package Index will *not* be consulted. You can - use a ``file:`` URL to reference a local HTML file containing links, or you - can just use the name of a directory containing "distribution files" - (source archives, eggs, Windows installers, etc.), and EasyInstall will - then be aware of the files available there. - - You may specify multiple URLs or directories with this option, separated by - whitespace. Note that on the command line, you will probably have to - surround the URL list with quotes, so that it is recognized as a single - option value. You can also specify URLs in a configuration file; see - `Configuration Files`_, above; but note that this means the specified pages - will be downloaded every time you use EasyInstall (unless overridden on the - command line) and thus may make startup slower. + download pages, the Python Package Index will not be consulted unless you + also specified the ``--upgrade`` or ``-U`` option. + + (Note: if you want to refer to a local HTML file containing links, you must + use a ``file:`` URL, as filenames that do not refer to a directory, egg, or + archive are ignored.) + + You may specify multiple URLs or file/directory names with this option, + separated by whitespace. Note that on the command line, you will probably + have to surround the URL list with quotes, so that it is recognized as a + single option value. You can also specify URLs in a configuration file; + see `Configuration Files`_, above. + + Changed in 0.6a10: previously all URLs and directories passed to this + option were scanned as early as possible, but from 0.6a10 on, only + directories and direct archive links are scanned immediately; URLs are not + retrieved unless a package search was already going to go online due to a + package not being available locally, or due to the use of the ``--update`` + or ``-U`` option. 
``--delete-conflicting, -D`` (New in 0.5a9) If you are replacing a package that was previously installed *without* @@ -995,6 +1007,13 @@ choose an older version of a package than what you expected, or it may cause downloading and installation of a fresh version of what's already installed. + * The ``--find-links`` option previously scanned all supplied URLs and + directories as early as possible, but now only directories and direct + archive links are scanned immediately. URLs are not retrieved unless a + package search was already going to go online due to a package not being + available locally, or due to the use of the ``--update`` or ``-U`` option. + + 0.6a9 * Fixed ``.pth`` file processing picking up nested eggs (i.e. ones inside "baskets") when they weren't explicitly listed in the ``.pth`` file. Modified: sandbox/trunk/setuptools/setuptools.txt ============================================================================== --- sandbox/trunk/setuptools/setuptools.txt (original) +++ sandbox/trunk/setuptools/setuptools.txt Wed Feb 8 06:46:54 2006 @@ -2342,6 +2342,9 @@ Release Notes/Change History ---------------------------- +0.6a10 + * Fixed the ``develop`` command ignoring ``--find-links``. + 0.6a9 * The ``sdist`` command no longer uses the traditional ``MANIFEST`` file to create source distributions. ``MANIFEST.in`` is still read and processed, Modified: sandbox/trunk/setuptools/setuptools/command/easy_install.py ============================================================================== --- sandbox/trunk/setuptools/setuptools/command/easy_install.py (original) +++ sandbox/trunk/setuptools/setuptools/command/easy_install.py Wed Feb 8 06:46:54 2006 @@ -194,7 +194,7 @@ self.find_links = self.find_links.split() else: self.find_links = [] - + self.package_index.add_find_links(self.find_links) self.set_undefined_options('install_lib', ('optimize','optimize')) if not isinstance(self.optimize,int): try: @@ -224,8 +224,6 @@ if self.verbose<>self.distribution.verbose: log.set_verbosity(self.verbose) try: - for link in self.find_links: - self.package_index.scan_url(link) for spec in self.args: self.easy_install(spec, not self.no_deps) if self.record: @@ -244,6 +242,8 @@ log.set_verbosity(self.distribution.verbose) + + def install_egg_scripts(self, dist): """Write all the scripts for `dist`, unless scripts are excluded""" Modified: sandbox/trunk/setuptools/setuptools/package_index.py ============================================================================== --- sandbox/trunk/setuptools/setuptools/package_index.py (original) +++ sandbox/trunk/setuptools/setuptools/package_index.py Wed Feb 8 06:46:54 2006 @@ -131,6 +131,7 @@ self.fetched_urls = {} self.package_pages = {} self.allows = re.compile('|'.join(map(translate,hosts))).match + self.to_scan = [] def process_url(self, url, retrieve=False): """Evaluate a URL as a possible download, and maybe retrieve it""" @@ -139,18 +140,8 @@ return self.scanned_urls[url] = True if not URL_SCHEME(url): - # process filenames or directories - if os.path.isfile(url): - map(self.add, distros_for_filename(url)) - return # no need to retrieve anything - elif os.path.isdir(url): - url = os.path.realpath(url) - for item in os.listdir(url): - self.process_url(os.path.join(url,item)) - return - else: - self.warn("Not found: %s", url) - return + self.process_filename(url) + return else: dists = list(distros_for_url(url)) if dists: @@ -170,6 +161,7 @@ f = self.open_url(url) self.fetched_urls[url] = self.fetched_urls[f.url] = True + if 'html' not in 
f.headers['content-type'].lower(): f.close() # not html, we can't process it return @@ -184,6 +176,21 @@ link = urlparse.urljoin(base, match.group(1)) self.process_url(link) + def process_filename(self, fn, nested=False): + # process filenames or directories + if not os.path.exists(fn): + self.warn("Not found: %s", url) + return + + if os.path.isdir(fn): + path = os.path.realpath(fn) + for item in os.listdir(path): + self.process_filename(os.path.join(path,item), True) + + dists = distros_for_filename(fn) + if dists: + self.debug("Found: %s", fn) + map(self.add, dists) def url_ok(self, url, fatal=False): if self.allows(urlparse.urlparse(url)[1]): @@ -196,13 +203,6 @@ - - - - - - - def process_index(self,url,page): """Process the contents of a PyPI page""" def scan(link): @@ -260,9 +260,11 @@ def find_packages(self, requirement): self.scan_url(self.index_url + requirement.unsafe_name+'/') + if not self.package_pages.get(requirement.key): # Fall back to safe version of the name self.scan_url(self.index_url + requirement.project_name+'/') + if not self.package_pages.get(requirement.key): # We couldn't find the target package, so search the index page too self.warn( @@ -276,15 +278,13 @@ self.scan_url(url) def obtain(self, requirement, installer=None): - self.find_packages(requirement) + self.prescan(); self.find_packages(requirement) for dist in self[requirement.key]: if dist in requirement: return dist self.debug("%s does not match %s", requirement, dist) return super(PackageIndex, self).obtain(requirement,installer) - - def check_md5(self, cs, info, filename, tfp): if re.match('md5=[0-9a-f]{32}$', info): self.debug("Validating md5 checksum for %s", filename) @@ -296,26 +296,26 @@ "; possible download problem?" ) + def add_find_links(self, urls): + """Add `urls` to the list that will be prescanned for searches""" + for url in urls: + if ( + self.to_scan is None # if we have already "gone online" + or not URL_SCHEME(url) # or it's a local file/directory + or url.startswith('file:') + or list(distros_for_url(url)) # or a direct package link + ): + # then go ahead and process it now + self.scan_url(url) + else: + # otherwise, defer retrieval till later + self.to_scan.append(url) - - - - - - - - - - - - - - - - - - - + def prescan(self): + """Scan urls scheduled for prescanning (e.g. 
--find-links)""" + if self.to_scan: + map(self.scan_url, self.to_scan) + self.to_scan = None # from now on, go ahead and process immediately @@ -409,13 +409,17 @@ ) if force_scan: + self.prescan() self.find_packages(requirement) + + dist = find(requirement) + if dist is None and self.to_scan is not None: + self.prescan() dist = find(requirement) - else: + + if dist is None and not force_scan: + self.find_packages(requirement) dist = find(requirement) - if dist is None: - self.find_packages(requirement) - dist = find(requirement) if dist is None: self.warn( @@ -445,10 +449,6 @@ - - - - def gen_setup(self, filename, fragment, tmpdir): match = EGG_FRAGMENT.match(fragment); #import pdb; pdb.set_trace() dists = match and [d for d in From python-checkins at python.org Wed Feb 8 06:53:11 2006 From: python-checkins at python.org (neal.norwitz) Date: Wed, 8 Feb 2006 06:53:11 +0100 (CET) Subject: [Python-checkins] r42263 - peps/trunk/pep-0000.txt peps/trunk/pep-0341.txt Message-ID: <20060208055311.727C21E4002@bag.python.org> Author: neal.norwitz Date: Wed Feb 8 06:53:10 2006 New Revision: 42263 Modified: peps/trunk/pep-0000.txt peps/trunk/pep-0341.txt Log: PEP 341 was commited Modified: peps/trunk/pep-0000.txt ============================================================================== --- peps/trunk/pep-0000.txt (original) +++ peps/trunk/pep-0000.txt Wed Feb 8 06:53:10 2006 @@ -66,7 +66,6 @@ SA 308 Conditional Expressions GvR, Hettinger SA 328 Imports: Multi-Line and Absolute/Relative Aahz - SA 341 Unifying try-except and try-finally Brandl SA 342 Coroutines via Enhanced Generators GvR, Eby Open PEPs (under consideration) @@ -167,6 +166,7 @@ SF 322 Reverse Iteration Hettinger SF 324 subprocess - New process module Astrand SF 327 Decimal Data Type Batista + SF 341 Unifying try-except and try-finally Brandl Empty PEPs (or containing only an abstract) @@ -391,7 +391,7 @@ S 338 Executing modules inside packages with '-m' Coghlan I 339 How to Change CPython's Bytecode Cannon SR 340 Anonymous Block Statements GvR - SA 341 Unifying try-except and try-finally Brandl + SF 341 Unifying try-except and try-finally Brandl SA 342 Coroutines via Enhanced Generators GvR, Eby S 343 Anonymous Block Redux and Generator Enhancements GvR S 344 Exception Chaining and Embedded Tracebacks Yee @@ -486,7 +486,7 @@ Meyer, Mike mwm at mired.org Montanaro, Skip skip at pobox.com Moore, Paul gustav at morpheus.demon.co.uk - Norwitz, Neal neal at metaslash.com + Norwitz, Neal nnorwitz at gmail.com Oliphant, Travis oliphant at ee.byu.edu Pedroni, Samuele pedronis at python.org Pelletier, Michel michel at users.sourceforge.net Modified: peps/trunk/pep-0341.txt ============================================================================== --- peps/trunk/pep-0341.txt (original) +++ peps/trunk/pep-0341.txt Wed Feb 8 06:53:10 2006 @@ -3,7 +3,7 @@ Version: $Revision$ Last-Modified: $Date$ Author: Georg Brandl -Status: Accepted +Status: Final Type: Standards Track Content-Type: text/plain Created: 04-May-2005 @@ -99,11 +99,14 @@ However, according to Guido, it should be a piece of cake to implement[1] -- at least for a core hacker. + This patch was commited 17 December 2005, SVN revision 41740 [3]. 
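[Editorial aside, not part of the PEP text: the feature PEP 341 finalizes lets except and finally clauses share a single try statement, where earlier Pythons required explicit nesting. A minimal before/after sketch:]

    # Python 2.5 with PEP 341: one unified try statement.
    def read_first_line(path):
        f = open(path)
        try:
            return f.readline()
        except IOError:
            return ''
        finally:
            f.close()

    # Equivalent pre-2.5 spelling: a try/except nested inside try/finally.
    def read_first_line_old(path):
        f = open(path)
        try:
            try:
                return f.readline()
            except IOError:
                return ''
        finally:
            f.close()
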
+ References [1] http://mail.python.org/pipermail/python-dev/2005-May/053319.html [2] http://python.org/sf/1355913 + [3] http://mail.python.org/pipermail/python-checkins/2005-December/048457.html Copyright From python-checkins at python.org Wed Feb 8 06:53:44 2006 From: python-checkins at python.org (neal.norwitz) Date: Wed, 8 Feb 2006 06:53:44 +0100 (CET) Subject: [Python-checkins] r42264 - peps/trunk/pep-0291.txt Message-ID: <20060208055344.B62E01E4002@bag.python.org> Author: neal.norwitz Date: Wed Feb 8 06:53:43 2006 New Revision: 42264 Modified: peps/trunk/pep-0291.txt Log: Update my address Modified: peps/trunk/pep-0291.txt ============================================================================== --- peps/trunk/pep-0291.txt (original) +++ peps/trunk/pep-0291.txt Wed Feb 8 06:53:43 2006 @@ -2,7 +2,7 @@ Title: Backward Compatibility for Standard Library Version: $Revision$ Last-Modified: $Date$ -Author: neal at metaslash.com (Neal Norwitz) +Author: nnorwitz at gmail.com (Neal Norwitz) Status: Active Type: Informational Created: 06-Jun-2002 From python-checkins at python.org Wed Feb 8 07:22:24 2006 From: python-checkins at python.org (neal.norwitz) Date: Wed, 8 Feb 2006 07:22:24 +0100 (CET) Subject: [Python-checkins] r42265 - peps/trunk/pep-0000.txt peps/trunk/pep-0342.txt peps/trunk/pep-0343.txt Message-ID: <20060208062224.6ADAA1E4002@bag.python.org> Author: neal.norwitz Date: Wed Feb 8 07:22:22 2006 New Revision: 42265 Modified: peps/trunk/pep-0000.txt peps/trunk/pep-0342.txt peps/trunk/pep-0343.txt Log: Guido accepted 342 and 343 at EuroPython keynote. 342 was commited. Modified: peps/trunk/pep-0000.txt ============================================================================== --- peps/trunk/pep-0000.txt (original) +++ peps/trunk/pep-0000.txt Wed Feb 8 07:22:22 2006 @@ -66,7 +66,7 @@ SA 308 Conditional Expressions GvR, Hettinger SA 328 Imports: Multi-Line and Absolute/Relative Aahz - SA 342 Coroutines via Enhanced Generators GvR, Eby + SA 343 The "with" Statement GvR, Coghlan Open PEPs (under consideration) @@ -100,7 +100,6 @@ S 335 Overloadable Boolean Operators Ewing S 337 Logging Usage in the Standard Library Dubner S 338 Executing modules inside packages with '-m' Coghlan - S 343 The "with" Statement GvR, Coghlan S 344 Exception Chaining and Embedded Tracebacks Yee S 345 Metadata for Python Software Packages 1.2 Jones I 350 Codetags Elliott @@ -167,6 +166,7 @@ SF 324 subprocess - New process module Astrand SF 327 Decimal Data Type Batista SF 341 Unifying try-except and try-finally Brandl + SF 342 Coroutines via Enhanced Generators GvR, Eby Empty PEPs (or containing only an abstract) @@ -392,8 +392,8 @@ I 339 How to Change CPython's Bytecode Cannon SR 340 Anonymous Block Statements GvR SF 341 Unifying try-except and try-finally Brandl - SA 342 Coroutines via Enhanced Generators GvR, Eby - S 343 Anonymous Block Redux and Generator Enhancements GvR + SF 342 Coroutines via Enhanced Generators GvR, Eby + SA 343 Anonymous Block Redux and Generator Enhancements GvR S 344 Exception Chaining and Embedded Tracebacks Yee S 345 Metadata for Python Software Packages 1.2 Jones SR 346 User Defined ("with") Statements Coghlan Modified: peps/trunk/pep-0342.txt ============================================================================== --- peps/trunk/pep-0342.txt (original) +++ peps/trunk/pep-0342.txt Wed Feb 8 07:22:22 2006 @@ -3,7 +3,7 @@ Version: $Revision$ Last-Modified: $Date$ Author: Guido van Rossum, Phillip J. 
Eby -Status: Accepted +Status: Final Type: Standards Track Content-Type: text/plain Created: 10-May-2005 @@ -572,6 +572,9 @@ PEP is available as SourceForge patch #1223381 (http://python.org/sf/1223381). + This patch was commited to CVS 01-02 August 2005. + + Acknowledgements Raymond Hettinger (PEP 288) and Samuele Pedroni (PEP 325) first Modified: peps/trunk/pep-0343.txt ============================================================================== --- peps/trunk/pep-0343.txt (original) +++ peps/trunk/pep-0343.txt Wed Feb 8 07:22:22 2006 @@ -3,7 +3,7 @@ Version: $Revision$ Last-Modified: $Date$ Author: Guido van Rossum, Nick Coghlan -Status: Draft +Status: Accepted Type: Standards Track Content-Type: text/plain Created: 13-May-2005 @@ -830,6 +830,12 @@ # Perform operation +Reference Implementation + + There is no implementation at this time. This PEP was accepted + by Guido at his EuroPython keynote, 27 June 2005. + + References [1] http://blogs.msdn.com/oldnewthing/archive/2005/01/06/347666.aspx @@ -862,6 +868,10 @@ [11] http://mail.python.org/pipermail/python-dev/2005-October/057625.html + [12] + http://sourceforge.net/tracker/index.php?func=detail&aid=1223381&group_id=5470&atid=305470 + + Copyright This document has been placed in the public domain. From python-checkins at python.org Wed Feb 8 07:33:27 2006 From: python-checkins at python.org (neal.norwitz) Date: Wed, 8 Feb 2006 07:33:27 +0100 (CET) Subject: [Python-checkins] r42266 - peps/trunk/pep-0000.txt peps/trunk/pep-0356.txt Message-ID: <20060208063327.E1CB81E4002@bag.python.org> Author: neal.norwitz Date: Wed Feb 8 07:33:27 2006 New Revision: 42266 Added: peps/trunk/pep-0356.txt Modified: peps/trunk/pep-0000.txt Log: Very draft version of 2.5 release schedule. Modified: peps/trunk/pep-0000.txt ============================================================================== --- peps/trunk/pep-0000.txt (original) +++ peps/trunk/pep-0000.txt Wed Feb 8 07:33:27 2006 @@ -60,6 +60,7 @@ I 306 How to Change Python's Grammar Hudson I 333 Python Web Server Gateway Interface v1.0 Eby I 339 How to Change CPython's Bytecode Cannon + I 356 Python 2.5 Release Schedule Norwitz, et al I 3000 Python 3.0 Plans Kuchling, Cannon Accepted PEPs (accepted; may not be implemented yet) @@ -406,6 +407,7 @@ S 353 Using ssize_t as the index type von Loewis S 354 Enumerations in Python Finney S 355 Path - Object oriented filesystem paths Lindqvist + I 356 Python 2.5 Release Schedule Norwitz, et al SR 666 Reject Foolish Indentation Creighton S 754 IEEE 754 Floating Point Special Values Warnes I 3000 Python 3.0 Plans Kuchling, Cannon Added: peps/trunk/pep-0356.txt ============================================================================== --- (empty file) +++ peps/trunk/pep-0356.txt Wed Feb 8 07:33:27 2006 @@ -0,0 +1,212 @@ +PEP: 356 +Title: Python 2.5 Release Schedule +Version: $Revision$ +Author: Neal Norwitz +Status: Draft +Type: Informational +Created: 07-Feb-2006 +Python-Version: 2.5 +Post-History: + +Abstract + + This document describes the development and release schedule for + Python 2.5. The schedule primarily concerns itself with PEP-sized + items. Small features may be added up to and including the first + beta release. Bugs may be fixed until the final release. + + There will be at least two alpha releases, two beta releases, and + one release candidate. The release date is planned 31 October 2006. + + +Release Manager + + TBD (Anthony Baxter?) + + TBD (Martin von Loewis?) is building the Windows installers, + TBD (Fred Drake?) 
the doc packages, + TBD (Sean Reifschneider?) the RPMs. + + +Release Schedule + + alpha 1: June 2006 [planned] + alpha 2: July 2006 [planned] + beta 1: August 2006 [planned] + beta 2: September 2006 [planned] + rc 1: October 2006 [planned] + final: October 2006 [planned] + + +Completed features for 2.5 + + PEP 309: Partial Function Application + PEP 314: Metadata for Python Software Packages v1.1 + (should PEP 314 be marked final?) + PEP 341: Unified try-except/try-finally to try-except-finally + PEP 342: Coroutines via Enhanced Generators + + - AST-based compiler + + - Add support for reading shadow passwords (www.python.org/sf/579435) + + - any()/all() builtin truth functions + + - new hashlib module add support for SHA-224, -256, -384, and -512 + (replaces old md5 and sha modules) + + +Planned features for 2.5 + + PEP 308: Conditional Expressions + PEP 328: Absolute/Relative Imports + PEP 343: The "with" Statement + PEP 352: Required Superclass for Exceptions + PEP 353: Using ssize_t as the index type + + +Deferred until 2.6: + + - None + + +Ongoing tasks + + The following are ongoing TO-DO items which we should attempt to + work on without hoping for completion by any particular date. + + - Documentation: complete the distribution and installation + manuals. + + - Documentation: complete the documentation for new-style + classes. + + - Look over the Demos/ directory and update where required (Andrew + Kuchling has done a lot of this) + + - New tests. + + - Fix doc bugs on SF. + + - Remove use of deprecated features in the core. + + - Document deprecated features appropriately and update PEP 3000. + + - Mark deprecated C APIs with Py_DEPRECATED. + + - Deprecate modules which are unmaintained, or perhaps make a new + category for modules 'Unmaintained' + + - In general, lots of cleanup so it is easier to move forward. + + +Open issues + + This PEP needs to be updated and release managers confirmed. + + +Carryover features from Python 2.4 + + Are any of these done or planned for 2.5? + + - Deprecate and/or remove the modules listed in PEP 4 (posixfile, + gopherlib, pre, others) + + - Remove support for platforms as described in PEP 11. + + - Finish implementing the Distutils bdist_dpkg command. (AMK) + + - It would be nice if the built-in SSL socket type could be used + for non-blocking SSL I/O. Currently packages such as Twisted + which implement async servers using SSL have to require third-party + packages such as pyopenssl. + + - reST is going to be used a lot in Zope3. Maybe it could become + a standard library module? (Since reST's author thinks it's too + unstable, I'm inclined not to do this.) + + +Carryover features from Python 2.3 + + - The import lock could use some redesign. (SF 683658.) + + - A nicer API to open text files, replacing the ugly (in some + people's eyes) "U" mode flag. There's a proposal out there to + have a new built-in type textfile(filename, mode, encoding). + (Shouldn't it have a bufsize argument too?) + + - New widgets for Tkinter??? + + Has anyone gotten the time for this? *Are* there any new + widgets in Tk 8.4? Note that we've got better Tix support + already (though not on Windows yet). + + - PEP 304 (Controlling Generation of Bytecode Files by Montanaro) + seems to have lost steam. + + - For a class defined inside another class, the __name__ should be + "outer.inner", and pickling should work. (SF 633930. I'm no + longer certain this is easy or even right.) + + - Decide on a clearer deprecation policy (especially for modules) + and act on it. 
For a start, see this message from Neal Norwitz: + http://mail.python.org/pipermail/python-dev/2002-April/023165.html + There seems insufficient interest in moving this further in an + organized fashion, and it's not particularly important. + + - Provide alternatives for common uses of the types module; + Skip Montanaro has posted a proto-PEP for this idea: + http://mail.python.org/pipermail/python-dev/2002-May/024346.html + There hasn't been any progress on this, AFAICT. + + - Use pending deprecation for the types and string modules. This + requires providing alternatives for the parts that aren't + covered yet (e.g. string.whitespace and types.TracebackType). + It seems we can't get consensus on this. + + - PEP 262 Database of Installed Python Packages Kuchling + + This turns out to be useful for Jack Jansen's Python installer, + so the database is worth implementing. Code will go in + sandbox/pep262. + + - PEP 269 Pgen Module for Python Riehl + + (Some necessary changes are in; the pgen module itself needs to + mature more.) + + - PEP 266 Optimizing Global Variable/Attribute Access Montanaro + PEP 267 Optimized Access to Module Namespaces Hylton + PEP 280 Optimizing access to globals van Rossum + + These are basically three friendly competing proposals. Jeremy + has made a little progress with a new compiler, but it's going + slowly and the compiler is only the first step. Maybe we'll be + able to refactor the compiler in this release. I'm tempted to + say we won't hold our breath. + + - Lazily tracking tuples? + http://mail.python.org/pipermail/python-dev/2002-May/023926.html + http://www.python.org/sf/558745 + Not much enthusiasm I believe. + + - PEP 286 Enhanced Argument Tuples von Loewis + + I haven't had the time to review this thoroughly. It seems a + deep optimization hack (also makes better correctness guarantees + though). + + - Make 'as' a keyword. It has been a pseudo-keyword long enough. + Too much effort to bother. + + +Copyright + + This document has been placed in the public domain. + + + +Local Variables: +mode: indented-text +indent-tabs-mode: nil +End: From python-checkins at python.org Wed Feb 8 07:52:01 2006 From: python-checkins at python.org (martin.v.loewis) Date: Wed, 8 Feb 2006 07:52:01 +0100 (CET) Subject: [Python-checkins] r42267 - peps/trunk/pep-0356.txt Message-ID: <20060208065201.A53811E4002@bag.python.org> Author: martin.v.loewis Date: Wed Feb 8 07:52:00 2006 New Revision: 42267 Modified: peps/trunk/pep-0356.txt Log: I will do the Windows installer. Modified: peps/trunk/pep-0356.txt ============================================================================== --- peps/trunk/pep-0356.txt (original) +++ peps/trunk/pep-0356.txt Wed Feb 8 07:52:00 2006 @@ -23,7 +23,7 @@ TBD (Anthony Baxter?) - TBD (Martin von Loewis?) is building the Windows installers, + Martin von Loewis is building the Windows installers, TBD (Fred Drake?) the doc packages, TBD (Sean Reifschneider?) the RPMs. From raymond.hettinger at verizon.net Wed Feb 8 07:48:47 2006 From: raymond.hettinger at verizon.net (Raymond Hettinger) Date: Wed, 08 Feb 2006 01:48:47 -0500 Subject: [Python-checkins] SVN and PuTTY questions References: <20060208063327.E1CB81E4002@bag.python.org> Message-ID: <000801c62c7b$b6b7dba0$b83efea9@RaymondLaptop1> Are you on Windows XP? I'm looking to see if someone can help me work a couple of issues on my SVN access. 
Raymond From python-checkins at python.org Wed Feb 8 12:36:10 2006 From: python-checkins at python.org (andrew.kuchling) Date: Wed, 8 Feb 2006 12:36:10 +0100 (CET) Subject: [Python-checkins] r42268 - python/trunk/Doc/whatsnew/whatsnew25.tex Message-ID: <20060208113610.EE4681E4002@bag.python.org> Author: andrew.kuchling Date: Wed Feb 8 12:36:09 2006 New Revision: 42268 Modified: python/trunk/Doc/whatsnew/whatsnew25.tex Log: Update projected release date Modified: python/trunk/Doc/whatsnew/whatsnew25.tex ============================================================================== --- python/trunk/Doc/whatsnew/whatsnew25.tex (original) +++ python/trunk/Doc/whatsnew/whatsnew25.tex Wed Feb 8 12:36:09 2006 @@ -13,7 +13,8 @@ \tableofcontents This article explains the new features in Python 2.5. No release date -for Python 2.5 has been set; it will probably be released in early 2006. +for Python 2.5 has been set; it will probably be released in the +autumn of 2006. % Compare with previous release in 2 - 3 sentences here. From python-checkins at python.org Wed Feb 8 13:54:07 2006 From: python-checkins at python.org (armin.rigo) Date: Wed, 8 Feb 2006 13:54:07 +0100 (CET) Subject: [Python-checkins] r42269 - in python/trunk: Doc/lib/lib.tex Doc/lib/libhotshot.tex Doc/lib/libprofile.tex Lib/cProfile.py Lib/pstats.py Lib/test/output/test_cProfile Lib/test/output/test_profile Lib/test/test_cProfile.py Lib/test/test_profile.py Misc/NEWS Modules/_lsprof.c Modules/rotatingtree.c Modules/rotatingtree.h setup.py Message-ID: <20060208125407.1217B1E4002@bag.python.org> Author: armin.rigo Date: Wed Feb 8 13:53:56 2006 New Revision: 42269 Added: python/trunk/Lib/cProfile.py (contents, props changed) python/trunk/Lib/test/output/test_cProfile (contents, props changed) python/trunk/Lib/test/test_cProfile.py (contents, props changed) python/trunk/Modules/_lsprof.c (contents, props changed) python/trunk/Modules/rotatingtree.c (contents, props changed) python/trunk/Modules/rotatingtree.h (contents, props changed) Modified: python/trunk/Doc/lib/lib.tex python/trunk/Doc/lib/libhotshot.tex python/trunk/Doc/lib/libprofile.tex python/trunk/Lib/pstats.py python/trunk/Lib/test/output/test_profile python/trunk/Lib/test/test_profile.py python/trunk/Misc/NEWS python/trunk/setup.py Log: Added the cProfile module. Based on lsprof (patch #1212837) by Brett Rosen and Ted Czotter. With further editing by Michael Hudson and myself. History in svn repo: http://codespeak.net/svn/user/arigo/hack/misc/lsprof * Module/_lsprof.c is the internal C module, Lib/cProfile.py a wrapper. * pstats.py updated to display cProfile's caller/callee timings if available. * setup.py and NEWS updated. * documentation updates in the profiler section: - explain the differences between the three profilers that we have now - profile and cProfile can use a unified documentation, like (c)Pickle - mention that hotshot is "for specialized usage" now - removed references to the "old profiler" that no longer exists * test updates: - extended test_profile to cover delicate cases like recursion - added tests for the caller/callee displays - added test_cProfile, performing the same tests for cProfile * TO-DO: - cProfile gives a nicer name to built-in, particularly built-in methods, which could be backported to profile. - not tested on Windows recently! 
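[Editorial aside: a minimal usage sketch based on the documentation added below; the profiled function and the 'fooprof' stats file name are placeholders, not part of this checkin.]

    import cProfile
    import pstats

    def slow_function():
        total = 0
        for i in range(100000):
            total += i * i
        return total

    # Run under the profiler and save the raw statistics to a file.
    cProfile.run('slow_function()', 'fooprof')

    # Load the saved statistics and show the ten most expensive calls
    # by cumulative time.
    p = pstats.Stats('fooprof')
    p.strip_dirs().sort_stats('cumulative').print_stats(10)
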
Modified: python/trunk/Doc/lib/lib.tex ============================================================================== --- python/trunk/Doc/lib/lib.tex (original) +++ python/trunk/Doc/lib/lib.tex Wed Feb 8 13:53:56 2006 @@ -358,7 +358,7 @@ \input{libpdb} % The Python Debugger \input{libprofile} % The Python Profiler -\input{libhotshot} % New profiler +\input{libhotshot} % unmaintained C profiler \input{libtimeit} Modified: python/trunk/Doc/lib/libhotshot.tex ============================================================================== --- python/trunk/Doc/lib/libhotshot.tex (original) +++ python/trunk/Doc/lib/libhotshot.tex Wed Feb 8 13:53:56 2006 @@ -14,6 +14,17 @@ written mostly in C, it should result in a much smaller performance impact than the existing \refmodule{profile} module. +\begin{notice}[note] + The \module{hotshot} module focuses on minimizing the overhead + while profiling, at the expense of long data post-processing times. + For common usages it is recommended to use \module{cProfile} instead. + \module{hotshot} is not maintained and might be removed from the + standard library in the future. +\end{notice} + +\versionchanged[the results should be more meaningful than in the +past: the timing core contained a critical bug]{2.5} + \begin{notice}[warning] The \module{hotshot} profiler does not yet work well with threads. It is useful to use an unthreaded script to run the profiler over Modified: python/trunk/Doc/lib/libprofile.tex ============================================================================== --- python/trunk/Doc/lib/libprofile.tex (original) +++ python/trunk/Doc/lib/libprofile.tex Wed Feb 8 13:53:56 2006 @@ -1,4 +1,4 @@ -\chapter{The Python Profiler \label{profile}} +\chapter{The Python Profilers \label{profile}} \sectionauthor{James Roskind}{} @@ -6,8 +6,9 @@ \index{InfoSeek Corporation} Written by James Roskind.\footnote{ - Updated and converted to \LaTeX\ by Guido van Rossum. The references to - the old profiler are left in the text, although it no longer exists.} + Updated and converted to \LaTeX\ by Guido van Rossum. + Further updated by Armin Rigo to integrate the documentation for the new + \module{cProfile} module of Python 2.5.} Permission to use, copy, modify, and distribute this Python software and its associated documentation for any purpose (subject to the @@ -41,7 +42,7 @@ I'd appreciate the feedback. -\section{Introduction to the profiler} +\section{Introduction to the profilers} \nodename{Profiler Introduction} A \dfn{profiler} is a program that describes the run time performance @@ -54,6 +55,31 @@ \index{deterministic profiling} \index{profiling, deterministic} +The Python standard library provides three different profilers: + +\begin{enumerate} +\item \module{profile}, a pure Python module, described in the sequel. + Copyright \copyright{} 1994, by InfoSeek Corporation. + \versionchanged[also reports the time spent in calls to built-in + functions and methods]{2.4} + +\item \module{cProfile}, a module written in C, with a reasonable + overhead that makes it suitable for profiling long-running programs. + Based on \module{lsprof}, contributed by Brett Rosen and Ted Czotter. + \versionadded{2.5} + +\item \module{hotshot}, a C module focusing on minimizing the overhead + while profiling, at the expense of long data post-processing times. 
+ \versionchanged[the results should be more meaningful than in the + past: the timing core contained a critical bug]{2.5} +\end{enumerate} + +The \module{profile} and \module{cProfile} modules export the same +interface, so they are mostly interchangeables; \module{cProfile} has a +much lower overhead but is not so far as well-tested and might not be +available on all systems. \module{cProfile} is really a compatibility +layer on top of the internal \module{_lsprof} module. The +\module{hotshot} module is reserved to specialized usages. %\section{How Is This Profiler Different From The Old Profiler?} %\nodename{Profiler Changes} @@ -108,10 +134,13 @@ you would add the following to your module: \begin{verbatim} -import profile -profile.run('foo()') +import cProfile +cProfile.run('foo()') \end{verbatim} +(Use \module{profile} instead of \module{cProfile} if the latter is not +available on your system.) + The above action would cause \function{foo()} to be run, and a series of informative lines (the profile) to be printed. The above approach is most useful when working with the interpreter. If you would like to @@ -120,21 +149,21 @@ function: \begin{verbatim} -import profile -profile.run('foo()', 'fooprof') +import cProfile +cProfile.run('foo()', 'fooprof') \end{verbatim} -The file \file{profile.py} can also be invoked as +The file \file{cProfile.py} can also be invoked as a script to profile another script. For example: \begin{verbatim} -python -m profile myscript.py +python -m cProfile myscript.py \end{verbatim} -\file{profile.py} accepts two optional arguments on the command line: +\file{cProfile.py} accepts two optional arguments on the command line: \begin{verbatim} -profile.py [-o output_file] [-s sort_order] +cProfile.py [-o output_file] [-s sort_order] \end{verbatim} \programopt{-s} only applies to standard output (\programopt{-o} is @@ -153,7 +182,7 @@ The class \class{Stats} (the above code just created an instance of this class) has a variety of methods for manipulating and printing the data that was just read into \code{p}. When you ran -\function{profile.run()} above, what was printed was the result of three +\function{cProfile.run()} above, what was printed was the result of three method calls: \begin{verbatim} @@ -162,8 +191,9 @@ The first method removed the extraneous path from all the module names. The second method sorted all the entries according to the -standard module/line/name string that is printed (this is to comply -with the semantics of the old profiler). The third method printed out +standard module/line/name string that is printed. +%(this is to comply with the semantics of the old profiler). +The third method printed out all the statistics. You might try the following sort calls: \begin{verbatim} @@ -268,15 +298,17 @@ of algorithms to be directly compared to iterative implementations. -\section{Reference Manual} +\section{Reference Manual -- \module{profile} and \module{cProfile}} \declaremodule{standard}{profile} +\declaremodule{standard}{cProfile} \modulesynopsis{Python profiler} The primary entry point for the profiler is the global function -\function{profile.run()}. It is typically used to create any profile +\function{profile.run()} (resp. \function{cProfile.run()}). +It is typically used to create any profile information. The reports are formatted and printed using methods of the class \class{pstats.Stats}. The following is a description of all of these standard entry points and functions. For a more in-depth @@ -296,7 +328,6 @@ each line. 
The following is a typical output from such a call: \begin{verbatim} - main() 2706 function calls (2004 primitive calls) in 4.504 CPU seconds Ordered by: standard name @@ -307,9 +338,7 @@ ... \end{verbatim} -The first line indicates that this profile was generated by the call:\\ -\code{profile.run('main()')}, and hence the exec'ed string is -\code{'main()'}. The second line indicates that 2706 calls were +The first line indicates that 2706 calls were monitored. Of those calls, 2004 were \dfn{primitive}. We define \dfn{primitive} to mean that the call was not induced via recursion. The next line: \code{Ordered by:\ standard name}, indicates that @@ -350,7 +379,7 @@ \end{funcdesc} \begin{funcdesc}{runctx}{command, globals, locals\optional{, filename}} -This function is similar to \function{profile.run()}, with added +This function is similar to \function{run()}, with added arguments to supply the globals and locals dictionaries for the \var{command} string. \end{funcdesc} @@ -368,10 +397,12 @@ manipulated by methods, in order to print useful reports. The file selected by the above constructor must have been created by -the corresponding version of \module{profile}. To be specific, there is +the corresponding version of \module{profile} or \module{cProfile}. +To be specific, there is \emph{no} file compatibility guaranteed with future versions of this profiler, and there is no compatibility with files produced by other -profilers (such as the old system profiler). +profilers. +%(such as the old system profiler). If several files are provided, all the statistics for identical functions will be coalesced, so that an overall view of several @@ -403,7 +434,8 @@ This method of the \class{Stats} class accumulates additional profiling information into the current profiling object. Its arguments should refer to filenames created by the corresponding -version of \function{profile.run()}. Statistics for identically named +version of \function{profile.run()} or \function{cProfile.run()}. +Statistics for identically named (re: file, line, name) functions are automatically accumulated into single function statistics. \end{methoddesc} @@ -412,7 +444,8 @@ Save the data loaded into the \class{Stats} object to a file named \var{filename}. The file is created if it does not exist, and is overwritten if it already exists. This is equivalent to the method of -the same name on the \class{profile.Profile} class. +the same name on the \class{profile.Profile} and +\class{cProfile.Profile} classes. \versionadded{2.3} \end{methoddesc} @@ -456,7 +489,8 @@ compare of the line numbers. In fact, \code{sort_stats('nfl')} is the same as \code{sort_stats('name', 'file', 'line')}. -For compatibility with the old profiler, the numeric arguments +%For compatibility with the old profiler, +For backward-compatibility reasons, the numeric arguments \code{-1}, \code{0}, \code{1}, and \code{2} are permitted. They are interpreted as \code{'stdname'}, \code{'calls'}, \code{'time'}, and \code{'cumulative'} respectively. If this old style format (numeric) @@ -467,10 +501,10 @@ \begin{methoddesc}[Stats]{reverse_order}{} This method for the \class{Stats} class reverses the ordering of the basic -list within the object. This method is provided primarily for -compatibility with the old profiler. Its utility is questionable -now that ascending vs descending order is properly selected based on -the sort key of choice. +list within the object. %This method is provided primarily for +%compatibility with the old profiler. 
+Note that by default ascending vs descending order is properly selected +based on the sort key of choice. \end{methoddesc} \begin{methoddesc}[Stats]{print_stats}{\optional{restriction, \moreargs}} @@ -512,10 +546,21 @@ This method for the \class{Stats} class prints a list of all functions that called each function in the profiled database. The ordering is identical to that provided by \method{print_stats()}, and the definition -of the restricting argument is also identical. For convenience, a -number is shown in parentheses after each caller to show how many -times this specific call was made. A second non-parenthesized number -is the cumulative time spent in the function at the right. +of the restricting argument is also identical. Each caller is reported on +its own line. The format differs slightly depending on the profiler that +produced the stats: + +\begin{itemize} +\item With \module{profile}, a number is shown in parentheses after each + caller to show how many times this specific call was made. For + convenience, a second non-parenthesized number repeats the cumulative + time spent in the function at the right. + +\item With \module{cProfile}, each caller is preceeded by three numbers: + the number of times this specific call was made, and the total and + cumulative times spent in the current function while it was invoked by + this specific caller. +\end{itemize} \end{methoddesc} \begin{methoddesc}[Stats]{print_callees}{\optional{restriction, \moreargs}} @@ -546,7 +591,10 @@ times, or call many functions, will typically accumulate this error. The error that accumulates in this fashion is typically less than the accuracy of the clock (less than one clock tick), but it -\emph{can} accumulate and become very significant. This profiler +\emph{can} accumulate and become very significant. + +The problem is more important with \module{profile} than with the +lower-overhead \module{cProfile}. For this reason, \module{profile} provides a means of calibrating itself for a given platform so that this error can be probabilistically (on the average) removed. After the profiler is calibrated, it will be more accurate (in a least @@ -560,7 +608,7 @@ \section{Calibration \label{profile-calibration}} -The profiler subtracts a constant from each +The profiler of the \module{profile} module subtracts a constant from each event handling time to compensate for the overhead of calling the time function, and socking away the results. By default, the constant is 0. The following procedure can @@ -614,11 +662,12 @@ \section{Extensions --- Deriving Better Profilers} \nodename{Profiler Extensions} -The \class{Profile} class of module \module{profile} was written so that +The \class{Profile} class of both modules, \module{profile} and +\module{cProfile}, were written so that derived classes could be developed to extend the profiler. The details are not described here, as doing this successfully requires an expert understanding of how the \class{Profile} class works internally. Study -the source code of module \module{profile} carefully if you want to +the source code of the module carefully if you want to pursue this. If all you want to do is change how current time is determined (for @@ -630,8 +679,11 @@ pr = profile.Profile(your_time_func) \end{verbatim} -The resulting profiler will then call \code{your_time_func()}. -The function should return a single number, or a list of +The resulting profiler will then call \function{your_time_func()}. 
+ +\begin{description} +\item[\class{profile.Profile}] +\function{your_time_func()} should return a single number, or a list of numbers whose sum is the current time (like what \function{os.times()} returns). If the function returns a single time number, or the list of returned numbers has length 2, then you will get an especially fast @@ -646,3 +698,22 @@ derive a class and hardwire a replacement dispatch method that best handles your timer call, along with the appropriate calibration constant. + +\item[\class{cProfile.Profile}] +\function{your_time_func()} should return a single number. If it returns +plain integers, you can also invoke the class constructor with a second +argument specifying the real duration of one unit of time. For example, +if \function{your_integer_time_func()} returns times measured in thousands +of seconds, you would constuct the \class{Profile} instance as follows: + +\begin{verbatim} +pr = profile.Profile(your_integer_time_func, 0.001) +\end{verbatim} + +As the \module{cProfile.Profile} class cannot be calibrated, custom +timer functions should be used with care and should be as fast as +possible. For the best results with a custom timer, it might be +necessary to hard-code it in the C source of the internal +\module{_lsprof} module. + +\end{description} Added: python/trunk/Lib/cProfile.py ============================================================================== --- (empty file) +++ python/trunk/Lib/cProfile.py Wed Feb 8 13:53:56 2006 @@ -0,0 +1,190 @@ +#! /usr/bin/env python + +"""Python interface for the 'lsprof' profiler. + Compatible with the 'profile' module. +""" + +__all__ = ["run", "runctx", "help", "Profile"] + +import _lsprof + +# ____________________________________________________________ +# Simple interface + +def run(statement, filename=None, sort=-1): + """Run statement under profiler optionally saving results in filename + + This function takes a single argument that can be passed to the + "exec" statement, and an optional file name. In all cases this + routine attempts to "exec" its first argument and gather profiling + statistics from the execution. If no file name is present, then this + function automatically prints a simple profiling report, sorted by the + standard name string (file/line/function-name) that is presented in + each line. + """ + prof = Profile() + result = None + try: + try: + prof = prof.run(statement) + except SystemExit: + pass + finally: + if filename is not None: + prof.dump_stats(filename) + else: + result = prof.print_stats(sort) + return result + +def runctx(statement, globals, locals, filename=None): + """Run statement under profiler, supplying your own globals and locals, + optionally saving results in filename. + + statement and filename have the same semantics as profile.run + """ + prof = Profile() + result = None + try: + try: + prof = prof.runctx(statement, globals, locals) + except SystemExit: + pass + finally: + if filename is not None: + prof.dump_stats(filename) + else: + result = prof.print_stats() + return result + +# Backwards compatibility. +def help(): + print "Documentation for the profile/cProfile modules can be found " + print "in the Python Library Reference, section 'The Python Profiler'." + +# ____________________________________________________________ + +class Profile(_lsprof.Profiler): + """Profile(custom_timer=None, time_unit=None, subcalls=True, builtins=True) + + Builds a profiler object using the specified timer function. + The default timer is a fast built-in one based on real time. 
+ For custom timer functions returning integers, time_unit can + be a float specifying a scale (i.e. how long each integer unit + is, in seconds). + """ + + # Most of the functionality is in the base class. + # This subclass only adds convenient and backward-compatible methods. + + def print_stats(self, sort=-1): + import pstats + pstats.Stats(self).strip_dirs().sort_stats(sort).print_stats() + + def dump_stats(self, file): + import marshal + f = open(file, 'wb') + self.create_stats() + marshal.dump(self.stats, f) + f.close() + + def create_stats(self): + self.disable() + self.snapshot_stats() + + def snapshot_stats(self): + entries = self.getstats() + self.stats = {} + callersdicts = {} + # call information + for entry in entries: + func = label(entry.code) + nc = entry.callcount # ncalls column of pstats (before '/') + cc = nc - entry.reccallcount # ncalls column of pstats (after '/') + tt = entry.inlinetime # tottime column of pstats + ct = entry.totaltime # cumtime column of pstats + callers = {} + callersdicts[id(entry.code)] = callers + self.stats[func] = cc, nc, tt, ct, callers + # subcall information + for entry in entries: + if entry.calls: + func = label(entry.code) + for subentry in entry.calls: + try: + callers = callersdicts[id(subentry.code)] + except KeyError: + continue + nc = subentry.callcount + cc = nc - subentry.reccallcount + tt = subentry.inlinetime + ct = subentry.totaltime + if func in callers: + prev = callers[func] + nc += prev[0] + cc += prev[1] + tt += prev[2] + ct += prev[3] + callers[func] = nc, cc, tt, ct + + # The following two methods can be called by clients to use + # a profiler to profile a statement, given as a string. + + def run(self, cmd): + import __main__ + dict = __main__.__dict__ + return self.runctx(cmd, dict, dict) + + def runctx(self, cmd, globals, locals): + self.enable() + try: + exec cmd in globals, locals + finally: + self.disable() + return self + + # This method is more useful to profile a single function call. + def runcall(self, func, *args, **kw): + self.enable() + try: + return func(*args, **kw) + finally: + self.disable() + +# ____________________________________________________________ + +def label(code): + if isinstance(code, str): + return ('~', 0, code) # built-in functions ('~' sorts at the end) + else: + return (code.co_filename, code.co_firstlineno, code.co_name) + +# ____________________________________________________________ + +def main(): + import os, sys + from optparse import OptionParser + usage = "cProfile.py [-o output_file_path] [-s sort] scriptfile [arg] ..." 
+ parser = OptionParser(usage=usage) + parser.allow_interspersed_args = False + parser.add_option('-o', '--outfile', dest="outfile", + help="Save stats to ", default=None) + parser.add_option('-s', '--sort', dest="sort", + help="Sort order when printing to stdout, based on pstats.Stats class", default=-1) + + if not sys.argv[1:]: + parser.print_usage() + sys.exit(2) + + (options, args) = parser.parse_args() + sys.argv[:] = args + + if (len(sys.argv) > 0): + sys.path.insert(0, os.path.dirname(sys.argv[0])) + run('execfile(%r)' % (sys.argv[0],), options.outfile, options.sort) + else: + parser.print_usage() + return parser + +# When invoked as main program, invoke the profiler on a script +if __name__ == '__main__': + main() Modified: python/trunk/Lib/pstats.py ============================================================================== --- python/trunk/Lib/pstats.py (original) +++ python/trunk/Lib/pstats.py Wed Feb 8 13:53:56 2006 @@ -371,27 +371,47 @@ self.print_call_heading(width, "was called by...") for func in list: cc, nc, tt, ct, callers = self.stats[func] - self.print_call_line(width, func, callers) + self.print_call_line(width, func, callers, "<-") print print return self def print_call_heading(self, name_size, column_title): print "Function ".ljust(name_size) + column_title + # print sub-header only if we have new-style callers + subheader = False + for cc, nc, tt, ct, callers in self.stats.itervalues(): + if callers: + value = callers.itervalues().next() + subheader = isinstance(value, tuple) + break + if subheader: + print " "*name_size + " ncalls tottime cumtime" - def print_call_line(self, name_size, source, call_dict): - print func_std_string(source).ljust(name_size), + def print_call_line(self, name_size, source, call_dict, arrow="->"): + print func_std_string(source).ljust(name_size) + arrow, if not call_dict: - print "--" + print return clist = call_dict.keys() clist.sort() - name_size = name_size + 1 indent = "" for func in clist: name = func_std_string(func) - print indent*name_size + name + '(%r)' % (call_dict[func],), \ - f8(self.stats[func][3]) + value = call_dict[func] + if isinstance(value, tuple): + nc, cc, tt, ct = value + if nc != cc: + substats = '%d/%d' % (nc, cc) + else: + substats = '%d' % (nc,) + substats = '%s %s %s %s' % (substats.rjust(7+2*len(indent)), + f8(tt), f8(ct), name) + left_width = name_size + 1 + else: + substats = '%s(%r) %s' % (name, value, f8(self.stats[func][3])) + left_width = name_size + 3 + print indent*left_width + substats indent = " " def print_title(self): @@ -448,7 +468,15 @@ return func[2] def func_std_string(func_name): # match what old profile produced - return "%s:%d(%s)" % func_name + if func_name[:2] == ('~', 0): + # special case for built-in functions + name = func_name[2] + if name.startswith('<') and name.endswith('>'): + return '{%s}' % name[1:-1] + else: + return name + else: + return "%s:%d(%s)" % func_name #************************************************************************** # The following functions combine statists for pairs functions. 
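[Editorial aside: to see the new caller/callee columns that the pstats changes above produce for cProfile data; 'fooprof' is a placeholder for a stats file saved with cProfile.run(..., 'fooprof').]

    import pstats

    p = pstats.Stats('fooprof')
    p.strip_dirs().sort_stats('time')
    # With cProfile data, each caller line now shows ncalls, tottime and
    # cumtime for that specific caller/callee pair.
    p.print_callers(5)   # who called the five most expensive functions
    p.print_callees(5)   # what those five functions called in turn
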
Added: python/trunk/Lib/test/output/test_cProfile ============================================================================== --- (empty file) +++ python/trunk/Lib/test/output/test_cProfile Wed Feb 8 13:53:56 2006 @@ -0,0 +1,79 @@ +test_cProfile + 126 function calls (106 primitive calls) in 1.000 CPU seconds + + Ordered by: standard name + + ncalls tottime percall cumtime percall filename:lineno(function) + 1 0.000 0.000 1.000 1.000 :1() + 8 0.064 0.008 0.080 0.010 test_cProfile.py:103(subhelper) + 28 0.028 0.001 0.028 0.001 test_cProfile.py:115(__getattr__) + 1 0.270 0.270 1.000 1.000 test_cProfile.py:30(testfunc) + 23/3 0.150 0.007 0.170 0.057 test_cProfile.py:40(factorial) + 20 0.020 0.001 0.020 0.001 test_cProfile.py:53(mul) + 2 0.040 0.020 0.600 0.300 test_cProfile.py:60(helper) + 4 0.116 0.029 0.120 0.030 test_cProfile.py:78(helper1) + 2 0.000 0.000 0.140 0.070 test_cProfile.py:89(helper2_indirect) + 8 0.312 0.039 0.400 0.050 test_cProfile.py:93(helper2) + 12 0.000 0.000 0.012 0.001 {hasattr} + 4 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects} + 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} + 8 0.000 0.000 0.000 0.000 {range} + 4 0.000 0.000 0.000 0.000 {sys.exc_info} + + + Ordered by: standard name + +Function called... + ncalls tottime cumtime +:1() -> 1 0.270 1.000 test_cProfile.py:30(testfunc) +test_cProfile.py:103(subhelper) -> 16 0.016 0.016 test_cProfile.py:115(__getattr__) + 8 0.000 0.000 {range} +test_cProfile.py:115(__getattr__) -> +test_cProfile.py:30(testfunc) -> 1 0.014 0.130 test_cProfile.py:40(factorial) + 2 0.040 0.600 test_cProfile.py:60(helper) +test_cProfile.py:40(factorial) -> 20/3 0.130 0.147 test_cProfile.py:40(factorial) + 20 0.020 0.020 test_cProfile.py:53(mul) +test_cProfile.py:53(mul) -> +test_cProfile.py:60(helper) -> 4 0.116 0.120 test_cProfile.py:78(helper1) + 2 0.000 0.140 test_cProfile.py:89(helper2_indirect) + 6 0.234 0.300 test_cProfile.py:93(helper2) +test_cProfile.py:78(helper1) -> 4 0.000 0.004 {hasattr} + 4 0.000 0.000 {method 'append' of 'list' objects} + 4 0.000 0.000 {sys.exc_info} +test_cProfile.py:89(helper2_indirect) -> 2 0.006 0.040 test_cProfile.py:40(factorial) + 2 0.078 0.100 test_cProfile.py:93(helper2) +test_cProfile.py:93(helper2) -> 8 0.064 0.080 test_cProfile.py:103(subhelper) + 8 0.000 0.008 {hasattr} +{hasattr} -> 12 0.012 0.012 test_cProfile.py:115(__getattr__) +{method 'append' of 'list' objects} -> +{method 'disable' of '_lsprof.Profiler' objects} -> +{range} -> +{sys.exc_info} -> + + + Ordered by: standard name + +Function was called by... 
+ ncalls tottime cumtime +:1() <- +test_cProfile.py:103(subhelper) <- 8 0.064 0.080 test_cProfile.py:93(helper2) +test_cProfile.py:115(__getattr__) <- 16 0.016 0.016 test_cProfile.py:103(subhelper) + 12 0.012 0.012 {hasattr} +test_cProfile.py:30(testfunc) <- 1 0.270 1.000 :1() +test_cProfile.py:40(factorial) <- 1 0.014 0.130 test_cProfile.py:30(testfunc) + 20/3 0.130 0.147 test_cProfile.py:40(factorial) + 2 0.006 0.040 test_cProfile.py:89(helper2_indirect) +test_cProfile.py:53(mul) <- 20 0.020 0.020 test_cProfile.py:40(factorial) +test_cProfile.py:60(helper) <- 2 0.040 0.600 test_cProfile.py:30(testfunc) +test_cProfile.py:78(helper1) <- 4 0.116 0.120 test_cProfile.py:60(helper) +test_cProfile.py:89(helper2_indirect) <- 2 0.000 0.140 test_cProfile.py:60(helper) +test_cProfile.py:93(helper2) <- 6 0.234 0.300 test_cProfile.py:60(helper) + 2 0.078 0.100 test_cProfile.py:89(helper2_indirect) +{hasattr} <- 4 0.000 0.004 test_cProfile.py:78(helper1) + 8 0.000 0.008 test_cProfile.py:93(helper2) +{method 'append' of 'list' objects} <- 4 0.000 0.000 test_cProfile.py:78(helper1) +{method 'disable' of '_lsprof.Profiler' objects} <- +{range} <- 8 0.000 0.000 test_cProfile.py:103(subhelper) +{sys.exc_info} <- 4 0.000 0.000 test_cProfile.py:78(helper1) + + Modified: python/trunk/Lib/test/output/test_profile ============================================================================== --- python/trunk/Lib/test/output/test_profile (original) +++ python/trunk/Lib/test/output/test_profile Wed Feb 8 13:53:56 2006 @@ -1,20 +1,84 @@ test_profile - 74 function calls in 1.000 CPU seconds + 127 function calls (107 primitive calls) in 1.000 CPU seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) + 4 0.000 0.000 0.000 0.000 :0(append) + 4 0.000 0.000 0.000 0.000 :0(exc_info) 12 0.000 0.000 0.012 0.001 :0(hasattr) 8 0.000 0.000 0.000 0.000 :0(range) 1 0.000 0.000 0.000 0.000 :0(setprofile) 1 0.000 0.000 1.000 1.000 :1() 0 0.000 0.000 profile:0(profiler) 1 0.000 0.000 1.000 1.000 profile:0(testfunc()) - 1 0.400 0.400 1.000 1.000 test_profile.py:23(testfunc) - 2 0.080 0.040 0.600 0.300 test_profile.py:32(helper) - 4 0.116 0.029 0.120 0.030 test_profile.py:50(helper1) - 8 0.312 0.039 0.400 0.050 test_profile.py:58(helper2) - 8 0.064 0.008 0.080 0.010 test_profile.py:68(subhelper) - 28 0.028 0.001 0.028 0.001 test_profile.py:80(__getattr__) + 8 0.064 0.008 0.080 0.010 test_profile.py:103(subhelper) + 28 0.028 0.001 0.028 0.001 test_profile.py:115(__getattr__) + 1 0.270 0.270 1.000 1.000 test_profile.py:30(testfunc) + 23/3 0.150 0.007 0.170 0.057 test_profile.py:40(factorial) + 20 0.020 0.001 0.020 0.001 test_profile.py:53(mul) + 2 0.040 0.020 0.600 0.300 test_profile.py:60(helper) + 4 0.116 0.029 0.120 0.030 test_profile.py:78(helper1) + 2 0.000 0.000 0.140 0.070 test_profile.py:89(helper2_indirect) + 8 0.312 0.039 0.400 0.050 test_profile.py:93(helper2) + + + Ordered by: standard name + +Function called... 
+:0(append) -> +:0(exc_info) -> +:0(hasattr) -> test_profile.py:115(__getattr__)(12) 0.028 +:0(range) -> +:0(setprofile) -> +:1() -> test_profile.py:30(testfunc)(1) 1.000 +profile:0(profiler) -> profile:0(testfunc())(1) 1.000 +profile:0(testfunc()) -> :0(setprofile)(1) 0.000 + :1()(1) 1.000 +test_profile.py:103(subhelper) -> :0(range)(8) 0.000 + test_profile.py:115(__getattr__)(16) 0.028 +test_profile.py:115(__getattr__) -> +test_profile.py:30(testfunc) -> test_profile.py:40(factorial)(1) 0.170 + test_profile.py:60(helper)(2) 0.600 +test_profile.py:40(factorial) -> test_profile.py:40(factorial)(20) 0.170 + test_profile.py:53(mul)(20) 0.020 +test_profile.py:53(mul) -> +test_profile.py:60(helper) -> test_profile.py:78(helper1)(4) 0.120 + test_profile.py:89(helper2_indirect)(2) 0.140 + test_profile.py:93(helper2)(6) 0.400 +test_profile.py:78(helper1) -> :0(append)(4) 0.000 + :0(exc_info)(4) 0.000 + :0(hasattr)(4) 0.012 +test_profile.py:89(helper2_indirect) -> test_profile.py:40(factorial)(2) 0.170 + test_profile.py:93(helper2)(2) 0.400 +test_profile.py:93(helper2) -> :0(hasattr)(8) 0.012 + test_profile.py:103(subhelper)(8) 0.080 + + + Ordered by: standard name + +Function was called by... +:0(append) <- test_profile.py:78(helper1)(4) 0.120 +:0(exc_info) <- test_profile.py:78(helper1)(4) 0.120 +:0(hasattr) <- test_profile.py:78(helper1)(4) 0.120 + test_profile.py:93(helper2)(8) 0.400 +:0(range) <- test_profile.py:103(subhelper)(8) 0.080 +:0(setprofile) <- profile:0(testfunc())(1) 1.000 +:1() <- profile:0(testfunc())(1) 1.000 +profile:0(profiler) <- +profile:0(testfunc()) <- profile:0(profiler)(1) 0.000 +test_profile.py:103(subhelper) <- test_profile.py:93(helper2)(8) 0.400 +test_profile.py:115(__getattr__) <- :0(hasattr)(12) 0.012 + test_profile.py:103(subhelper)(16) 0.080 +test_profile.py:30(testfunc) <- :1()(1) 1.000 +test_profile.py:40(factorial) <- test_profile.py:30(testfunc)(1) 1.000 + test_profile.py:40(factorial)(20) 0.170 + test_profile.py:89(helper2_indirect)(2) 0.140 +test_profile.py:53(mul) <- test_profile.py:40(factorial)(20) 0.170 +test_profile.py:60(helper) <- test_profile.py:30(testfunc)(2) 1.000 +test_profile.py:78(helper1) <- test_profile.py:60(helper)(4) 0.600 +test_profile.py:89(helper2_indirect) <- test_profile.py:60(helper)(2) 0.600 +test_profile.py:93(helper2) <- test_profile.py:60(helper)(6) 0.600 + test_profile.py:89(helper2_indirect)(2) 0.140 Added: python/trunk/Lib/test/test_cProfile.py ============================================================================== --- (empty file) +++ python/trunk/Lib/test/test_cProfile.py Wed Feb 8 13:53:56 2006 @@ -0,0 +1,123 @@ +"""Test suite for the cProfile module.""" + +import cProfile, pstats, sys + +# In order to have reproducible time, we simulate a timer in the global +# variable 'ticks', which represents simulated time in milliseconds. +# (We can't use a helper function increment the timer since it would be +# included in the profile and would appear to consume all the time.) +ticks = 0 + +# IMPORTANT: this is an output test. *ALL* NUMBERS in the expected +# output are relevant. If you change the formatting of pstats, +# please don't just regenerate output/test_cProfile without checking +# very carefully that not a single number has changed. 
+ +def test_main(): + global ticks + ticks = 42000 + prof = cProfile.Profile(timer, 0.001) + prof.runctx("testfunc()", globals(), locals()) + assert ticks == 43000, ticks + st = pstats.Stats(prof) + st.strip_dirs().sort_stats('stdname').print_stats() + st.print_callees() + st.print_callers() + +def timer(): + return ticks + +def testfunc(): + # 1 call + # 1000 ticks total: 270 ticks local, 730 ticks in subfunctions + global ticks + ticks += 99 + helper() # 300 + helper() # 300 + ticks += 171 + factorial(14) # 130 + +def factorial(n): + # 23 calls total + # 170 ticks total, 150 ticks local + # 3 primitive calls, 130, 20 and 20 ticks total + # including 116, 17, 17 ticks local + global ticks + if n > 0: + ticks += n + return mul(n, factorial(n-1)) + else: + ticks += 11 + return 1 + +def mul(a, b): + # 20 calls + # 1 tick, local + global ticks + ticks += 1 + return a * b + +def helper(): + # 2 calls + # 300 ticks total: 20 ticks local, 260 ticks in subfunctions + global ticks + ticks += 1 + helper1() # 30 + ticks += 2 + helper1() # 30 + ticks += 6 + helper2() # 50 + ticks += 3 + helper2() # 50 + ticks += 2 + helper2() # 50 + ticks += 5 + helper2_indirect() # 70 + ticks += 1 + +def helper1(): + # 4 calls + # 30 ticks total: 29 ticks local, 1 tick in subfunctions + global ticks + ticks += 10 + hasattr(C(), "foo") # 1 + ticks += 19 + lst = [] + lst.append(42) # 0 + sys.exc_info() # 0 + +def helper2_indirect(): + helper2() # 50 + factorial(3) # 20 + +def helper2(): + # 8 calls + # 50 ticks local: 39 ticks local, 11 ticks in subfunctions + global ticks + ticks += 11 + hasattr(C(), "bar") # 1 + ticks += 13 + subhelper() # 10 + ticks += 15 + +def subhelper(): + # 8 calls + # 10 ticks total: 8 ticks local, 2 ticks in subfunctions + global ticks + ticks += 2 + for i in range(2): # 0 + try: + C().foo # 1 x 2 + except AttributeError: + ticks += 3 # 3 x 2 + +class C: + def __getattr__(self, name): + # 28 calls + # 1 tick, local + global ticks + ticks += 1 + raise AttributeError + +if __name__ == "__main__": + test_main() Modified: python/trunk/Lib/test/test_profile.py ============================================================================== --- python/trunk/Lib/test/test_profile.py (original) +++ python/trunk/Lib/test/test_profile.py Wed Feb 8 13:53:56 2006 @@ -1,8 +1,6 @@ """Test suite for the profile module.""" -import profile -import os -from test.test_support import TESTFN, vereq +import profile, pstats, sys # In order to have reproducible time, we simulate a timer in the global # variable 'ticks', which represents simulated time in milliseconds. @@ -10,50 +8,87 @@ # included in the profile and would appear to consume all the time.) ticks = 0 -def test_1(): +# IMPORTANT: this is an output test. *ALL* NUMBERS in the expected +# output are relevant. If you change the formatting of pstats, +# please don't just regenerate output/test_profile without checking +# very carefully that not a single number has changed. 
+ +def test_main(): global ticks - ticks = 0 + ticks = 42000 prof = profile.Profile(timer) - prof.runctx("testfunc()", globals(), globals()) - prof.print_stats() + prof.runctx("testfunc()", globals(), locals()) + assert ticks == 43000, ticks + st = pstats.Stats(prof) + st.strip_dirs().sort_stats('stdname').print_stats() + st.print_callees() + st.print_callers() def timer(): return ticks*0.001 def testfunc(): # 1 call - # 1000 ticks total: 400 ticks local, 600 ticks in subfunctions + # 1000 ticks total: 270 ticks local, 730 ticks in subfunctions global ticks - ticks += 199 + ticks += 99 helper() # 300 helper() # 300 - ticks += 201 + ticks += 171 + factorial(14) # 130 + +def factorial(n): + # 23 calls total + # 170 ticks total, 150 ticks local + # 3 primitive calls, 130, 20 and 20 ticks total + # including 116, 17, 17 ticks local + global ticks + if n > 0: + ticks += n + return mul(n, factorial(n-1)) + else: + ticks += 11 + return 1 + +def mul(a, b): + # 20 calls + # 1 tick, local + global ticks + ticks += 1 + return a * b def helper(): # 2 calls - # 300 ticks total: 40 ticks local, 260 ticks in subfunctions + # 300 ticks total: 20 ticks local, 260 ticks in subfunctions global ticks ticks += 1 helper1() # 30 - ticks += 3 + ticks += 2 helper1() # 30 ticks += 6 helper2() # 50 - ticks += 5 - helper2() # 50 - ticks += 4 + ticks += 3 helper2() # 50 - ticks += 7 + ticks += 2 helper2() # 50 - ticks += 14 + ticks += 5 + helper2_indirect() # 70 + ticks += 1 def helper1(): # 4 calls # 30 ticks total: 29 ticks local, 1 tick in subfunctions global ticks ticks += 10 - hasattr(C(), "foo") + hasattr(C(), "foo") # 1 ticks += 19 + lst = [] + lst.append(42) # 0 + sys.exc_info() # 0 + +def helper2_indirect(): + helper2() # 50 + factorial(3) # 20 def helper2(): # 8 calls @@ -70,7 +105,7 @@ # 10 ticks total: 8 ticks local, 2 ticks in subfunctions global ticks ticks += 2 - for i in range(2): + for i in range(2): # 0 try: C().foo # 1 x 2 except AttributeError: @@ -84,36 +119,5 @@ ticks += 1 raise AttributeError - -def test_2(): - d = globals().copy() - def testfunc(): - global x - x = 1 - d['testfunc'] = testfunc - profile.runctx("testfunc()", d, d, TESTFN) - vereq (x, 1) - os.unlink (TESTFN) - -def test_3(): - result = [] - def testfunc1(): - try: len(None) - except: pass - try: len(None) - except: pass - result.append(True) - def testfunc2(): - testfunc1() - testfunc1() - profile.runctx("testfunc2()", locals(), locals(), TESTFN) - vereq(result, [True, True]) - os.unlink(TESTFN) - -def test_main(): - test_1() - test_2() - test_3() - if __name__ == "__main__": test_main() Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Feb 8 13:53:56 2006 @@ -2019,6 +2019,11 @@ Library ------- +- Added a new module: cProfile, a C profiler with the same interface as the + profile module. cProfile avoids some of the drawbacks of the hotshot + profiler and provides a bit more information than the other two profilers. + Based on "lsprof" (patch #1212837). + - Bug #1266283: The new function "lexists" is now in os.path.__all__. - Bug #981530: Fix UnboundLocalError in shutil.rmtree(). 
This affects Added: python/trunk/Modules/_lsprof.c ============================================================================== --- (empty file) +++ python/trunk/Modules/_lsprof.c Wed Feb 8 13:53:56 2006 @@ -0,0 +1,867 @@ +#include "Python.h" +#include "compile.h" +#include "frameobject.h" +#include "structseq.h" +#include "rotatingtree.h" + +#if !defined(HAVE_LONG_LONG) +#error "This module requires long longs!" +#endif + +/*** Selection of a high-precision timer ***/ + +#ifdef MS_WINDOWS + +#include + +static PY_LONG_LONG +hpTimer(void) +{ + LARGE_INTEGER li; + QueryPerformanceCounter(&li); + return li.QuadPart; +} + +static double +hpTimerUnit(void) +{ + LARGE_INTEGER li; + if (QueryPerformanceFrequency(&li)) + return 1000.0 / li.QuadPart; + else + return 0.001; /* unlikely */ +} + +#else /* !MS_WINDOWS */ + +#ifndef HAVE_GETTIMEOFDAY +#error "This module requires gettimeofday() on non-Windows platforms!" +#endif + +#if (defined(PYOS_OS2) && defined(PYCC_GCC)) +#include +#else +#include +#include +#endif + +static PY_LONG_LONG +hpTimer(void) +{ + struct timeval tv; + PY_LONG_LONG ret; +#ifdef GETTIMEOFDAY_NO_TZ + gettimeofday(&tv); +#else + gettimeofday(&tv, (struct timezone *)NULL); +#endif + ret = tv.tv_sec; + ret = ret * 1000000 + tv.tv_usec; + return ret; +} + +static double +hpTimerUnit(void) +{ + return 0.001; +} + +#endif /* MS_WINDOWS */ + +/************************************************************/ +/* Written by Brett Rosen and Ted Czotter */ + +struct _ProfilerEntry; + +/* represents a function called from another function */ +typedef struct _ProfilerSubEntry { + rotating_node_t header; + PY_LONG_LONG tt; + PY_LONG_LONG it; + long callcount; + long recursivecallcount; + long recursionLevel; +} ProfilerSubEntry; + +/* represents a function or user defined block */ +typedef struct _ProfilerEntry { + rotating_node_t header; + PyObject *userObj; /* PyCodeObject, or a descriptive str for builtins */ + PY_LONG_LONG tt; /* total time in this entry */ + PY_LONG_LONG it; /* inline time in this entry (not in subcalls) */ + long callcount; /* how many times this was called */ + long recursivecallcount; /* how many times called recursively */ + long recursionLevel; + rotating_node_t *calls; +} ProfilerEntry; + +typedef struct _ProfilerContext { + PY_LONG_LONG t0; + PY_LONG_LONG subt; + struct _ProfilerContext *previous; + ProfilerEntry *ctxEntry; +} ProfilerContext; + +typedef struct { + PyObject_HEAD + rotating_node_t *profilerEntries; + ProfilerContext *currentProfilerContext; + ProfilerContext *freelistProfilerContext; + int flags; + PyObject *externalTimer; + double externalTimerUnit; +} ProfilerObject; + +#define POF_ENABLED 0x001 +#define POF_SUBCALLS 0x002 +#define POF_BUILTINS 0x004 +#define POF_NOMEMORY 0x100 + +staticforward PyTypeObject PyProfiler_Type; + +#define PyProfiler_Check(op) PyObject_TypeCheck(op, &PyProfiler_Type) +#define PyProfiler_CheckExact(op) ((op)->ob_type == &PyProfiler_Type) + +/*** External Timers ***/ + +#define DOUBLE_TIMER_PRECISION 4294967296.0 +static PyObject *empty_tuple; + +static PY_LONG_LONG CallExternalTimer(ProfilerObject *pObj) +{ + PY_LONG_LONG result; + PyObject *o = PyObject_Call(pObj->externalTimer, empty_tuple, NULL); + if (o == NULL) { + PyErr_WriteUnraisable(pObj->externalTimer); + return 0; + } + if (pObj->externalTimerUnit > 0.0) { + /* interpret the result as an integer that will be scaled + in profiler_getstats() */ + result = PyLong_AsLongLong(o); + } + else { + /* interpret the result as a double measured in seconds. 
+ As the profiler works with PY_LONG_LONG internally + we convert it to a large integer */ + double val = PyFloat_AsDouble(o); + /* error handling delayed to the code below */ + result = (PY_LONG_LONG) (val * DOUBLE_TIMER_PRECISION); + } + Py_DECREF(o); + if (PyErr_Occurred()) { + PyErr_WriteUnraisable((PyObject *) pObj); + return 0; + } + return result; +} + +#define CALL_TIMER(pObj) ((pObj)->externalTimer ? \ + CallExternalTimer(pObj) : \ + hpTimer()) + +/*** ProfilerObject ***/ + +static PyObject * +normalizeUserObj(PyObject *obj) +{ + PyCFunctionObject *fn; + if (!PyCFunction_Check(obj)) { + Py_INCREF(obj); + return obj; + } + /* Replace built-in function objects with a descriptive string + because of built-in methods -- keeping a reference to + __self__ is probably not a good idea. */ + fn = (PyCFunctionObject *)obj; + + if (fn->m_self == NULL) { + /* built-in function: look up the module name */ + PyObject *mod = fn->m_module; + char *modname; + if (mod && PyString_Check(mod)) { + modname = PyString_AS_STRING(mod); + } + else if (mod && PyModule_Check(mod)) { + modname = PyModule_GetName(mod); + if (modname == NULL) { + PyErr_Clear(); + modname = "__builtin__"; + } + } + else { + modname = "__builtin__"; + } + if (strcmp(modname, "__builtin__") != 0) + return PyString_FromFormat("<%s.%s>", + modname, + fn->m_ml->ml_name); + else + return PyString_FromFormat("<%s>", + fn->m_ml->ml_name); + } + else { + /* built-in method: try to return + repr(getattr(type(__self__), __name__)) + */ + PyObject *self = fn->m_self; + PyObject *name = PyString_FromString(fn->m_ml->ml_name); + if (name != NULL) { + PyObject *mo = _PyType_Lookup(self->ob_type, name); + Py_XINCREF(mo); + Py_DECREF(name); + if (mo != NULL) { + PyObject *res = PyObject_Repr(mo); + Py_DECREF(mo); + if (res != NULL) + return res; + } + } + PyErr_Clear(); + return PyString_FromFormat("", + fn->m_ml->ml_name); + } +} + +static ProfilerEntry* +newProfilerEntry(ProfilerObject *pObj, void *key, PyObject *userObj) +{ + ProfilerEntry *self; + self = (ProfilerEntry*) malloc(sizeof(ProfilerEntry)); + if (self == NULL) { + pObj->flags |= POF_NOMEMORY; + return NULL; + } + userObj = normalizeUserObj(userObj); + if (userObj == NULL) { + PyErr_Clear(); + free(self); + pObj->flags |= POF_NOMEMORY; + return NULL; + } + self->header.key = key; + self->userObj = userObj; + self->tt = 0; + self->it = 0; + self->callcount = 0; + self->recursivecallcount = 0; + self->recursionLevel = 0; + self->calls = EMPTY_ROTATING_TREE; + RotatingTree_Add(&pObj->profilerEntries, &self->header); + return self; +} + +static ProfilerEntry* +getEntry(ProfilerObject *pObj, void *key) +{ + return (ProfilerEntry*) RotatingTree_Get(&pObj->profilerEntries, key); +} + +static ProfilerSubEntry * +getSubEntry(ProfilerObject *pObj, ProfilerEntry *caller, ProfilerEntry* entry) +{ + return (ProfilerSubEntry*) RotatingTree_Get(&caller->calls, + (void *)entry); +} + +static ProfilerSubEntry * +newSubEntry(ProfilerObject *pObj, ProfilerEntry *caller, ProfilerEntry* entry) +{ + ProfilerSubEntry *self; + self = (ProfilerSubEntry*) malloc(sizeof(ProfilerSubEntry)); + if (self == NULL) { + pObj->flags |= POF_NOMEMORY; + return NULL; + } + self->header.key = (void *)entry; + self->tt = 0; + self->it = 0; + self->callcount = 0; + self->recursivecallcount = 0; + self->recursionLevel = 0; + RotatingTree_Add(&caller->calls, &self->header); + return self; +} + +static int freeSubEntry(rotating_node_t *header, void *arg) +{ + ProfilerSubEntry *subentry = (ProfilerSubEntry*) header; + 
free(subentry); + return 0; +} + +static int freeEntry(rotating_node_t *header, void *arg) +{ + ProfilerEntry *entry = (ProfilerEntry*) header; + RotatingTree_Enum(entry->calls, freeSubEntry, NULL); + Py_DECREF(entry->userObj); + free(entry); + return 0; +} + +static void clearEntries(ProfilerObject *pObj) +{ + RotatingTree_Enum(pObj->profilerEntries, freeEntry, NULL); + pObj->profilerEntries = EMPTY_ROTATING_TREE; + /* release the memory hold by the free list of ProfilerContexts */ + while (pObj->freelistProfilerContext) { + ProfilerContext *c = pObj->freelistProfilerContext; + pObj->freelistProfilerContext = c->previous; + free(c); + } +} + +static void +initContext(ProfilerObject *pObj, ProfilerContext *self, ProfilerEntry *entry) +{ + self->ctxEntry = entry; + self->subt = 0; + self->previous = pObj->currentProfilerContext; + pObj->currentProfilerContext = self; + ++entry->recursionLevel; + if ((pObj->flags & POF_SUBCALLS) && self->previous) { + /* find or create an entry for me in my caller's entry */ + ProfilerEntry *caller = self->previous->ctxEntry; + ProfilerSubEntry *subentry = getSubEntry(pObj, caller, entry); + if (subentry == NULL) + subentry = newSubEntry(pObj, caller, entry); + if (subentry) + ++subentry->recursionLevel; + } + self->t0 = CALL_TIMER(pObj); +} + +static void +Stop(ProfilerObject *pObj, ProfilerContext *self, ProfilerEntry *entry) +{ + PY_LONG_LONG tt = CALL_TIMER(pObj) - self->t0; + PY_LONG_LONG it = tt - self->subt; + if (self->previous) + self->previous->subt += tt; + pObj->currentProfilerContext = self->previous; + if (--entry->recursionLevel == 0) + entry->tt += tt; + else + ++entry->recursivecallcount; + entry->it += it; + entry->callcount++; + if ((pObj->flags & POF_SUBCALLS) && self->previous) { + /* find or create an entry for me in my caller's entry */ + ProfilerEntry *caller = self->previous->ctxEntry; + ProfilerSubEntry *subentry = getSubEntry(pObj, caller, entry); + if (subentry) { + if (--subentry->recursionLevel == 0) + subentry->tt += tt; + else + ++subentry->recursivecallcount; + subentry->it += it; + ++subentry->callcount; + } + } +} + +static void +ptrace_enter_call(PyObject *self, void *key, PyObject *userObj) +{ + /* entering a call to the function identified by 'key' + (which can be a PyCodeObject or a PyMethodDef pointer) */ + ProfilerObject *pObj = (ProfilerObject*)self; + ProfilerEntry *profEntry; + ProfilerContext *pContext; + + profEntry = getEntry(pObj, key); + if (profEntry == NULL) { + profEntry = newProfilerEntry(pObj, key, userObj); + if (profEntry == NULL) + return; + } + /* grab a ProfilerContext out of the free list */ + pContext = pObj->freelistProfilerContext; + if (pContext) { + pObj->freelistProfilerContext = pContext->previous; + } + else { + /* free list exhausted, allocate a new one */ + pContext = (ProfilerContext*) + malloc(sizeof(ProfilerContext)); + if (pContext == NULL) { + pObj->flags |= POF_NOMEMORY; + return; + } + } + initContext(pObj, pContext, profEntry); +} + +static void +ptrace_leave_call(PyObject *self, void *key) +{ + /* leaving a call to the function identified by 'key' */ + ProfilerObject *pObj = (ProfilerObject*)self; + ProfilerEntry *profEntry; + ProfilerContext *pContext; + + pContext = pObj->currentProfilerContext; + if (pContext == NULL) + return; + profEntry = getEntry(pObj, key); + if (profEntry) { + Stop(pObj, pContext, profEntry); + } + else { + pObj->currentProfilerContext = pContext->previous; + } + /* put pContext into the free list */ + pContext->previous = pObj->freelistProfilerContext; + 
pObj->freelistProfilerContext = pContext; +} + +static int +profiler_callback(PyObject *self, PyFrameObject *frame, int what, + PyObject *arg) +{ + switch (what) { + + /* the 'frame' of a called function is about to start its execution */ + case PyTrace_CALL: + ptrace_enter_call(self, (void *)frame->f_code, + (PyObject *)frame->f_code); + break; + + /* the 'frame' of a called function is about to finish + (either normally or with an exception) */ + case PyTrace_RETURN: + ptrace_leave_call(self, (void *)frame->f_code); + break; + + /* case PyTrace_EXCEPTION: + If the exception results in the function exiting, a + PyTrace_RETURN event will be generated, so we don't need to + handle it. */ + +#ifdef PyTrace_C_CALL /* not defined in Python <= 2.3 */ + /* the Python function 'frame' is issuing a call to the built-in + function 'arg' */ + case PyTrace_C_CALL: + if ((((ProfilerObject *)self)->flags & POF_BUILTINS) + && PyCFunction_Check(arg)) { + ptrace_enter_call(self, + ((PyCFunctionObject *)arg)->m_ml, + arg); + } + break; + + /* the call to the built-in function 'arg' is returning into its + caller 'frame' */ + case PyTrace_C_RETURN: /* ...normally */ + case PyTrace_C_EXCEPTION: /* ...with an exception set */ + if ((((ProfilerObject *)self)->flags & POF_BUILTINS) + && PyCFunction_Check(arg)) { + ptrace_leave_call(self, + ((PyCFunctionObject *)arg)->m_ml); + } + break; +#endif + + default: + break; + } + return 0; +} + +static int +pending_exception(ProfilerObject *pObj) +{ + if (pObj->flags & POF_NOMEMORY) { + pObj->flags -= POF_NOMEMORY; + PyErr_SetString(PyExc_MemoryError, + "memory was exhausted while profiling"); + return -1; + } + return 0; +} + +/************************************************************/ + +static PyStructSequence_Field profiler_entry_fields[] = { + {"code", "code object or built-in function name"}, + {"callcount", "how many times this was called"}, + {"reccallcount", "how many times called recursively"}, + {"totaltime", "total time in this entry"}, + {"inlinetime", "inline time in this entry (not in subcalls)"}, + {"calls", "details of the calls"}, + {0} +}; + +static PyStructSequence_Field profiler_subentry_fields[] = { + {"code", "called code object or built-in function name"}, + {"callcount", "how many times this is called"}, + {"reccallcount", "how many times this is called recursively"}, + {"totaltime", "total time spent in this call"}, + {"inlinetime", "inline time (not in further subcalls)"}, + {0} +}; + +static PyStructSequence_Desc profiler_entry_desc = { + "_lsprof.profiler_entry", /* name */ + NULL, /* doc */ + profiler_entry_fields, + 6 +}; + +static PyStructSequence_Desc profiler_subentry_desc = { + "_lsprof.profiler_subentry", /* name */ + NULL, /* doc */ + profiler_subentry_fields, + 5 +}; + +static PyTypeObject StatsEntryType; +static PyTypeObject StatsSubEntryType; + + +typedef struct { + PyObject *list; + PyObject *sublist; + double factor; +} statscollector_t; + +static int statsForSubEntry(rotating_node_t *node, void *arg) +{ + ProfilerSubEntry *sentry = (ProfilerSubEntry*) node; + statscollector_t *collect = (statscollector_t*) arg; + ProfilerEntry *entry = (ProfilerEntry*) sentry->header.key; + int err; + PyObject *sinfo; + sinfo = PyObject_CallFunction((PyObject*) &StatsSubEntryType, + "((Olldd))", + entry->userObj, + sentry->callcount, + sentry->recursivecallcount, + collect->factor * sentry->tt, + collect->factor * sentry->it); + if (sinfo == NULL) + return -1; + err = PyList_Append(collect->sublist, sinfo); + Py_DECREF(sinfo); + return err; 
+} + +static int statsForEntry(rotating_node_t *node, void *arg) +{ + ProfilerEntry *entry = (ProfilerEntry*) node; + statscollector_t *collect = (statscollector_t*) arg; + PyObject *info; + int err; + if (entry->callcount == 0) + return 0; /* skip */ + + if (entry->calls != EMPTY_ROTATING_TREE) { + collect->sublist = PyList_New(0); + if (collect->sublist == NULL) + return -1; + if (RotatingTree_Enum(entry->calls, + statsForSubEntry, collect) != 0) { + Py_DECREF(collect->sublist); + return -1; + } + } + else { + Py_INCREF(Py_None); + collect->sublist = Py_None; + } + + info = PyObject_CallFunction((PyObject*) &StatsEntryType, + "((OllddO))", + entry->userObj, + entry->callcount, + entry->recursivecallcount, + collect->factor * entry->tt, + collect->factor * entry->it, + collect->sublist); + Py_DECREF(collect->sublist); + if (info == NULL) + return -1; + err = PyList_Append(collect->list, info); + Py_DECREF(info); + return err; +} + +PyDoc_STRVAR(getstats_doc, "\ +getstats() -> list of profiler_entry objects\n\ +\n\ +Return all information collected by the profiler.\n\ +Each profiler_entry is a tuple-like object with the\n\ +following attributes:\n\ +\n\ + code code object\n\ + callcount how many times this was called\n\ + reccallcount how many times called recursively\n\ + totaltime total time in this entry\n\ + inlinetime inline time in this entry (not in subcalls)\n\ + calls details of the calls\n\ +\n\ +The calls attribute is either None or a list of\n\ +profiler_subentry objects:\n\ +\n\ + code called code object\n\ + callcount how many times this is called\n\ + reccallcount how many times this is called recursively\n\ + totaltime total time spent in this call\n\ + inlinetime inline time (not in further subcalls)\n\ +"); + +static PyObject* +profiler_getstats(ProfilerObject *pObj, PyObject* noarg) +{ + statscollector_t collect; + if (pending_exception(pObj)) + return NULL; + if (!pObj->externalTimer) + collect.factor = hpTimerUnit(); + else if (pObj->externalTimerUnit > 0.0) + collect.factor = pObj->externalTimerUnit; + else + collect.factor = 1.0 / DOUBLE_TIMER_PRECISION; + collect.list = PyList_New(0); + if (collect.list == NULL) + return NULL; + if (RotatingTree_Enum(pObj->profilerEntries, statsForEntry, &collect) + != 0) { + Py_DECREF(collect.list); + return NULL; + } + return collect.list; +} + +static int +setSubcalls(ProfilerObject *pObj, int nvalue) +{ + if (nvalue == 0) + pObj->flags &= ~POF_SUBCALLS; + else if (nvalue > 0) + pObj->flags |= POF_SUBCALLS; + return 0; +} + +static int +setBuiltins(ProfilerObject *pObj, int nvalue) +{ + if (nvalue == 0) + pObj->flags &= ~POF_BUILTINS; + else if (nvalue > 0) { +#ifndef PyTrace_C_CALL + PyErr_SetString(PyExc_ValueError, + "builtins=True requires Python >= 2.4"); + return -1; +#else + pObj->flags |= POF_BUILTINS; +#endif + } + return 0; +} + +PyDoc_STRVAR(enable_doc, "\ +enable(subcalls=True, builtins=True)\n\ +\n\ +Start collecting profiling information.\n\ +If 'subcalls' is True, also records for each function\n\ +statistics separated according to its current caller.\n\ +If 'builtins' is True, records the time spent in\n\ +built-in functions separately from their caller.\n\ +"); + +static PyObject* +profiler_enable(ProfilerObject *self, PyObject *args, PyObject *kwds) +{ + int subcalls = -1; + int builtins = -1; + static const char *kwlist[] = {"subcalls", "builtins", 0}; + if (!PyArg_ParseTupleAndKeywords(args, kwds, "|ii:enable", + kwlist, &subcalls, &builtins)) + return NULL; + if (setSubcalls(self, subcalls) < 0 || 
setBuiltins(self, builtins) < 0) + return NULL; + PyEval_SetProfile(profiler_callback, (PyObject*)self); + self->flags |= POF_ENABLED; + Py_INCREF(Py_None); + return Py_None; +} + +static void +flush_unmatched(ProfilerObject *pObj) +{ + while (pObj->currentProfilerContext) { + ProfilerContext *pContext = pObj->currentProfilerContext; + ProfilerEntry *profEntry= pContext->ctxEntry; + if (profEntry) + Stop(pObj, pContext, profEntry); + else + pObj->currentProfilerContext = pContext->previous; + if (pContext) + free(pContext); + } + +} + +PyDoc_STRVAR(disable_doc, "\ +disable()\n\ +\n\ +Stop collecting profiling information.\n\ +"); + +static PyObject* +profiler_disable(ProfilerObject *self, PyObject* noarg) +{ + self->flags &= ~POF_ENABLED; + PyEval_SetProfile(NULL, NULL); + flush_unmatched(self); + if (pending_exception(self)) + return NULL; + Py_INCREF(Py_None); + return Py_None; +} + +PyDoc_STRVAR(clear_doc, "\ +clear()\n\ +\n\ +Clear all profiling information collected so far.\n\ +"); + +static PyObject* +profiler_clear(ProfilerObject *pObj, PyObject* noarg) +{ + clearEntries(pObj); + Py_INCREF(Py_None); + return Py_None; +} + +static void +profiler_dealloc(ProfilerObject *op) +{ + if (op->flags & POF_ENABLED) + PyEval_SetProfile(NULL, NULL); + flush_unmatched(op); + clearEntries(op); + Py_XDECREF(op->externalTimer); + op->ob_type->tp_free(op); +} + +static int +profiler_init(ProfilerObject *pObj, PyObject *args, PyObject *kw) +{ + PyObject *o; + PyObject *timer = NULL; + double timeunit = 0.0; + int subcalls = 1; +#ifdef PyTrace_C_CALL + int builtins = 1; +#else + int builtins = 0; +#endif + static const char *kwlist[] = {"timer", "timeunit", + "subcalls", "builtins", 0}; + + if (!PyArg_ParseTupleAndKeywords(args, kw, "|Odii:Profiler", kwlist, + &timer, &timeunit, + &subcalls, &builtins)) + return -1; + + if (setSubcalls(pObj, subcalls) < 0 || setBuiltins(pObj, builtins) < 0) + return -1; + o = pObj->externalTimer; + pObj->externalTimer = timer; + Py_XINCREF(timer); + Py_XDECREF(o); + pObj->externalTimerUnit = timeunit; + return 0; +} + +static PyMethodDef profiler_methods[] = { + {"getstats", (PyCFunction)profiler_getstats, + METH_NOARGS, getstats_doc}, + {"enable", (PyCFunction)profiler_enable, + METH_VARARGS | METH_KEYWORDS, enable_doc}, + {"disable", (PyCFunction)profiler_disable, + METH_NOARGS, disable_doc}, + {"clear", (PyCFunction)profiler_clear, + METH_NOARGS, clear_doc}, + {NULL, NULL} +}; + +PyDoc_STRVAR(profiler_doc, "\ +Profiler(custom_timer=None, time_unit=None, subcalls=True, builtins=True)\n\ +\n\ + Builds a profiler object using the specified timer function.\n\ + The default timer is a fast built-in one based on real time.\n\ + For custom timer functions returning integers, time_unit can\n\ + be a float specifying a scale (i.e. 
how long each integer unit\n\ + is, in seconds).\n\ +"); + +statichere PyTypeObject PyProfiler_Type = { + PyObject_HEAD_INIT(NULL) + 0, /* ob_size */ + "_lsprof.Profiler", /* tp_name */ + sizeof(ProfilerObject), /* tp_basicsize */ + 0, /* tp_itemsize */ + (destructor)profiler_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */ + profiler_doc, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + profiler_methods, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + (initproc)profiler_init, /* tp_init */ + PyType_GenericAlloc, /* tp_alloc */ + PyType_GenericNew, /* tp_new */ + PyObject_Del, /* tp_free */ +}; + +static PyMethodDef moduleMethods[] = { + {NULL, NULL} +}; + +PyMODINIT_FUNC +init_lsprof(void) +{ + PyObject *module, *d; + module = Py_InitModule3("_lsprof", moduleMethods, "Fast profiler"); + d = PyModule_GetDict(module); + if (PyType_Ready(&PyProfiler_Type) < 0) + return; + PyDict_SetItemString(d, "Profiler", (PyObject *)&PyProfiler_Type); + + PyStructSequence_InitType(&StatsEntryType, &profiler_entry_desc); + PyStructSequence_InitType(&StatsSubEntryType, &profiler_subentry_desc); + Py_INCREF((PyObject*) &StatsEntryType); + Py_INCREF((PyObject*) &StatsSubEntryType); + PyModule_AddObject(module, "profiler_entry", + (PyObject*) &StatsEntryType); + PyModule_AddObject(module, "profiler_subentry", + (PyObject*) &StatsSubEntryType); + empty_tuple = PyTuple_New(0); +} Added: python/trunk/Modules/rotatingtree.c ============================================================================== --- (empty file) +++ python/trunk/Modules/rotatingtree.c Wed Feb 8 13:53:56 2006 @@ -0,0 +1,121 @@ +#include "rotatingtree.h" + +#define KEY_LOWER_THAN(key1, key2) ((char*)(key1) < (char*)(key2)) + +/* The randombits() function below is a fast-and-dirty generator that + * is probably irregular enough for our purposes. Note that it's biased: + * I think that ones are slightly more probable than zeroes. It's not + * important here, though. + */ + +static unsigned int random_value = 1; +static unsigned int random_stream = 0; + +static int +randombits(int bits) +{ + int result; + if (random_stream < (1<>= bits; + return result; +} + + +/* Insert a new node into the tree. + (*root) is modified to point to the new root. */ +void +RotatingTree_Add(rotating_node_t **root, rotating_node_t *node) +{ + while (*root != NULL) { + if (KEY_LOWER_THAN(node->key, (*root)->key)) + root = &((*root)->left); + else + root = &((*root)->right); + } + node->left = NULL; + node->right = NULL; + *root = node; +} + +/* Locate the node with the given key. This is the most complicated + function because it occasionally rebalances the tree to move the + resulting node closer to the root. 
*/ +rotating_node_t * +RotatingTree_Get(rotating_node_t **root, void *key) +{ + if (randombits(3) != 4) { + /* Fast path, no rebalancing */ + rotating_node_t *node = *root; + while (node != NULL) { + if (node->key == key) + return node; + if (KEY_LOWER_THAN(key, node->key)) + node = node->left; + else + node = node->right; + } + return NULL; + } + else { + rotating_node_t **pnode = root; + rotating_node_t *node = *pnode; + rotating_node_t *next; + int rotate; + if (node == NULL) + return NULL; + while (1) { + if (node->key == key) + return node; + rotate = !randombits(1); + if (KEY_LOWER_THAN(key, node->key)) { + next = node->left; + if (next == NULL) + return NULL; + if (rotate) { + node->left = next->right; + next->right = node; + *pnode = next; + } + else + pnode = &(node->left); + } + else { + next = node->right; + if (next == NULL) + return NULL; + if (rotate) { + node->right = next->left; + next->left = node; + *pnode = next; + } + else + pnode = &(node->right); + } + node = next; + } + } +} + +/* Enumerate all nodes in the tree. The callback enumfn() should return + zero to continue the enumeration, or non-zero to interrupt it. + A non-zero value is directly returned by RotatingTree_Enum(). */ +int +RotatingTree_Enum(rotating_node_t *root, rotating_tree_enum_fn enumfn, + void *arg) +{ + int result; + rotating_node_t *node; + while (root != NULL) { + result = RotatingTree_Enum(root->left, enumfn, arg); + if (result != 0) return result; + node = root->right; + result = enumfn(root, arg); + if (result != 0) return result; + root = node; + } + return 0; +} Added: python/trunk/Modules/rotatingtree.h ============================================================================== --- (empty file) +++ python/trunk/Modules/rotatingtree.h Wed Feb 8 13:53:56 2006 @@ -0,0 +1,27 @@ +/* "Rotating trees" (Armin Rigo) + * + * Google "splay trees" for the general idea. + * + * It's a dict-like data structure that works best when accesses are not + * random, but follow a strong pattern. The one implemented here is for + * accesses patterns where the same small set of keys is looked up over + * and over again, and this set of keys evolves slowly over time. 
+ */ + +#include + +#define EMPTY_ROTATING_TREE ((rotating_node_t *)NULL) + +typedef struct rotating_node_s rotating_node_t; +typedef int (*rotating_tree_enum_fn) (rotating_node_t *node, void *arg); + +struct rotating_node_s { + void *key; + rotating_node_t *left; + rotating_node_t *right; +}; + +void RotatingTree_Add(rotating_node_t **root, rotating_node_t *node); +rotating_node_t* RotatingTree_Get(rotating_node_t **root, void *key); +int RotatingTree_Enum(rotating_node_t *root, rotating_tree_enum_fn enumfn, + void *arg); Modified: python/trunk/setup.py ============================================================================== --- python/trunk/setup.py (original) +++ python/trunk/setup.py Wed Feb 8 13:53:56 2006 @@ -328,7 +328,6 @@ # Some modules that are normally always on: exts.append( Extension('regex', ['regexmodule.c', 'regexpr.c']) ) - exts.append( Extension('_hotshot', ['_hotshot.c']) ) exts.append( Extension('_weakref', ['_weakref.c']) ) # array objects @@ -363,6 +362,9 @@ exts.append( Extension("functional", ["functionalmodule.c"]) ) # Python C API test module exts.append( Extension('_testcapi', ['_testcapimodule.c']) ) + # profilers (_lsprof is for cProfile.py) + exts.append( Extension('_hotshot', ['_hotshot.c']) ) + exts.append( Extension('_lsprof', ['_lsprof.c', 'rotatingtree.c']) ) # static Unicode character database if have_unicode: exts.append( Extension('unicodedata', ['unicodedata.c']) ) From python-checkins at python.org Wed Feb 8 14:33:25 2006 From: python-checkins at python.org (barry.warsaw) Date: Wed, 8 Feb 2006 14:33:25 +0100 (CET) Subject: [Python-checkins] r42270 - in python/branches/release23-maint/Lib/email: Charset.py Generator.py Message.py test/test_email.py test/test_email_codecs.py Message-ID: <20060208133325.62DA61E43AC@bag.python.org> Author: barry.warsaw Date: Wed Feb 8 14:33:20 2006 New Revision: 42270 Modified: python/branches/release23-maint/Lib/email/Charset.py python/branches/release23-maint/Lib/email/Generator.py python/branches/release23-maint/Lib/email/Message.py python/branches/release23-maint/Lib/email/test/test_email.py python/branches/release23-maint/Lib/email/test/test_email_codecs.py Log: Patches to address SF bugs 1409538 (Japanese codecs in CODEC_MAP) and 1409455 (.set_payload() gives bad .get_payload() results). Specific changes include: Simplfy the default CODEC_MAP in Charset.py to not include the Japanese and Korean codecs. The names of the codecs are different depending on whether you're using Python 2.4 and 2.5, which include the codecs by default, or earlier Python's which provide the codecs under different names as a third party library. Now, we attempt to discover which (if either) is available and populate the CODEC_MAP as appropriate. Message.set_charset(): When the message does not already have a Content-Transfer-Encoding header, instead of just adding the header, we also encode the body as defined by the assigned Charset. As before, if the body_encoding is callable, we just call that. If not, then we add a call to body_encode() before setting the header. This way, we guarantee that a message's text payload is always encoded properly. Remove the payload encoding code from Generator._handle_text(). With the above patch, this would cause the body to be doubly encoded. Doing this in the Message class is better than only doing it in the Generator. Added some new tests to ensure everything works correctly. 
Also changed the way the test_email_codecs.py tests get added (using the same lookup code that the CODEC_MAP adjustments use). This resolves both issues for email 2.5/Python 2.3. I will patch forward to email 3.0 for both Python 2.4 and 2.5. Modified: python/branches/release23-maint/Lib/email/Charset.py ============================================================================== --- python/branches/release23-maint/Lib/email/Charset.py (original) +++ python/branches/release23-maint/Lib/email/Charset.py Wed Feb 8 14:33:20 2006 @@ -1,5 +1,5 @@ -# Copyright (C) 2001,2002 Python Software Foundation -# Author: che at debian.org (Ben Gertzfield), barry at zope.com (Barry Warsaw) +# Copyright (C) 2001-2006 Python Software Foundation +# Author: che at debian.org (Ben Gertzfield), barry at python.org (Barry Warsaw) from types import UnicodeType from email.Encoders import encode_7or8bit @@ -99,20 +99,13 @@ # of stability and useability. CODEC_MAP = { - 'euc-jp': 'japanese.euc-jp', - 'iso-2022-jp': 'japanese.iso-2022-jp', - 'shift_jis': 'japanese.shift_jis', - 'euc-kr': 'korean.euc-kr', - 'ks_c_5601-1987': 'korean.cp949', - 'iso-2022-kr': 'korean.iso-2022-kr', - 'johab': 'korean.johab', - 'gb2132': 'eucgb2312_cn', - 'big5': 'big5_tw', - 'utf-8': 'utf-8', + 'gb2132': 'eucgb2312_cn', + 'big5': 'big5_tw', + 'utf-8': 'utf-8', # Hack: We don't want *any* conversion for stuff marked us-ascii, as all # sorts of garbage might be sent to us in the guise of 7-bit us-ascii. # Let that stuff pass through without conversion to/from Unicode. - 'us-ascii': None, + 'us-ascii': None, } @@ -165,6 +158,26 @@ CODEC_MAP[charset] = codecname +def _find_asian_codec(charset, language): + try: + unicode('foo', charset) + return charset + except LookupError: + try: + codec = language + '.' + charset + unicode('foo', codec) + return codec + except LookupError: + return None + + +for _charset in ('euc-jp', 'iso-2022-jp', 'shift_jis'): + add_codec(_charset, _find_asian_codec(_charset, 'japanese') or _charset) + +for _charset in ('euc-kr', 'cp949', 'iso-2022-kr', 'johab'): + add_codec(_charset, _find_asian_codec(_charset, 'korean') or _charset) + + class Charset: """Map character sets to their email properties. @@ -229,7 +242,7 @@ self.input_codec = CODEC_MAP.get(self.input_charset, self.input_charset) self.output_codec = CODEC_MAP.get(self.output_charset, - self.input_codec) + self.input_codec) def __str__(self): return self.input_charset.lower() Modified: python/branches/release23-maint/Lib/email/Generator.py ============================================================================== --- python/branches/release23-maint/Lib/email/Generator.py (original) +++ python/branches/release23-maint/Lib/email/Generator.py Wed Feb 8 14:33:20 2006 @@ -1,8 +1,7 @@ -# Copyright (C) 2001,2002 Python Software Foundation -# Author: barry at zope.com (Barry Warsaw) +# Copyright (C) 2001-2006 Python Software Foundation +# Author: barry at python.org (Barry Warsaw) -"""Classes to generate plain text from a message object tree. 
-""" +"""Classes to generate plain text from a message object tree.""" import re import sys @@ -192,9 +191,6 @@ payload = msg.get_payload() if payload is None: return - cset = msg.get_charset() - if cset is not None: - payload = cset.body_encode(payload) if not _isstring(payload): raise TypeError, 'string payload expected: %s' % type(payload) if self._mangle_from_: Modified: python/branches/release23-maint/Lib/email/Message.py ============================================================================== --- python/branches/release23-maint/Lib/email/Message.py (original) +++ python/branches/release23-maint/Lib/email/Message.py Wed Feb 8 14:33:20 2006 @@ -272,11 +272,14 @@ charset=charset.get_output_charset()) else: self.set_param('charset', charset.get_output_charset()) + if str(charset) <> charset.get_output_charset(): + self._payload = charset.body_encode(self._payload) if not self.has_key('Content-Transfer-Encoding'): cte = charset.get_body_encoding() if callable(cte): cte(self) else: + self._payload = charset.body_encode(self._payload) self.add_header('Content-Transfer-Encoding', cte) def get_charset(self): Modified: python/branches/release23-maint/Lib/email/test/test_email.py ============================================================================== --- python/branches/release23-maint/Lib/email/test/test_email.py (original) +++ python/branches/release23-maint/Lib/email/test/test_email.py Wed Feb 8 14:33:20 2006 @@ -2073,7 +2073,8 @@ charset = Charset(charsets[0]) eq(charset.get_body_encoding(), 'base64') msg.set_payload('hello world', charset=charset) - eq(msg.get_payload(), 'hello world') + eq(msg.get_payload(), 'aGVsbG8gd29ybGQ=\n') + eq(msg.get_payload(decode=True), 'hello world') eq(msg['content-transfer-encoding'], 'base64') # Try another one msg = Message() Modified: python/branches/release23-maint/Lib/email/test/test_email_codecs.py ============================================================================== --- python/branches/release23-maint/Lib/email/test/test_email_codecs.py (original) +++ python/branches/release23-maint/Lib/email/test/test_email_codecs.py Wed Feb 8 14:33:20 2006 @@ -1,17 +1,16 @@ -# Copyright (C) 2002 Python Software Foundation +# Copyright (C) 2002-2006 Python Software Foundation # email package unit tests for (optional) Asian codecs import unittest from test.test_support import TestSkipped, run_unittest from email.test.test_email import TestEmailBase -from email.Charset import Charset +from email.Charset import Charset, _find_asian_codec from email.Header import Header, decode_header +from email.Message import Message # See if we have the Japanese codecs package installed -try: - unicode('foo', 'japanese.iso-2022-jp') -except LookupError: +if not _find_asian_codec('iso-2022-jp', 'japanese'): raise TestSkipped, 'Optional Japanese codecs not installed' @@ -49,6 +48,14 @@ # TK: full decode comparison eq(h.__unicode__().encode('euc-jp'), long) + def test_payload_encoding(self): + jhello = '\xa5\xcf\xa5\xed\xa1\xbc\xa5\xef\xa1\xbc\xa5\xeb\xa5\xc9\xa1\xaa' + jcode = 'euc-jp' + msg = Message() + msg.set_payload(jhello, jcode) + ustr = unicode(msg.get_payload(), msg.get_content_charset()) + self.assertEqual(jhello, ustr.encode(jcode)) + def suite(): From python-checkins at python.org Wed Feb 8 15:34:25 2006 From: python-checkins at python.org (barry.warsaw) Date: Wed, 8 Feb 2006 15:34:25 +0100 (CET) Subject: [Python-checkins] r42271 - in python/trunk/Lib/email: Charset.py Generator.py Message.py test/test_email.py test/test_email_codecs.py Message-ID: 
<20060208143425.B0A9E1E4006@bag.python.org> Author: barry.warsaw Date: Wed Feb 8 15:34:21 2006 New Revision: 42271 Modified: python/trunk/Lib/email/Charset.py python/trunk/Lib/email/Generator.py python/trunk/Lib/email/Message.py python/trunk/Lib/email/test/test_email.py python/trunk/Lib/email/test/test_email_codecs.py Log: Port relevant patches for SF 1409455 to the trunk for email 3.0/Python 2.5. Will port to Python 2.4. Modified: python/trunk/Lib/email/Charset.py ============================================================================== --- python/trunk/Lib/email/Charset.py (original) +++ python/trunk/Lib/email/Charset.py Wed Feb 8 15:34:21 2006 @@ -1,4 +1,4 @@ -# Copyright (C) 2001-2004 Python Software Foundation +# Copyright (C) 2001-2006 Python Software Foundation # Author: Ben Gertzfield, Barry Warsaw # Contact: email-sig at python.org @@ -206,7 +206,7 @@ self.input_codec = CODEC_MAP.get(self.input_charset, self.input_charset) self.output_codec = CODEC_MAP.get(self.output_charset, - self.output_charset) + self.output_charset) def __str__(self): return self.input_charset.lower() Modified: python/trunk/Lib/email/Generator.py ============================================================================== --- python/trunk/Lib/email/Generator.py (original) +++ python/trunk/Lib/email/Generator.py Wed Feb 8 15:34:21 2006 @@ -1,4 +1,4 @@ -# Copyright (C) 2001-2004 Python Software Foundation +# Copyright (C) 2001-2006 Python Software Foundation # Author: Barry Warsaw # Contact: email-sig at python.org @@ -175,9 +175,6 @@ payload = msg.get_payload() if payload is None: return - cset = msg.get_charset() - if cset is not None: - payload = cset.body_encode(payload) if not isinstance(payload, basestring): raise TypeError('string payload expected: %s' % type(payload)) if self._mangle_from_: Modified: python/trunk/Lib/email/Message.py ============================================================================== --- python/trunk/Lib/email/Message.py (original) +++ python/trunk/Lib/email/Message.py Wed Feb 8 15:34:21 2006 @@ -250,11 +250,14 @@ charset=charset.get_output_charset()) else: self.set_param('charset', charset.get_output_charset()) + if str(charset) <> charset.get_output_charset(): + self._payload = charset.body_encode(self._payload) if not self.has_key('Content-Transfer-Encoding'): cte = charset.get_body_encoding() try: cte(self) except TypeError: + self._payload = charset.body_encode(self._payload) self.add_header('Content-Transfer-Encoding', cte) def get_charset(self): Modified: python/trunk/Lib/email/test/test_email.py ============================================================================== --- python/trunk/Lib/email/test/test_email.py (original) +++ python/trunk/Lib/email/test/test_email.py Wed Feb 8 15:34:21 2006 @@ -2221,7 +2221,8 @@ charset = Charset(charsets[0]) eq(charset.get_body_encoding(), 'base64') msg.set_payload('hello world', charset=charset) - eq(msg.get_payload(), 'hello world') + eq(msg.get_payload(), 'aGVsbG8gd29ybGQ=\n') + eq(msg.get_payload(decode=True), 'hello world') eq(msg['content-transfer-encoding'], 'base64') # Try another one msg = Message() Modified: python/trunk/Lib/email/test/test_email_codecs.py ============================================================================== --- python/trunk/Lib/email/test/test_email_codecs.py (original) +++ python/trunk/Lib/email/test/test_email_codecs.py Wed Feb 8 15:34:21 2006 @@ -1,4 +1,5 @@ -# Copyright (C) 2002 Python Software Foundation +# Copyright (C) 2002-2006 Python Software Foundation +# Contact: 
email-sig at python.org # email package unit tests for (optional) Asian codecs import unittest @@ -7,6 +8,8 @@ from email.test.test_email import TestEmailBase from email.Charset import Charset from email.Header import Header, decode_header +from email.Message import Message + class TestEmailAsianCodecs(TestEmailBase): @@ -42,6 +45,14 @@ # TK: full decode comparison eq(h.__unicode__().encode('euc-jp'), long) + def test_payload_encoding(self): + jhello = '\xa5\xcf\xa5\xed\xa1\xbc\xa5\xef\xa1\xbc\xa5\xeb\xa5\xc9\xa1\xaa' + jcode = 'euc-jp' + msg = Message() + msg.set_payload(jhello, jcode) + ustr = unicode(msg.get_payload(), msg.get_content_charset()) + self.assertEqual(jhello, ustr.encode(jcode)) + def suite(): From g.brandl at gmx.net Wed Feb 8 15:37:54 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 08 Feb 2006 15:37:54 +0100 Subject: [Python-checkins] r42269 - in python/trunk: Doc/lib/lib.tex Doc/lib/libhotshot.tex Doc/lib/libprofile.tex Lib/cProfile.py Lib/pstats.py Lib/test/output/test_cProfile Lib/test/output/test_profile Lib/test/test_cProfile.py Lib/test/test_profile.py Misc/NEWS Modules/_lsprof.c Modules/rotatingtree.c Modules/rotatingtree.h setup.py In-Reply-To: <20060208125407.1217B1E4002@bag.python.org> References: <20060208125407.1217B1E4002@bag.python.org> Message-ID: armin.rigo wrote: > Log: > Added the cProfile module. Should that be added to PEP 356 and whatsnew25.tex? Georg From python-checkins at python.org Wed Feb 8 15:58:56 2006 From: python-checkins at python.org (barry.warsaw) Date: Wed, 8 Feb 2006 15:58:56 +0100 (CET) Subject: [Python-checkins] r42272 - in python/branches/release24-maint/Lib/email: Charset.py Generator.py Message.py test/test_email.py test/test_email_codecs.py Message-ID: <20060208145856.D80271E4011@bag.python.org> Author: barry.warsaw Date: Wed Feb 8 15:58:55 2006 New Revision: 42272 Modified: python/branches/release24-maint/Lib/email/Charset.py python/branches/release24-maint/Lib/email/Generator.py python/branches/release24-maint/Lib/email/Message.py python/branches/release24-maint/Lib/email/test/test_email.py python/branches/release24-maint/Lib/email/test/test_email_codecs.py Log: Port of r42271 from the trunk -- relevant patches for SF 1409455 for email 3.0/Python 2.4. 
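
Taken together, the checkins above change when a text payload gets encoded: set_payload()/set_charset() now encode the body immediately, and the Generator no longer encodes it a second time at flatten time. In rough outline (the names and expected values below come straight from the updated tests), the visible behaviour is:

    from email.Message import Message
    from email.Charset import Charset

    msg = Message()
    msg.set_payload('hello world', Charset('utf-8'))   # utf-8 uses base64 body encoding

    print msg['content-transfer-encoding']    # base64
    print repr(msg.get_payload())             # 'aGVsbG8gd29ybGQ=\n' -- stored already encoded
    print msg.get_payload(decode=True)        # hello world

Previously the payload was stored unencoded and only transformed by the Generator, so get_payload() and the Content-Transfer-Encoding header could disagree.
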
Modified: python/branches/release24-maint/Lib/email/Charset.py ============================================================================== --- python/branches/release24-maint/Lib/email/Charset.py (original) +++ python/branches/release24-maint/Lib/email/Charset.py Wed Feb 8 15:58:55 2006 @@ -1,4 +1,4 @@ -# Copyright (C) 2001-2004 Python Software Foundation +# Copyright (C) 2001-2006 Python Software Foundation # Author: Ben Gertzfield, Barry Warsaw # Contact: email-sig at python.org @@ -206,7 +206,7 @@ self.input_codec = CODEC_MAP.get(self.input_charset, self.input_charset) self.output_codec = CODEC_MAP.get(self.output_charset, - self.output_charset) + self.output_charset) def __str__(self): return self.input_charset.lower() Modified: python/branches/release24-maint/Lib/email/Generator.py ============================================================================== --- python/branches/release24-maint/Lib/email/Generator.py (original) +++ python/branches/release24-maint/Lib/email/Generator.py Wed Feb 8 15:58:55 2006 @@ -1,4 +1,4 @@ -# Copyright (C) 2001-2004 Python Software Foundation +# Copyright (C) 2001-2006 Python Software Foundation # Author: Barry Warsaw # Contact: email-sig at python.org @@ -175,9 +175,6 @@ payload = msg.get_payload() if payload is None: return - cset = msg.get_charset() - if cset is not None: - payload = cset.body_encode(payload) if not isinstance(payload, basestring): raise TypeError('string payload expected: %s' % type(payload)) if self._mangle_from_: Modified: python/branches/release24-maint/Lib/email/Message.py ============================================================================== --- python/branches/release24-maint/Lib/email/Message.py (original) +++ python/branches/release24-maint/Lib/email/Message.py Wed Feb 8 15:58:55 2006 @@ -250,11 +250,14 @@ charset=charset.get_output_charset()) else: self.set_param('charset', charset.get_output_charset()) + if str(charset) <> charset.get_output_charset(): + self._payload = charset.body_encode(self._payload) if not self.has_key('Content-Transfer-Encoding'): cte = charset.get_body_encoding() try: cte(self) except TypeError: + self._payload = charset.body_encode(self._payload) self.add_header('Content-Transfer-Encoding', cte) def get_charset(self): Modified: python/branches/release24-maint/Lib/email/test/test_email.py ============================================================================== --- python/branches/release24-maint/Lib/email/test/test_email.py (original) +++ python/branches/release24-maint/Lib/email/test/test_email.py Wed Feb 8 15:58:55 2006 @@ -2221,7 +2221,8 @@ charset = Charset(charsets[0]) eq(charset.get_body_encoding(), 'base64') msg.set_payload('hello world', charset=charset) - eq(msg.get_payload(), 'hello world') + eq(msg.get_payload(), 'aGVsbG8gd29ybGQ=\n') + eq(msg.get_payload(decode=True), 'hello world') eq(msg['content-transfer-encoding'], 'base64') # Try another one msg = Message() Modified: python/branches/release24-maint/Lib/email/test/test_email_codecs.py ============================================================================== --- python/branches/release24-maint/Lib/email/test/test_email_codecs.py (original) +++ python/branches/release24-maint/Lib/email/test/test_email_codecs.py Wed Feb 8 15:58:55 2006 @@ -1,4 +1,5 @@ -# Copyright (C) 2002 Python Software Foundation +# Copyright (C) 2002-2006 Python Software Foundation +# Contact: email-sig at python.org # email package unit tests for (optional) Asian codecs import unittest @@ -7,6 +8,8 @@ from email.test.test_email import 
TestEmailBase from email.Charset import Charset from email.Header import Header, decode_header +from email.Message import Message + class TestEmailAsianCodecs(TestEmailBase): @@ -42,6 +45,14 @@ # TK: full decode comparison eq(h.__unicode__().encode('euc-jp'), long) + def test_payload_encoding(self): + jhello = '\xa5\xcf\xa5\xed\xa1\xbc\xa5\xef\xa1\xbc\xa5\xeb\xa5\xc9\xa1\xaa' + jcode = 'euc-jp' + msg = Message() + msg.set_payload(jhello, jcode) + ustr = unicode(msg.get_payload(), msg.get_content_charset()) + self.assertEqual(jhello, ustr.encode(jcode)) + def suite(): From martin at v.loewis.de Wed Feb 8 19:40:06 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 08 Feb 2006 19:40:06 +0100 Subject: [Python-checkins] r42269 - in python/trunk: Doc/lib/lib.tex Doc/lib/libhotshot.tex Doc/lib/libprofile.tex Lib/cProfile.py Lib/pstats.py Lib/test/output/test_cProfile Lib/test/output/test_profile Lib/test/test_cProfile.py Lib/test/test_profile.py Misc/NEWS Modules/_lsprof.c Modules/rotatingtree.c Modules/rotatingtree.h setup.py In-Reply-To: References: <20060208125407.1217B1E4002@bag.python.org> Message-ID: <43EA3B06.2020409@v.loewis.de> Georg Brandl wrote: >>Log: >>Added the cProfile module. > > > Should that be added to PEP 356 and whatsnew25.tex? Not sure what the procedure is for adding things to whatsnew. Last I heard, only Andrew Kuchling can add text to whatsnew. Regards, Martin From fdrake at acm.org Wed Feb 8 19:56:13 2006 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Wed, 8 Feb 2006 13:56:13 -0500 Subject: [Python-checkins] r42269 - in python/trunk: Doc/lib/lib.tex Doc/lib/libhotshot.tex Doc/lib/libprofile.tex Lib/cProfile.py Lib/pstats.py Lib/test/output/test_cProfile Lib/test/output/test_profile Lib/test/test_cProfile.py Lib/test/test_profile.py Misc/NEWS Modules/_lsprof.c Modules/rotatingtree.c Modules/rotatingtree.h setup.py In-Reply-To: <43EA3B06.2020409@v.loewis.de> References: <20060208125407.1217B1E4002@bag.python.org> <43EA3B06.2020409@v.loewis.de> Message-ID: <200602081356.13876.fdrake@acm.org> On Wednesday 08 February 2006 13:40, Martin v. L?wis wrote: > Not sure what the procedure is for adding things to whatsnew. > Last I heard, only Andrew Kuchling can add text to whatsnew. Andrew has indicated that adding reminders to him about particular topics as comments is reasonable, especially since that gives a suggested location, but that he'd rather write the text himself. Typos can simply be fixed of course. -Fred -- Fred L. Drake, Jr. From python-checkins at python.org Thu Feb 9 03:43:17 2006 From: python-checkins at python.org (brett.cannon) Date: Thu, 9 Feb 2006 03:43:17 +0100 (CET) Subject: [Python-checkins] r42273 - python/trunk/Python/compile.txt Message-ID: <20060209024317.DDF291E4010@bag.python.org> Author: brett.cannon Date: Thu Feb 9 03:43:14 2006 New Revision: 42273 Added: python/trunk/Python/compile.txt (contents, props changed) Log: Add doc discussing how AST compiler is structured and designed. It is out of date, though, thanks to lacking info on the arena API. It also should eventually be removed in favor of updating PEP 339. 
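
The document below describes the pipeline in prose: source code is parsed into a concrete parse tree, transformed into an AST, lowered to a control flow graph, and finally emitted as bytecode. For a rough feel for the first and last of those stages, the stdlib parser and dis modules can be poked at from the interpreter; the snippet below is only a small illustration of that, not something the new compiler itself requires.

    import parser, symbol, dis

    source = "while x:\n    x = x - 1\n"

    # First stage: the concrete parse tree, as nested (rule-number, ...) tuples.
    tree = parser.suite(source).totuple()
    print symbol.sym_name[tree[0]]            # file_input -- the grammar's start rule

    # Last stage: the bytecode eventually produced for the same source.
    dis.dis(compile(source, '<example>', 'exec'))

The AST and CFG stages sit in between, inside the C code, and are what the sections of the document concentrate on.
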
Added: python/trunk/Python/compile.txt ============================================================================== --- (empty file) +++ python/trunk/Python/compile.txt Thu Feb 9 03:43:14 2006 @@ -0,0 +1,507 @@ +Developer Notes for Python Compiler +=================================== + +Table of Contents +----------------- + +- Scope + Defines the limits of the change +- Parse Trees + Describes the local (Python) concept +- Abstract Syntax Trees (AST) + Describes the AST technology used +- Parse Tree to AST + Defines the transform approach +- Control Flow Graphs + Defines the creation of "basic blocks" +- AST to CFG to Bytecode + Tracks the flow from AST to bytecode +- Code Objects + Pointer to making bytecode "executable" +- Modified Files + Files added/modified/removed from CPython compiler +- ToDo + Work yet remaining (before complete) +- References + Academic and technical references to technology used. + + +Scope +----- + +Historically (through 2.4), compilation from source code to bytecode +involved two steps: + +1. Parse the source code into a parse tree (Parser/pgen.c) +2. Emit bytecode based on the parse tree (Python/compile.c) + +Historically, this is not how a standard compiler works. The usual +steps for compilation are: + +1. Parse source code into a parse tree (Parser/pgen.c) +2. Transform parse tree into an Abstract Syntax Tree (Python/ast.c) +3. Transform AST into a Control Flow Graph (Python/newcompile.c) +4. Emit bytecode based on the Control Flow Graph (Python/newcompile.c) + +Starting with Python 2.5, the above steps are now used. This change +was done to simplify compilation by breaking it into three steps. +The purpose of this document is to outline how the lattter three steps +of the process works. + +This document does not touch on how parsing works beyond what is needed +to explain what is needed for compilation. It is also not exhaustive +in terms of the how the entire system works. You will most likely need +to read some source to have an exact understanding of all details. + + +Parse Trees +----------- + +Python's parser is an LL(1) parser mostly based off of the +implementation laid out in the Dragon Book [Aho86]_. + +The grammar file for Python can be found in Grammar/Grammar with the +numeric value of grammar rules are stored in Include/graminit.h. The +numeric values for types of tokens (literal tokens, such as ``:``, +numbers, etc.) are kept in Include/token.h). The parse tree made up of +``node *`` structs (as defined in Include/node.h). + +Querying data from the node structs can be done with the following +macros (which are all defined in Include/token.h): + +- ``CHILD(node *, int)`` + Returns the nth child of the node using zero-offset indexing +- ``RCHILD(node *, int)`` + Returns the nth child of the node from the right side; use + negative numbers! 
+
+
+Abstract Syntax Trees (AST)
+---------------------------
+
+The abstract syntax tree (AST) is a high-level representation of the
+program structure without the necessity of containing the source code;
+it can be thought of as an abstract representation of the source code.  The
+AST nodes are specified using the Zephyr Abstract
+Syntax Definition Language (ASDL) [Wang97]_.
+
+The definition of the AST nodes for Python is found in the file
+Parser/Python.asdl .
+
+Each AST node (representing statements, expressions, and several
+specialized types, like list comprehensions and exception handlers) is
+defined by the ASDL.  Most definitions in the AST correspond to a
+particular source construct, such as an 'if' statement or an attribute
+lookup.  The definition is independent of its realization in any
+particular programming language.
+
+The following fragment of the Python ASDL specification demonstrates the
+approach and syntax::
+
+  module Python
+  {
+      stmt = FunctionDef(identifier name, arguments args, stmt* body,
+                         expr* decorators)
+           | Return(expr? value) | Yield(expr value)
+           attributes (int lineno)
+  }
+
+The preceding example describes three different kinds of statements:
+function definitions, return statements, and yield statements.  All
+three kinds are considered of type stmt as shown by '|' separating the
+various kinds.  They all take arguments of various kinds and amounts.
+
+Modifiers on the argument type specify the number of values needed; '?'
+means it is optional, '*' means 0 or more, and no modifier means only one
+value for the argument and it is required.  FunctionDef, for instance,
+takes an identifier for the name, 'arguments' for args, zero or more
+stmt arguments for 'body', and zero or more expr arguments for
+'decorators'.
+
+Do notice that something like 'arguments', which is a node type, is
+represented as a single AST node and not as a sequence of nodes as with
+stmt as one might expect.
+
+All three kinds also have an 'attributes' argument; this is shown by the
+fact that 'attributes' lacks a '|' before it.
+
+The statement definitions above generate the following C structure type::
+
+  typedef struct _stmt *stmt_ty;
+
+  struct _stmt {
+      enum { FunctionDef_kind=1, Return_kind=2, Yield_kind=3 } kind;
+      union {
+          struct {
+              identifier name;
+              arguments_ty args;
+              asdl_seq *body;
+          } FunctionDef;
+
+          struct {
+              expr_ty value;
+          } Return;
+
+          struct {
+              expr_ty value;
+          } Yield;
+      } v;
+      int lineno;
+  }
+
+Also generated are a series of constructor functions that allocate (in
+this case) a stmt_ty struct with the appropriate initialization.  The
+'kind' field specifies which component of the union is initialized.  The
+FunctionDef() constructor function sets 'kind' to FunctionDef_kind and
+initializes the 'name', 'args', 'body', and 'attributes' fields.
+
+*** NOTE: if you make a change here that can affect the output of bytecode that
+is already in existence, make sure to delete your old .py(c|o) files!  Running
+``find . -name '*.py[co]' -exec rm -f {} ';'`` should do the trick.
+
+
+Parse Tree to AST
+-----------------
+
+The AST is generated from the parse tree (see Python/ast.c) using the
+function::
+
+  mod_ty PyAST_FromNode(const node *n);
+
+The function begins a tree walk of the parse tree, creating various AST
+nodes as it goes along.  It does this by allocating all new nodes it
+needs, calling the proper AST node creation functions for any required
+supporting functions, and connecting them as needed.
+
+Do realize that there is no automated or symbolic connection between
+the grammar specification and the nodes in the parse tree.  No help is
+directly provided by the parse tree as in yacc.
+
+For instance, one must keep track of
+which node in the parse tree one is working with (e.g., if you are
+working with an 'if' statement you need to watch out for the ':' token
+to find the end of the conditional).
+
+The functions called to generate AST nodes from the parse tree all have
+the name ast_for_xx, where xx is the grammar rule that the function
+handles (alias_for_import_name is the exception to this).  These in turn
+call the constructor functions as defined by the ASDL grammar and
+contained in Python/Python-ast.c (which was generated by
+Parser/asdl_c.py) to create the nodes of the AST.  This all leads to a
+sequence of AST nodes stored in asdl_seq structs.
+
+
+Functions and macros for creating and using ``asdl_seq *`` types, as found
+in Python/asdl.c and Include/asdl.h:
+
+- ``asdl_seq_new(int)``
+  Allocate memory for an asdl_seq for length 'size'
+- ``asdl_seq_free(asdl_seq *)``
+  Free asdl_seq struct
+- ``asdl_seq_GET(asdl_seq *seq, int pos)``
+  Get item held at 'pos'
+- ``asdl_seq_SET(asdl_seq *seq, int pos, void *val)``
+  Set 'pos' in 'seq' to 'val'
+- ``asdl_seq_APPEND(asdl_seq *seq, void *val)``
+  Set the end of 'seq' to 'val'
+- ``asdl_seq_LEN(asdl_seq *)``
+  Return the length of 'seq'
+
+If you are working with statements, you must also worry about keeping
+track of what line number generated the statement.  Currently the line
+number is passed as the last parameter to each stmt_ty function.
+
+
+Control Flow Graphs
+-------------------
+
+A control flow graph (often referenced by its acronym, CFG) is a
+directed graph that models the flow of a program using basic blocks that
+contain the intermediate representation (abbreviated "IR", and in this
+case is Python bytecode) within the blocks.  Basic blocks themselves are
+a block of IR that has a single entry point but possibly multiple exit
+points.  The single entry point is the key to basic blocks; it all has
+to do with jumps.  An entry point is the target of something that
+changes control flow (such as a function call or a jump) while exit
+points are instructions that would change the flow of the program (such
+as jumps and 'return' statements).  What this means is that a basic
+block is a chunk of code that starts at the entry point and runs to an
+exit point or the end of the block.
+
+As an example, consider an 'if' statement with an 'else' block.  The
+guard on the 'if' is a basic block which is pointed to by the basic
+block containing the code leading to the 'if' statement.  The 'if'
+statement block contains jumps (which are exit points) to the true body
+of the 'if' and the 'else' body (which may be NULL), each of which is
+its own basic block.  Both of those blocks in turn point to the
+basic block representing the code following the entire 'if' statement.
+
+CFGs are usually one step away from final code output.  Code is directly
+generated from the basic blocks (with jump targets adjusted based on the
+output order) by doing a post-order depth-first search on the CFG
+following the edges.
+
+
+AST to CFG to Bytecode
+----------------------
+
+With the AST created, the next step is to create the CFG.  The first step
+is to convert the AST to Python bytecode without having jump targets
+resolved to specific offsets (this is calculated when the CFG goes to
+final bytecode).  Essentially, this transforms the AST into Python
+bytecode with control flow represented by the edges of the CFG.
+
+Conversion is done in two passes.  The first creates the namespace
+(variables can be classified as local, free/cell for closures, or
+global).  With that done, the second pass essentially flattens the CFG
+into a list and calculates jump offsets for final output of bytecode.
+
+The conversion process is initiated by a call to the function in
+Python/newcompile.c::
+
+  PyCodeObject * PyAST_Compile(mod_ty, const char *, PyCompilerFlags);
+
+This function does both the conversion of the AST to a CFG and
+the output of final bytecode from the CFG.  The AST to CFG step is handled
+mostly by the two functions called by PyAST_Compile()::
+
+  struct symtable * PySymtable_Build(mod_ty, const char *,
+                                     PyFutureFeatures);
+  PyCodeObject * compiler_mod(struct compiler *, mod_ty);
+
+The former is in Python/symtable.c while the latter is in
+Python/newcompile.c .
+
+PySymtable_Build() begins by entering the starting code block for the
+AST (passed-in) and then calling the proper symtable_visit_xx function
+(with xx being the AST node type).  Next, the AST is walked with
+the various code blocks that delineate the reach of a local variable
+as blocks are entered and exited::
+
+  static int symtable_enter_block(struct symtable *, identifier,
+                                  block_ty, void *, int);
+  static int symtable_exit_block(struct symtable *, void *);
+
+Once the symbol table is created, it is time for CFG creation, whose
+code is in Python/newcompile.c .  This is handled by several functions
+that break the task down by various AST node types.  The functions are
+all named compiler_visit_xx where xx is the name of the node type (such
+as stmt, expr, etc.).  Each function receives a ``struct compiler *``
+and xx_ty where xx is the AST node type.  Typically these functions
+consist of a large 'switch' statement, branching based on the kind of
+node type passed to it.  Simple things are handled inline in the
+'switch' statement with more complex transformations farmed out to other
+functions named compiler_xx with xx being a descriptive name of what is
+being handled.
+
+When transforming an arbitrary AST node, use the VISIT macro::
+
+  VISIT(struct compiler *, <node type>, <node>);
+
+The appropriate compiler_visit_xx function is called, based on the value
+passed in for <node type> (so ``VISIT(c, expr, node)`` calls
+``compiler_visit_expr(c, node)``).  The VISIT_SEQ macro is very similar,
+but is called on AST node sequences (those values that were created as
+arguments to a node that used the '*' modifier).
There is also
+VISIT_SLICE just for handling slices::
+
+  VISIT_SLICE(struct compiler *, slice_ty, expr_context_ty);
+
+Emission of bytecode is handled by the following macros:
+
+- ``ADDOP(struct compiler *c, int op)``
+  add 'op' as an opcode
+- ``ADDOP_I(struct compiler *c, int op, int oparg)``
+  add 'op' with an 'oparg' argument
+- ``ADDOP_O(struct compiler *c, int op, PyObject *type, PyObject *obj)``
+  add 'op' with the proper argument based on the position of obj in
+  'type', but with no handling of mangled names; used when you
+  need to do named lookups of objects such as globals, consts, or
+  parameters where name mangling is not possible and the scope of the
+  name is known
+- ``ADDOP_NAME(struct compiler *, int, PyObject *, PyObject *)``
+  just like ADDOP_O, but name mangling is also handled; used for
+  attribute loading or importing based on name
+- ``ADDOP_JABS(struct compiler *c, int op, basicblock b)``
+  create an absolute jump to the basic block 'b'
+- ``ADDOP_JREL(struct compiler *c, int op, basicblock b)``
+  create a relative jump to the basic block 'b'
+
+There are also several helper functions that emit bytecode; they are named
+compiler_xx(), where xx is what the function helps with (list, boolop,
+etc.).  A rather useful one is::
+
+  static int compiler_nameop(struct compiler *, identifier,
+                             expr_context_ty);
+
+This function looks up the scope of a variable and, based on the
+expression context, emits the proper opcode to load, store, or delete
+the variable.
+
+The line number on which a statement is defined is
+handled by compiler_visit_stmt() and thus is not a worry.
+
+In addition to emitting bytecode based on the AST node, the
+creation of basic blocks must be handled.  Below are the macros and
+functions used for managing basic blocks:
+
+- ``NEW_BLOCK(struct compiler *)``
+  create block and set it as current
+- ``NEXT_BLOCK(struct compiler *)``
+  basically NEW_BLOCK() plus jump from current block
+- ``compiler_new_block(struct compiler *)``
+  create a block but don't use it (used for generating jumps)
+
+Once the CFG is created, it must be flattened and then final emission of
+bytecode occurs.  Flattening is handled using a post-order depth-first
+search.  Once flattened, jump offsets are backpatched based on the
+flattening and then a PyCodeObject is created.  All of this is
+handled by calling::
+
+  PyCodeObject * assemble(struct compiler *, int);
+
+*** NOTE: if you make a change here that can affect the output of bytecode that
+is already in existence, make sure to delete your old .py(c|o) files!  Running
+``find . -name '*.py[co]' -exec rm -f {} ';'`` should do the trick.
+
+
+Code Objects
+------------
+
+In the end, one ends up with a PyCodeObject which is defined in
+Include/code.h .  And with that you now have executable Python bytecode!
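From the Python side, the end product of this pipeline is visible through
the built-in compile() function, which returns such a code object, and the
dis module, which disassembles its bytecode.  A small sketch (again not
part of the compile.txt text itself)::

    import dis

    # compile() is the Python-level entry point to the machinery described
    # above; the result is a code object (a PyCodeObject at the C level).
    code = compile("x = spam + 42", "<example>", "exec")
    print type(code)        # <type 'code'>
    print code.co_names     # the names the bytecode refers to
    dis.dis(code)           # shows the LOAD_NAME/BINARY_ADD/STORE_NAME sequence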
+
+
+Modified Files
+--------------
+
++ Parser/
+
+  - Python.asdl
+    ASDL syntax file
+
+  - asdl.py
+    "An implementation of the Zephyr Abstract Syntax Definition
+    Language."  Uses SPARK_ to parse the ASDL files.
+
+  - asdl_c.py
+    "Generate C code from an ASDL description."  Generates
+    ../Python/Python-ast.c and ../Include/Python-ast.h .
+
+  - spark.py
+    SPARK_ parser generator
+
++ Python/
+
+  - Python-ast.c
+    Creates C structs corresponding to the ASDL types.  Also
+    contains code for marshaling AST nodes (core ASDL types have
+    marshaling code in asdl.c).  "File automatically generated by
+    ../Parser/asdl_c.py".
+
+  - asdl.c
+    Contains code to handle the ASDL sequence type.  Also has code
+    to handle marshalling the core ASDL types, such as number and
+    identifier.  Used by Python-ast.c for marshaling AST nodes.
+
+  - ast.c
+    Converts Python's parse tree into the abstract syntax tree.
+
+  - compile.txt
+    This file.
+
+  - newcompile.c
+    New version of compile.c that handles the emitting of bytecode.
+
+  - symtable.c
+    Generates symbol table from AST.
+
+
++ Include/
+
+  - Python-ast.h
+    Contains the actual definitions of the C structs as generated by
+    ../Python/Python-ast.c .
+    "Automatically generated by ../Parser/asdl_c.py".
+
+  - asdl.h
+    Header for the corresponding ../Python/ast.c .
+
+  - ast.h
+    Declares PyAST_FromNode() external (from ../Python/ast.c).
+
+  - code.h
+    Header file for ../Objects/codeobject.c; contains definition of
+    PyCodeObject.
+
+  - symtable.h
+    Header for ../Python/symtable.c .  struct symtable and
+    PySTEntryObject are defined here.
+
++ Objects/
+
+  - codeobject.c
+    Contains PyCodeObject-related code (originally in
+    ../Python/compile.c).
+
+
+ToDo
+----
+*** NOTE: all bugs and patches should be filed on SF under the group
+          "AST" for easy searching.  It also does not hurt to put
+          "[AST]" at the beginning of the subject line of the tracker
+          item.
+
++ Stdlib support
+  - AST->Python access?
+  - rewrite compiler package to mirror AST structure?
++ Documentation
+  - flesh out this doc
+    * byte stream output
+    * explanation of how the symbol table pass works
+    * code object (PyCodeObject)
++ Universal
+  - make sure entire test suite passes
+  - fix memory leaks
+  - make sure return types are properly checked for errors
+  - no gcc warnings
+
+References
+----------
+
+.. [Aho86] Alfred V. Aho, Ravi Sethi, Jeffrey D. Ullman.
+   `Compilers: Principles, Techniques, and Tools`,
+   http://www.amazon.com/exec/obidos/tg/detail/-/0201100886/104-0162389-6419108
+
+.. [Wang97] Daniel C. Wang, Andrew W. Appel, Jeff L. Korn, and Chris
+   S. Serra.  `The Zephyr Abstract Syntax Description Language.`_
+   In Proceedings of the Conference on Domain-Specific Languages, pp.
+   213--227, 1997.
+
+.. _The Zephyr Abstract Syntax Description Language.:
+   http://www.cs.princeton.edu/~danwang/Papers/dsl97/dsl97.html
+
+.. _SPARK: http://pages.cpsc.ucalgary.ca/~aycock/spark/
+

From python-checkins at python.org  Thu Feb 9 03:55:04 2006
From: python-checkins at python.org (barry.warsaw)
Date: Thu, 9 Feb 2006 03:55:04 +0100 (CET)
Subject: [Python-checkins] r42274 - sandbox/trunk/emailpkg/3.1
Message-ID: <20060209025504.A4D801E4007@bag.python.org>

Author: barry.warsaw
Date: Thu Feb 9 03:55:03 2006
New Revision: 42274

Added:
   sandbox/trunk/emailpkg/3.1/
      - copied from r42273, sandbox/trunk/emailpkg/3.0/
Log:
Create a directory for sandbox development of email 3.1.

From python-checkins at python.org  Thu Feb 9 03:59:19 2006
From: python-checkins at python.org (barry.warsaw)
Date: Thu, 9 Feb 2006 03:59:19 +0100 (CET)
Subject: [Python-checkins] r42275 - in sandbox/trunk/emailpkg/3.1: email
Message-ID: <20060209025919.D99081E4007@bag.python.org>

Author: barry.warsaw
Date: Thu Feb 9 03:59:17 2006
New Revision: 42275

Added:
   sandbox/trunk/emailpkg/3.1/email/
      - copied from r42274, python/trunk/Lib/email/
Modified:
   sandbox/trunk/emailpkg/3.1/  (props changed)
Log:
Copy email 3.0 from the trunk, in preparation for the email 3.1 sandbox changes.
From python-checkins at python.org Thu Feb 9 04:04:05 2006 From: python-checkins at python.org (barry.warsaw) Date: Thu, 9 Feb 2006 04:04:05 +0100 (CET) Subject: [Python-checkins] r42276 - in sandbox/trunk/emailpkg/3.1/email: Charset.py Encoders.py Errors.py FeedParser.py Generator.py Header.py Iterators.py MIMEAudio.py MIMEBase.py MIMEImage.py MIMEMessage.py MIMEMultipart.py MIMENonMultipart.py MIMEText.py Message.py Parser.py Utils.py __init__.py base64MIME.py base64mime.py charset.py encoders.py errors.py feedparser.py generator.py header.py iterators.py message.py mime mime/__init__.py mime/audio.py mime/base.py mime/image.py mime/message.py mime/multipart.py mime/nonmultipart.py mime/text.py parser.py quopriMIME.py quoprimime.py test/test_email.py test/test_email_codecs_renamed.py test/test_email_renamed.py utils.py Message-ID: <20060209030405.EBB4F1E4009@bag.python.org> Author: barry.warsaw Date: Thu Feb 9 04:04:02 2006 New Revision: 42276 Added: sandbox/trunk/emailpkg/3.1/email/base64mime.py - copied unchanged from r42272, python/trunk/Lib/email/base64MIME.py sandbox/trunk/emailpkg/3.1/email/charset.py - copied unchanged from r42272, python/trunk/Lib/email/Charset.py sandbox/trunk/emailpkg/3.1/email/encoders.py - copied unchanged from r42272, python/trunk/Lib/email/Encoders.py sandbox/trunk/emailpkg/3.1/email/errors.py - copied unchanged from r42272, python/trunk/Lib/email/Errors.py sandbox/trunk/emailpkg/3.1/email/feedparser.py - copied unchanged from r42272, python/trunk/Lib/email/FeedParser.py sandbox/trunk/emailpkg/3.1/email/generator.py - copied unchanged from r42272, python/trunk/Lib/email/Generator.py sandbox/trunk/emailpkg/3.1/email/header.py - copied unchanged from r42272, python/trunk/Lib/email/Header.py sandbox/trunk/emailpkg/3.1/email/iterators.py - copied unchanged from r42272, python/trunk/Lib/email/Iterators.py sandbox/trunk/emailpkg/3.1/email/message.py - copied unchanged from r42272, python/trunk/Lib/email/Message.py sandbox/trunk/emailpkg/3.1/email/mime/ sandbox/trunk/emailpkg/3.1/email/mime/__init__.py sandbox/trunk/emailpkg/3.1/email/mime/audio.py - copied unchanged from r42272, python/trunk/Lib/email/MIMEAudio.py sandbox/trunk/emailpkg/3.1/email/mime/base.py - copied unchanged from r42272, python/trunk/Lib/email/MIMEBase.py sandbox/trunk/emailpkg/3.1/email/mime/image.py - copied unchanged from r42272, python/trunk/Lib/email/MIMEImage.py sandbox/trunk/emailpkg/3.1/email/mime/message.py - copied unchanged from r42272, python/trunk/Lib/email/MIMEMessage.py sandbox/trunk/emailpkg/3.1/email/mime/multipart.py - copied unchanged from r42272, python/trunk/Lib/email/MIMEMultipart.py sandbox/trunk/emailpkg/3.1/email/mime/nonmultipart.py - copied unchanged from r42272, python/trunk/Lib/email/MIMENonMultipart.py sandbox/trunk/emailpkg/3.1/email/mime/text.py - copied unchanged from r42272, python/trunk/Lib/email/MIMEText.py sandbox/trunk/emailpkg/3.1/email/parser.py - copied unchanged from r42272, python/trunk/Lib/email/Parser.py sandbox/trunk/emailpkg/3.1/email/quoprimime.py - copied unchanged from r42272, python/trunk/Lib/email/quopriMIME.py sandbox/trunk/emailpkg/3.1/email/test/test_email_codecs_renamed.py - copied, changed from r42272, python/trunk/Lib/email/test/test_email_codecs.py sandbox/trunk/emailpkg/3.1/email/test/test_email_renamed.py - copied, changed from r42272, python/trunk/Lib/email/test/test_email.py sandbox/trunk/emailpkg/3.1/email/utils.py - copied unchanged from r42272, python/trunk/Lib/email/Utils.py Removed: sandbox/trunk/emailpkg/3.1/email/Charset.py 
sandbox/trunk/emailpkg/3.1/email/Encoders.py sandbox/trunk/emailpkg/3.1/email/Errors.py sandbox/trunk/emailpkg/3.1/email/FeedParser.py sandbox/trunk/emailpkg/3.1/email/Generator.py sandbox/trunk/emailpkg/3.1/email/Header.py sandbox/trunk/emailpkg/3.1/email/Iterators.py sandbox/trunk/emailpkg/3.1/email/MIMEAudio.py sandbox/trunk/emailpkg/3.1/email/MIMEBase.py sandbox/trunk/emailpkg/3.1/email/MIMEImage.py sandbox/trunk/emailpkg/3.1/email/MIMEMessage.py sandbox/trunk/emailpkg/3.1/email/MIMEMultipart.py sandbox/trunk/emailpkg/3.1/email/MIMENonMultipart.py sandbox/trunk/emailpkg/3.1/email/MIMEText.py sandbox/trunk/emailpkg/3.1/email/Message.py sandbox/trunk/emailpkg/3.1/email/Parser.py sandbox/trunk/emailpkg/3.1/email/Utils.py sandbox/trunk/emailpkg/3.1/email/base64MIME.py sandbox/trunk/emailpkg/3.1/email/quopriMIME.py Modified: sandbox/trunk/emailpkg/3.1/email/__init__.py sandbox/trunk/emailpkg/3.1/email/test/test_email.py Log: Changes to support PEP 8 module names. Sandbox for possible email 3.1 to be released with Python 2.5. All the old names are still supported and there are parallel tests for all features with both the old and new names. Deleted: /sandbox/trunk/emailpkg/3.1/email/Charset.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/Charset.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,370 +0,0 @@ -# Copyright (C) 2001-2006 Python Software Foundation -# Author: Ben Gertzfield, Barry Warsaw -# Contact: email-sig at python.org - -import email.base64MIME -import email.quopriMIME -from email.Encoders import encode_7or8bit - - - -# Flags for types of header encodings -QP = 1 # Quoted-Printable -BASE64 = 2 # Base64 -SHORTEST = 3 # the shorter of QP and base64, but only for headers - -# In "=?charset?q?hello_world?=", the =?, ?q?, and ?= add up to 7 -MISC_LEN = 7 - -DEFAULT_CHARSET = 'us-ascii' - - - -# Defaults -CHARSETS = { - # input header enc body enc output conv - 'iso-8859-1': (QP, QP, None), - 'iso-8859-2': (QP, QP, None), - 'iso-8859-3': (QP, QP, None), - 'iso-8859-4': (QP, QP, None), - # iso-8859-5 is Cyrillic, and not especially used - # iso-8859-6 is Arabic, also not particularly used - # iso-8859-7 is Greek, QP will not make it readable - # iso-8859-8 is Hebrew, QP will not make it readable - 'iso-8859-9': (QP, QP, None), - 'iso-8859-10': (QP, QP, None), - # iso-8859-11 is Thai, QP will not make it readable - 'iso-8859-13': (QP, QP, None), - 'iso-8859-14': (QP, QP, None), - 'iso-8859-15': (QP, QP, None), - 'windows-1252':(QP, QP, None), - 'viscii': (QP, QP, None), - 'us-ascii': (None, None, None), - 'big5': (BASE64, BASE64, None), - 'gb2312': (BASE64, BASE64, None), - 'euc-jp': (BASE64, None, 'iso-2022-jp'), - 'shift_jis': (BASE64, None, 'iso-2022-jp'), - 'iso-2022-jp': (BASE64, None, None), - 'koi8-r': (BASE64, BASE64, None), - 'utf-8': (SHORTEST, BASE64, 'utf-8'), - # We're making this one up to represent raw unencoded 8-bit - '8bit': (None, BASE64, 'utf-8'), - } - -# Aliases for other commonly-used names for character sets. Map -# them to the real ones used in email. 
-ALIASES = { - 'latin_1': 'iso-8859-1', - 'latin-1': 'iso-8859-1', - 'latin_2': 'iso-8859-2', - 'latin-2': 'iso-8859-2', - 'latin_3': 'iso-8859-3', - 'latin-3': 'iso-8859-3', - 'latin_4': 'iso-8859-4', - 'latin-4': 'iso-8859-4', - 'latin_5': 'iso-8859-9', - 'latin-5': 'iso-8859-9', - 'latin_6': 'iso-8859-10', - 'latin-6': 'iso-8859-10', - 'latin_7': 'iso-8859-13', - 'latin-7': 'iso-8859-13', - 'latin_8': 'iso-8859-14', - 'latin-8': 'iso-8859-14', - 'latin_9': 'iso-8859-15', - 'latin-9': 'iso-8859-15', - 'cp949': 'ks_c_5601-1987', - 'euc_jp': 'euc-jp', - 'euc_kr': 'euc-kr', - 'ascii': 'us-ascii', - } - - -# Map charsets to their Unicode codec strings. -CODEC_MAP = { - 'gb2312': 'eucgb2312_cn', - 'big5': 'big5_tw', - # Hack: We don't want *any* conversion for stuff marked us-ascii, as all - # sorts of garbage might be sent to us in the guise of 7-bit us-ascii. - # Let that stuff pass through without conversion to/from Unicode. - 'us-ascii': None, - } - - - -# Convenience functions for extending the above mappings -def add_charset(charset, header_enc=None, body_enc=None, output_charset=None): - """Add character set properties to the global registry. - - charset is the input character set, and must be the canonical name of a - character set. - - Optional header_enc and body_enc is either Charset.QP for - quoted-printable, Charset.BASE64 for base64 encoding, Charset.SHORTEST for - the shortest of qp or base64 encoding, or None for no encoding. SHORTEST - is only valid for header_enc. It describes how message headers and - message bodies in the input charset are to be encoded. Default is no - encoding. - - Optional output_charset is the character set that the output should be - in. Conversions will proceed from input charset, to Unicode, to the - output charset when the method Charset.convert() is called. The default - is to output in the same character set as the input. - - Both input_charset and output_charset must have Unicode codec entries in - the module's charset-to-codec mapping; use add_codec(charset, codecname) - to add codecs the module does not know about. See the codecs module's - documentation for more information. - """ - if body_enc == SHORTEST: - raise ValueError('SHORTEST not allowed for body_enc') - CHARSETS[charset] = (header_enc, body_enc, output_charset) - - -def add_alias(alias, canonical): - """Add a character set alias. - - alias is the alias name, e.g. latin-1 - canonical is the character set's canonical name, e.g. iso-8859-1 - """ - ALIASES[alias] = canonical - - -def add_codec(charset, codecname): - """Add a codec that map characters in the given charset to/from Unicode. - - charset is the canonical name of a character set. codecname is the name - of a Python codec, as appropriate for the second argument to the unicode() - built-in, or to the encode() method of a Unicode string. - """ - CODEC_MAP[charset] = codecname - - - -class Charset: - """Map character sets to their email properties. - - This class provides information about the requirements imposed on email - for a specific character set. It also provides convenience routines for - converting between character sets, given the availability of the - applicable codecs. Given a character set, it will do its best to provide - information on how to use that character set in an email in an - RFC-compliant way. - - Certain character sets must be encoded with quoted-printable or base64 - when used in email headers or bodies. Certain character sets must be - converted outright, and are not allowed in email. 
Instances of this - module expose the following information about a character set: - - input_charset: The initial character set specified. Common aliases - are converted to their `official' email names (e.g. latin_1 - is converted to iso-8859-1). Defaults to 7-bit us-ascii. - - header_encoding: If the character set must be encoded before it can be - used in an email header, this attribute will be set to - Charset.QP (for quoted-printable), Charset.BASE64 (for - base64 encoding), or Charset.SHORTEST for the shortest of - QP or BASE64 encoding. Otherwise, it will be None. - - body_encoding: Same as header_encoding, but describes the encoding for the - mail message's body, which indeed may be different than the - header encoding. Charset.SHORTEST is not allowed for - body_encoding. - - output_charset: Some character sets must be converted before the can be - used in email headers or bodies. If the input_charset is - one of them, this attribute will contain the name of the - charset output will be converted to. Otherwise, it will - be None. - - input_codec: The name of the Python codec used to convert the - input_charset to Unicode. If no conversion codec is - necessary, this attribute will be None. - - output_codec: The name of the Python codec used to convert Unicode - to the output_charset. If no conversion codec is necessary, - this attribute will have the same value as the input_codec. - """ - def __init__(self, input_charset=DEFAULT_CHARSET): - # RFC 2046, $4.1.2 says charsets are not case sensitive. We coerce to - # unicode because its .lower() is locale insensitive. - input_charset = unicode(input_charset, 'ascii').lower() - # Set the input charset after filtering through the aliases - self.input_charset = ALIASES.get(input_charset, input_charset) - # We can try to guess which encoding and conversion to use by the - # charset_map dictionary. Try that first, but let the user override - # it. - henc, benc, conv = CHARSETS.get(self.input_charset, - (SHORTEST, BASE64, None)) - if not conv: - conv = self.input_charset - # Set the attributes, allowing the arguments to override the default. - self.header_encoding = henc - self.body_encoding = benc - self.output_charset = ALIASES.get(conv, conv) - # Now set the codecs. If one isn't defined for input_charset, - # guess and try a Unicode codec with the same name as input_codec. - self.input_codec = CODEC_MAP.get(self.input_charset, - self.input_charset) - self.output_codec = CODEC_MAP.get(self.output_charset, - self.output_charset) - - def __str__(self): - return self.input_charset.lower() - - __repr__ = __str__ - - def __eq__(self, other): - return str(self) == str(other).lower() - - def __ne__(self, other): - return not self.__eq__(other) - - def get_body_encoding(self): - """Return the content-transfer-encoding used for body encoding. - - This is either the string `quoted-printable' or `base64' depending on - the encoding used, or it is a function in which case you should call - the function with a single argument, the Message object being - encoded. The function should then set the Content-Transfer-Encoding - header itself to whatever is appropriate. - - Returns "quoted-printable" if self.body_encoding is QP. - Returns "base64" if self.body_encoding is BASE64. - Returns "7bit" otherwise. 
- """ - assert self.body_encoding <> SHORTEST - if self.body_encoding == QP: - return 'quoted-printable' - elif self.body_encoding == BASE64: - return 'base64' - else: - return encode_7or8bit - - def convert(self, s): - """Convert a string from the input_codec to the output_codec.""" - if self.input_codec <> self.output_codec: - return unicode(s, self.input_codec).encode(self.output_codec) - else: - return s - - def to_splittable(self, s): - """Convert a possibly multibyte string to a safely splittable format. - - Uses the input_codec to try and convert the string to Unicode, so it - can be safely split on character boundaries (even for multibyte - characters). - - Returns the string as-is if it isn't known how to convert it to - Unicode with the input_charset. - - Characters that could not be converted to Unicode will be replaced - with the Unicode replacement character U+FFFD. - """ - if isinstance(s, unicode) or self.input_codec is None: - return s - try: - return unicode(s, self.input_codec, 'replace') - except LookupError: - # Input codec not installed on system, so return the original - # string unchanged. - return s - - def from_splittable(self, ustr, to_output=True): - """Convert a splittable string back into an encoded string. - - Uses the proper codec to try and convert the string from Unicode back - into an encoded format. Return the string as-is if it is not Unicode, - or if it could not be converted from Unicode. - - Characters that could not be converted from Unicode will be replaced - with an appropriate character (usually '?'). - - If to_output is True (the default), uses output_codec to convert to an - encoded format. If to_output is False, uses input_codec. - """ - if to_output: - codec = self.output_codec - else: - codec = self.input_codec - if not isinstance(ustr, unicode) or codec is None: - return ustr - try: - return ustr.encode(codec, 'replace') - except LookupError: - # Output codec not installed - return ustr - - def get_output_charset(self): - """Return the output character set. - - This is self.output_charset if that is not None, otherwise it is - self.input_charset. - """ - return self.output_charset or self.input_charset - - def encoded_header_len(self, s): - """Return the length of the encoded header string.""" - cset = self.get_output_charset() - # The len(s) of a 7bit encoding is len(s) - if self.header_encoding == BASE64: - return email.base64MIME.base64_len(s) + len(cset) + MISC_LEN - elif self.header_encoding == QP: - return email.quopriMIME.header_quopri_len(s) + len(cset) + MISC_LEN - elif self.header_encoding == SHORTEST: - lenb64 = email.base64MIME.base64_len(s) - lenqp = email.quopriMIME.header_quopri_len(s) - return min(lenb64, lenqp) + len(cset) + MISC_LEN - else: - return len(s) - - def header_encode(self, s, convert=False): - """Header-encode a string, optionally converting it to output_charset. - - If convert is True, the string will be converted from the input - charset to the output charset automatically. This is not useful for - multibyte character sets, which have line length issues (multibyte - characters must be split on a character, not a byte boundary); use the - high-level Header class to deal with these issues. convert defaults - to False. - - The type of encoding (base64 or quoted-printable) will be based on - self.header_encoding. 
- """ - cset = self.get_output_charset() - if convert: - s = self.convert(s) - # 7bit/8bit encodings return the string unchanged (modulo conversions) - if self.header_encoding == BASE64: - return email.base64MIME.header_encode(s, cset) - elif self.header_encoding == QP: - return email.quopriMIME.header_encode(s, cset, maxlinelen=None) - elif self.header_encoding == SHORTEST: - lenb64 = email.base64MIME.base64_len(s) - lenqp = email.quopriMIME.header_quopri_len(s) - if lenb64 < lenqp: - return email.base64MIME.header_encode(s, cset) - else: - return email.quopriMIME.header_encode(s, cset, maxlinelen=None) - else: - return s - - def body_encode(self, s, convert=True): - """Body-encode a string and convert it to output_charset. - - If convert is True (the default), the string will be converted from - the input charset to output charset automatically. Unlike - header_encode(), there are no issues with byte boundaries and - multibyte charsets in email bodies, so this is usually pretty safe. - - The type of encoding (base64 or quoted-printable) will be based on - self.body_encoding. - """ - if convert: - s = self.convert(s) - # 7bit/8bit encodings return the string unchanged (module conversions) - if self.body_encoding is BASE64: - return email.base64MIME.body_encode(s) - elif self.body_encoding is QP: - return email.quopriMIME.body_encode(s) - else: - return s Deleted: /sandbox/trunk/emailpkg/3.1/email/Encoders.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/Encoders.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,78 +0,0 @@ -# Copyright (C) 2001-2004 Python Software Foundation -# Author: Barry Warsaw -# Contact: email-sig at python.org - -"""Encodings and related functions.""" - -import base64 -from quopri import encodestring as _encodestring - -def _qencode(s): - enc = _encodestring(s, quotetabs=True) - # Must encode spaces, which quopri.encodestring() doesn't do - return enc.replace(' ', '=20') - - -def _bencode(s): - # We can't quite use base64.encodestring() since it tacks on a "courtesy - # newline". Blech! - if not s: - return s - hasnewline = (s[-1] == '\n') - value = base64.encodestring(s) - if not hasnewline and value[-1] == '\n': - return value[:-1] - return value - - - -def encode_base64(msg): - """Encode the message's payload in Base64. - - Also, add an appropriate Content-Transfer-Encoding header. - """ - orig = msg.get_payload() - encdata = _bencode(orig) - msg.set_payload(encdata) - msg['Content-Transfer-Encoding'] = 'base64' - - - -def encode_quopri(msg): - """Encode the message's payload in quoted-printable. - - Also, add an appropriate Content-Transfer-Encoding header. - """ - orig = msg.get_payload() - encdata = _qencode(orig) - msg.set_payload(encdata) - msg['Content-Transfer-Encoding'] = 'quoted-printable' - - - -def encode_7or8bit(msg): - """Set the Content-Transfer-Encoding header to 7bit or 8bit.""" - orig = msg.get_payload() - if orig is None: - # There's no payload. For backwards compatibility we use 7bit - msg['Content-Transfer-Encoding'] = '7bit' - return - # We play a trick to make this go fast. If encoding to ASCII succeeds, we - # know the data must be 7bit, otherwise treat it as 8bit. 
- try: - orig.encode('ascii') - except UnicodeError: - # iso-2022-* is non-ASCII but still 7-bit - charset = msg.get_charset() - output_cset = charset and charset.output_charset - if output_cset and output_cset.lower().startswith('iso-2202-'): - msg['Content-Transfer-Encoding'] = '7bit' - else: - msg['Content-Transfer-Encoding'] = '8bit' - else: - msg['Content-Transfer-Encoding'] = '7bit' - - - -def encode_noop(msg): - """Do nothing.""" Deleted: /sandbox/trunk/emailpkg/3.1/email/Errors.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/Errors.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,53 +0,0 @@ -# Copyright (C) 2001-2004 Python Software Foundation -# Author: Barry Warsaw -# Contact: email-sig at python.org - -"""email package exception classes.""" - - - -class MessageError(Exception): - """Base class for errors in the email package.""" - - -class MessageParseError(MessageError): - """Base class for message parsing errors.""" - - -class HeaderParseError(MessageParseError): - """Error while parsing headers.""" - - -class BoundaryError(MessageParseError): - """Couldn't find terminating boundary.""" - - -class MultipartConversionError(MessageError, TypeError): - """Conversion to a multipart is prohibited.""" - - - -# These are parsing defects which the parser was able to work around. -class MessageDefect: - """Base class for a message defect.""" - - def __init__(self, line=None): - self.line = line - -class NoBoundaryInMultipartDefect(MessageDefect): - """A message claimed to be a multipart but had no boundary parameter.""" - -class StartBoundaryNotFoundDefect(MessageDefect): - """The claimed start boundary was never found.""" - -class FirstHeaderLineIsContinuationDefect(MessageDefect): - """A message had a continuation line as its first header line.""" - -class MisplacedEnvelopeHeaderDefect(MessageDefect): - """A 'Unix-from' header was found in the middle of a header block.""" - -class MalformedHeaderDefect(MessageDefect): - """Found a header that was missing a colon, or was otherwise malformed.""" - -class MultipartInvariantViolationDefect(MessageDefect): - """A message claimed to be a multipart but no subparts were found.""" Deleted: /sandbox/trunk/emailpkg/3.1/email/FeedParser.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/FeedParser.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,477 +0,0 @@ -# Copyright (C) 2004-2006 Python Software Foundation -# Authors: Baxter, Wouters and Warsaw -# Contact: email-sig at python.org - -"""FeedParser - An email feed parser. - -The feed parser implements an interface for incrementally parsing an email -message, line by line. This has advantages for certain applications, such as -those reading email messages off a socket. - -FeedParser.feed() is the primary interface for pushing new data into the -parser. It returns when there's nothing more it can do with the available -data. When you have no more data to push into the parser, call .close(). -This completes the parsing and returns the root message object. - -The other advantage of this parser is that it will never throw a parsing -exception. Instead, when it finds something unexpected, it adds a 'defect' to -the current message. Defects are just instances that live on the message -object's .defects attribute. 
-""" - -import re -from email import Errors -from email import Message - -NLCRE = re.compile('\r\n|\r|\n') -NLCRE_bol = re.compile('(\r\n|\r|\n)') -NLCRE_eol = re.compile('(\r\n|\r|\n)$') -NLCRE_crack = re.compile('(\r\n|\r|\n)') -# RFC 2822 $3.6.8 Optional fields. ftext is %d33-57 / %d59-126, Any character -# except controls, SP, and ":". -headerRE = re.compile(r'^(From |[\041-\071\073-\176]{1,}:|[\t ])') -EMPTYSTRING = '' -NL = '\n' - -NeedMoreData = object() - - - -class BufferedSubFile(object): - """A file-ish object that can have new data loaded into it. - - You can also push and pop line-matching predicates onto a stack. When the - current predicate matches the current line, a false EOF response - (i.e. empty string) is returned instead. This lets the parser adhere to a - simple abstraction -- it parses until EOF closes the current message. - """ - def __init__(self): - # The last partial line pushed into this object. - self._partial = '' - # The list of full, pushed lines, in reverse order - self._lines = [] - # The stack of false-EOF checking predicates. - self._eofstack = [] - # A flag indicating whether the file has been closed or not. - self._closed = False - - def push_eof_matcher(self, pred): - self._eofstack.append(pred) - - def pop_eof_matcher(self): - return self._eofstack.pop() - - def close(self): - # Don't forget any trailing partial line. - self._lines.append(self._partial) - self._partial = '' - self._closed = True - - def readline(self): - if not self._lines: - if self._closed: - return '' - return NeedMoreData - # Pop the line off the stack and see if it matches the current - # false-EOF predicate. - line = self._lines.pop() - # RFC 2046, section 5.1.2 requires us to recognize outer level - # boundaries at any level of inner nesting. Do this, but be sure it's - # in the order of most to least nested. - for ateof in self._eofstack[::-1]: - if ateof(line): - # We're at the false EOF. But push the last line back first. - self._lines.append(line) - return '' - return line - - def unreadline(self, line): - # Let the consumer push a line back into the buffer. - assert line is not NeedMoreData - self._lines.append(line) - - def push(self, data): - """Push some new data into this object.""" - # Handle any previous leftovers - data, self._partial = self._partial + data, '' - # Crack into lines, but preserve the newlines on the end of each - parts = NLCRE_crack.split(data) - # The *ahem* interesting behaviour of re.split when supplied grouping - # parentheses is that the last element of the resulting list is the - # data after the final RE. In the case of a NL/CR terminated string, - # this is the empty string. - self._partial = parts.pop() - # parts is a list of strings, alternating between the line contents - # and the eol character(s). Gather up a list of lines after - # re-attaching the newlines. - lines = [] - for i in range(len(parts) // 2): - lines.append(parts[i*2] + parts[i*2+1]) - self.pushlines(lines) - - def pushlines(self, lines): - # Reverse and insert at the front of the lines. 
- self._lines[:0] = lines[::-1] - - def is_closed(self): - return self._closed - - def __iter__(self): - return self - - def next(self): - line = self.readline() - if line == '': - raise StopIteration - return line - - - -class FeedParser: - """A feed-style parser of email.""" - - def __init__(self, _factory=Message.Message): - """_factory is called with no arguments to create a new message obj""" - self._factory = _factory - self._input = BufferedSubFile() - self._msgstack = [] - self._parse = self._parsegen().next - self._cur = None - self._last = None - self._headersonly = False - - # Non-public interface for supporting Parser's headersonly flag - def _set_headersonly(self): - self._headersonly = True - - def feed(self, data): - """Push more data into the parser.""" - self._input.push(data) - self._call_parse() - - def _call_parse(self): - try: - self._parse() - except StopIteration: - pass - - def close(self): - """Parse all remaining data and return the root message object.""" - self._input.close() - self._call_parse() - root = self._pop_message() - assert not self._msgstack - # Look for final set of defects - if root.get_content_maintype() == 'multipart' \ - and not root.is_multipart(): - root.defects.append(Errors.MultipartInvariantViolationDefect()) - return root - - def _new_message(self): - msg = self._factory() - if self._cur and self._cur.get_content_type() == 'multipart/digest': - msg.set_default_type('message/rfc822') - if self._msgstack: - self._msgstack[-1].attach(msg) - self._msgstack.append(msg) - self._cur = msg - self._last = msg - - def _pop_message(self): - retval = self._msgstack.pop() - if self._msgstack: - self._cur = self._msgstack[-1] - else: - self._cur = None - return retval - - def _parsegen(self): - # Create a new message and start by parsing headers. - self._new_message() - headers = [] - # Collect the headers, searching for a line that doesn't match the RFC - # 2822 header or continuation pattern (including an empty line). - for line in self._input: - if line is NeedMoreData: - yield NeedMoreData - continue - if not headerRE.match(line): - # If we saw the RFC defined header/body separator - # (i.e. newline), just throw it away. Otherwise the line is - # part of the body so push it back. - if not NLCRE.match(line): - self._input.unreadline(line) - break - headers.append(line) - # Done with the headers, so parse them and figure out what we're - # supposed to see in the body of the message. - self._parse_headers(headers) - # Headers-only parsing is a backwards compatibility hack, which was - # necessary in the older parser, which could throw errors. All - # remaining lines in the input are thrown into the message body. - if self._headersonly: - lines = [] - while True: - line = self._input.readline() - if line is NeedMoreData: - yield NeedMoreData - continue - if line == '': - break - lines.append(line) - self._cur.set_payload(EMPTYSTRING.join(lines)) - return - if self._cur.get_content_type() == 'message/delivery-status': - # message/delivery-status contains blocks of headers separated by - # a blank line. We'll represent each header block as a separate - # nested message object, but the processing is a bit different - # than standard message/* types because there is no body for the - # nested messages. A blank line separates the subparts. 
- while True: - self._input.push_eof_matcher(NLCRE.match) - for retval in self._parsegen(): - if retval is NeedMoreData: - yield NeedMoreData - continue - break - msg = self._pop_message() - # We need to pop the EOF matcher in order to tell if we're at - # the end of the current file, not the end of the last block - # of message headers. - self._input.pop_eof_matcher() - # The input stream must be sitting at the newline or at the - # EOF. We want to see if we're at the end of this subpart, so - # first consume the blank line, then test the next line to see - # if we're at this subpart's EOF. - while True: - line = self._input.readline() - if line is NeedMoreData: - yield NeedMoreData - continue - break - while True: - line = self._input.readline() - if line is NeedMoreData: - yield NeedMoreData - continue - break - if line == '': - break - # Not at EOF so this is a line we're going to need. - self._input.unreadline(line) - return - if self._cur.get_content_maintype() == 'message': - # The message claims to be a message/* type, then what follows is - # another RFC 2822 message. - for retval in self._parsegen(): - if retval is NeedMoreData: - yield NeedMoreData - continue - break - self._pop_message() - return - if self._cur.get_content_maintype() == 'multipart': - boundary = self._cur.get_boundary() - if boundary is None: - # The message /claims/ to be a multipart but it has not - # defined a boundary. That's a problem which we'll handle by - # reading everything until the EOF and marking the message as - # defective. - self._cur.defects.append(Errors.NoBoundaryInMultipartDefect()) - lines = [] - for line in self._input: - if line is NeedMoreData: - yield NeedMoreData - continue - lines.append(line) - self._cur.set_payload(EMPTYSTRING.join(lines)) - return - # Create a line match predicate which matches the inter-part - # boundary as well as the end-of-multipart boundary. Don't push - # this onto the input stream until we've scanned past the - # preamble. - separator = '--' + boundary - boundaryre = re.compile( - '(?P' + re.escape(separator) + - r')(?P--)?(?P[ \t]*)(?P\r\n|\r|\n)?$') - capturing_preamble = True - preamble = [] - linesep = False - while True: - line = self._input.readline() - if line is NeedMoreData: - yield NeedMoreData - continue - if line == '': - break - mo = boundaryre.match(line) - if mo: - # If we're looking at the end boundary, we're done with - # this multipart. If there was a newline at the end of - # the closing boundary, then we need to initialize the - # epilogue with the empty string (see below). - if mo.group('end'): - linesep = mo.group('linesep') - break - # We saw an inter-part boundary. Were we in the preamble? - if capturing_preamble: - if preamble: - # According to RFC 2046, the last newline belongs - # to the boundary. - lastline = preamble[-1] - eolmo = NLCRE_eol.search(lastline) - if eolmo: - preamble[-1] = lastline[:-len(eolmo.group(0))] - self._cur.preamble = EMPTYSTRING.join(preamble) - capturing_preamble = False - self._input.unreadline(line) - continue - # We saw a boundary separating two parts. Consume any - # multiple boundary lines that may be following. Our - # interpretation of RFC 2046 BNF grammar does not produce - # body parts within such double boundaries. - while True: - line = self._input.readline() - if line is NeedMoreData: - yield NeedMoreData - continue - mo = boundaryre.match(line) - if not mo: - self._input.unreadline(line) - break - # Recurse to parse this subpart; the input stream points - # at the subpart's first line. 
- self._input.push_eof_matcher(boundaryre.match) - for retval in self._parsegen(): - if retval is NeedMoreData: - yield NeedMoreData - continue - break - # Because of RFC 2046, the newline preceding the boundary - # separator actually belongs to the boundary, not the - # previous subpart's payload (or epilogue if the previous - # part is a multipart). - if self._last.get_content_maintype() == 'multipart': - epilogue = self._last.epilogue - if epilogue == '': - self._last.epilogue = None - elif epilogue is not None: - mo = NLCRE_eol.search(epilogue) - if mo: - end = len(mo.group(0)) - self._last.epilogue = epilogue[:-end] - else: - payload = self._last.get_payload() - if isinstance(payload, basestring): - mo = NLCRE_eol.search(payload) - if mo: - payload = payload[:-len(mo.group(0))] - self._last.set_payload(payload) - self._input.pop_eof_matcher() - self._pop_message() - # Set the multipart up for newline cleansing, which will - # happen if we're in a nested multipart. - self._last = self._cur - else: - # I think we must be in the preamble - assert capturing_preamble - preamble.append(line) - # We've seen either the EOF or the end boundary. If we're still - # capturing the preamble, we never saw the start boundary. Note - # that as a defect and store the captured text as the payload. - # Everything from here to the EOF is epilogue. - if capturing_preamble: - self._cur.defects.append(Errors.StartBoundaryNotFoundDefect()) - self._cur.set_payload(EMPTYSTRING.join(preamble)) - epilogue = [] - for line in self._input: - if line is NeedMoreData: - yield NeedMoreData - continue - self._cur.epilogue = EMPTYSTRING.join(epilogue) - return - # If the end boundary ended in a newline, we'll need to make sure - # the epilogue isn't None - if linesep: - epilogue = [''] - else: - epilogue = [] - for line in self._input: - if line is NeedMoreData: - yield NeedMoreData - continue - epilogue.append(line) - # Any CRLF at the front of the epilogue is not technically part of - # the epilogue. Also, watch out for an empty string epilogue, - # which means a single newline. - if epilogue: - firstline = epilogue[0] - bolmo = NLCRE_bol.match(firstline) - if bolmo: - epilogue[0] = firstline[len(bolmo.group(0)):] - self._cur.epilogue = EMPTYSTRING.join(epilogue) - return - # Otherwise, it's some non-multipart type, so the entire rest of the - # file contents becomes the payload. - lines = [] - for line in self._input: - if line is NeedMoreData: - yield NeedMoreData - continue - lines.append(line) - self._cur.set_payload(EMPTYSTRING.join(lines)) - - def _parse_headers(self, lines): - # Passed a list of lines that make up the headers for the current msg - lastheader = '' - lastvalue = [] - for lineno, line in enumerate(lines): - # Check for continuation - if line[0] in ' \t': - if not lastheader: - # The first line of the headers was a continuation. This - # is illegal, so let's note the defect, store the illegal - # line, and ignore it for purposes of headers. - defect = Errors.FirstHeaderLineIsContinuationDefect(line) - self._cur.defects.append(defect) - continue - lastvalue.append(line) - continue - if lastheader: - # XXX reconsider the joining of folded lines - lhdr = EMPTYSTRING.join(lastvalue)[:-1].rstrip('\r\n') - self._cur[lastheader] = lhdr - lastheader, lastvalue = '', [] - # Check for envelope header, i.e. 
unix-from - if line.startswith('From '): - if lineno == 0: - # Strip off the trailing newline - mo = NLCRE_eol.search(line) - if mo: - line = line[:-len(mo.group(0))] - self._cur.set_unixfrom(line) - continue - elif lineno == len(lines) - 1: - # Something looking like a unix-from at the end - it's - # probably the first line of the body, so push back the - # line and stop. - self._input.unreadline(line) - return - else: - # Weirdly placed unix-from line. Note this as a defect - # and ignore it. - defect = Errors.MisplacedEnvelopeHeaderDefect(line) - self._cur.defects.append(defect) - continue - # Split the line on the colon separating field name from value. - i = line.find(':') - if i < 0: - defect = Errors.MalformedHeaderDefect(line) - self._cur.defects.append(defect) - continue - lastheader = line[:i] - lastvalue = [line[i+1:].lstrip()] - # Done with all the lines, so handle the last header. - if lastheader: - # XXX reconsider the joining of folded lines - self._cur[lastheader] = EMPTYSTRING.join(lastvalue).rstrip('\r\n') Deleted: /sandbox/trunk/emailpkg/3.1/email/Generator.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/Generator.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,352 +0,0 @@ -# Copyright (C) 2001-2006 Python Software Foundation -# Author: Barry Warsaw -# Contact: email-sig at python.org - -"""Classes to generate plain text from a message object tree.""" - -import re -import sys -import time -import random -import warnings -from cStringIO import StringIO - -from email.Header import Header - -UNDERSCORE = '_' -NL = '\n' - -fcre = re.compile(r'^From ', re.MULTILINE) - -def _is8bitstring(s): - if isinstance(s, str): - try: - unicode(s, 'us-ascii') - except UnicodeError: - return True - return False - - - -class Generator: - """Generates output from a Message object tree. - - This basic generator writes the message to the given file object as plain - text. - """ - # - # Public interface - # - - def __init__(self, outfp, mangle_from_=True, maxheaderlen=78): - """Create the generator for message flattening. - - outfp is the output file-like object for writing the message to. It - must have a write() method. - - Optional mangle_from_ is a flag that, when True (the default), escapes - From_ lines in the body of the message by putting a `>' in front of - them. - - Optional maxheaderlen specifies the longest length for a non-continued - header. When a header line is longer (in characters, with tabs - expanded to 8 spaces) than maxheaderlen, the header will split as - defined in the Header class. Set maxheaderlen to zero to disable - header wrapping. The default is 78, as recommended (but not required) - by RFC 2822. - """ - self._fp = outfp - self._mangle_from_ = mangle_from_ - self._maxheaderlen = maxheaderlen - - def write(self, s): - # Just delegate to the file object - self._fp.write(s) - - def flatten(self, msg, unixfrom=False): - """Print the message object tree rooted at msg to the output file - specified when the Generator instance was created. - - unixfrom is a flag that forces the printing of a Unix From_ delimiter - before the first object in the message tree. If the original message - has no From_ delimiter, a `standard' one is crafted. By default, this - is False to inhibit the printing of any From_ delimiter. - - Note that for subobjects, no From_ line is printed. 
- """ - if unixfrom: - ufrom = msg.get_unixfrom() - if not ufrom: - ufrom = 'From nobody ' + time.ctime(time.time()) - print >> self._fp, ufrom - self._write(msg) - - # For backwards compatibility, but this is slower - def __call__(self, msg, unixfrom=False): - warnings.warn('__call__() deprecated; use flatten()', - DeprecationWarning, 2) - self.flatten(msg, unixfrom) - - def clone(self, fp): - """Clone this generator with the exact same options.""" - return self.__class__(fp, self._mangle_from_, self._maxheaderlen) - - # - # Protected interface - undocumented ;/ - # - - def _write(self, msg): - # We can't write the headers yet because of the following scenario: - # say a multipart message includes the boundary string somewhere in - # its body. We'd have to calculate the new boundary /before/ we write - # the headers so that we can write the correct Content-Type: - # parameter. - # - # The way we do this, so as to make the _handle_*() methods simpler, - # is to cache any subpart writes into a StringIO. The we write the - # headers and the StringIO contents. That way, subpart handlers can - # Do The Right Thing, and can still modify the Content-Type: header if - # necessary. - oldfp = self._fp - try: - self._fp = sfp = StringIO() - self._dispatch(msg) - finally: - self._fp = oldfp - # Write the headers. First we see if the message object wants to - # handle that itself. If not, we'll do it generically. - meth = getattr(msg, '_write_headers', None) - if meth is None: - self._write_headers(msg) - else: - meth(self) - self._fp.write(sfp.getvalue()) - - def _dispatch(self, msg): - # Get the Content-Type: for the message, then try to dispatch to - # self._handle__(). If there's no handler for the - # full MIME type, then dispatch to self._handle_(). If - # that's missing too, then dispatch to self._writeBody(). - main = msg.get_content_maintype() - sub = msg.get_content_subtype() - specific = UNDERSCORE.join((main, sub)).replace('-', '_') - meth = getattr(self, '_handle_' + specific, None) - if meth is None: - generic = main.replace('-', '_') - meth = getattr(self, '_handle_' + generic, None) - if meth is None: - meth = self._writeBody - meth(msg) - - # - # Default handlers - # - - def _write_headers(self, msg): - for h, v in msg.items(): - print >> self._fp, '%s:' % h, - if self._maxheaderlen == 0: - # Explicit no-wrapping - print >> self._fp, v - elif isinstance(v, Header): - # Header instances know what to do - print >> self._fp, v.encode() - elif _is8bitstring(v): - # If we have raw 8bit data in a byte string, we have no idea - # what the encoding is. There is no safe way to split this - # string. If it's ascii-subset, then we could do a normal - # ascii split, but if it's multibyte then we could break the - # string. There's no way to know so the least harm seems to - # be to not split the string and risk it being too long. - print >> self._fp, v - else: - # Header's got lots of smarts, so use it. 
- print >> self._fp, Header( - v, maxlinelen=self._maxheaderlen, - header_name=h, continuation_ws='\t').encode() - # A blank line always separates headers from body - print >> self._fp - - # - # Handlers for writing types and subtypes - # - - def _handle_text(self, msg): - payload = msg.get_payload() - if payload is None: - return - if not isinstance(payload, basestring): - raise TypeError('string payload expected: %s' % type(payload)) - if self._mangle_from_: - payload = fcre.sub('>From ', payload) - self._fp.write(payload) - - # Default body handler - _writeBody = _handle_text - - def _handle_multipart(self, msg): - # The trick here is to write out each part separately, merge them all - # together, and then make sure that the boundary we've chosen isn't - # present in the payload. - msgtexts = [] - subparts = msg.get_payload() - if subparts is None: - subparts = [] - elif isinstance(subparts, basestring): - # e.g. a non-strict parse of a message with no starting boundary. - self._fp.write(subparts) - return - elif not isinstance(subparts, list): - # Scalar payload - subparts = [subparts] - for part in subparts: - s = StringIO() - g = self.clone(s) - g.flatten(part, unixfrom=False) - msgtexts.append(s.getvalue()) - # Now make sure the boundary we've selected doesn't appear in any of - # the message texts. - alltext = NL.join(msgtexts) - # BAW: What about boundaries that are wrapped in double-quotes? - boundary = msg.get_boundary(failobj=_make_boundary(alltext)) - # If we had to calculate a new boundary because the body text - # contained that string, set the new boundary. We don't do it - # unconditionally because, while set_boundary() preserves order, it - # doesn't preserve newlines/continuations in headers. This is no big - # deal in practice, but turns out to be inconvenient for the unittest - # suite. - if msg.get_boundary() <> boundary: - msg.set_boundary(boundary) - # If there's a preamble, write it out, with a trailing CRLF - if msg.preamble is not None: - print >> self._fp, msg.preamble - # dash-boundary transport-padding CRLF - print >> self._fp, '--' + boundary - # body-part - if msgtexts: - self._fp.write(msgtexts.pop(0)) - # *encapsulation - # --> delimiter transport-padding - # --> CRLF body-part - for body_part in msgtexts: - # delimiter transport-padding CRLF - print >> self._fp, '\n--' + boundary - # body-part - self._fp.write(body_part) - # close-delimiter transport-padding - self._fp.write('\n--' + boundary + '--') - if msg.epilogue is not None: - print >> self._fp - self._fp.write(msg.epilogue) - - def _handle_message_delivery_status(self, msg): - # We can't just write the headers directly to self's file object - # because this will leave an extra newline between the last header - # block and the boundary. Sigh. - blocks = [] - for part in msg.get_payload(): - s = StringIO() - g = self.clone(s) - g.flatten(part, unixfrom=False) - text = s.getvalue() - lines = text.split('\n') - # Strip off the unnecessary trailing empty line - if lines and lines[-1] == '': - blocks.append(NL.join(lines[:-1])) - else: - blocks.append(text) - # Now join all the blocks with an empty line. This has the lovely - # effect of separating each block with an empty line, but not adding - # an extra one after the last one. - self._fp.write(NL.join(blocks)) - - def _handle_message(self, msg): - s = StringIO() - g = self.clone(s) - # The payload of a message/rfc822 part should be a multipart sequence - # of length 1. The zeroth element of the list should be the Message - # object for the subpart. 
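
The boundary bookkeeping in _handle_multipart() above is easiest to see from the outside. A small sketch using the MIME convenience classes whose diffs appear further below; the part texts are invented.

    from email.MIMEText import MIMEText
    from email.MIMEMultipart import MIMEMultipart

    outer = MIMEMultipart('mixed')          # no boundary given up front
    outer.attach(MIMEText('first part\n'))
    outer.attach(MIMEText('second part\n'))
    flat = outer.as_string()
    # Flattening chose a boundary (via _make_boundary) that does not occur
    # in any of the subpart texts and recorded it on the Content-Type header.
    print outer.get_boundary()
    print flat
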
Extract that object, stringify it, and - # write it out. - g.flatten(msg.get_payload(0), unixfrom=False) - self._fp.write(s.getvalue()) - - - -_FMT = '[Non-text (%(type)s) part of message omitted, filename %(filename)s]' - -class DecodedGenerator(Generator): - """Generator a text representation of a message. - - Like the Generator base class, except that non-text parts are substituted - with a format string representing the part. - """ - def __init__(self, outfp, mangle_from_=True, maxheaderlen=78, fmt=None): - """Like Generator.__init__() except that an additional optional - argument is allowed. - - Walks through all subparts of a message. If the subpart is of main - type `text', then it prints the decoded payload of the subpart. - - Otherwise, fmt is a format string that is used instead of the message - payload. fmt is expanded with the following keywords (in - %(keyword)s format): - - type : Full MIME type of the non-text part - maintype : Main MIME type of the non-text part - subtype : Sub-MIME type of the non-text part - filename : Filename of the non-text part - description: Description associated with the non-text part - encoding : Content transfer encoding of the non-text part - - The default value for fmt is None, meaning - - [Non-text (%(type)s) part of message omitted, filename %(filename)s] - """ - Generator.__init__(self, outfp, mangle_from_, maxheaderlen) - if fmt is None: - self._fmt = _FMT - else: - self._fmt = fmt - - def _dispatch(self, msg): - for part in msg.walk(): - maintype = part.get_content_maintype() - if maintype == 'text': - print >> self, part.get_payload(decode=True) - elif maintype == 'multipart': - # Just skip this - pass - else: - print >> self, self._fmt % { - 'type' : part.get_content_type(), - 'maintype' : part.get_content_maintype(), - 'subtype' : part.get_content_subtype(), - 'filename' : part.get_filename('[no filename]'), - 'description': part.get('Content-Description', - '[no description]'), - 'encoding' : part.get('Content-Transfer-Encoding', - '[no encoding]'), - } - - - -# Helper -_width = len(repr(sys.maxint-1)) -_fmt = '%%0%dd' % _width - -def _make_boundary(text=None): - # Craft a random boundary. If text is given, ensure that the chosen - # boundary doesn't appear in the text. - token = random.randrange(sys.maxint) - boundary = ('=' * 15) + (_fmt % token) + '==' - if text is None: - return boundary - b = boundary - counter = 0 - while True: - cre = re.compile('^--' + re.escape(b) + '(--)?$', re.MULTILINE) - if not cre.search(text): - break - b = boundary + '.' + str(counter) - counter += 1 - return b Deleted: /sandbox/trunk/emailpkg/3.1/email/Header.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/Header.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,495 +0,0 @@ -# Copyright (C) 2002-2004 Python Software Foundation -# Author: Ben Gertzfield, Barry Warsaw -# Contact: email-sig at python.org - -"""Header encoding and decoding functionality.""" - -import re -import binascii - -import email.quopriMIME -import email.base64MIME -from email.Errors import HeaderParseError -from email.Charset import Charset - -NL = '\n' -SPACE = ' ' -USPACE = u' ' -SPACE8 = ' ' * 8 -UEMPTYSTRING = u'' - -MAXLINELEN = 76 - -USASCII = Charset('us-ascii') -UTF8 = Charset('utf-8') - -# Match encoded-word strings in the form =?charset?q?Hello_World?= -ecre = re.compile(r''' - =\? # literal =? - (?P[^?]*?) # non-greedy up to the next ? is the charset - \? # literal ? 
- (?P[qb]) # either a "q" or a "b", case insensitive - \? # literal ? - (?P.*?) # non-greedy up to the next ?= is the encoded string - \?= # literal ?= - ''', re.VERBOSE | re.IGNORECASE) - -# Field name regexp, including trailing colon, but not separating whitespace, -# according to RFC 2822. Character range is from tilde to exclamation mark. -# For use with .match() -fcre = re.compile(r'[\041-\176]+:$') - - - -# Helpers -_max_append = email.quopriMIME._max_append - - - -def decode_header(header): - """Decode a message header value without converting charset. - - Returns a list of (decoded_string, charset) pairs containing each of the - decoded parts of the header. Charset is None for non-encoded parts of the - header, otherwise a lower-case string containing the name of the character - set specified in the encoded string. - - An email.Errors.HeaderParseError may be raised when certain decoding error - occurs (e.g. a base64 decoding exception). - """ - # If no encoding, just return the header - header = str(header) - if not ecre.search(header): - return [(header, None)] - decoded = [] - dec = '' - for line in header.splitlines(): - # This line might not have an encoding in it - if not ecre.search(line): - decoded.append((line, None)) - continue - parts = ecre.split(line) - while parts: - unenc = parts.pop(0).strip() - if unenc: - # Should we continue a long line? - if decoded and decoded[-1][1] is None: - decoded[-1] = (decoded[-1][0] + SPACE + unenc, None) - else: - decoded.append((unenc, None)) - if parts: - charset, encoding = [s.lower() for s in parts[0:2]] - encoded = parts[2] - dec = None - if encoding == 'q': - dec = email.quopriMIME.header_decode(encoded) - elif encoding == 'b': - try: - dec = email.base64MIME.decode(encoded) - except binascii.Error: - # Turn this into a higher level exception. BAW: Right - # now we throw the lower level exception away but - # when/if we get exception chaining, we'll preserve it. - raise HeaderParseError - if dec is None: - dec = encoded - - if decoded and decoded[-1][1] == charset: - decoded[-1] = (decoded[-1][0] + dec, decoded[-1][1]) - else: - decoded.append((dec, charset)) - del parts[0:3] - return decoded - - - -def make_header(decoded_seq, maxlinelen=None, header_name=None, - continuation_ws=' '): - """Create a Header from a sequence of pairs as returned by decode_header() - - decode_header() takes a header value string and returns a sequence of - pairs of the format (decoded_string, charset) where charset is the string - name of the character set. - - This function takes one of those sequence of pairs and returns a Header - instance. Optional maxlinelen, header_name, and continuation_ws are as in - the Header constructor. - """ - h = Header(maxlinelen=maxlinelen, header_name=header_name, - continuation_ws=continuation_ws) - for s, charset in decoded_seq: - # None means us-ascii but we can simply pass it on to h.append() - if charset is not None and not isinstance(charset, Charset): - charset = Charset(charset) - h.append(s, charset) - return h - - - -class Header: - def __init__(self, s=None, charset=None, - maxlinelen=None, header_name=None, - continuation_ws=' ', errors='strict'): - """Create a MIME-compliant header that can contain many character sets. - - Optional s is the initial header value. If None, the initial header - value is not set. You can later append to the header with .append() - method calls. s may be a byte string or a Unicode string, but see the - .append() documentation for semantics. 
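
A short round trip through decode_header() and make_header() as documented above; the encoded-word sample is invented, and the exact folding of the re-encoded form depends on the Header defaults.

    from email.Header import decode_header, make_header

    raw = '=?iso-8859-1?q?Mar=EDa?= <maria@example.com>'
    parts = decode_header(raw)
    # roughly [('Mar\xeda', 'iso-8859-1'), ('<maria@example.com>', None)]
    print parts
    h = make_header(parts)
    print repr(unicode(h))    # the decoded text as a single unicode string
    print h.encode()          # re-folded, RFC 2047 encoded header value
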
- - Optional charset serves two purposes: it has the same meaning as the - charset argument to the .append() method. It also sets the default - character set for all subsequent .append() calls that omit the charset - argument. If charset is not provided in the constructor, the us-ascii - charset is used both as s's initial charset and as the default for - subsequent .append() calls. - - The maximum line length can be specified explicit via maxlinelen. For - splitting the first line to a shorter value (to account for the field - header which isn't included in s, e.g. `Subject') pass in the name of - the field in header_name. The default maxlinelen is 76. - - continuation_ws must be RFC 2822 compliant folding whitespace (usually - either a space or a hard tab) which will be prepended to continuation - lines. - - errors is passed through to the .append() call. - """ - if charset is None: - charset = USASCII - if not isinstance(charset, Charset): - charset = Charset(charset) - self._charset = charset - self._continuation_ws = continuation_ws - cws_expanded_len = len(continuation_ws.replace('\t', SPACE8)) - # BAW: I believe `chunks' and `maxlinelen' should be non-public. - self._chunks = [] - if s is not None: - self.append(s, charset, errors) - if maxlinelen is None: - maxlinelen = MAXLINELEN - if header_name is None: - # We don't know anything about the field header so the first line - # is the same length as subsequent lines. - self._firstlinelen = maxlinelen - else: - # The first line should be shorter to take into account the field - # header. Also subtract off 2 extra for the colon and space. - self._firstlinelen = maxlinelen - len(header_name) - 2 - # Second and subsequent lines should subtract off the length in - # columns of the continuation whitespace prefix. - self._maxlinelen = maxlinelen - cws_expanded_len - - def __str__(self): - """A synonym for self.encode().""" - return self.encode() - - def __unicode__(self): - """Helper for the built-in unicode function.""" - uchunks = [] - lastcs = None - for s, charset in self._chunks: - # We must preserve spaces between encoded and non-encoded word - # boundaries, which means for us we need to add a space when we go - # from a charset to None/us-ascii, or from None/us-ascii to a - # charset. Only do this for the second and subsequent chunks. - nextcs = charset - if uchunks: - if lastcs not in (None, 'us-ascii'): - if nextcs in (None, 'us-ascii'): - uchunks.append(USPACE) - nextcs = None - elif nextcs not in (None, 'us-ascii'): - uchunks.append(USPACE) - lastcs = nextcs - uchunks.append(unicode(s, str(charset))) - return UEMPTYSTRING.join(uchunks) - - # Rich comparison operators for equality only. BAW: does it make sense to - # have or explicitly disable <, <=, >, >= operators? - def __eq__(self, other): - # other may be a Header or a string. Both are fine so coerce - # ourselves to a string, swap the args and do another comparison. - return other == self.encode() - - def __ne__(self, other): - return not self == other - - def append(self, s, charset=None, errors='strict'): - """Append a string to the MIME header. - - Optional charset, if given, should be a Charset instance or the name - of a character set (which will be converted to a Charset instance). A - value of None (the default) means that the charset given in the - constructor is used. - - s may be a byte string or a Unicode string. If it is a byte string - (i.e. 
isinstance(s, str) is true), then charset is the encoding of - that byte string, and a UnicodeError will be raised if the string - cannot be decoded with that charset. If s is a Unicode string, then - charset is a hint specifying the character set of the characters in - the string. In this case, when producing an RFC 2822 compliant header - using RFC 2047 rules, the Unicode string will be encoded using the - following charsets in order: us-ascii, the charset hint, utf-8. The - first character set not to provoke a UnicodeError is used. - - Optional `errors' is passed as the third argument to any unicode() or - ustr.encode() call. - """ - if charset is None: - charset = self._charset - elif not isinstance(charset, Charset): - charset = Charset(charset) - # If the charset is our faux 8bit charset, leave the string unchanged - if charset <> '8bit': - # We need to test that the string can be converted to unicode and - # back to a byte string, given the input and output codecs of the - # charset. - if isinstance(s, str): - # Possibly raise UnicodeError if the byte string can't be - # converted to a unicode with the input codec of the charset. - incodec = charset.input_codec or 'us-ascii' - ustr = unicode(s, incodec, errors) - # Now make sure that the unicode could be converted back to a - # byte string with the output codec, which may be different - # than the iput coded. Still, use the original byte string. - outcodec = charset.output_codec or 'us-ascii' - ustr.encode(outcodec, errors) - elif isinstance(s, unicode): - # Now we have to be sure the unicode string can be converted - # to a byte string with a reasonable output codec. We want to - # use the byte string in the chunk. - for charset in USASCII, charset, UTF8: - try: - outcodec = charset.output_codec or 'us-ascii' - s = s.encode(outcodec, errors) - break - except UnicodeError: - pass - else: - assert False, 'utf-8 conversion failed' - self._chunks.append((s, charset)) - - def _split(self, s, charset, maxlinelen, splitchars): - # Split up a header safely for use with encode_chunks. - splittable = charset.to_splittable(s) - encoded = charset.from_splittable(splittable, True) - elen = charset.encoded_header_len(encoded) - # If the line's encoded length first, just return it - if elen <= maxlinelen: - return [(encoded, charset)] - # If we have undetermined raw 8bit characters sitting in a byte - # string, we really don't know what the right thing to do is. We - # can't really split it because it might be multibyte data which we - # could break if we split it between pairs. The least harm seems to - # be to not split the header at all, but that means they could go out - # longer than maxlinelen. - if charset == '8bit': - return [(s, charset)] - # BAW: I'm not sure what the right test here is. What we're trying to - # do is be faithful to RFC 2822's recommendation that ($2.2.3): - # - # "Note: Though structured field bodies are defined in such a way that - # folding can take place between many of the lexical tokens (and even - # within some of the lexical tokens), folding SHOULD be limited to - # placing the CRLF at higher-level syntactic breaks." - # - # For now, I can only imagine doing this when the charset is us-ascii, - # although it's possible that other charsets may also benefit from the - # higher-level syntactic breaks. - elif charset == 'us-ascii': - return self._split_ascii(s, charset, maxlinelen, splitchars) - # BAW: should we use encoded? 
- elif elen == len(s): - # We can split on _maxlinelen boundaries because we know that the - # encoding won't change the size of the string - splitpnt = maxlinelen - first = charset.from_splittable(splittable[:splitpnt], False) - last = charset.from_splittable(splittable[splitpnt:], False) - else: - # Binary search for split point - first, last = _binsplit(splittable, charset, maxlinelen) - # first is of the proper length so just wrap it in the appropriate - # chrome. last must be recursively split. - fsplittable = charset.to_splittable(first) - fencoded = charset.from_splittable(fsplittable, True) - chunk = [(fencoded, charset)] - return chunk + self._split(last, charset, self._maxlinelen, splitchars) - - def _split_ascii(self, s, charset, firstlen, splitchars): - chunks = _split_ascii(s, firstlen, self._maxlinelen, - self._continuation_ws, splitchars) - return zip(chunks, [charset]*len(chunks)) - - def _encode_chunks(self, newchunks, maxlinelen): - # MIME-encode a header with many different charsets and/or encodings. - # - # Given a list of pairs (string, charset), return a MIME-encoded - # string suitable for use in a header field. Each pair may have - # different charsets and/or encodings, and the resulting header will - # accurately reflect each setting. - # - # Each encoding can be email.Utils.QP (quoted-printable, for - # ASCII-like character sets like iso-8859-1), email.Utils.BASE64 - # (Base64, for non-ASCII like character sets like KOI8-R and - # iso-2022-jp), or None (no encoding). - # - # Each pair will be represented on a separate line; the resulting - # string will be in the format: - # - # =?charset1?q?Mar=EDa_Gonz=E1lez_Alonso?=\n - # =?charset2?b?SvxyZ2VuIEL2aW5n?=" - chunks = [] - for header, charset in newchunks: - if not header: - continue - if charset is None or charset.header_encoding is None: - s = header - else: - s = charset.header_encode(header) - # Don't add more folding whitespace than necessary - if chunks and chunks[-1].endswith(' '): - extra = '' - else: - extra = ' ' - _max_append(chunks, s, maxlinelen, extra) - joiner = NL + self._continuation_ws - return joiner.join(chunks) - - def encode(self, splitchars=';, '): - """Encode a message header into an RFC-compliant format. - - There are many issues involved in converting a given string for use in - an email header. Only certain character sets are readable in most - email clients, and as header strings can only contain a subset of - 7-bit ASCII, care must be taken to properly convert and encode (with - Base64 or quoted-printable) header strings. In addition, there is a - 75-character length limit on any given encoded header field, so - line-wrapping must be performed, even with double-byte character sets. - - This method will do its best to convert the string to the correct - character set used in email, and encode and line wrap it safely with - the appropriate scheme for that character set. - - If the given charset is not known or an error occurs during - conversion, this function will return the header untouched. - - Optional splitchars is a string containing characters to split long - ASCII lines on, in rough support of RFC 2822's `highest level - syntactic breaks'. This doesn't affect RFC 2047 encoded lines. - """ - newchunks = [] - maxlinelen = self._firstlinelen - lastlen = 0 - for s, charset in self._chunks: - # The first bit of the next chunk should be just long enough to - # fill the next line. Don't forget the space separating the - # encoded words. 
- targetlen = maxlinelen - lastlen - 1 - if targetlen < charset.encoded_header_len(''): - # Stick it on the next line - targetlen = maxlinelen - newchunks += self._split(s, charset, targetlen, splitchars) - lastchunk, lastcharset = newchunks[-1] - lastlen = lastcharset.encoded_header_len(lastchunk) - return self._encode_chunks(newchunks, maxlinelen) - - - -def _split_ascii(s, firstlen, restlen, continuation_ws, splitchars): - lines = [] - maxlen = firstlen - for line in s.splitlines(): - # Ignore any leading whitespace (i.e. continuation whitespace) already - # on the line, since we'll be adding our own. - line = line.lstrip() - if len(line) < maxlen: - lines.append(line) - maxlen = restlen - continue - # Attempt to split the line at the highest-level syntactic break - # possible. Note that we don't have a lot of smarts about field - # syntax; we just try to break on semi-colons, then commas, then - # whitespace. - for ch in splitchars: - if ch in line: - break - else: - # There's nothing useful to split the line on, not even spaces, so - # just append this line unchanged - lines.append(line) - maxlen = restlen - continue - # Now split the line on the character plus trailing whitespace - cre = re.compile(r'%s\s*' % ch) - if ch in ';,': - eol = ch - else: - eol = '' - joiner = eol + ' ' - joinlen = len(joiner) - wslen = len(continuation_ws.replace('\t', SPACE8)) - this = [] - linelen = 0 - for part in cre.split(line): - curlen = linelen + max(0, len(this)-1) * joinlen - partlen = len(part) - onfirstline = not lines - # We don't want to split after the field name, if we're on the - # first line and the field name is present in the header string. - if ch == ' ' and onfirstline and \ - len(this) == 1 and fcre.match(this[0]): - this.append(part) - linelen += partlen - elif curlen + partlen > maxlen: - if this: - lines.append(joiner.join(this) + eol) - # If this part is longer than maxlen and we aren't already - # splitting on whitespace, try to recursively split this line - # on whitespace. - if partlen > maxlen and ch <> ' ': - subl = _split_ascii(part, maxlen, restlen, - continuation_ws, ' ') - lines.extend(subl[:-1]) - this = [subl[-1]] - else: - this = [part] - linelen = wslen + len(this[-1]) - maxlen = restlen - else: - this.append(part) - linelen += partlen - # Put any left over parts on a line by themselves - if this: - lines.append(joiner.join(this)) - return lines - - - -def _binsplit(splittable, charset, maxlinelen): - i = 0 - j = len(splittable) - while i < j: - # Invariants: - # 1. splittable[:k] fits for all k <= i (note that we *assume*, - # at the start, that splittable[:0] fits). - # 2. splittable[:k] does not fit for any k > j (at the start, - # this means we shouldn't look at any k > len(splittable)). - # 3. We don't know about splittable[:k] for k in i+1..j. - # 4. We want to set i to the largest k that fits, with i <= k <= j. - # - m = (i+j+1) >> 1 # ceiling((i+j)/2); i < m <= j - chunk = charset.from_splittable(splittable[:m], True) - chunklen = charset.encoded_header_len(chunk) - if chunklen <= maxlinelen: - # m is acceptable, so is a new lower bound. - i = m - else: - # m is not acceptable, so final i must be < m. - j = m - 1 - # i == j. Invariant #1 implies that splittable[:i] fits, and - # invariant #2 implies that splittable[:i+1] does not fit, so i - # is what we're looking for. 
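
To see the splitting machinery above at work, a quick folding sketch; the subject text is invented and the exact fold points depend on the splitchars logic just shown.

    from email.Header import Header

    h = Header('a rather long subject line ' * 6, header_name='Subject')
    for line in h.encode().splitlines():
        # Continuation lines are joined with NL plus continuation_ws, keeping
        # each physical line within the default maximum of 76 characters.
        print len(line), repr(line)
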
- first = charset.from_splittable(splittable[:i], False) - last = charset.from_splittable(splittable[i:], False) - return first, last Deleted: /sandbox/trunk/emailpkg/3.1/email/Iterators.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/Iterators.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,67 +0,0 @@ -# Copyright (C) 2001-2004 Python Software Foundation -# Author: Barry Warsaw -# Contact: email-sig at python.org - -"""Various types of useful iterators and generators.""" - -import sys -from cStringIO import StringIO - - - -# This function will become a method of the Message class -def walk(self): - """Walk over the message tree, yielding each subpart. - - The walk is performed in depth-first order. This method is a - generator. - """ - yield self - if self.is_multipart(): - for subpart in self.get_payload(): - for subsubpart in subpart.walk(): - yield subsubpart - - - -# These two functions are imported into the Iterators.py interface module. -# The Python 2.2 version uses generators for efficiency. -def body_line_iterator(msg, decode=False): - """Iterate over the parts, returning string payloads line-by-line. - - Optional decode (default False) is passed through to .get_payload(). - """ - for subpart in msg.walk(): - payload = subpart.get_payload(decode=decode) - if isinstance(payload, basestring): - for line in StringIO(payload): - yield line - - -def typed_subpart_iterator(msg, maintype='text', subtype=None): - """Iterate over the subparts with a given MIME type. - - Use `maintype' as the main MIME type to match against; this defaults to - "text". Optional `subtype' is the MIME subtype to match against; if - omitted, only the main type is matched. - """ - for subpart in msg.walk(): - if subpart.get_content_maintype() == maintype: - if subtype is None or subpart.get_content_subtype() == subtype: - yield subpart - - - -def _structure(msg, fp=None, level=0, include_default=False): - """A handy debugging aid""" - if fp is None: - fp = sys.stdout - tab = ' ' * (level * 4) - print >> fp, tab + msg.get_content_type(), - if include_default: - print >> fp, '[%s]' % msg.get_default_type() - else: - print >> fp - if msg.is_multipart(): - for subpart in msg.get_payload(): - _structure(subpart, fp, level+1, include_default) Deleted: /sandbox/trunk/emailpkg/3.1/email/MIMEAudio.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/MIMEAudio.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,72 +0,0 @@ -# Copyright (C) 2001-2004 Python Software Foundation -# Author: Anthony Baxter -# Contact: email-sig at python.org - -"""Class representing audio/* type MIME documents.""" - -import sndhdr -from cStringIO import StringIO - -from email import Errors -from email import Encoders -from email.MIMENonMultipart import MIMENonMultipart - - - -_sndhdr_MIMEmap = {'au' : 'basic', - 'wav' :'x-wav', - 'aiff':'x-aiff', - 'aifc':'x-aiff', - } - -# There are others in sndhdr that don't have MIME types. :( -# Additional ones to be added to sndhdr? midi, mp3, realaudio, wma?? -def _whatsnd(data): - """Try to identify a sound file type. - - sndhdr.what() has a pretty cruddy interface, unfortunately. This is why - we re-do it here. It would be easier to reverse engineer the Unix 'file' - command and use the standard 'magic' file, as shipped with a modern Unix. 
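
The iterator helpers above are typically used like this; the two-part message is invented for illustration.

    from email.MIMEText import MIMEText
    from email.MIMEMultipart import MIMEMultipart
    from email.Iterators import typed_subpart_iterator, body_line_iterator

    msg = MIMEMultipart('alternative')
    msg.attach(MIMEText('plain text body\n'))
    msg.attach(MIMEText('<p>html body</p>\n', 'html'))

    for part in typed_subpart_iterator(msg, 'text', 'html'):
        print part.get_content_type()          # only the text/html subpart

    for line in body_line_iterator(msg):
        print repr(line)                       # payload lines of all text parts
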
- """ - hdr = data[:512] - fakefile = StringIO(hdr) - for testfn in sndhdr.tests: - res = testfn(hdr, fakefile) - if res is not None: - return _sndhdr_MIMEmap.get(res[0]) - return None - - - -class MIMEAudio(MIMENonMultipart): - """Class for generating audio/* MIME documents.""" - - def __init__(self, _audiodata, _subtype=None, - _encoder=Encoders.encode_base64, **_params): - """Create an audio/* type MIME document. - - _audiodata is a string containing the raw audio data. If this data - can be decoded by the standard Python `sndhdr' module, then the - subtype will be automatically included in the Content-Type header. - Otherwise, you can specify the specific audio subtype via the - _subtype parameter. If _subtype is not given, and no subtype can be - guessed, a TypeError is raised. - - _encoder is a function which will perform the actual encoding for - transport of the image data. It takes one argument, which is this - Image instance. It should use get_payload() and set_payload() to - change the payload to the encoded form. It should also add any - Content-Transfer-Encoding or other headers to the message as - necessary. The default encoding is Base64. - - Any additional keyword arguments are passed to the base class - constructor, which turns them into parameters on the Content-Type - header. - """ - if _subtype is None: - _subtype = _whatsnd(_audiodata) - if _subtype is None: - raise TypeError('Could not find audio MIME subtype') - MIMENonMultipart.__init__(self, 'audio', _subtype, **_params) - self.set_payload(_audiodata) - _encoder(self) Deleted: /sandbox/trunk/emailpkg/3.1/email/MIMEBase.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/MIMEBase.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,24 +0,0 @@ -# Copyright (C) 2001-2004 Python Software Foundation -# Author: Barry Warsaw -# Contact: email-sig at python.org - -"""Base class for MIME specializations.""" - -from email import Message - - - -class MIMEBase(Message.Message): - """Base class for MIME specializations.""" - - def __init__(self, _maintype, _subtype, **_params): - """This constructor adds a Content-Type: and a MIME-Version: header. - - The Content-Type: header is taken from the _maintype and _subtype - arguments. Additional parameters for this header are taken from the - keyword arguments. - """ - Message.Message.__init__(self) - ctype = '%s/%s' % (_maintype, _subtype) - self.add_header('Content-Type', ctype, **_params) - self['MIME-Version'] = '1.0' Deleted: /sandbox/trunk/emailpkg/3.1/email/MIMEImage.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/MIMEImage.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,45 +0,0 @@ -# Copyright (C) 2001-2004 Python Software Foundation -# Author: Barry Warsaw -# Contact: email-sig at python.org - -"""Class representing image/* type MIME documents.""" - -import imghdr - -from email import Errors -from email import Encoders -from email.MIMENonMultipart import MIMENonMultipart - - - -class MIMEImage(MIMENonMultipart): - """Class for generating image/* type MIME documents.""" - - def __init__(self, _imagedata, _subtype=None, - _encoder=Encoders.encode_base64, **_params): - """Create an image/* type MIME document. - - _imagedata is a string containing the raw image data. If this data - can be decoded by the standard Python `imghdr' module, then the - subtype will be automatically included in the Content-Type header. 
- Otherwise, you can specify the specific image subtype via the _subtype - parameter. - - _encoder is a function which will perform the actual encoding for - transport of the image data. It takes one argument, which is this - Image instance. It should use get_payload() and set_payload() to - change the payload to the encoded form. It should also add any - Content-Transfer-Encoding or other headers to the message as - necessary. The default encoding is Base64. - - Any additional keyword arguments are passed to the base class - constructor, which turns them into parameters on the Content-Type - header. - """ - if _subtype is None: - _subtype = imghdr.what(None, _imagedata) - if _subtype is None: - raise TypeError('Could not guess image MIME subtype') - MIMENonMultipart.__init__(self, 'image', _subtype, **_params) - self.set_payload(_imagedata) - _encoder(self) Deleted: /sandbox/trunk/emailpkg/3.1/email/MIMEMessage.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/MIMEMessage.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,32 +0,0 @@ -# Copyright (C) 2001-2004 Python Software Foundation -# Author: Barry Warsaw -# Contact: email-sig at python.org - -"""Class representing message/* MIME documents.""" - -from email import Message -from email.MIMENonMultipart import MIMENonMultipart - - - -class MIMEMessage(MIMENonMultipart): - """Class representing message/* MIME documents.""" - - def __init__(self, _msg, _subtype='rfc822'): - """Create a message/* type MIME document. - - _msg is a message object and must be an instance of Message, or a - derived class of Message, otherwise a TypeError is raised. - - Optional _subtype defines the subtype of the contained message. The - default is "rfc822" (this is defined by the MIME standard, even though - the term "rfc822" is technically outdated by RFC 2822). - """ - MIMENonMultipart.__init__(self, 'message', _subtype) - if not isinstance(_msg, Message.Message): - raise TypeError('Argument is not an instance of Message') - # It's convenient to use this base class method. We need to do it - # this way or we'll get an exception - Message.Message.attach(self, _msg) - # And be sure our default type is set correctly - self.set_default_type('message/rfc822') Deleted: /sandbox/trunk/emailpkg/3.1/email/MIMEMultipart.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/MIMEMultipart.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,39 +0,0 @@ -# Copyright (C) 2002-2004 Python Software Foundation -# Author: Barry Warsaw -# Contact: email-sig at python.org - -"""Base class for MIME multipart/* type messages.""" - -from email import MIMEBase - - - -class MIMEMultipart(MIMEBase.MIMEBase): - """Base class for MIME multipart/* type messages.""" - - def __init__(self, _subtype='mixed', boundary=None, _subparts=None, - **_params): - """Creates a multipart/* type message. - - By default, creates a multipart/mixed message, with proper - Content-Type and MIME-Version headers. - - _subtype is the subtype of the multipart content type, defaulting to - `mixed'. - - boundary is the multipart boundary string. By default it is - calculated as needed. - - _subparts is a sequence of initial subparts for the payload. It - must be an iterable object, such as a list. You can always - attach new subparts to the message by using the attach() method. 
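
Putting the container classes together; the forwarded note and cover text are invented for illustration.

    from email.Message import Message
    from email.MIMEText import MIMEText
    from email.MIMEMessage import MIMEMessage
    from email.MIMEMultipart import MIMEMultipart

    forwarded = Message()
    forwarded['Subject'] = 'original note'
    forwarded.set_payload('original body\n')

    outer = MIMEMultipart('mixed', _subparts=[MIMEText('see attached\n')])
    outer.attach(MIMEMessage(forwarded))        # wrapped as message/rfc822
    print outer.get_content_type()              # multipart/mixed
    print outer.get_payload(1).get_content_type()   # message/rfc822
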
- - Additional parameters for the Content-Type header are taken from the - keyword arguments (or passed into the _params argument). - """ - MIMEBase.MIMEBase.__init__(self, 'multipart', _subtype, **_params) - if _subparts: - for p in _subparts: - self.attach(p) - if boundary: - self.set_boundary(boundary) Deleted: /sandbox/trunk/emailpkg/3.1/email/MIMENonMultipart.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/MIMENonMultipart.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,24 +0,0 @@ -# Copyright (C) 2002-2004 Python Software Foundation -# Author: Barry Warsaw -# Contact: email-sig at python.org - -"""Base class for MIME type messages that are not multipart.""" - -from email import Errors -from email import MIMEBase - - - -class MIMENonMultipart(MIMEBase.MIMEBase): - """Base class for MIME multipart/* type messages.""" - - __pychecker__ = 'unusednames=payload' - - def attach(self, payload): - # The public API prohibits attaching multiple subparts to MIMEBase - # derived subtypes since none of them are, by definition, of content - # type multipart/* - raise Errors.MultipartConversionError( - 'Cannot attach additional subparts to non-multipart/*') - - del __pychecker__ Deleted: /sandbox/trunk/emailpkg/3.1/email/MIMEText.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/MIMEText.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,28 +0,0 @@ -# Copyright (C) 2001-2004 Python Software Foundation -# Author: Barry Warsaw -# Contact: email-sig at python.org - -"""Class representing text/* type MIME documents.""" - -from email.MIMENonMultipart import MIMENonMultipart -from email.Encoders import encode_7or8bit - - - -class MIMEText(MIMENonMultipart): - """Class for generating text/* type MIME documents.""" - - def __init__(self, _text, _subtype='plain', _charset='us-ascii'): - """Create a text/* type MIME document. - - _text is the string for this message object. - - _subtype is the MIME sub content type, defaulting to "plain". - - _charset is the character set parameter added to the Content-Type - header. This defaults to "us-ascii". Note that as a side-effect, the - Content-Transfer-Encoding header will also be set. - """ - MIMENonMultipart.__init__(self, 'text', _subtype, - **{'charset': _charset}) - self.set_payload(_text, _charset) Deleted: /sandbox/trunk/emailpkg/3.1/email/Message.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/Message.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,814 +0,0 @@ -# Copyright (C) 2001-2006 Python Software Foundation -# Author: Barry Warsaw -# Contact: email-sig at python.org - -"""Basic message object for the email package object model.""" - -import re -import uu -import binascii -import warnings -from cStringIO import StringIO - -# Intrapackage imports -from email import Utils -from email import Errors -from email import Charset - -SEMISPACE = '; ' - -# Regular expression used to split header parameters. BAW: this may be too -# simple. It isn't strictly RFC 2045 (section 5.1) compliant, but it catches -# most headers found in the wild. We may eventually need a full fledged -# parser eventually. -paramre = re.compile(r'\s*;\s*') -# Regular expression that matches `special' characters in parameters, the -# existance of which force quoting of the parameter value. 
-tspecials = re.compile(r'[ \(\)<>@,;:\\"/\[\]\?=]') - - - -# Helper functions -def _formatparam(param, value=None, quote=True): - """Convenience function to format and return a key=value pair. - - This will quote the value if needed or if quote is true. - """ - if value is not None and len(value) > 0: - # A tuple is used for RFC 2231 encoded parameter values where items - # are (charset, language, value). charset is a string, not a Charset - # instance. - if isinstance(value, tuple): - # Encode as per RFC 2231 - param += '*' - value = Utils.encode_rfc2231(value[2], value[0], value[1]) - # BAW: Please check this. I think that if quote is set it should - # force quoting even if not necessary. - if quote or tspecials.search(value): - return '%s="%s"' % (param, Utils.quote(value)) - else: - return '%s=%s' % (param, value) - else: - return param - -def _parseparam(s): - plist = [] - while s[:1] == ';': - s = s[1:] - end = s.find(';') - while end > 0 and s.count('"', 0, end) % 2: - end = s.find(';', end + 1) - if end < 0: - end = len(s) - f = s[:end] - if '=' in f: - i = f.index('=') - f = f[:i].strip().lower() + '=' + f[i+1:].strip() - plist.append(f.strip()) - s = s[end:] - return plist - - -def _unquotevalue(value): - # This is different than Utils.collapse_rfc2231_value() because it doesn't - # try to convert the value to a unicode. Message.get_param() and - # Message.get_params() are both currently defined to return the tuple in - # the face of RFC 2231 parameters. - if isinstance(value, tuple): - return value[0], value[1], Utils.unquote(value[2]) - else: - return Utils.unquote(value) - - - -class Message: - """Basic message object. - - A message object is defined as something that has a bunch of RFC 2822 - headers and a payload. It may optionally have an envelope header - (a.k.a. Unix-From or From_ header). If the message is a container (i.e. a - multipart or a message/rfc822), then the payload is a list of Message - objects, otherwise it is a string. - - Message objects implement part of the `mapping' interface, which assumes - there is exactly one occurrance of the header per message. Some headers - do in fact appear multiple times (e.g. Received) and for those headers, - you must use the explicit API to set or get all the headers. Not all of - the mapping methods are implemented. - """ - def __init__(self): - self._headers = [] - self._unixfrom = None - self._payload = None - self._charset = None - # Defaults for multipart messages - self.preamble = self.epilogue = None - self.defects = [] - # Default content type - self._default_type = 'text/plain' - - def __str__(self): - """Return the entire formatted message as a string. - This includes the headers, body, and envelope header. - """ - return self.as_string(unixfrom=True) - - def as_string(self, unixfrom=False): - """Return the entire formatted message as a string. - Optional `unixfrom' when True, means include the Unix From_ envelope - header. - - This is a convenience method and may not generate the message exactly - as you intend because by default it mangles lines that begin with - "From ". For more flexibility, use the flatten() method of a - Generator instance. 
- """ - from email.Generator import Generator - fp = StringIO() - g = Generator(fp) - g.flatten(self, unixfrom=unixfrom) - return fp.getvalue() - - def is_multipart(self): - """Return True if the message consists of multiple parts.""" - return isinstance(self._payload, list) - - # - # Unix From_ line - # - def set_unixfrom(self, unixfrom): - self._unixfrom = unixfrom - - def get_unixfrom(self): - return self._unixfrom - - # - # Payload manipulation. - # - def attach(self, payload): - """Add the given payload to the current payload. - - The current payload will always be a list of objects after this method - is called. If you want to set the payload to a scalar object, use - set_payload() instead. - """ - if self._payload is None: - self._payload = [payload] - else: - self._payload.append(payload) - - def get_payload(self, i=None, decode=False): - """Return a reference to the payload. - - The payload will either be a list object or a string. If you mutate - the list object, you modify the message's payload in place. Optional - i returns that index into the payload. - - Optional decode is a flag indicating whether the payload should be - decoded or not, according to the Content-Transfer-Encoding header - (default is False). - - When True and the message is not a multipart, the payload will be - decoded if this header's value is `quoted-printable' or `base64'. If - some other encoding is used, or the header is missing, or if the - payload has bogus data (i.e. bogus base64 or uuencoded data), the - payload is returned as-is. - - If the message is a multipart and the decode flag is True, then None - is returned. - """ - if i is None: - payload = self._payload - elif not isinstance(self._payload, list): - raise TypeError('Expected list, got %s' % type(self._payload)) - else: - payload = self._payload[i] - if decode: - if self.is_multipart(): - return None - cte = self.get('content-transfer-encoding', '').lower() - if cte == 'quoted-printable': - return Utils._qdecode(payload) - elif cte == 'base64': - try: - return Utils._bdecode(payload) - except binascii.Error: - # Incorrect padding - return payload - elif cte in ('x-uuencode', 'uuencode', 'uue', 'x-uue'): - sfp = StringIO() - try: - uu.decode(StringIO(payload+'\n'), sfp) - payload = sfp.getvalue() - except uu.Error: - # Some decoding problem - return payload - # Everything else, including encodings with 8bit or 7bit are returned - # unchanged. - return payload - - def set_payload(self, payload, charset=None): - """Set the payload to the given value. - - Optional charset sets the message's default character set. See - set_charset() for details. - """ - self._payload = payload - if charset is not None: - self.set_charset(charset) - - def set_charset(self, charset): - """Set the charset of the payload to a given character set. - - charset can be a Charset instance, a string naming a character set, or - None. If it is a string it will be converted to a Charset instance. - If charset is None, the charset parameter will be removed from the - Content-Type field. Anything else will generate a TypeError. - - The message will be assumed to be of type text/* encoded with - charset.input_charset. It will be converted to charset.output_charset - and encoded properly, if needed, when generating the plain text - representation of the message. MIME headers (MIME-Version, - Content-Type, Content-Transfer-Encoding) will be added as needed. 
- - """ - if charset is None: - self.del_param('charset') - self._charset = None - return - if isinstance(charset, str): - charset = Charset.Charset(charset) - if not isinstance(charset, Charset.Charset): - raise TypeError(charset) - # BAW: should we accept strings that can serve as arguments to the - # Charset constructor? - self._charset = charset - if not self.has_key('MIME-Version'): - self.add_header('MIME-Version', '1.0') - if not self.has_key('Content-Type'): - self.add_header('Content-Type', 'text/plain', - charset=charset.get_output_charset()) - else: - self.set_param('charset', charset.get_output_charset()) - if str(charset) <> charset.get_output_charset(): - self._payload = charset.body_encode(self._payload) - if not self.has_key('Content-Transfer-Encoding'): - cte = charset.get_body_encoding() - try: - cte(self) - except TypeError: - self._payload = charset.body_encode(self._payload) - self.add_header('Content-Transfer-Encoding', cte) - - def get_charset(self): - """Return the Charset instance associated with the message's payload. - """ - return self._charset - - # - # MAPPING INTERFACE (partial) - # - def __len__(self): - """Return the total number of headers, including duplicates.""" - return len(self._headers) - - def __getitem__(self, name): - """Get a header value. - - Return None if the header is missing instead of raising an exception. - - Note that if the header appeared multiple times, exactly which - occurrance gets returned is undefined. Use get_all() to get all - the values matching a header field name. - """ - return self.get(name) - - def __setitem__(self, name, val): - """Set the value of a header. - - Note: this does not overwrite an existing header with the same field - name. Use __delitem__() first to delete any existing headers. - """ - self._headers.append((name, val)) - - def __delitem__(self, name): - """Delete all occurrences of a header, if present. - - Does not raise an exception if the header is missing. - """ - name = name.lower() - newheaders = [] - for k, v in self._headers: - if k.lower() <> name: - newheaders.append((k, v)) - self._headers = newheaders - - def __contains__(self, name): - return name.lower() in [k.lower() for k, v in self._headers] - - def has_key(self, name): - """Return true if the message contains the header.""" - missing = object() - return self.get(name, missing) is not missing - - def keys(self): - """Return a list of all the message's header field names. - - These will be sorted in the order they appeared in the original - message, or were added to the message, and may contain duplicates. - Any fields deleted and re-inserted are always appended to the header - list. - """ - return [k for k, v in self._headers] - - def values(self): - """Return a list of all the message's header values. - - These will be sorted in the order they appeared in the original - message, or were added to the message, and may contain duplicates. - Any fields deleted and re-inserted are always appended to the header - list. - """ - return [v for k, v in self._headers] - - def items(self): - """Get all the message's header fields and values. - - These will be sorted in the order they appeared in the original - message, or were added to the message, and may contain duplicates. - Any fields deleted and re-inserted are always appended to the header - list. - """ - return self._headers[:] - - def get(self, name, failobj=None): - """Get a header value. - - Like __getitem__() but return failobj instead of None when the field - is missing. 
- """ - name = name.lower() - for k, v in self._headers: - if k.lower() == name: - return v - return failobj - - # - # Additional useful stuff - # - - def get_all(self, name, failobj=None): - """Return a list of all the values for the named field. - - These will be sorted in the order they appeared in the original - message, and may contain duplicates. Any fields deleted and - re-inserted are always appended to the header list. - - If no such fields exist, failobj is returned (defaults to None). - """ - values = [] - name = name.lower() - for k, v in self._headers: - if k.lower() == name: - values.append(v) - if not values: - return failobj - return values - - def add_header(self, _name, _value, **_params): - """Extended header setting. - - name is the header field to add. keyword arguments can be used to set - additional parameters for the header field, with underscores converted - to dashes. Normally the parameter will be added as key="value" unless - value is None, in which case only the key will be added. - - Example: - - msg.add_header('content-disposition', 'attachment', filename='bud.gif') - """ - parts = [] - for k, v in _params.items(): - if v is None: - parts.append(k.replace('_', '-')) - else: - parts.append(_formatparam(k.replace('_', '-'), v)) - if _value is not None: - parts.insert(0, _value) - self._headers.append((_name, SEMISPACE.join(parts))) - - def replace_header(self, _name, _value): - """Replace a header. - - Replace the first matching header found in the message, retaining - header order and case. If no matching header was found, a KeyError is - raised. - """ - _name = _name.lower() - for i, (k, v) in zip(range(len(self._headers)), self._headers): - if k.lower() == _name: - self._headers[i] = (k, _value) - break - else: - raise KeyError(_name) - - # - # Deprecated methods. These will be removed in email 3.1. - # - - def get_type(self, failobj=None): - """Returns the message's content type. - - The returned string is coerced to lowercase and returned as a single - string of the form `maintype/subtype'. If there was no Content-Type - header in the message, failobj is returned (defaults to None). - """ - warnings.warn('get_type() deprecated; use get_content_type()', - DeprecationWarning, 2) - missing = object() - value = self.get('content-type', missing) - if value is missing: - return failobj - return paramre.split(value)[0].lower().strip() - - def get_main_type(self, failobj=None): - """Return the message's main content type if present.""" - warnings.warn('get_main_type() deprecated; use get_content_maintype()', - DeprecationWarning, 2) - missing = object() - ctype = self.get_type(missing) - if ctype is missing: - return failobj - if ctype.count('/') <> 1: - return failobj - return ctype.split('/')[0] - - def get_subtype(self, failobj=None): - """Return the message's content subtype if present.""" - warnings.warn('get_subtype() deprecated; use get_content_subtype()', - DeprecationWarning, 2) - missing = object() - ctype = self.get_type(missing) - if ctype is missing: - return failobj - if ctype.count('/') <> 1: - return failobj - return ctype.split('/')[1] - - # - # Use these three methods instead of the three above. - # - - def get_content_type(self): - """Return the message's content type. - - The returned string is coerced to lower case of the form - `maintype/subtype'. If there was no Content-Type header in the - message, the default type as given by get_default_type() will be - returned. 
Since according to RFC 2045, messages always have a default - type this will always return a value. - - RFC 2045 defines a message's default type to be text/plain unless it - appears inside a multipart/digest container, in which case it would be - message/rfc822. - """ - missing = object() - value = self.get('content-type', missing) - if value is missing: - # This should have no parameters - return self.get_default_type() - ctype = paramre.split(value)[0].lower().strip() - # RFC 2045, section 5.2 says if its invalid, use text/plain - if ctype.count('/') <> 1: - return 'text/plain' - return ctype - - def get_content_maintype(self): - """Return the message's main content type. - - This is the `maintype' part of the string returned by - get_content_type(). - """ - ctype = self.get_content_type() - return ctype.split('/')[0] - - def get_content_subtype(self): - """Returns the message's sub-content type. - - This is the `subtype' part of the string returned by - get_content_type(). - """ - ctype = self.get_content_type() - return ctype.split('/')[1] - - def get_default_type(self): - """Return the `default' content type. - - Most messages have a default content type of text/plain, except for - messages that are subparts of multipart/digest containers. Such - subparts have a default content type of message/rfc822. - """ - return self._default_type - - def set_default_type(self, ctype): - """Set the `default' content type. - - ctype should be either "text/plain" or "message/rfc822", although this - is not enforced. The default content type is not stored in the - Content-Type header. - """ - self._default_type = ctype - - def _get_params_preserve(self, failobj, header): - # Like get_params() but preserves the quoting of values. BAW: - # should this be part of the public interface? - missing = object() - value = self.get(header, missing) - if value is missing: - return failobj - params = [] - for p in _parseparam(';' + value): - try: - name, val = p.split('=', 1) - name = name.strip() - val = val.strip() - except ValueError: - # Must have been a bare attribute - name = p.strip() - val = '' - params.append((name, val)) - params = Utils.decode_params(params) - return params - - def get_params(self, failobj=None, header='content-type', unquote=True): - """Return the message's Content-Type parameters, as a list. - - The elements of the returned list are 2-tuples of key/value pairs, as - split on the `=' sign. The left hand side of the `=' is the key, - while the right hand side is the value. If there is no `=' sign in - the parameter the value is the empty string. The value is as - described in the get_param() method. - - Optional failobj is the object to return if there is no Content-Type - header. Optional header is the header to search instead of - Content-Type. If unquote is True, the value is unquoted. - """ - missing = object() - params = self._get_params_preserve(missing, header) - if params is missing: - return failobj - if unquote: - return [(k, _unquotevalue(v)) for k, v in params] - else: - return params - - def get_param(self, param, failobj=None, header='content-type', - unquote=True): - """Return the parameter value if found in the Content-Type header. - - Optional failobj is the object to return if there is no Content-Type - header, or the Content-Type header has no such parameter. Optional - header is the header to search instead of Content-Type. - - Parameter keys are always compared case insensitively. 
The return - value can either be a string, or a 3-tuple if the parameter was RFC - 2231 encoded. When it's a 3-tuple, the elements of the value are of - the form (CHARSET, LANGUAGE, VALUE). Note that both CHARSET and - LANGUAGE can be None, in which case you should consider VALUE to be - encoded in the us-ascii charset. You can usually ignore LANGUAGE. - - Your application should be prepared to deal with 3-tuple return - values, and can convert the parameter to a Unicode string like so: - - param = msg.get_param('foo') - if isinstance(param, tuple): - param = unicode(param[2], param[0] or 'us-ascii') - - In any case, the parameter value (either the returned string, or the - VALUE item in the 3-tuple) is always unquoted, unless unquote is set - to False. - """ - if not self.has_key(header): - return failobj - for k, v in self._get_params_preserve(failobj, header): - if k.lower() == param.lower(): - if unquote: - return _unquotevalue(v) - else: - return v - return failobj - - def set_param(self, param, value, header='Content-Type', requote=True, - charset=None, language=''): - """Set a parameter in the Content-Type header. - - If the parameter already exists in the header, its value will be - replaced with the new value. - - If header is Content-Type and has not yet been defined for this - message, it will be set to "text/plain" and the new parameter and - value will be appended as per RFC 2045. - - An alternate header can specified in the header argument, and all - parameters will be quoted as necessary unless requote is False. - - If charset is specified, the parameter will be encoded according to RFC - 2231. Optional language specifies the RFC 2231 language, defaulting - to the empty string. Both charset and language should be strings. - """ - if not isinstance(value, tuple) and charset: - value = (charset, language, value) - - if not self.has_key(header) and header.lower() == 'content-type': - ctype = 'text/plain' - else: - ctype = self.get(header) - if not self.get_param(param, header=header): - if not ctype: - ctype = _formatparam(param, value, requote) - else: - ctype = SEMISPACE.join( - [ctype, _formatparam(param, value, requote)]) - else: - ctype = '' - for old_param, old_value in self.get_params(header=header, - unquote=requote): - append_param = '' - if old_param.lower() == param.lower(): - append_param = _formatparam(param, value, requote) - else: - append_param = _formatparam(old_param, old_value, requote) - if not ctype: - ctype = append_param - else: - ctype = SEMISPACE.join([ctype, append_param]) - if ctype <> self.get(header): - del self[header] - self[header] = ctype - - def del_param(self, param, header='content-type', requote=True): - """Remove the given parameter completely from the Content-Type header. - - The header will be re-written in place without the parameter or its - value. All values will be quoted as necessary unless requote is - False. Optional header specifies an alternative to the Content-Type - header. - """ - if not self.has_key(header): - return - new_ctype = '' - for p, v in self.get_params(header=header, unquote=requote): - if p.lower() <> param.lower(): - if not new_ctype: - new_ctype = _formatparam(p, v, requote) - else: - new_ctype = SEMISPACE.join([new_ctype, - _formatparam(p, v, requote)]) - if new_ctype <> self.get(header): - del self[header] - self[header] = new_ctype - - def set_type(self, type, header='Content-Type', requote=True): - """Set the main type and subtype for the Content-Type header. 
- - type must be a string in the form "maintype/subtype", otherwise a - ValueError is raised. - - This method replaces the Content-Type header, keeping all the - parameters in place. If requote is False, this leaves the existing - header's quoting as is. Otherwise, the parameters will be quoted (the - default). - - An alternative header can be specified in the header argument. When - the Content-Type header is set, we'll always also add a MIME-Version - header. - """ - # BAW: should we be strict? - if not type.count('/') == 1: - raise ValueError - # Set the Content-Type, you get a MIME-Version - if header.lower() == 'content-type': - del self['mime-version'] - self['MIME-Version'] = '1.0' - if not self.has_key(header): - self[header] = type - return - params = self.get_params(header=header, unquote=requote) - del self[header] - self[header] = type - # Skip the first param; it's the old type. - for p, v in params[1:]: - self.set_param(p, v, header, requote) - - def get_filename(self, failobj=None): - """Return the filename associated with the payload if present. - - The filename is extracted from the Content-Disposition header's - `filename' parameter, and it is unquoted. If that header is missing - the `filename' parameter, this method falls back to looking for the - `name' parameter. - """ - missing = object() - filename = self.get_param('filename', missing, 'content-disposition') - if filename is missing: - filename = self.get_param('name', missing, 'content-disposition') - if filename is missing: - return failobj - return Utils.collapse_rfc2231_value(filename).strip() - - def get_boundary(self, failobj=None): - """Return the boundary associated with the payload if present. - - The boundary is extracted from the Content-Type header's `boundary' - parameter, and it is unquoted. - """ - missing = object() - boundary = self.get_param('boundary', missing) - if boundary is missing: - return failobj - # RFC 2046 says that boundaries may begin but not end in w/s - return Utils.collapse_rfc2231_value(boundary).rstrip() - - def set_boundary(self, boundary): - """Set the boundary parameter in Content-Type to 'boundary'. - - This is subtly different than deleting the Content-Type header and - adding a new one with a new boundary parameter via add_header(). The - main difference is that using the set_boundary() method preserves the - order of the Content-Type header in the original message. - - HeaderParseError is raised if the message has no Content-Type header. - """ - missing = object() - params = self._get_params_preserve(missing, 'content-type') - if params is missing: - # There was no Content-Type header, and we don't know what type - # to set it to, so raise an exception. - raise Errors.HeaderParseError, 'No Content-Type header found' - newparams = [] - foundp = False - for pk, pv in params: - if pk.lower() == 'boundary': - newparams.append(('boundary', '"%s"' % boundary)) - foundp = True - else: - newparams.append((pk, pv)) - if not foundp: - # The original Content-Type header had no boundary attribute. - # Tack one on the end. BAW: should we raise an exception - # instead??? 
- newparams.append(('boundary', '"%s"' % boundary)) - # Replace the existing Content-Type header with the new value - newheaders = [] - for h, v in self._headers: - if h.lower() == 'content-type': - parts = [] - for k, v in newparams: - if v == '': - parts.append(k) - else: - parts.append('%s=%s' % (k, v)) - newheaders.append((h, SEMISPACE.join(parts))) - - else: - newheaders.append((h, v)) - self._headers = newheaders - - def get_content_charset(self, failobj=None): - """Return the charset parameter of the Content-Type header. - - The returned string is always coerced to lower case. If there is no - Content-Type header, or if that header has no charset parameter, - failobj is returned. - """ - missing = object() - charset = self.get_param('charset', missing) - if charset is missing: - return failobj - if isinstance(charset, tuple): - # RFC 2231 encoded, so decode it, and it better end up as ascii. - pcharset = charset[0] or 'us-ascii' - charset = unicode(charset[2], pcharset).encode('us-ascii') - # RFC 2046, $4.1.2 says charsets are not case sensitive - return charset.lower() - - def get_charsets(self, failobj=None): - """Return a list containing the charset(s) used in this message. - - The returned list of items describes the Content-Type headers' - charset parameter for this message and all the subparts in its - payload. - - Each item will either be a string (the value of the charset parameter - in the Content-Type header of that part) or the value of the - 'failobj' parameter (defaults to None), if the part does not have a - main MIME type of "text", or the charset is not defined. - - The list will contain one string for each part of the message, plus - one for the container message (i.e. self), so that a non-multipart - message will still return a list of length 1. - """ - return [part.get_content_charset(failobj) for part in self.walk()] - - # I.e. def walk(self): ... - from email.Iterators import walk Deleted: /sandbox/trunk/emailpkg/3.1/email/Parser.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/Parser.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,88 +0,0 @@ -# Copyright (C) 2001-2004 Python Software Foundation -# Author: Barry Warsaw, Thomas Wouters, Anthony Baxter -# Contact: email-sig at python.org - -"""A parser of RFC 2822 and MIME email messages.""" - -import warnings -from cStringIO import StringIO -from email.FeedParser import FeedParser -from email.Message import Message - - - -class Parser: - def __init__(self, *args, **kws): - """Parser of RFC 2822 and MIME email messages. - - Creates an in-memory object tree representing the email message, which - can then be manipulated and turned over to a Generator to return the - textual representation of the message. - - The string must be formatted as a block of RFC 2822 headers and header - continuation lines, optionally preceeded by a `Unix-from' header. The - header block is terminated either by the end of the string or by a - blank line. - - _class is the class to instantiate for new message objects when they - must be created. This class must have a constructor that can take - zero arguments. Default is Message.Message. 
- """ - if len(args) >= 1: - if '_class' in kws: - raise TypeError("Multiple values for keyword arg '_class'") - kws['_class'] = args[0] - if len(args) == 2: - if 'strict' in kws: - raise TypeError("Multiple values for keyword arg 'strict'") - kws['strict'] = args[1] - if len(args) > 2: - raise TypeError('Too many arguments') - if '_class' in kws: - self._class = kws['_class'] - del kws['_class'] - else: - self._class = Message - if 'strict' in kws: - warnings.warn("'strict' argument is deprecated (and ignored)", - DeprecationWarning, 2) - del kws['strict'] - if kws: - raise TypeError('Unexpected keyword arguments') - - def parse(self, fp, headersonly=False): - """Create a message structure from the data in a file. - - Reads all the data from the file and returns the root of the message - structure. Optional headersonly is a flag specifying whether to stop - parsing after reading the headers or not. The default is False, - meaning it parses the entire contents of the file. - """ - feedparser = FeedParser(self._class) - if headersonly: - feedparser._set_headersonly() - while True: - data = fp.read(8192) - if not data: - break - feedparser.feed(data) - return feedparser.close() - - def parsestr(self, text, headersonly=False): - """Create a message structure from a string. - - Returns the root of the message structure. Optional headersonly is a - flag specifying whether to stop parsing after reading the headers or - not. The default is False, meaning it parses the entire contents of - the file. - """ - return self.parse(StringIO(text), headersonly=headersonly) - - - -class HeaderParser(Parser): - def parse(self, fp, headersonly=True): - return Parser.parse(self, fp, True) - - def parsestr(self, text, headersonly=True): - return Parser.parsestr(self, text, True) Deleted: /sandbox/trunk/emailpkg/3.1/email/Utils.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/Utils.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,291 +0,0 @@ -# Copyright (C) 2001-2004 Python Software Foundation -# Author: Barry Warsaw -# Contact: email-sig at python.org - -"""Miscellaneous utilities.""" - -import os -import re -import time -import base64 -import random -import socket -import warnings -from cStringIO import StringIO - -from email._parseaddr import quote -from email._parseaddr import AddressList as _AddressList -from email._parseaddr import mktime_tz - -# We need wormarounds for bugs in these methods in older Pythons (see below) -from email._parseaddr import parsedate as _parsedate -from email._parseaddr import parsedate_tz as _parsedate_tz - -from quopri import decodestring as _qdecode - -# Intrapackage imports -from email.Encoders import _bencode, _qencode - -COMMASPACE = ', ' -EMPTYSTRING = '' -UEMPTYSTRING = u'' -CRLF = '\r\n' - -specialsre = re.compile(r'[][\\()<>@,:;".]') -escapesre = re.compile(r'[][\\()"]') - - - -# Helpers - -def _identity(s): - return s - - -def _bdecode(s): - # We can't quite use base64.encodestring() since it tacks on a "courtesy - # newline". Blech! 
- if not s: - return s - value = base64.decodestring(s) - if not s.endswith('\n') and value.endswith('\n'): - return value[:-1] - return value - - - -def fix_eols(s): - """Replace all line-ending characters with \r\n.""" - # Fix newlines with no preceding carriage return - s = re.sub(r'(?', name) - return '%s%s%s <%s>' % (quotes, name, quotes, address) - return address - - - -def getaddresses(fieldvalues): - """Return a list of (REALNAME, EMAIL) for each fieldvalue.""" - all = COMMASPACE.join(fieldvalues) - a = _AddressList(all) - return a.addresslist - - - -ecre = re.compile(r''' - =\? # literal =? - (?P[^?]*?) # non-greedy up to the next ? is the charset - \? # literal ? - (?P[qb]) # either a "q" or a "b", case insensitive - \? # literal ? - (?P.*?) # non-greedy up to the next ?= is the atom - \?= # literal ?= - ''', re.VERBOSE | re.IGNORECASE) - - - -def formatdate(timeval=None, localtime=False, usegmt=False): - """Returns a date string as specified by RFC 2822, e.g.: - - Fri, 09 Nov 2001 01:08:47 -0000 - - Optional timeval if given is a floating point time value as accepted by - gmtime() and localtime(), otherwise the current time is used. - - Optional localtime is a flag that when True, interprets timeval, and - returns a date relative to the local timezone instead of UTC, properly - taking daylight savings time into account. - - Optional argument usegmt means that the timezone is written out as - an ascii string, not numeric one (so "GMT" instead of "+0000"). This - is needed for HTTP, and is only used when localtime==False. - """ - # Note: we cannot use strftime() because that honors the locale and RFC - # 2822 requires that day and month names be the English abbreviations. - if timeval is None: - timeval = time.time() - if localtime: - now = time.localtime(timeval) - # Calculate timezone offset, based on whether the local zone has - # daylight savings time, and whether DST is in effect. - if time.daylight and now[-1]: - offset = time.altzone - else: - offset = time.timezone - hours, minutes = divmod(abs(offset), 3600) - # Remember offset is in seconds west of UTC, but the timezone is in - # minutes east of UTC, so the signs differ. - if offset > 0: - sign = '-' - else: - sign = '+' - zone = '%s%02d%02d' % (sign, hours, minutes // 60) - else: - now = time.gmtime(timeval) - # Timezone offset is always -0000 - if usegmt: - zone = 'GMT' - else: - zone = '-0000' - return '%s, %02d %s %04d %02d:%02d:%02d %s' % ( - ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'][now[6]], - now[2], - ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', - 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'][now[1] - 1], - now[0], now[3], now[4], now[5], - zone) - - - -def make_msgid(idstring=None): - """Returns a string suitable for RFC 2822 compliant Message-ID, e.g: - - <20020201195627.33539.96671 at nightshade.la.mastaler.com> - - Optional idstring if given is a string used to strengthen the - uniqueness of the message id. - """ - timeval = time.time() - utcdate = time.strftime('%Y%m%d%H%M%S', time.gmtime(timeval)) - pid = os.getpid() - randint = random.randrange(100000) - if idstring is None: - idstring = '' - else: - idstring = '.' + idstring - idhost = socket.getfqdn() - msgid = '<%s.%s.%s%s@%s>' % (utcdate, pid, randint, idstring, idhost) - return msgid - - - -# These functions are in the standalone mimelib version only because they've -# subsequently been fixed in the latest Python versions. We use this to worm -# around broken older Pythons. 
-def parsedate(data): - if not data: - return None - return _parsedate(data) - - -def parsedate_tz(data): - if not data: - return None - return _parsedate_tz(data) - - -def parseaddr(addr): - addrs = _AddressList(addr).addresslist - if not addrs: - return '', '' - return addrs[0] - - -# rfc822.unquote() doesn't properly de-backslash-ify in Python pre-2.3. -def unquote(str): - """Remove quotes from a string.""" - if len(str) > 1: - if str.startswith('"') and str.endswith('"'): - return str[1:-1].replace('\\\\', '\\').replace('\\"', '"') - if str.startswith('<') and str.endswith('>'): - return str[1:-1] - return str - - - -# RFC2231-related functions - parameter encoding and decoding -def decode_rfc2231(s): - """Decode string according to RFC 2231""" - import urllib - parts = s.split("'", 2) - if len(parts) == 1: - return None, None, urllib.unquote(s) - charset, language, s = parts - return charset, language, urllib.unquote(s) - - -def encode_rfc2231(s, charset=None, language=None): - """Encode string according to RFC 2231. - - If neither charset nor language is given, then s is returned as-is. If - charset is given but not language, the string is encoded using the empty - string for language. - """ - import urllib - s = urllib.quote(s, safe='') - if charset is None and language is None: - return s - if language is None: - language = '' - return "%s'%s'%s" % (charset, language, s) - - -rfc2231_continuation = re.compile(r'^(?P\w+)\*((?P[0-9]+)\*?)?$') - -def decode_params(params): - """Decode parameters list according to RFC 2231. - - params is a sequence of 2-tuples containing (content type, string value). - """ - new_params = [] - # maps parameter's name to a list of continuations - rfc2231_params = {} - # params is a sequence of 2-tuples containing (content_type, string value) - name, value = params[0] - new_params.append((name, value)) - # Cycle through each of the rest of the parameters. - for name, value in params[1:]: - value = unquote(value) - mo = rfc2231_continuation.match(name) - if mo: - name, num = mo.group('name', 'num') - if num is not None: - num = int(num) - rfc2231_param1 = rfc2231_params.setdefault(name, []) - rfc2231_param1.append((num, value)) - else: - new_params.append((name, '"%s"' % quote(value))) - if rfc2231_params: - for name, continuations in rfc2231_params.items(): - value = [] - # Sort by number - continuations.sort() - # And now append all values in num order - for num, continuation in continuations: - value.append(continuation) - charset, language, value = decode_rfc2231(EMPTYSTRING.join(value)) - new_params.append( - (name, (charset, language, '"%s"' % quote(value)))) - return new_params - -def collapse_rfc2231_value(value, errors='replace', - fallback_charset='us-ascii'): - if isinstance(value, tuple): - rawval = unquote(value[2]) - charset = value[0] or 'us-ascii' - try: - return unicode(rawval, charset, errors) - except LookupError: - # XXX charset is unknown to Python. 
- return unicode(rawval, fallback_charset, errors) - else: - return unquote(value) Modified: sandbox/trunk/emailpkg/3.1/email/__init__.py ============================================================================== --- sandbox/trunk/emailpkg/3.1/email/__init__.py (original) +++ sandbox/trunk/emailpkg/3.1/email/__init__.py Thu Feb 9 04:04:02 2006 @@ -4,9 +4,10 @@ """A package for parsing, handling, and generating email messages.""" -__version__ = '3.0.1' +__version__ = '3.1' __all__ = [ + # Old names 'base64MIME', 'Charset', 'Encoders', @@ -27,6 +28,19 @@ 'Utils', 'message_from_string', 'message_from_file', + # new names + 'base64mime', + 'charset', + 'encoders', + 'errors', + 'generator', + 'header', + 'iterators', + 'message', + 'mime', + 'parser', + 'quoprimime', + 'utils', ] @@ -50,3 +64,60 @@ """ from email.Parser import Parser return Parser(*args, **kws).parse(fp) + + + +# Lazy loading to provide name mapping from new-style names (PEP 8 compatible +# email 3.1 module names), to old-style names (email 3.0 module names). +import sys + +class LazyImporter(object): + def __init__(self, module_name): + self.__module_name = 'email.' + module_name + + def __getattr__(self, name): + __import__(self.__module_name) + mod = sys.modules[self.__module_name] + self.__dict__.update(mod.__dict__) + return getattr(mod, name) + + +_LOWERNAMES = [ + # email. -> email. + 'Charset', + 'Encoders', + 'Errors', + 'FeedParser', + 'Generator', + 'Header', + 'Iterators', + 'Message', + 'Parser', + 'Utils', + 'base64MIME', + 'quopriMIME', + ] + +_MIMENAMES = [ + # email.MIME -> email.mime. + 'Audio', + 'Base', + 'Image', + 'Message', + 'Multipart', + 'NonMultipart', + 'Text', + ] + +for _name in _LOWERNAMES: + importer = LazyImporter(_name.lower()) + sys.modules['email.' + _name] = importer + setattr(sys.modules['email'], _name, importer) + + +import email.mime +for _name in _MIMENAMES: + importer = LazyImporter('mime.' + _name.lower()) + sys.modules['email.MIME' + _name] = importer + setattr(sys.modules['email'], 'MIME' + _name, importer) + setattr(sys.modules['email.mime'], _name, importer) Deleted: /sandbox/trunk/emailpkg/3.1/email/base64MIME.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/base64MIME.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,172 +0,0 @@ -# Copyright (C) 2002-2004 Python Software Foundation -# Author: Ben Gertzfield -# Contact: email-sig at python.org - -"""Base64 content transfer encoding per RFCs 2045-2047. - -This module handles the content transfer encoding method defined in RFC 2045 -to encode arbitrary 8-bit data using the three 8-bit bytes in four 7-bit -characters encoding known as Base64. - -It is used in the MIME standards for email to attach images, audio, and text -using some 8-bit character sets to messages. - -This module provides an interface to encode and decode both headers and bodies -with Base64 encoding. - -RFC 2045 defines a method for including character set information in an -`encoded-word' in a header. This method is commonly used for 8-bit real names -in To:, From:, Cc:, etc. fields, as well as Subject: lines. - -This module does not do the line wrapping or end-of-line character conversion -necessary for proper internationalized headers; it only does dumb encoding and -decoding. To deal with the various line wrapping issues, use the email.Header -module. 
-""" - -import re -from binascii import b2a_base64, a2b_base64 -from email.Utils import fix_eols - -CRLF = '\r\n' -NL = '\n' -EMPTYSTRING = '' - -# See also Charset.py -MISC_LEN = 7 - - - -# Helpers -def base64_len(s): - """Return the length of s when it is encoded with base64.""" - groups_of_3, leftover = divmod(len(s), 3) - # 4 bytes out for each 3 bytes (or nonzero fraction thereof) in. - # Thanks, Tim! - n = groups_of_3 * 4 - if leftover: - n += 4 - return n - - - -def header_encode(header, charset='iso-8859-1', keep_eols=False, - maxlinelen=76, eol=NL): - """Encode a single header line with Base64 encoding in a given charset. - - Defined in RFC 2045, this Base64 encoding is identical to normal Base64 - encoding, except that each line must be intelligently wrapped (respecting - the Base64 encoding), and subsequent lines must start with a space. - - charset names the character set to use to encode the header. It defaults - to iso-8859-1. - - End-of-line characters (\\r, \\n, \\r\\n) will be automatically converted - to the canonical email line separator \\r\\n unless the keep_eols - parameter is True (the default is False). - - Each line of the header will be terminated in the value of eol, which - defaults to "\\n". Set this to "\\r\\n" if you are using the result of - this function directly in email. - - The resulting string will be in the form: - - "=?charset?b?WW/5ciBtYXp66XLrIHf8eiBhIGhhbXBzdGHuciBBIFlv+XIgbWF6euly?=\\n - =?charset?b?6yB3/HogYSBoYW1wc3Rh7nIgQkMgWW/5ciBtYXp66XLrIHf8eiBhIGhh?=" - - with each line wrapped at, at most, maxlinelen characters (defaults to 76 - characters). - """ - # Return empty headers unchanged - if not header: - return header - - if not keep_eols: - header = fix_eols(header) - - # Base64 encode each line, in encoded chunks no greater than maxlinelen in - # length, after the RFC chrome is added in. - base64ed = [] - max_encoded = maxlinelen - len(charset) - MISC_LEN - max_unencoded = max_encoded * 3 // 4 - - for i in range(0, len(header), max_unencoded): - base64ed.append(b2a_base64(header[i:i+max_unencoded])) - - # Now add the RFC chrome to each encoded chunk - lines = [] - for line in base64ed: - # Ignore the last character of each line if it is a newline - if line.endswith(NL): - line = line[:-1] - # Add the chrome - lines.append('=?%s?b?%s?=' % (charset, line)) - # Glue the lines together and return it. BAW: should we be able to - # specify the leading whitespace in the joiner? - joiner = eol + ' ' - return joiner.join(lines) - - - -def encode(s, binary=True, maxlinelen=76, eol=NL): - """Encode a string with base64. - - Each line will be wrapped at, at most, maxlinelen characters (defaults to - 76 characters). - - If binary is False, end-of-line characters will be converted to the - canonical email end-of-line sequence \\r\\n. Otherwise they will be left - verbatim (this is the default). - - Each line of encoded text will end with eol, which defaults to "\\n". Set - this to "\r\n" if you will be using the result of this function directly - in an email. - """ - if not s: - return s - - if not binary: - s = fix_eols(s) - - encvec = [] - max_unencoded = maxlinelen * 3 // 4 - for i in range(0, len(s), max_unencoded): - # BAW: should encode() inherit b2a_base64()'s dubious behavior in - # adding a newline to the encoded string? 
- enc = b2a_base64(s[i:i + max_unencoded]) - if enc.endswith(NL) and eol <> NL: - enc = enc[:-1] + eol - encvec.append(enc) - return EMPTYSTRING.join(encvec) - - -# For convenience and backwards compatibility w/ standard base64 module -body_encode = encode -encodestring = encode - - - -def decode(s, convert_eols=None): - """Decode a raw base64 string. - - If convert_eols is set to a string value, all canonical email linefeeds, - e.g. "\\r\\n", in the decoded text will be converted to the value of - convert_eols. os.linesep is a good choice for convert_eols if you are - decoding a text attachment. - - This function does not parse a full MIME header value encoded with - base64 (like =?iso-8895-1?b?bmloISBuaWgh?=) -- please use the high - level email.Header class for that functionality. - """ - if not s: - return s - - dec = a2b_base64(s) - if convert_eols: - return dec.replace(CRLF, convert_eols) - return dec - - -# For convenience and backwards compatibility w/ standard base64 module -body_decode = decode -decodestring = decode Added: sandbox/trunk/emailpkg/3.1/email/mime/__init__.py ============================================================================== Deleted: /sandbox/trunk/emailpkg/3.1/email/quopriMIME.py ============================================================================== --- /sandbox/trunk/emailpkg/3.1/email/quopriMIME.py Thu Feb 9 04:04:02 2006 +++ (empty file) @@ -1,318 +0,0 @@ -# Copyright (C) 2001-2004 Python Software Foundation -# Author: Ben Gertzfield -# Contact: email-sig at python.org - -"""Quoted-printable content transfer encoding per RFCs 2045-2047. - -This module handles the content transfer encoding method defined in RFC 2045 -to encode US ASCII-like 8-bit data called `quoted-printable'. It is used to -safely encode text that is in a character set similar to the 7-bit US ASCII -character set, but that includes some 8-bit characters that are normally not -allowed in email bodies or headers. - -Quoted-printable is very space-inefficient for encoding binary files; use the -email.base64MIME module for that instead. - -This module provides an interface to encode and decode both headers and bodies -with quoted-printable encoding. - -RFC 2045 defines a method for including character set information in an -`encoded-word' in a header. This method is commonly used for 8-bit real names -in To:/From:/Cc: etc. fields, as well as Subject: lines. - -This module does not do the line wrapping or end-of-line character -conversion necessary for proper internationalized headers; it only -does dumb encoding and decoding. To deal with the various line -wrapping issues, use the email.Header module. 
-""" - -import re -from string import hexdigits -from email.Utils import fix_eols - -CRLF = '\r\n' -NL = '\n' - -# See also Charset.py -MISC_LEN = 7 - -hqre = re.compile(r'[^-a-zA-Z0-9!*+/ ]') -bqre = re.compile(r'[^ !-<>-~\t]') - - - -# Helpers -def header_quopri_check(c): - """Return True if the character should be escaped with header quopri.""" - return bool(hqre.match(c)) - - -def body_quopri_check(c): - """Return True if the character should be escaped with body quopri.""" - return bool(bqre.match(c)) - - -def header_quopri_len(s): - """Return the length of str when it is encoded with header quopri.""" - count = 0 - for c in s: - if hqre.match(c): - count += 3 - else: - count += 1 - return count - - -def body_quopri_len(str): - """Return the length of str when it is encoded with body quopri.""" - count = 0 - for c in str: - if bqre.match(c): - count += 3 - else: - count += 1 - return count - - -def _max_append(L, s, maxlen, extra=''): - if not L: - L.append(s.lstrip()) - elif len(L[-1]) + len(s) <= maxlen: - L[-1] += extra + s - else: - L.append(s.lstrip()) - - -def unquote(s): - """Turn a string in the form =AB to the ASCII character with value 0xab""" - return chr(int(s[1:3], 16)) - - -def quote(c): - return "=%02X" % ord(c) - - - -def header_encode(header, charset="iso-8859-1", keep_eols=False, - maxlinelen=76, eol=NL): - """Encode a single header line with quoted-printable (like) encoding. - - Defined in RFC 2045, this `Q' encoding is similar to quoted-printable, but - used specifically for email header fields to allow charsets with mostly 7 - bit characters (and some 8 bit) to remain more or less readable in non-RFC - 2045 aware mail clients. - - charset names the character set to use to encode the header. It defaults - to iso-8859-1. - - The resulting string will be in the form: - - "=?charset?q?I_f=E2rt_in_your_g=E8n=E8ral_dire=E7tion?\\n - =?charset?q?Silly_=C8nglish_Kn=EEghts?=" - - with each line wrapped safely at, at most, maxlinelen characters (defaults - to 76 characters). If maxlinelen is None, the entire string is encoded in - one chunk with no splitting. - - End-of-line characters (\\r, \\n, \\r\\n) will be automatically converted - to the canonical email line separator \\r\\n unless the keep_eols - parameter is True (the default is False). - - Each line of the header will be terminated in the value of eol, which - defaults to "\\n". Set this to "\\r\\n" if you are using the result of - this function directly in email. - """ - # Return empty headers unchanged - if not header: - return header - - if not keep_eols: - header = fix_eols(header) - - # Quopri encode each line, in encoded chunks no greater than maxlinelen in - # length, after the RFC chrome is added in. - quoted = [] - if maxlinelen is None: - # An obnoxiously large number that's good enough - max_encoded = 100000 - else: - max_encoded = maxlinelen - len(charset) - MISC_LEN - 1 - - for c in header: - # Space may be represented as _ instead of =20 for readability - if c == ' ': - _max_append(quoted, '_', max_encoded) - # These characters can be included verbatim - elif not hqre.match(c): - _max_append(quoted, c, max_encoded) - # Otherwise, replace with hex value like =E2 - else: - _max_append(quoted, "=%02X" % ord(c), max_encoded) - - # Now add the RFC chrome to each encoded chunk and glue the chunks - # together. BAW: should we be able to specify the leading whitespace in - # the joiner? 
- joiner = eol + ' ' - return joiner.join(['=?%s?q?%s?=' % (charset, line) for line in quoted]) - - - -def encode(body, binary=False, maxlinelen=76, eol=NL): - """Encode with quoted-printable, wrapping at maxlinelen characters. - - If binary is False (the default), end-of-line characters will be converted - to the canonical email end-of-line sequence \\r\\n. Otherwise they will - be left verbatim. - - Each line of encoded text will end with eol, which defaults to "\\n". Set - this to "\\r\\n" if you will be using the result of this function directly - in an email. - - Each line will be wrapped at, at most, maxlinelen characters (defaults to - 76 characters). Long lines will have the `soft linefeed' quoted-printable - character "=" appended to them, so the decoded text will be identical to - the original text. - """ - if not body: - return body - - if not binary: - body = fix_eols(body) - - # BAW: We're accumulating the body text by string concatenation. That - # can't be very efficient, but I don't have time now to rewrite it. It - # just feels like this algorithm could be more efficient. - encoded_body = '' - lineno = -1 - # Preserve line endings here so we can check later to see an eol needs to - # be added to the output later. - lines = body.splitlines(1) - for line in lines: - # But strip off line-endings for processing this line. - if line.endswith(CRLF): - line = line[:-2] - elif line[-1] in CRLF: - line = line[:-1] - - lineno += 1 - encoded_line = '' - prev = None - linelen = len(line) - # Now we need to examine every character to see if it needs to be - # quopri encoded. BAW: again, string concatenation is inefficient. - for j in range(linelen): - c = line[j] - prev = c - if bqre.match(c): - c = quote(c) - elif j+1 == linelen: - # Check for whitespace at end of line; special case - if c not in ' \t': - encoded_line += c - prev = c - continue - # Check to see to see if the line has reached its maximum length - if len(encoded_line) + len(c) >= maxlinelen: - encoded_body += encoded_line + '=' + eol - encoded_line = '' - encoded_line += c - # Now at end of line.. - if prev and prev in ' \t': - # Special case for whitespace at end of file - if lineno + 1 == len(lines): - prev = quote(prev) - if len(encoded_line) + len(prev) > maxlinelen: - encoded_body += encoded_line + '=' + eol + prev - else: - encoded_body += encoded_line + prev - # Just normal whitespace at end of line - else: - encoded_body += encoded_line + prev + '=' + eol - encoded_line = '' - # Now look at the line we just finished and it has a line ending, we - # need to add eol to the end of the line. - if lines[lineno].endswith(CRLF) or lines[lineno][-1] in CRLF: - encoded_body += encoded_line + eol - else: - encoded_body += encoded_line - encoded_line = '' - return encoded_body - - -# For convenience and backwards compatibility w/ standard base64 module -body_encode = encode -encodestring = encode - - - -# BAW: I'm not sure if the intent was for the signature of this function to be -# the same as base64MIME.decode() or not... -def decode(encoded, eol=NL): - """Decode a quoted-printable string. - - Lines are separated with eol, which defaults to \\n. - """ - if not encoded: - return encoded - # BAW: see comment in encode() above. Again, we're building up the - # decoded string with string concatenation, which could be done much more - # efficiently. 
- decoded = '' - - for line in encoded.splitlines(): - line = line.rstrip() - if not line: - decoded += eol - continue - - i = 0 - n = len(line) - while i < n: - c = line[i] - if c <> '=': - decoded += c - i += 1 - # Otherwise, c == "=". Are we at the end of the line? If so, add - # a soft line break. - elif i+1 == n: - i += 1 - continue - # Decode if in form =AB - elif i+2 < n and line[i+1] in hexdigits and line[i+2] in hexdigits: - decoded += unquote(line[i:i+3]) - i += 3 - # Otherwise, not in form =AB, pass literally - else: - decoded += c - i += 1 - - if i == n: - decoded += eol - # Special case if original string did not end with eol - if not encoded.endswith(eol) and decoded.endswith(eol): - decoded = decoded[:-1] - return decoded - - -# For convenience and backwards compatibility w/ standard base64 module -body_decode = decode -decodestring = decode - - - -def _unquote_match(match): - """Turn a match in the form =AB to the ASCII character with value 0xab""" - s = match.group(0) - return unquote(s) - - -# Header decoding is done a bit differently -def header_decode(s): - """Decode a string encoded with RFC 2045 MIME header `Q' encoding. - - This function does not parse a full MIME header value encoded with - quoted-printable (like =?iso-8895-1?q?Hello_World?=) -- please use - the high level email.Header class for that functionality. - """ - s = s.replace('_', ' ') - return re.sub(r'=\w{2}', _unquote_match, s) Modified: sandbox/trunk/emailpkg/3.1/email/test/test_email.py ============================================================================== --- sandbox/trunk/emailpkg/3.1/email/test/test_email.py (original) +++ sandbox/trunk/emailpkg/3.1/email/test/test_email.py Thu Feb 9 04:04:02 2006 @@ -2058,13 +2058,19 @@ module = __import__('email') all = module.__all__ all.sort() - self.assertEqual(all, ['Charset', 'Encoders', 'Errors', 'Generator', - 'Header', 'Iterators', 'MIMEAudio', 'MIMEBase', - 'MIMEImage', 'MIMEMessage', 'MIMEMultipart', - 'MIMENonMultipart', 'MIMEText', 'Message', - 'Parser', 'Utils', 'base64MIME', - 'message_from_file', 'message_from_string', - 'quopriMIME']) + self.assertEqual(all, [ + # Old names + 'Charset', 'Encoders', 'Errors', 'Generator', + 'Header', 'Iterators', 'MIMEAudio', 'MIMEBase', + 'MIMEImage', 'MIMEMessage', 'MIMEMultipart', + 'MIMENonMultipart', 'MIMEText', 'Message', + 'Parser', 'Utils', 'base64MIME', + # new names + 'base64mime', 'charset', 'encoders', 'errors', 'generator', + 'header', 'iterators', 'message', 'message_from_file', + 'message_from_string', 'mime', 'parser', + 'quopriMIME', 'quoprimime', 'utils', + ]) def test_formatdate(self): now = time.time() Copied: sandbox/trunk/emailpkg/3.1/email/test/test_email_codecs_renamed.py (from r42272, python/trunk/Lib/email/test/test_email_codecs.py) ============================================================================== --- python/trunk/Lib/email/test/test_email_codecs.py (original) +++ sandbox/trunk/emailpkg/3.1/email/test/test_email_codecs_renamed.py Thu Feb 9 04:04:02 2006 @@ -6,9 +6,9 @@ from test.test_support import TestSkipped, run_unittest from email.test.test_email import TestEmailBase -from email.Charset import Charset -from email.Header import Header, decode_header -from email.Message import Message +from email.charset import Charset +from email.header import Header, decode_header +from email.message import Message Copied: sandbox/trunk/emailpkg/3.1/email/test/test_email_renamed.py (from r42272, python/trunk/Lib/email/test/test_email.py) 
============================================================================== --- python/trunk/Lib/email/test/test_email.py (original) +++ sandbox/trunk/emailpkg/3.1/email/test/test_email_renamed.py Thu Feb 9 04:04:02 2006 @@ -13,23 +13,23 @@ import email -from email.Charset import Charset -from email.Header import Header, decode_header, make_header -from email.Parser import Parser, HeaderParser -from email.Generator import Generator, DecodedGenerator -from email.Message import Message -from email.MIMEAudio import MIMEAudio -from email.MIMEText import MIMEText -from email.MIMEImage import MIMEImage -from email.MIMEBase import MIMEBase -from email.MIMEMessage import MIMEMessage -from email.MIMEMultipart import MIMEMultipart -from email import Utils -from email import Errors -from email import Encoders -from email import Iterators -from email import base64MIME -from email import quopriMIME +from email.charset import Charset +from email.header import Header, decode_header, make_header +from email.parser import Parser, HeaderParser +from email.generator import Generator, DecodedGenerator +from email.message import Message +from email.mime.audio import MIMEAudio +from email.mime.text import MIMEText +from email.mime.image import MIMEImage +from email.mime.base import MIMEBase +from email.mime.message import MIMEMessage +from email.mime.multipart import MIMEMultipart +from email import utils +from email import errors +from email import encoders +from email import iterators +from email import base64mime +from email import quoprimime from test.test_support import findfile, run_unittest from email.test import __file__ as landmark @@ -179,7 +179,7 @@ eq(value, 'multipart/mixed; boundary="BOUNDARY"') # And this one has no Content-Type: header at all. msg = self._msgobj('msg_03.txt') - self.assertRaises(Errors.HeaderParseError, + self.assertRaises(errors.HeaderParseError, msg.set_boundary, 'BOUNDARY') def test_get_decoded_payload(self): @@ -494,7 +494,7 @@ -# Test the email.Encoders module +# Test the email.encoders module class TestEncoders(unittest.TestCase): def test_encode_empty_payload(self): eq = self.assertEqual @@ -1296,7 +1296,7 @@ # parts. 
msg = self._msgobj('msg_38.txt') sfp = StringIO() - Iterators._structure(msg, sfp) + iterators._structure(msg, sfp) eq(sfp.getvalue(), """\ multipart/mixed multipart/mixed @@ -1314,7 +1314,7 @@ # parsed is closest to the spirit of RFC 2046 msg = self._msgobj('msg_39.txt') sfp = StringIO() - Iterators._structure(msg, sfp) + iterators._structure(msg, sfp) eq(sfp.getvalue(), """\ multipart/mixed multipart/mixed @@ -1391,16 +1391,16 @@ unless(hasattr(inner, 'defects')) self.assertEqual(len(inner.defects), 1) unless(isinstance(inner.defects[0], - Errors.StartBoundaryNotFoundDefect)) + errors.StartBoundaryNotFoundDefect)) def test_multipart_no_boundary(self): unless = self.failUnless msg = self._msgobj('msg_25.txt') unless(isinstance(msg.get_payload(), str)) self.assertEqual(len(msg.defects), 2) - unless(isinstance(msg.defects[0], Errors.NoBoundaryInMultipartDefect)) + unless(isinstance(msg.defects[0], errors.NoBoundaryInMultipartDefect)) unless(isinstance(msg.defects[1], - Errors.MultipartInvariantViolationDefect)) + errors.MultipartInvariantViolationDefect)) def test_invalid_content_type(self): eq = self.assertEqual @@ -1456,9 +1456,9 @@ msg = self._msgobj('msg_41.txt') unless(hasattr(msg, 'defects')) self.assertEqual(len(msg.defects), 2) - unless(isinstance(msg.defects[0], Errors.NoBoundaryInMultipartDefect)) + unless(isinstance(msg.defects[0], errors.NoBoundaryInMultipartDefect)) unless(isinstance(msg.defects[1], - Errors.MultipartInvariantViolationDefect)) + errors.MultipartInvariantViolationDefect)) def test_missing_start_boundary(self): outer = self._msgobj('msg_42.txt') @@ -1473,7 +1473,7 @@ bad = outer.get_payload(1).get_payload(0) self.assertEqual(len(bad.defects), 1) self.failUnless(isinstance(bad.defects[0], - Errors.StartBoundaryNotFoundDefect)) + errors.StartBoundaryNotFoundDefect)) @@ -1546,7 +1546,7 @@ msg2 = Message() msg2['Subject'] = 'subpart 2' r = MIMEMessage(msg1) - self.assertRaises(Errors.MultipartConversionError, r.attach, msg2) + self.assertRaises(errors.MultipartConversionError, r.attach, msg2) def test_generate(self): # First craft the message to be encapsulated @@ -2056,111 +2056,115 @@ def test__all__(self): module = __import__('email') - all = module.__all__ - all.sort() - self.assertEqual(all, ['Charset', 'Encoders', 'Errors', 'Generator', - 'Header', 'Iterators', 'MIMEAudio', 'MIMEBase', - 'MIMEImage', 'MIMEMessage', 'MIMEMultipart', - 'MIMENonMultipart', 'MIMEText', 'Message', - 'Parser', 'Utils', 'base64MIME', - 'message_from_file', 'message_from_string', - 'quopriMIME']) + self.assertEqual(sorted(module.__all__), [ + # Old names + 'Charset', 'Encoders', 'Errors', 'Generator', + 'Header', 'Iterators', 'MIMEAudio', 'MIMEBase', + 'MIMEImage', 'MIMEMessage', 'MIMEMultipart', + 'MIMENonMultipart', 'MIMEText', 'Message', + 'Parser', 'Utils', 'base64MIME', + # new names + 'base64mime', 'charset', 'encoders', 'errors', 'generator', + 'header', 'iterators', 'message', 'message_from_file', + 'message_from_string', 'mime', 'parser', + 'quopriMIME', 'quoprimime', 'utils', + ]) def test_formatdate(self): now = time.time() - self.assertEqual(Utils.parsedate(Utils.formatdate(now))[:6], + self.assertEqual(utils.parsedate(utils.formatdate(now))[:6], time.gmtime(now)[:6]) def test_formatdate_localtime(self): now = time.time() self.assertEqual( - Utils.parsedate(Utils.formatdate(now, localtime=True))[:6], + utils.parsedate(utils.formatdate(now, localtime=True))[:6], time.localtime(now)[:6]) def test_formatdate_usegmt(self): now = time.time() self.assertEqual( - 
Utils.formatdate(now, localtime=False), + utils.formatdate(now, localtime=False), time.strftime('%a, %d %b %Y %H:%M:%S -0000', time.gmtime(now))) self.assertEqual( - Utils.formatdate(now, localtime=False, usegmt=True), + utils.formatdate(now, localtime=False, usegmt=True), time.strftime('%a, %d %b %Y %H:%M:%S GMT', time.gmtime(now))) def test_parsedate_none(self): - self.assertEqual(Utils.parsedate(''), None) + self.assertEqual(utils.parsedate(''), None) def test_parsedate_compact(self): # The FWS after the comma is optional - self.assertEqual(Utils.parsedate('Wed,3 Apr 2002 14:58:26 +0800'), - Utils.parsedate('Wed, 3 Apr 2002 14:58:26 +0800')) + self.assertEqual(utils.parsedate('Wed,3 Apr 2002 14:58:26 +0800'), + utils.parsedate('Wed, 3 Apr 2002 14:58:26 +0800')) def test_parsedate_no_dayofweek(self): eq = self.assertEqual - eq(Utils.parsedate_tz('25 Feb 2003 13:47:26 -0800'), + eq(utils.parsedate_tz('25 Feb 2003 13:47:26 -0800'), (2003, 2, 25, 13, 47, 26, 0, 1, 0, -28800)) def test_parsedate_compact_no_dayofweek(self): eq = self.assertEqual - eq(Utils.parsedate_tz('5 Feb 2003 13:47:26 -0800'), + eq(utils.parsedate_tz('5 Feb 2003 13:47:26 -0800'), (2003, 2, 5, 13, 47, 26, 0, 1, 0, -28800)) def test_parsedate_acceptable_to_time_functions(self): eq = self.assertEqual - timetup = Utils.parsedate('5 Feb 2003 13:47:26 -0800') + timetup = utils.parsedate('5 Feb 2003 13:47:26 -0800') t = int(time.mktime(timetup)) eq(time.localtime(t)[:6], timetup[:6]) eq(int(time.strftime('%Y', timetup)), 2003) - timetup = Utils.parsedate_tz('5 Feb 2003 13:47:26 -0800') + timetup = utils.parsedate_tz('5 Feb 2003 13:47:26 -0800') t = int(time.mktime(timetup[:9])) eq(time.localtime(t)[:6], timetup[:6]) eq(int(time.strftime('%Y', timetup[:9])), 2003) def test_parseaddr_empty(self): - self.assertEqual(Utils.parseaddr('<>'), ('', '')) - self.assertEqual(Utils.formataddr(Utils.parseaddr('<>')), '') + self.assertEqual(utils.parseaddr('<>'), ('', '')) + self.assertEqual(utils.formataddr(utils.parseaddr('<>')), '') def test_noquote_dump(self): self.assertEqual( - Utils.formataddr(('A Silly Person', 'person at dom.ain')), + utils.formataddr(('A Silly Person', 'person at dom.ain')), 'A Silly Person ') def test_escape_dump(self): self.assertEqual( - Utils.formataddr(('A (Very) Silly Person', 'person at dom.ain')), + utils.formataddr(('A (Very) Silly Person', 'person at dom.ain')), r'"A \(Very\) Silly Person" ') a = r'A \(Special\) Person' b = 'person at dom.ain' - self.assertEqual(Utils.parseaddr(Utils.formataddr((a, b))), (a, b)) + self.assertEqual(utils.parseaddr(utils.formataddr((a, b))), (a, b)) def test_escape_backslashes(self): self.assertEqual( - Utils.formataddr(('Arthur \Backslash\ Foobar', 'person at dom.ain')), + utils.formataddr(('Arthur \Backslash\ Foobar', 'person at dom.ain')), r'"Arthur \\Backslash\\ Foobar" ') a = r'Arthur \Backslash\ Foobar' b = 'person at dom.ain' - self.assertEqual(Utils.parseaddr(Utils.formataddr((a, b))), (a, b)) + self.assertEqual(utils.parseaddr(utils.formataddr((a, b))), (a, b)) def test_name_with_dot(self): x = 'John X. Doe ' y = '"John X. Doe" ' a, b = ('John X. 
Doe', 'jxd at example.com') - self.assertEqual(Utils.parseaddr(x), (a, b)) - self.assertEqual(Utils.parseaddr(y), (a, b)) + self.assertEqual(utils.parseaddr(x), (a, b)) + self.assertEqual(utils.parseaddr(y), (a, b)) # formataddr() quotes the name if there's a dot in it - self.assertEqual(Utils.formataddr((a, b)), y) + self.assertEqual(utils.formataddr((a, b)), y) def test_quote_dump(self): self.assertEqual( - Utils.formataddr(('A Silly; Person', 'person at dom.ain')), + utils.formataddr(('A Silly; Person', 'person at dom.ain')), r'"A Silly; Person" ') def test_fix_eols(self): eq = self.assertEqual - eq(Utils.fix_eols('hello'), 'hello') - eq(Utils.fix_eols('hello\n'), 'hello\r\n') - eq(Utils.fix_eols('hello\r'), 'hello\r\n') - eq(Utils.fix_eols('hello\r\n'), 'hello\r\n') - eq(Utils.fix_eols('hello\n\r'), 'hello\r\n\r\n') + eq(utils.fix_eols('hello'), 'hello') + eq(utils.fix_eols('hello\n'), 'hello\r\n') + eq(utils.fix_eols('hello\r'), 'hello\r\n') + eq(utils.fix_eols('hello\r\n'), 'hello\r\n') + eq(utils.fix_eols('hello\n\r'), 'hello\r\n\r\n') def test_charset_richcomparisons(self): eq = self.assertEqual @@ -2184,18 +2188,18 @@ def test_getaddresses(self): eq = self.assertEqual - eq(Utils.getaddresses(['aperson at dom.ain (Al Person)', + eq(utils.getaddresses(['aperson at dom.ain (Al Person)', 'Bud Person ']), [('Al Person', 'aperson at dom.ain'), ('Bud Person', 'bperson at dom.ain')]) def test_getaddresses_nasty(self): eq = self.assertEqual - eq(Utils.getaddresses(['foo: ;']), [('', '')]) - eq(Utils.getaddresses( + eq(utils.getaddresses(['foo: ;']), [('', '')]) + eq(utils.getaddresses( ['[]*-- =~$']), [('', ''), ('', ''), ('', '*--')]) - eq(Utils.getaddresses( + eq(utils.getaddresses( ['foo: ;', '"Jason R. Mastaler" ']), [('', ''), ('Jason R. Mastaler', 'jason at dom.ain')]) @@ -2231,7 +2235,7 @@ eq(len(charsets), 1) eq(charsets[0], 'us-ascii') charset = Charset(charsets[0]) - eq(charset.get_body_encoding(), Encoders.encode_7or8bit) + eq(charset.get_body_encoding(), encoders.encode_7or8bit) msg.set_payload('hello world', charset=charset) eq(msg.get_payload(), 'hello world') eq(msg['content-transfer-encoding'], '7bit') @@ -2249,7 +2253,7 @@ # unreadline() of NeedMoreData. 
msg = self._msgobj('msg_43.txt') sfp = StringIO() - Iterators._structure(msg, sfp) + iterators._structure(msg, sfp) eq(sfp.getvalue(), """\ multipart/report text/plain @@ -2292,13 +2296,13 @@ neq = self.ndiffAssertEqual # First a simple non-multipart message msg = self._msgobj('msg_01.txt') - it = Iterators.body_line_iterator(msg) + it = iterators.body_line_iterator(msg) lines = list(it) eq(len(lines), 6) neq(EMPTYSTRING.join(lines), msg.get_payload()) # Now a more complicated multipart msg = self._msgobj('msg_02.txt') - it = Iterators.body_line_iterator(msg) + it = iterators.body_line_iterator(msg) lines = list(it) eq(len(lines), 43) fp = openfile('msg_19.txt') @@ -2310,7 +2314,7 @@ def test_typed_subpart_iterator(self): eq = self.assertEqual msg = self._msgobj('msg_04.txt') - it = Iterators.typed_subpart_iterator(msg, 'text') + it = iterators.typed_subpart_iterator(msg, 'text') lines = [] subparts = 0 for subpart in it: @@ -2327,7 +2331,7 @@ def test_typed_subpart_iterator_default_type(self): eq = self.assertEqual msg = self._msgobj('msg_03.txt') - it = Iterators.typed_subpart_iterator(msg, 'text', 'plain') + it = iterators.typed_subpart_iterator(msg, 'text', 'plain') lines = [] subparts = 0 for subpart in it: @@ -2493,8 +2497,8 @@ class TestBase64(unittest.TestCase): def test_len(self): eq = self.assertEqual - eq(base64MIME.base64_len('hello'), - len(base64MIME.encode('hello', eol=''))) + eq(base64mime.base64_len('hello'), + len(base64mime.encode('hello', eol=''))) for size in range(15): if size == 0 : bsize = 0 elif size <= 3 : bsize = 4 @@ -2502,31 +2506,31 @@ elif size <= 9 : bsize = 12 elif size <= 12: bsize = 16 else : bsize = 20 - eq(base64MIME.base64_len('x'*size), bsize) + eq(base64mime.base64_len('x'*size), bsize) def test_decode(self): eq = self.assertEqual - eq(base64MIME.decode(''), '') - eq(base64MIME.decode('aGVsbG8='), 'hello') - eq(base64MIME.decode('aGVsbG8=', 'X'), 'hello') - eq(base64MIME.decode('aGVsbG8NCndvcmxk\n', 'X'), 'helloXworld') + eq(base64mime.decode(''), '') + eq(base64mime.decode('aGVsbG8='), 'hello') + eq(base64mime.decode('aGVsbG8=', 'X'), 'hello') + eq(base64mime.decode('aGVsbG8NCndvcmxk\n', 'X'), 'helloXworld') def test_encode(self): eq = self.assertEqual - eq(base64MIME.encode(''), '') - eq(base64MIME.encode('hello'), 'aGVsbG8=\n') + eq(base64mime.encode(''), '') + eq(base64mime.encode('hello'), 'aGVsbG8=\n') # Test the binary flag - eq(base64MIME.encode('hello\n'), 'aGVsbG8K\n') - eq(base64MIME.encode('hello\n', 0), 'aGVsbG8NCg==\n') + eq(base64mime.encode('hello\n'), 'aGVsbG8K\n') + eq(base64mime.encode('hello\n', 0), 'aGVsbG8NCg==\n') # Test the maxlinelen arg - eq(base64MIME.encode('xxxx ' * 20, maxlinelen=40), """\ + eq(base64mime.encode('xxxx ' * 20, maxlinelen=40), """\ eHh4eCB4eHh4IHh4eHggeHh4eCB4eHh4IHh4eHgg eHh4eCB4eHh4IHh4eHggeHh4eCB4eHh4IHh4eHgg eHh4eCB4eHh4IHh4eHggeHh4eCB4eHh4IHh4eHgg eHh4eCB4eHh4IA== """) # Test the eol argument - eq(base64MIME.encode('xxxx ' * 20, maxlinelen=40, eol='\r\n'), """\ + eq(base64mime.encode('xxxx ' * 20, maxlinelen=40, eol='\r\n'), """\ eHh4eCB4eHh4IHh4eHggeHh4eCB4eHh4IHh4eHgg\r eHh4eCB4eHh4IHh4eHggeHh4eCB4eHh4IHh4eHgg\r eHh4eCB4eHh4IHh4eHggeHh4eCB4eHh4IHh4eHgg\r @@ -2535,7 +2539,7 @@ def test_header_encode(self): eq = self.assertEqual - he = base64MIME.header_encode + he = base64mime.header_encode eq(he('hello'), '=?iso-8859-1?b?aGVsbG8=?=') eq(he('hello\nworld'), '=?iso-8859-1?b?aGVsbG8NCndvcmxk?=') # Test the charset option @@ -2577,20 +2581,20 @@ def test_header_quopri_check(self): for c in self.hlit: - 
self.failIf(quopriMIME.header_quopri_check(c)) + self.failIf(quoprimime.header_quopri_check(c)) for c in self.hnon: - self.failUnless(quopriMIME.header_quopri_check(c)) + self.failUnless(quoprimime.header_quopri_check(c)) def test_body_quopri_check(self): for c in self.blit: - self.failIf(quopriMIME.body_quopri_check(c)) + self.failIf(quoprimime.body_quopri_check(c)) for c in self.bnon: - self.failUnless(quopriMIME.body_quopri_check(c)) + self.failUnless(quoprimime.body_quopri_check(c)) def test_header_quopri_len(self): eq = self.assertEqual - hql = quopriMIME.header_quopri_len - enc = quopriMIME.header_encode + hql = quoprimime.header_quopri_len + enc = quoprimime.header_encode for s in ('hello', 'h at e@l at l@o@'): # Empty charset and no line-endings. 7 == RFC chrome eq(hql(s), len(enc(s, charset='', eol=''))-7) @@ -2601,7 +2605,7 @@ def test_body_quopri_len(self): eq = self.assertEqual - bql = quopriMIME.body_quopri_len + bql = quoprimime.body_quopri_len for c in self.blit: eq(bql(c), 1) for c in self.bnon: @@ -2610,11 +2614,11 @@ def test_quote_unquote_idempotent(self): for x in range(256): c = chr(x) - self.assertEqual(quopriMIME.unquote(quopriMIME.quote(c)), c) + self.assertEqual(quoprimime.unquote(quoprimime.quote(c)), c) def test_header_encode(self): eq = self.assertEqual - he = quopriMIME.header_encode + he = quoprimime.header_encode eq(he('hello'), '=?iso-8859-1?q?hello?=') eq(he('hello\nworld'), '=?iso-8859-1?q?hello=0D=0Aworld?=') # Test the charset option @@ -2640,29 +2644,29 @@ def test_decode(self): eq = self.assertEqual - eq(quopriMIME.decode(''), '') - eq(quopriMIME.decode('hello'), 'hello') - eq(quopriMIME.decode('hello', 'X'), 'hello') - eq(quopriMIME.decode('hello\nworld', 'X'), 'helloXworld') + eq(quoprimime.decode(''), '') + eq(quoprimime.decode('hello'), 'hello') + eq(quoprimime.decode('hello', 'X'), 'hello') + eq(quoprimime.decode('hello\nworld', 'X'), 'helloXworld') def test_encode(self): eq = self.assertEqual - eq(quopriMIME.encode(''), '') - eq(quopriMIME.encode('hello'), 'hello') + eq(quoprimime.encode(''), '') + eq(quoprimime.encode('hello'), 'hello') # Test the binary flag - eq(quopriMIME.encode('hello\r\nworld'), 'hello\nworld') - eq(quopriMIME.encode('hello\r\nworld', 0), 'hello\nworld') + eq(quoprimime.encode('hello\r\nworld'), 'hello\nworld') + eq(quoprimime.encode('hello\r\nworld', 0), 'hello\nworld') # Test the maxlinelen arg - eq(quopriMIME.encode('xxxx ' * 20, maxlinelen=40), """\ + eq(quoprimime.encode('xxxx ' * 20, maxlinelen=40), """\ xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx= xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxx= x xxxx xxxx xxxx xxxx=20""") # Test the eol argument - eq(quopriMIME.encode('xxxx ' * 20, maxlinelen=40, eol='\r\n'), """\ + eq(quoprimime.encode('xxxx ' * 20, maxlinelen=40, eol='\r\n'), """\ xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx=\r xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxx=\r x xxxx xxxx xxxx xxxx=20""") - eq(quopriMIME.encode("""\ + eq(quoprimime.encode("""\ one line two line"""), """\ @@ -2675,7 +2679,7 @@ # Test the Charset class class TestCharset(unittest.TestCase): def tearDown(self): - from email import Charset as CharsetModule + from email import charset as CharsetModule try: del CharsetModule.CHARSETS['fake'] except KeyError: @@ -2718,7 +2722,7 @@ # Testing SF bug #625509, which we have to fake, since there are no # built-in encodings where the header encoding is QP but the body # encoding is not. 
- from email import Charset as CharsetModule + from email import charset as CharsetModule CharsetModule.add_charset('fake', CharsetModule.QP, None) c = Charset('fake') eq('hello w\xf6rld', c.body_encode('hello w\xf6rld')) @@ -2884,7 +2888,7 @@ def test_broken_base64_header(self): raises = self.assertRaises s = 'Subject: =?EUC-KR?B?CSixpLDtKSC/7Liuvsax4iC6uLmwMcijIKHaILzSwd/H0SC8+LCjwLsgv7W/+Mj3IQ?=' - raises(Errors.HeaderParseError, decode_header, s) + raises(errors.HeaderParseError, decode_header, s) From python-checkins at python.org Thu Feb 9 04:06:24 2006 From: python-checkins at python.org (barry.warsaw) Date: Thu, 9 Feb 2006 04:06:24 +0100 (CET) Subject: [Python-checkins] r42277 - sandbox/trunk/emailpkg/3.1/testall.py Message-ID: <20060209030624.882ED1E4007@bag.python.org> Author: barry.warsaw Date: Thu Feb 9 04:06:24 2006 New Revision: 42277 Modified: sandbox/trunk/emailpkg/3.1/testall.py Log: Add the parallel tests using the PEP 8 names. Modified: sandbox/trunk/emailpkg/3.1/testall.py ============================================================================== --- sandbox/trunk/emailpkg/3.1/testall.py (original) +++ sandbox/trunk/emailpkg/3.1/testall.py Thu Feb 9 04:06:24 2006 @@ -17,6 +17,7 @@ import getopt from email.test import test_email +from email.test import test_email_renamed from test.test_support import TestSkipped try: @@ -27,6 +28,7 @@ # See if we have the Japanese codecs package installed try: from email.test import test_email_codecs + from email.test import test_email_codecs_renamed except TestSkipped: test_email_codecs = None @@ -35,8 +37,10 @@ def suite(): suite = unittest.TestSuite() suite.addTest(test_email.suite()) + suite.addTest(test_email_renamed.suite()) if test_email_codecs is not None: suite.addTest(test_email_codecs.suite()) + suite.addTest(test_email_codecs_renamed.suite()) if test_email_torture is not None: suite.addTest(test_email_torture.suite()) return suite From python-checkins at python.org Thu Feb 9 04:59:47 2006 From: python-checkins at python.org (neal.norwitz) Date: Thu, 9 Feb 2006 04:59:47 +0100 (CET) Subject: [Python-checkins] r42278 - peps/trunk/pep-0356.txt Message-ID: <20060209035947.932E01E4002@bag.python.org> Author: neal.norwitz Date: Thu Feb 9 04:59:44 2006 New Revision: 42278 Modified: peps/trunk/pep-0356.txt Log: Add note about the new cProfile module Modified: peps/trunk/pep-0356.txt ============================================================================== --- peps/trunk/pep-0356.txt (original) +++ peps/trunk/pep-0356.txt Thu Feb 9 04:59:44 2006 @@ -55,6 +55,9 @@ - new hashlib module add support for SHA-224, -256, -384, and -512 (replaces old md5 and sha modules) + - new cProfile module suitable for profiling long running applications + with minimal overhead + Planned features for 2.5 From python-checkins at python.org Thu Feb 9 05:03:24 2006 From: python-checkins at python.org (barry.warsaw) Date: Thu, 9 Feb 2006 05:03:24 +0100 (CET) Subject: [Python-checkins] r42279 - in python/branches/release23-maint/Lib/email: Message.py _compat21.py _compat22.py test/test_email.py Message-ID: <20060209040324.F3C271E4002@bag.python.org> Author: barry.warsaw Date: Thu Feb 9 05:03:22 2006 New Revision: 42279 Modified: python/branches/release23-maint/Lib/email/Message.py python/branches/release23-maint/Lib/email/_compat21.py python/branches/release23-maint/Lib/email/_compat22.py python/branches/release23-maint/Lib/email/test/test_email.py Log: Resolve SF bug 1409403: email.Message should supress warning from uu.decode. 
However, the patch in that tracker item is elaborated such that the newly included unit test pass on Python 2.1 through 2.5. Note that Python 2.1's uu.decode() does not have a 'quiet' argument, so we have to be sneaky. Will port to email 3.0 (although without the backward compatible sneakiness). Modified: python/branches/release23-maint/Lib/email/Message.py ============================================================================== --- python/branches/release23-maint/Lib/email/Message.py (original) +++ python/branches/release23-maint/Lib/email/Message.py Thu Feb 9 05:03:22 2006 @@ -23,6 +23,12 @@ True = 1 False = 0 +try: + from email._compat22 import quiet_uu_decode +except SyntaxError: + from email._compat21 import quiet_uu_decode + + # Regular expression used to split header parameters. BAW: this may be too # simple. It isn't strictly RFC 2045 (section 5.1) compliant, but it catches # most headers found in the wild. We may eventually need a full fledged @@ -220,7 +226,7 @@ elif cte in ('x-uuencode', 'uuencode', 'uue', 'x-uue'): sfp = StringIO() try: - uu.decode(StringIO(payload+'\n'), sfp) + quiet_uu_decode(StringIO(payload+'\n'), sfp, quiet=True) payload = sfp.getvalue() except uu.Error: # Some decoding problem Modified: python/branches/release23-maint/Lib/email/_compat21.py ============================================================================== --- python/branches/release23-maint/Lib/email/_compat21.py (original) +++ python/branches/release23-maint/Lib/email/_compat21.py Thu Feb 9 05:03:22 2006 @@ -1,8 +1,10 @@ -# Copyright (C) 2002 Python Software Foundation -# Author: barry at zope.com +# Copyright (C) 2002-2006 Python Software Foundation +# Author: barry at python.org -"""Module containing compatibility functions for Python 2.1. -""" +"""Module containing compatibility functions for Python 2.1.""" + +import uu +import sys from cStringIO import StringIO from types import StringType, UnicodeType @@ -67,3 +69,14 @@ if subtype is None or subpart.get_content_subtype() == subtype: parts.append(subpart) return parts + + + +def quiet_uu_decode(in_file, out_file, quiet): + # In Python 2.1, uu.decode() does not support the quiet flag. Cheat. + old_stderr = sys.stderr + try: + sys.stderr = StringIO() + uu.decode(in_file, out_file) + finally: + sys.stderr = old_stderr Modified: python/branches/release23-maint/Lib/email/_compat22.py ============================================================================== --- python/branches/release23-maint/Lib/email/_compat22.py (original) +++ python/branches/release23-maint/Lib/email/_compat22.py Thu Feb 9 05:03:22 2006 @@ -1,7 +1,8 @@ -# Copyright (C) 2002 Python Software Foundation -# Author: barry at zope.com +# Copyright (C) 2002-2006 Python Software Foundation +# Author: barry at python.org -"""Module containing compatibility functions for Python 2.2. +"""Module containing compatibility functions for Python 2.2 (and possibly +beyond. 
""" from __future__ import generators @@ -9,6 +10,8 @@ from cStringIO import StringIO from types import StringTypes +import uu + # Python 2.2.x where x < 1 lacks True/False try: True, False @@ -68,3 +71,8 @@ if subpart.get_content_maintype() == maintype: if subtype is None or subpart.get_content_subtype() == subtype: yield subpart + + + +def quiet_uu_decode(in_file, out_file, quiet): + uu.decode(in_file, out_file, quiet=quiet) Modified: python/branches/release23-maint/Lib/email/test/test_email.py ============================================================================== --- python/branches/release23-maint/Lib/email/test/test_email.py (original) +++ python/branches/release23-maint/Lib/email/test/test_email.py Thu Feb 9 05:03:22 2006 @@ -222,6 +222,19 @@ msg.set_payload('foo') eq(msg.get_payload(decode=True), 'foo') + def test_decode_bogus_uu_payload_quietly(self): + msg = Message() + msg.set_payload('begin 664 foo.txt\n% Author: barry.warsaw Date: Thu Feb 9 05:10:03 2006 New Revision: 42280 Modified: python/branches/release24-maint/Lib/email/Message.py python/branches/release24-maint/Lib/email/test/test_email.py Log: Port of r42279 to email 3.0, but without the Python 2.1 backward compatible nonsense. Resolve SF bug 1409403: email.Message should supress warning from uu.decode. Modified: python/branches/release24-maint/Lib/email/Message.py ============================================================================== --- python/branches/release24-maint/Lib/email/Message.py (original) +++ python/branches/release24-maint/Lib/email/Message.py Thu Feb 9 05:10:03 2006 @@ -198,7 +198,7 @@ elif cte in ('x-uuencode', 'uuencode', 'uue', 'x-uue'): sfp = StringIO() try: - uu.decode(StringIO(payload+'\n'), sfp) + uu.decode(StringIO(payload+'\n'), sfp, quiet=True) payload = sfp.getvalue() except uu.Error: # Some decoding problem Modified: python/branches/release24-maint/Lib/email/test/test_email.py ============================================================================== --- python/branches/release24-maint/Lib/email/test/test_email.py (original) +++ python/branches/release24-maint/Lib/email/test/test_email.py Thu Feb 9 05:10:03 2006 @@ -211,6 +211,19 @@ msg.set_payload('foo') eq(msg.get_payload(decode=True), 'foo') + def test_decode_bogus_uu_payload_quietly(self): + msg = Message() + msg.set_payload('begin 664 foo.txt\n% Author: barry.warsaw Date: Thu Feb 9 05:11:13 2006 New Revision: 42281 Modified: sandbox/trunk/emailpkg/3.0/ (props changed) Log: In anticipation of email 3.1 for Python 2.5, switch the sandbox's email 3.0 external to the release24-maint branch's copy. From python-checkins at python.org Thu Feb 9 05:15:47 2006 From: python-checkins at python.org (barry.warsaw) Date: Thu, 9 Feb 2006 05:15:47 +0100 (CET) Subject: [Python-checkins] r42282 - in sandbox/trunk/emailpkg/3.1/email: message.py test/test_email.py Message-ID: <20060209041547.921031E4002@bag.python.org> Author: barry.warsaw Date: Thu Feb 9 05:15:45 2006 New Revision: 42282 Modified: sandbox/trunk/emailpkg/3.1/email/message.py sandbox/trunk/emailpkg/3.1/email/test/test_email.py Log: Port of r42279 to the sandbox email 3.1 branch, but without the backward compatible nonsense. Resolve SF bug 1409403: email.Message should supress warning from uu.decode. 
Modified: sandbox/trunk/emailpkg/3.1/email/message.py ============================================================================== --- sandbox/trunk/emailpkg/3.1/email/message.py (original) +++ sandbox/trunk/emailpkg/3.1/email/message.py Thu Feb 9 05:15:45 2006 @@ -198,7 +198,7 @@ elif cte in ('x-uuencode', 'uuencode', 'uue', 'x-uue'): sfp = StringIO() try: - uu.decode(StringIO(payload+'\n'), sfp) + uu.decode(StringIO(payload+'\n'), sfp, quiet=True) payload = sfp.getvalue() except uu.Error: # Some decoding problem Modified: sandbox/trunk/emailpkg/3.1/email/test/test_email.py ============================================================================== --- sandbox/trunk/emailpkg/3.1/email/test/test_email.py (original) +++ sandbox/trunk/emailpkg/3.1/email/test/test_email.py Thu Feb 9 05:15:45 2006 @@ -211,6 +211,19 @@ msg.set_payload('foo') eq(msg.get_payload(decode=True), 'foo') + def test_decode_bogus_uu_payload_quietly(self): + msg = Message() + msg.set_payload('begin 664 foo.txt\n% Author: fred.drake Date: Thu Feb 9 05:21:35 2006 New Revision: 42283 Modified: sandbox/trunk/emailpkg/3.1/email/__init__.py Log: be a little lazier than before Modified: sandbox/trunk/emailpkg/3.1/email/__init__.py ============================================================================== --- sandbox/trunk/emailpkg/3.1/email/__init__.py (original) +++ sandbox/trunk/emailpkg/3.1/email/__init__.py Thu Feb 9 05:21:35 2006 @@ -73,11 +73,11 @@ class LazyImporter(object): def __init__(self, module_name): - self.__module_name = 'email.' + module_name + self.__name__ = 'email.' + module_name def __getattr__(self, name): - __import__(self.__module_name) - mod = sys.modules[self.__module_name] + __import__(self.__name__) + mod = sys.modules[self.__name__] self.__dict__.update(mod.__dict__) return getattr(mod, name) From nnorwitz at gmail.com Thu Feb 9 05:24:51 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Wed, 8 Feb 2006 20:24:51 -0800 Subject: [Python-checkins] r42269 - in python/trunk: Doc/lib/lib.tex Doc/lib/libhotshot.tex Doc/lib/libprofile.tex Lib/cProfile.py Lib/pstats.py Lib/test/output/test_cProfile Lib/test/output/test_profile Lib/test/test_cProfile.py Lib/test/test_profile.py Message-ID: On 2/8/06, armin.rigo wrote: > Added: python/trunk/Lib/cProfile.py > > +# Backwards compatibility. > +def help(): > + print "Documentation for the profile/cProfile modules can be found " > + print "in the Python Library Reference, section 'The Python Profiler'." Should this generate a warning? When should support for help be removed? > +def main(): > + import os, sys > + from optparse import OptionParser > + usage = "cProfile.py [-o output_file_path] [-s sort] scriptfile [arg] ..." > + parser = OptionParser(usage=usage) > + parser.allow_interspersed_args = False > + parser.add_option('-o', '--outfile', dest="outfile", > + help="Save stats to ", default=None) > + parser.add_option('-s', '--sort', dest="sort", > + help="Sort order when printing to stdout, based on pstats.Stats class", default=-1) > + > + if not sys.argv[1:]: > + parser.print_usage() > + sys.exit(2) > + > + (options, args) = parser.parse_args() > + sys.argv[:] = args > + > + if (len(sys.argv) > 0): > + sys.path.insert(0, os.path.dirname(sys.argv[0])) > + run('execfile(%r)' % (sys.argv[0],), options.outfile, options.sort) > + else: > + parser.print_usage() > + return parser Why does main() return a parser? Is that useful? 
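(For context, a minimal usage sketch of the module under review; the profiled statement and the "prof.out" filename are made up, and it assumes only the run() function shown in the patch plus the existing pstats module.)

    import cProfile
    import pstats

    # Profile a statement, save the raw stats to a file, then show the
    # five most expensive functions by internal time.
    cProfile.run("sum([i * i for i in range(100000)])", "prof.out")
    stats = pstats.Stats("prof.out")
    stats.sort_stats("time").print_stats(5)
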
> Added: python/trunk/Modules/_lsprof.c > +static void > +ptrace_enter_call(PyObject *self, void *key, PyObject *userObj) > +{ > + /* entering a call to the function identified by 'key' > + (which can be a PyCodeObject or a PyMethodDef pointer) */ > + ProfilerObject *pObj = (ProfilerObject*)self; > + ProfilerEntry *profEntry; > + ProfilerContext *pContext; > + > + profEntry = getEntry(pObj, key); > + if (profEntry == NULL) { > + profEntry = newProfilerEntry(pObj, key, userObj); > + if (profEntry == NULL) Should you do pObj->flags |= POF_NOMEMORY; like below? Why don't ptrace_enter_call() and ptrace_leave_call() return an error, would that be easier? > + return; > + } > + /* grab a ProfilerContext out of the free list */ > + pContext = pObj->freelistProfilerContext; > + if (pContext) { > + pObj->freelistProfilerContext = pContext->previous; > + } > + else { > + /* free list exhausted, allocate a new one */ > + pContext = (ProfilerContext*) > + malloc(sizeof(ProfilerContext)); > + if (pContext == NULL) { > + pObj->flags |= POF_NOMEMORY; > + return; > + } > + } > + initContext(pObj, pContext, profEntry); > +} > + > +static void > +ptrace_leave_call(PyObject *self, void *key) > +{ > + /* leaving a call to the function identified by 'key' */ > + ProfilerObject *pObj = (ProfilerObject*)self; > + ProfilerEntry *profEntry; > + ProfilerContext *pContext; > + > + pContext = pObj->currentProfilerContext; > + if (pContext == NULL) Should you do pObj->flags |= POF_NOMEMORY; like above? > + return; > + profEntry = getEntry(pObj, key); > + if (profEntry) { > + Stop(pObj, pContext, profEntry); > + } > + else { > + pObj->currentProfilerContext = pContext->previous; > + } > + /* put pContext into the free list */ > + pContext->previous = pObj->freelistProfilerContext; > + pObj->freelistProfilerContext = pContext; > +} > +PyMODINIT_FUNC > +init_lsprof(void) > +{ > + PyObject *module, *d; > + module = Py_InitModule3("_lsprof", moduleMethods, "Fast profiler"); > + d = PyModule_GetDict(module); need to check return result of module and d, since they could fail. Sorry, this is out of order, but already deleted it: +static int +pending_exception(ProfilerObject *pObj) +{ + if (pObj->flags & POF_NOMEMORY) { + pObj->flags -= POF_NOMEMORY; The -= looks funny b/c it's the only place that is done. Everywhere else you do &= ~POF_NOMEMORY. n From nnorwitz at gmail.com Thu Feb 9 05:58:43 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Wed, 8 Feb 2006 20:58:43 -0800 Subject: [Python-checkins] r42254 - in python/branches/release24-maint: Misc/NEWS Modules/_ssl.c Modules/socketmodule.c In-Reply-To: References: <20060207071739.920331E4007@bag.python.org> Message-ID: On 2/7/06, Jim Jewett wrote: > Is there any other way that a file descriptor could be invalid? Not that I know of. > I keep wanting to see SOCKET_TOO_LARGE instead of > SOCKET_INVALID since that is all it checks. That makes sense and is a better name. > Also, if I am understanding correctly, the problem isn't > with the size of the socket, it is with the total number of > file descriptors, and this socket just happened to get one > outside the valid set (which is presumably numbered > sequentially). Basically yes. If FD_SETSIZE is 4k and you get socket descriptor 5000, regardless of how many sockets are opened, 5000 can't be passed into select(). You would get a buffer overrun when trying to set the descriptor. > Maybe SOCKET_INVALID_FD ? What name do you prefer: SOCKET_INVALID_FD or SOCKET_TOO_LARGE or something else? 
It's literally SOCKET_TOO_LARGE_FOR_SELECT. Maybe that? > (Also, would a negative number or float or something cause the > same problems, except that they presumably get weeded out > earlier?) Yes, negative descriptors are already checked for everywhere I saw (unless I missed some places). n From python-checkins at python.org Thu Feb 9 06:08:57 2006 From: python-checkins at python.org (neal.norwitz) Date: Thu, 9 Feb 2006 06:08:57 +0100 (CET) Subject: [Python-checkins] r42284 - python/trunk/Misc/build.sh Message-ID: <20060209050857.1BC001E4002@bag.python.org> Author: neal.norwitz Date: Thu Feb 9 06:08:56 2006 New Revision: 42284 Modified: python/trunk/Misc/build.sh Log: Be a little less sensitive to failures. Only check for the result from regrtest.py. If we grep for just "fail", that finds bsddb deadlock messages, which presumably are just warnings. They don't cause a test failure. Modified: python/trunk/Misc/build.sh ============================================================================== --- python/trunk/Misc/build.sh (original) +++ python/trunk/Misc/build.sh Thu Feb 9 06:08:56 2006 @@ -144,7 +144,7 @@ F=make-test.out start=`current_time` make test >& build/$F - NUM_FAILURES=`grep -ic fail build/$F` + NUM_FAILURES=`grep -ic " test failed:" build/$F` update_status "Testing basics ($NUM_FAILURES failures)" "$F" $start ## FIXME: should mail since -uall below should find same problems mail_on_failure "basics" build/$F From barry at python.org Thu Feb 9 06:13:31 2006 From: barry at python.org (Barry Warsaw) Date: Thu, 9 Feb 2006 00:13:31 -0500 Subject: [Python-checkins] r42283 - sandbox/trunk/emailpkg/3.1/email/__init__.py In-Reply-To: <20060209042139.9F1201E4002@bag.python.org> References: <20060209042139.9F1201E4002@bag.python.org> Message-ID: <8BA5A9D4-736C-419E-832A-9ECFF455C715@python.org> On Feb 8, 2006, at 11:21 PM, fred.drake wrote: > Author: fred.drake > Date: Thu Feb 9 05:21:35 2006 > New Revision: 42283 > > Modified: > sandbox/trunk/emailpkg/3.1/email/__init__.py > Log: > be a little lazier than before > > Modified: sandbox/trunk/emailpkg/3.1/email/__init__.py > ====================================================================== > ======== > --- sandbox/trunk/emailpkg/3.1/email/__init__.py (original) > +++ sandbox/trunk/emailpkg/3.1/email/__init__.py Thu Feb 9 > 05:21:35 2006 > @@ -73,11 +73,11 @@ > > class LazyImporter(object): > def __init__(self, module_name): > - self.__module_name = 'email.' + module_name > + self.__name__ = 'email.' + module_name > > def __getattr__(self, name): > - __import__(self.__module_name) > - mod = sys.modules[self.__module_name] > + __import__(self.__name__) > + mod = sys.modules[self.__name__] > self.__dict__.update(mod.__dict__) > return getattr(mod, name) Nice! 
-Barry From python-checkins at python.org Thu Feb 9 07:19:10 2006 From: python-checkins at python.org (neal.norwitz) Date: Thu, 9 Feb 2006 07:19:10 +0100 (CET) Subject: [Python-checkins] r42285 - peps/trunk/pep-0000.txt Message-ID: <20060209061910.368C71E4002@bag.python.org> Author: neal.norwitz Date: Thu Feb 9 07:19:07 2006 New Revision: 42285 Modified: peps/trunk/pep-0000.txt Log: PEPs 310 and 325 were rejected, but not noted in numerical index Modified: peps/trunk/pep-0000.txt ============================================================================== --- peps/trunk/pep-0000.txt (original) +++ peps/trunk/pep-0000.txt Thu Feb 9 07:19:07 2006 @@ -361,7 +361,7 @@ SF 307 Extensions to the pickle protocol GvR, Peters SA 308 Conditional Expressions GvR, Hettinger SF 309 Partial Function Application Harris - S 310 Reliable Acquisition/Release Pairs Hudson, Moore + SR 310 Reliable Acquisition/Release Pairs Hudson, Moore SF 311 Simplified GIL Acquisition for Extensions Hammond SD 312 Simple Implicit Lambda Suzi, Martelli SR 313 Adding Roman Numeral Literals to Python Meyer @@ -376,7 +376,7 @@ SF 322 Reverse Iteration Hettinger S 323 Copyable Iterators Martelli SF 324 subprocess - New POSIX process module Astrand - S 325 Resource-Release Support for Generators Pedroni + SR 325 Resource-Release Support for Generators Pedroni SR 326 A Case for Top and Bottom Values Carlson, Reedy SF 327 Decimal Data Type Batista SA 328 Imports: Multi-Line and Absolute/Relative Aahz From python-checkins at python.org Thu Feb 9 09:22:58 2006 From: python-checkins at python.org (neal.norwitz) Date: Thu, 9 Feb 2006 09:22:58 +0100 (CET) Subject: [Python-checkins] r42286 - peps/trunk/pep-0356.txt Message-ID: <20060209082258.64C3C1E4002@bag.python.org> Author: neal.norwitz Date: Thu Feb 9 09:22:57 2006 New Revision: 42286 Modified: peps/trunk/pep-0356.txt Log: make a well formed url Modified: peps/trunk/pep-0356.txt ============================================================================== --- peps/trunk/pep-0356.txt (original) +++ peps/trunk/pep-0356.txt Thu Feb 9 09:22:57 2006 @@ -48,7 +48,7 @@ - AST-based compiler - - Add support for reading shadow passwords (www.python.org/sf/579435) + - Add support for reading shadow passwords (http://python.org/sf/579435) - any()/all() builtin truth functions From python-checkins at python.org Thu Feb 9 09:25:24 2006 From: python-checkins at python.org (neal.norwitz) Date: Thu, 9 Feb 2006 09:25:24 +0100 (CET) Subject: [Python-checkins] r42287 - peps/trunk/pep-0200.txt peps/trunk/pep-0206.txt peps/trunk/pep-0207.txt peps/trunk/pep-0214.txt peps/trunk/pep-0231.txt Message-ID: <20060209082524.1C4D51E401F@bag.python.org> Author: neal.norwitz Date: Thu Feb 9 09:25:22 2006 New Revision: 42287 Modified: peps/trunk/pep-0200.txt peps/trunk/pep-0206.txt peps/trunk/pep-0207.txt peps/trunk/pep-0214.txt peps/trunk/pep-0231.txt Log: Add missing PEP type Modified: peps/trunk/pep-0200.txt ============================================================================== --- peps/trunk/pep-0200.txt (original) +++ peps/trunk/pep-0200.txt Thu Feb 9 09:25:22 2006 @@ -3,6 +3,7 @@ Version: $Revision$ Author: Jeremy Hylton Python-Version: 2.0 +Type: Informational Status: Final Modified: peps/trunk/pep-0206.txt ============================================================================== --- peps/trunk/pep-0206.txt (original) +++ peps/trunk/pep-0206.txt Thu Feb 9 09:25:22 2006 @@ -2,6 +2,7 @@ Title: Python Advanced Library Version: $Revision$ Author: A.M. 
Kuchling +Type: Informational Status: Withdrawn Modified: peps/trunk/pep-0207.txt ============================================================================== --- peps/trunk/pep-0207.txt (original) +++ peps/trunk/pep-0207.txt Thu Feb 9 09:25:22 2006 @@ -3,6 +3,7 @@ Version: $Revision$ Author: guido at python.org (Guido van Rossum), DavidA at ActiveState.com (David Ascher) Python-Version: 2.1 +Type: Standards Status: Final Modified: peps/trunk/pep-0214.txt ============================================================================== --- peps/trunk/pep-0214.txt (original) +++ peps/trunk/pep-0214.txt Thu Feb 9 09:25:22 2006 @@ -3,6 +3,7 @@ Version: $Revision$ Author: barry at python.org (Barry A. Warsaw) Python-Version: 2.0 +Type: Standards Status: Final Created: 24-Jul-2000 Post-History: 16-Aug-2000 Modified: peps/trunk/pep-0231.txt ============================================================================== --- peps/trunk/pep-0231.txt (original) +++ peps/trunk/pep-0231.txt Thu Feb 9 09:25:22 2006 @@ -4,6 +4,7 @@ Author: barry at python.org (Barry A. Warsaw) Python-Version: 2.1 Status: Draft +Type: Standards Created: 30-Nov-2000 Post-History: From python-checkins at python.org Thu Feb 9 09:27:56 2006 From: python-checkins at python.org (neal.norwitz) Date: Thu, 9 Feb 2006 09:27:56 +0100 (CET) Subject: [Python-checkins] r42289 - peps/trunk/pep-0000.txt Message-ID: <20060209082756.D5AAD1E4008@bag.python.org> Author: neal.norwitz Date: Thu Feb 9 09:27:55 2006 New Revision: 42289 Modified: peps/trunk/pep-0000.txt Log: PEPs are in SVN now. Also update PEP 1s title Modified: peps/trunk/pep-0000.txt ============================================================================== --- peps/trunk/pep-0000.txt (original) +++ peps/trunk/pep-0000.txt Thu Feb 9 09:27:55 2006 @@ -14,7 +14,7 @@ The PEP contains the index of all Python Enhancement Proposals, known as PEPs. PEP numbers are assigned by the PEP Editor, and - once assigned are never changed. The CVS history[1] of the PEP + once assigned are never changed. The SVN history[1] of the PEP texts represent their historical record. 
The BDFL maintains his own Pronouncements page[2] at @@ -30,7 +30,7 @@ Meta-PEPs (PEPs about PEPs or Process) I 0 Index of Python Enhancement Proposals Goodger, Warsaw - P 1 PEP Guidelines Warsaw, Hylton, Goodger + P 1 PEP Purpose and Guidelines Warsaw, Hylton, Goodger I 2 Procedure for Adding New Modules Faassen I 3 Guidelines for Handling Bug Reports Hylton I 4 Deprecation of Standard Modules von Loewis @@ -232,7 +232,7 @@ num title owner --- ----- ----- I 0 Index of Python Enhancement Proposals Goodger, Warsaw - P 1 PEP Guidelines Warsaw, Hylton, Goodger + P 1 PEP Purpose and Guidelines Warsaw, Hylton, Goodger I 2 Procedure for Adding New Modules Faassen I 3 Guidelines for Handling Bug Reports Hylton I 4 Deprecation of Standard Modules von Loewis @@ -523,7 +523,7 @@ References [1] View PEP history online - http://cvs.sf.net/cgi-bin/viewcvs.cgi/python/python/nondist/peps/ + http://svn.python.org/projects/peps/trunk/ [2] The Benevolent Dictator For Life's Parade of PEPs http://www.python.org/doc/essays/pepparade.html From python-checkins at python.org Thu Feb 9 09:31:04 2006 From: python-checkins at python.org (vinay.sajip) Date: Thu, 9 Feb 2006 09:31:04 +0100 (CET) Subject: [Python-checkins] r42290 - python/trunk/Lib/test/test_logging.py Message-ID: <20060209083104.495C91E40E3@bag.python.org> Author: vinay.sajip Date: Thu Feb 9 09:31:00 2006 New Revision: 42290 Modified: python/trunk/Lib/test/test_logging.py Log: Added lock acquisition/release around shared data structure manipulation Modified: python/trunk/Lib/test/test_logging.py ============================================================================== --- python/trunk/Lib/test/test_logging.py (original) +++ python/trunk/Lib/test/test_logging.py Thu Feb 9 09:31:00 2006 @@ -466,9 +466,13 @@ conf = globals()['config%d' % i] sys.stdout.write('config%d: ' % i) loggerDict = logging.getLogger().manager.loggerDict - saved_handlers = logging._handlers.copy() - saved_handler_list = logging._handlerList[:] - saved_loggers = loggerDict.copy() + logging._acquireLock() + try: + saved_handlers = logging._handlers.copy() + saved_handler_list = logging._handlerList[:] + saved_loggers = loggerDict.copy() + finally: + logging._releaseLock() try: fn = tempfile.mktemp(".ini") f = open(fn, "w") @@ -483,12 +487,16 @@ message('ok.') os.remove(fn) finally: - logging._handlers.clear() - logging._handlers.update(saved_handlers) - logging._handlerList = saved_handler_list - loggerDict = logging.getLogger().manager.loggerDict - loggerDict.clear() - loggerDict.update(saved_loggers) + logging._acquireLock() + try: + logging._handlers.clear() + logging._handlers.update(saved_handlers) + logging._handlerList = saved_handler_list + loggerDict = logging.getLogger().manager.loggerDict + loggerDict.clear() + loggerDict.update(saved_loggers) + finally: + logging._releaseLock() #---------------------------------------------------------------------------- # Test 5 @@ -527,9 +535,13 @@ def test5(): loggerDict = logging.getLogger().manager.loggerDict - saved_handlers = logging._handlers.copy() - saved_handler_list = logging._handlerList[:] - saved_loggers = loggerDict.copy() + logging._acquireLock() + try: + saved_handlers = logging._handlers.copy() + saved_handler_list = logging._handlerList[:] + saved_loggers = loggerDict.copy() + finally: + logging._releaseLock() try: fn = tempfile.mktemp(".ini") f = open(fn, "w") @@ -542,13 +554,16 @@ logging.exception("just testing") os.remove(fn) finally: - logging._handlers.clear() - logging._handlers.update(saved_handlers) - 
logging._handlerList = saved_handler_list - loggerDict = logging.getLogger().manager.loggerDict - loggerDict.clear() - loggerDict.update(saved_loggers) - + logging._acquireLock() + try: + logging._handlers.clear() + logging._handlers.update(saved_handlers) + logging._handlerList = saved_handler_list + loggerDict = logging.getLogger().manager.loggerDict + loggerDict.clear() + loggerDict.update(saved_loggers) + finally: + logging._releaseLock() #---------------------------------------------------------------------------- From python-checkins at python.org Thu Feb 9 09:34:16 2006 From: python-checkins at python.org (vinay.sajip) Date: Thu, 9 Feb 2006 09:34:16 +0100 (CET) Subject: [Python-checkins] r42291 - python/trunk/Lib/logging/__init__.py Message-ID: <20060209083416.A6EBC1E400F@bag.python.org> Author: vinay.sajip Date: Thu Feb 9 09:34:14 2006 New Revision: 42291 Modified: python/trunk/Lib/logging/__init__.py Log: Propagate exceptions from shutdown() if raiseExceptions is not set. Added 'extra' keyword argument handling to logging calls, as discussed on python-dev. Modified: python/trunk/Lib/logging/__init__.py ============================================================================== --- python/trunk/Lib/logging/__init__.py (original) +++ python/trunk/Lib/logging/__init__.py Thu Feb 9 09:34:14 2006 @@ -1053,14 +1053,20 @@ continue return filename, f.f_lineno, co.co_name - def makeRecord(self, name, level, fn, lno, msg, args, exc_info): + def makeRecord(self, name, level, fn, lno, msg, args, exc_info, extra=None): """ A factory method which can be overridden in subclasses to create specialized LogRecords. """ - return LogRecord(name, level, fn, lno, msg, args, exc_info) + rv = LogRecord(name, level, fn, lno, msg, args, exc_info) + if extra: + for key in extra: + if (key in ["message", "asctime"]) or (key in rv.__dict__): + raise KeyError("Attempt to overwrite %r in LogRecord" % key) + rv.__dict__[key] = extra[key] + return rv - def _log(self, level, msg, args, exc_info=None): + def _log(self, level, msg, args, exc_info=None, extra=None): """ Low-level logging routine which creates a LogRecord and then calls all the handlers of this logger to handle the record. @@ -1072,7 +1078,7 @@ if exc_info: if type(exc_info) != types.TupleType: exc_info = sys.exc_info() - record = self.makeRecord(self.name, level, fn, lno, msg, args, exc_info) + record = self.makeRecord(self.name, level, fn, lno, msg, args, exc_info, extra) self.handle(record) def handle(self, record): @@ -1324,12 +1330,14 @@ """ for h in _handlerList[:]: # was _handlers.keys(): #errors might occur, for example, if files are locked - #we just ignore them + #we just ignore them if raiseExceptions is not set try: h.flush() h.close() except: - pass + if raiseExceptions: + raise + #else, swallow #Let's try and shutdown automatically on application exit... try: From python-checkins at python.org Thu Feb 9 09:48:38 2006 From: python-checkins at python.org (vinay.sajip) Date: Thu, 9 Feb 2006 09:48:38 +0100 (CET) Subject: [Python-checkins] r42292 - python/trunk/Lib/logging/__init__.py Message-ID: <20060209084838.3C6ED1E4002@bag.python.org> Author: vinay.sajip Date: Thu Feb 9 09:48:36 2006 New Revision: 42292 Modified: python/trunk/Lib/logging/__init__.py Log: Added function name to LogRecord. 
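(A quick sketch of what the new attribute enables, not part of the checkin below; the format string and the connect() function are made up, and it assumes a Python with this revision applied.)

    import logging

    # %(funcName)s is the attribute added by this revision.
    logging.basicConfig(format="%(levelname)s %(funcName)s: %(message)s")

    def connect():
        # Logged as "WARNING connect: retrying" because the record now
        # carries the name of the function that made the logging call.
        logging.warning("retrying")

    connect()
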
Modified: python/trunk/Lib/logging/__init__.py ============================================================================== --- python/trunk/Lib/logging/__init__.py (original) +++ python/trunk/Lib/logging/__init__.py Thu Feb 9 09:48:36 2006 @@ -203,7 +203,8 @@ the source line where the logging call was made, and any exception information to be logged. """ - def __init__(self, name, level, pathname, lineno, msg, args, exc_info): + def __init__(self, name, level, pathname, lineno, + msg, args, exc_info, func): """ Initialize a logging record with interesting information. """ @@ -238,6 +239,7 @@ self.exc_info = exc_info self.exc_text = None # used to cache the traceback text self.lineno = lineno + self.funcName = func self.created = ct self.msecs = (ct - long(ct)) * 1000 self.relativeCreated = (self.created - _startTime) * 1000 @@ -283,7 +285,7 @@ a socket connection (which is sent as a dictionary) into a LogRecord instance. """ - rv = LogRecord(None, None, "", 0, "", (), None) + rv = LogRecord(None, None, "", 0, "", (), None, None) rv.__dict__.update(dict) return rv @@ -318,6 +320,7 @@ %(module)s Module (name portion of filename) %(lineno)d Source line number where the logging call was issued (if available) + %(funcName)s Function name %(created)f Time when the LogRecord was created (time.time() return value) %(asctime)s Textual time when the LogRecord was created @@ -1053,12 +1056,12 @@ continue return filename, f.f_lineno, co.co_name - def makeRecord(self, name, level, fn, lno, msg, args, exc_info, extra=None): + def makeRecord(self, name, level, fn, lno, msg, args, exc_info, func=None, extra=None): """ A factory method which can be overridden in subclasses to create specialized LogRecords. """ - rv = LogRecord(name, level, fn, lno, msg, args, exc_info) + rv = LogRecord(name, level, fn, lno, msg, args, exc_info, func) if extra: for key in extra: if (key in ["message", "asctime"]) or (key in rv.__dict__): @@ -1078,7 +1081,7 @@ if exc_info: if type(exc_info) != types.TupleType: exc_info = sys.exc_info() - record = self.makeRecord(self.name, level, fn, lno, msg, args, exc_info, extra) + record = self.makeRecord(self.name, level, fn, lno, msg, args, exc_info, func, extra) self.handle(record) def handle(self, record): From python-checkins at python.org Thu Feb 9 09:54:12 2006 From: python-checkins at python.org (vinay.sajip) Date: Thu, 9 Feb 2006 09:54:12 +0100 (CET) Subject: [Python-checkins] r42293 - python/trunk/Doc/lib/liblogging.tex Message-ID: <20060209085412.CA2381E401E@bag.python.org> Author: vinay.sajip Date: Thu Feb 9 09:54:11 2006 New Revision: 42293 Modified: python/trunk/Doc/lib/liblogging.tex Log: Added information on function name added to LogRecord, and the 'extra' keyword parameter. Modified: python/trunk/Doc/lib/liblogging.tex ============================================================================== --- python/trunk/Doc/lib/liblogging.tex (original) +++ python/trunk/Doc/lib/liblogging.tex Thu Feb 9 09:54:11 2006 @@ -183,12 +183,52 @@ \begin{funcdesc}{debug}{msg\optional{, *args\optional{, **kwargs}}} Logs a message with level \constant{DEBUG} on the root logger. The \var{msg} is the message format string, and the \var{args} are the -arguments which are merged into \var{msg}. The only keyword argument in -\var{kwargs} which is inspected is \var{exc_info} which, if it does not -evaluate as false, causes exception information to be added to the logging -message. 
If an exception tuple (in the format returned by -\function{sys.exc_info()}) is provided, it is used; otherwise, -\function{sys.exc_info()} is called to get the exception information. +arguments which are merged into \var{msg} using the string formatting +operator. (Note that this means that you can use keywords in the +format string, together with a single dictionary argument.) + +There are two keyword arguments in \var{kwargs} which are inspected: +\var{exc_info} which, if it does not evaluate as false, causes exception +information to be added to the logging message. If an exception tuple (in the +format returned by \function{sys.exc_info()}) is provided, it is used; +otherwise, \function{sys.exc_info()} is called to get the exception +information. + +The other optional keyword argument is \var{extra} which can be used to pass +a dictionary which is used to populate the __dict__ of the LogRecord created +for the logging event with user-defined attributes. These custom attributes +can then be used as you like. For example, they could be incorporated into +logged messages. For example: + +\begin{verbatim} + FORMAT = "%(asctime)-15s %(clientip)s %(user)-8s %(message)s" + logging.basicConfig(format=FORMAT) + dict = { 'clientip' : '192.168.0.1', 'user' : 'fbloggs' } + logging.warning("Protocol problem: %s", "connection reset", extra=d) +\end{verbatim} + +would print something like +\begin{verbatim} +2006-02-08 22:20:02,165 192.168.0.1 fbloggs Protocol problem: connection reset +\end{verbatim} + +The keys in the dictionary passed in \var{extra} should not clash with the keys +used by the logging system. (See the \class{Formatter} documentation for more +information on which keys are used by the logging system.) + +If you choose to use these attributes in logged messages, you need to exercise +some care. In the above example, for instance, the \class{Formatter} has been +set up with a format string which expects 'clientip' and 'user' in the +attribute dictionary of the LogRecord. If these are missing, the message will +not be logged because a string formatting exception will occur. So in this +case, you always need to pass the \var{extra} dictionary with these keys. + +While this might be annoying, this feature is intended for use in specialized +circumstances, such as multi-threaded servers where the same code executes +in many contexts, and interesting conditions which arise are dependent on this +context (such as remote client IP address and authenticated user name, in the +above example). In such circumstances, it is likely that specialized +\class{Formatter}s would be used with particular \class{Handler}s. \end{funcdesc} \begin{funcdesc}{info}{msg\optional{, *args\optional{, **kwargs}}} @@ -367,12 +407,53 @@ \begin{methoddesc}{debug}{msg\optional{, *args\optional{, **kwargs}}} Logs a message with level \constant{DEBUG} on this logger. The \var{msg} is the message format string, and the \var{args} are the -arguments which are merged into \var{msg}. The only keyword argument in -\var{kwargs} which is inspected is \var{exc_info} which, if it does not -evaluate as false, causes exception information to be added to the logging -message. If an exception tuple (as provided by \function{sys.exc_info()}) -is provided, it is used; otherwise, \function{sys.exc_info()} is called -to get the exception information. +arguments which are merged into \var{msg} using the string formatting +operator. 
(Note that this means that you can use keywords in the +format string, together with a single dictionary argument.) + +There are two keyword arguments in \var{kwargs} which are inspected: +\var{exc_info} which, if it does not evaluate as false, causes exception +information to be added to the logging message. If an exception tuple (in the +format returned by \function{sys.exc_info()}) is provided, it is used; +otherwise, \function{sys.exc_info()} is called to get the exception +information. + +The other optional keyword argument is \var{extra} which can be used to pass +a dictionary which is used to populate the __dict__ of the LogRecord created +for the logging event with user-defined attributes. These custom attributes +can then be used as you like. For example, they could be incorporated into +logged messages. For example: + +\begin{verbatim} + FORMAT = "%(asctime)-15s %(clientip)s %(user)-8s %(message)s" + logging.basicConfig(format=FORMAT) + dict = { 'clientip' : '192.168.0.1', 'user' : 'fbloggs' } + logger = logging.getLogger("tcpserver") + logger.warning("Protocol problem: %s", "connection reset", extra=d) +\end{verbatim} + +would print something like +\begin{verbatim} +2006-02-08 22:20:02,165 192.168.0.1 fbloggs Protocol problem: connection reset +\end{verbatim} + +The keys in the dictionary passed in \var{extra} should not clash with the keys +used by the logging system. (See the \class{Formatter} documentation for more +information on which keys are used by the logging system.) + +If you choose to use these attributes in logged messages, you need to exercise +some care. In the above example, for instance, the \class{Formatter} has been +set up with a format string which expects 'clientip' and 'user' in the +attribute dictionary of the LogRecord. If these are missing, the message will +not be logged because a string formatting exception will occur. So in this +case, you always need to pass the \var{extra} dictionary with these keys. + +While this might be annoying, this feature is intended for use in specialized +circumstances, such as multi-threaded servers where the same code executes +in many contexts, and interesting conditions which arise are dependent on this +context (such as remote client IP address and authenticated user name, in the +above example). In such circumstances, it is likely that specialized +\class{Formatter}s would be used with particular \class{Handler}s. \end{methoddesc} \begin{methoddesc}{info}{msg\optional{, *args\optional{, **kwargs}}} @@ -441,7 +522,8 @@ \method{filter()}. \end{methoddesc} -\begin{methoddesc}{makeRecord}{name, lvl, fn, lno, msg, args, exc_info} +\begin{methoddesc}{makeRecord}{name, lvl, fn, lno, msg, args, exc_info, + func, extra} This is a factory method which can be overridden in subclasses to create specialized \class{LogRecord} instances. 
\end{methoddesc} @@ -1305,6 +1387,7 @@ call was issued (if available).} \lineii{\%(filename)s} {Filename portion of pathname.} \lineii{\%(module)s} {Module (name portion of filename).} +\lineii{\%(funcName)s} {Name of function containing the logging call.} \lineii{\%(lineno)d} {Source line number where the logging call was issued (if available).} \lineii{\%(created)f} {Time when the \class{LogRecord} was created (as From python-checkins at python.org Thu Feb 9 20:09:54 2006 From: python-checkins at python.org (guido.van.rossum) Date: Thu, 9 Feb 2006 20:09:54 +0100 (CET) Subject: [Python-checkins] r42294 - peps/trunk/pep-0000.txt peps/trunk/pep-0357.txt Message-ID: <20060209190954.40E491E4006@bag.python.org> Author: guido.van.rossum Date: Thu Feb 9 20:09:50 2006 New Revision: 42294 Added: peps/trunk/pep-0357.txt Modified: peps/trunk/pep-0000.txt Log: Add PEP 357, Allowing Any Object to be Used for Slicing (by Travis Oliphant). Modified: peps/trunk/pep-0000.txt ============================================================================== --- peps/trunk/pep-0000.txt (original) +++ peps/trunk/pep-0000.txt Thu Feb 9 20:09:50 2006 @@ -109,6 +109,7 @@ S 353 Using ssize_t as the index type von Loewis S 354 Enumerations in Python Finney S 355 Path - Object oriented filesystem paths Lindqvist + S 357 Allowing Any Object to be Used for Slicing Oliphant S 754 IEEE 754 Floating Point Special Values Warnes Finished PEPs (done, implemented in Subversion) @@ -408,6 +409,7 @@ S 354 Enumerations in Python Finney S 355 Path - Object oriented filesystem paths Lindqvist I 356 Python 2.5 Release Schedule Norwitz, et al + S 357 Allowing Any Object to be Used for Slicing Oliphant SR 666 Reject Foolish Indentation Creighton S 754 IEEE 754 Floating Point Special Values Warnes I 3000 Python 3.0 Plans Kuchling, Cannon Added: peps/trunk/pep-0357.txt ============================================================================== --- (empty file) +++ peps/trunk/pep-0357.txt Thu Feb 9 20:09:50 2006 @@ -0,0 +1,71 @@ +PEP: 357 +Title: Allowing Any Object to be Used for Slicing +Version: $Revision$ +Last Modified: $Date$ +Author: Travis Oliphant +Status: Draft +Type: Standards Track +Created: 09-Feb-2006 +Python-Version: 2.5 + +Abstract + + This PEP proposes adding an sq_index slot in PySequenceMethods and + an __index__ special method so that arbitrary objects can be used + in slice syntax. + +Rationale + + Currently integers and long integers play a special role in slice + notation in that they are the only objects allowed in slice + syntax. In other words, if X is an object implementing the sequence + protocol, then X[obj1:obj2] is only valid if obj1 and obj2 are both + integers or long integers. There is no way for obj1 and obj2 to + tell Python that they could be reasonably used as indexes into a + sequence. This is an unnecessary limitation. + + In NumPy, for example, there are 8 different integer scalars + corresponding to unsigned and signed integers of 8, 16, 32, and 64 + bits. These type-objects could reasonably be used as indexes into + a sequence if there were some way for their typeobjects to tell + Python what integer value to use. + +Proposal + + Add a sq_index slot to PySequenceMethods, and a corresponding + __index__ special method. Objects could define a function to + place in the sq_index slot that returns an C-integer for use in + PySequence_GetSlice, PySequence_SetSlice, and PySequence_DelSlice. 
+ +Implementation Plan + + 1) Add the slots + + 2) Change the ISINT macro in ceval.c to accomodate objects with the + index slot defined. + + 3) Change the _PyEval_SliceIndex function to accomodate objects + with the index slot defined. + +Possible Concerns + + Speed: + + Implementation should not slow down Python because integers and long + integers used as indexes will complete in the same number of + instructions. The only change will be that what used to generate + an error will now be acceptable. + + Why not use nb_int which is already there? + + The nb_int, nb_oct, and nb_hex methods are used for coercion. + Floats have these methods defined and floats should not be used in + slice notation. + +Reference Implementation + + Available on PEP acceptance. + +Copyright + + This document is placed in the public domain From python-checkins at python.org Thu Feb 9 20:55:42 2006 From: python-checkins at python.org (guido.van.rossum) Date: Thu, 9 Feb 2006 20:55:42 +0100 (CET) Subject: [Python-checkins] r42295 - peps/trunk/pep-0357.txt Message-ID: <20060209195542.50A471E4007@bag.python.org> Author: guido.van.rossum Date: Thu Feb 9 20:55:41 2006 New Revision: 42295 Modified: peps/trunk/pep-0357.txt Log: Attempt to make $Date$ and $Revision$ work. Modified: peps/trunk/pep-0357.txt ============================================================================== --- peps/trunk/pep-0357.txt (original) +++ peps/trunk/pep-0357.txt Thu Feb 9 20:55:41 2006 @@ -4,9 +4,9 @@ Last Modified: $Date$ Author: Travis Oliphant Status: Draft -Type: Standards Track -Created: 09-Feb-2006 -Python-Version: 2.5 +Type: Standards Track +Created: 09-Feb-2006 +Python-Version: 2.5 Abstract From python-checkins at python.org Thu Feb 9 21:07:18 2006 From: python-checkins at python.org (guido.van.rossum) Date: Thu, 9 Feb 2006 21:07:18 +0100 (CET) Subject: [Python-checkins] r42296 - peps/trunk/pep-0353.txt peps/trunk/pep-0356.txt peps/trunk/pep-0357.txt Message-ID: <20060209200718.B86501E4007@bag.python.org> Author: guido.van.rossum Date: Thu Feb 9 21:07:13 2006 New Revision: 42296 Modified: peps/trunk/pep-0353.txt (props changed) peps/trunk/pep-0356.txt (props changed) peps/trunk/pep-0357.txt (props changed) Log: Change some properties (eol-style, native) on PEPs that didn't have them yet. From python-checkins at python.org Thu Feb 9 21:34:33 2006 From: python-checkins at python.org (brett.cannon) Date: Thu, 9 Feb 2006 21:34:33 +0100 (CET) Subject: [Python-checkins] r42297 - peps/trunk/pep-3000.txt Message-ID: <20060209203433.B0B801E4007@bag.python.org> Author: brett.cannon Date: Thu Feb 9 21:34:33 2006 New Revision: 42297 Modified: peps/trunk/pep-3000.txt Log: Lambda's death had been prematurely reported. 
Modified: peps/trunk/pep-3000.txt ============================================================================== --- peps/trunk/pep-3000.txt (original) +++ peps/trunk/pep-3000.txt Thu Feb 9 21:34:33 2006 @@ -71,7 +71,6 @@ To be removed: -* The ``lambda`` statement: use nested or named functions [1]_, [9]_ * String exceptions: use instances of an Exception class [2]_ * ``raise Exception, "message"``: use ``raise Exception("message")`` [14]_ * ```x```: use ``repr(x)`` [2]_ From python-checkins at python.org Thu Feb 9 23:26:02 2006 From: python-checkins at python.org (guido.van.rossum) Date: Thu, 9 Feb 2006 23:26:02 +0100 (CET) Subject: [Python-checkins] r42298 - peps/trunk/pep-0357.txt Message-ID: <20060209222602.18DFB1E4007@bag.python.org> Author: guido.van.rossum Date: Thu Feb 9 23:26:00 2006 New Revision: 42298 Modified: peps/trunk/pep-0357.txt Log: Update from Travis -- move to make it a numeric slot. Modified: peps/trunk/pep-0357.txt ============================================================================== --- peps/trunk/pep-0357.txt (original) +++ peps/trunk/pep-0357.txt Thu Feb 9 23:26:00 2006 @@ -10,7 +10,7 @@ Abstract - This PEP proposes adding an sq_index slot in PySequenceMethods and + This PEP proposes adding an nb_as_index slot in PyNumberMethods and an __index__ special method so that arbitrary objects can be used in slice syntax. @@ -32,20 +32,21 @@ Proposal - Add a sq_index slot to PySequenceMethods, and a corresponding + Add a nb_index slot to PyNumberMethods, and a corresponding __index__ special method. Objects could define a function to - place in the sq_index slot that returns an C-integer for use in - PySequence_GetSlice, PySequence_SetSlice, and PySequence_DelSlice. + place in the sq_index slot that returns an appropriate + C-integer for use as ilow or ihigh in PySequence_GetSlice, + PySequence_SetSlice, and PySequence_DelSlice. Implementation Plan 1) Add the slots - 2) Change the ISINT macro in ceval.c to accomodate objects with the - index slot defined. + 2) Change the ISINT macro in ceval.c to ISINDEX and alter it to + accomodate objects with the index slot defined. 3) Change the _PyEval_SliceIndex function to accomodate objects - with the index slot defined. + with the index slot defined. Possible Concerns From python-checkins at python.org Fri Feb 10 02:25:21 2006 From: python-checkins at python.org (phillip.eby) Date: Fri, 10 Feb 2006 02:25:21 +0100 (CET) Subject: [Python-checkins] r42299 - sandbox/trunk/setuptools/setuptools/command/easy_install.py Message-ID: <20060210012521.3A5971E4007@bag.python.org> Author: phillip.eby Date: Fri Feb 10 02:25:20 2006 New Revision: 42299 Modified: sandbox/trunk/setuptools/setuptools/command/easy_install.py Log: Tweak site_dirs detection so that distros with weird layouts (e.g. /usr/lib64 patches on 64-bit Fedora) will have a better chance of working "out of the box". 
Modified: sandbox/trunk/setuptools/setuptools/command/easy_install.py ============================================================================== --- sandbox/trunk/setuptools/setuptools/command/easy_install.py (original) +++ sandbox/trunk/setuptools/setuptools/command/easy_install.py Fri Feb 10 02:25:20 2006 @@ -934,12 +934,12 @@ sys.version[:3], 'site-packages')) + site_lib = get_python_lib(prefix=prefix or None) + if site_lib not in sitedirs: sitedirs.append(site_lib) + sitedirs = filter(os.path.isdir, sitedirs) sitedirs = map(normalize_path, sitedirs) - return sitedirs or [normalize_path(get_python_lib())] # ensure at least one - - - + return sitedirs # ensure at least one def expand_paths(inputs): """Yield sys.path directories that might contain "old-style" packages""" From python-checkins at python.org Fri Feb 10 02:26:23 2006 From: python-checkins at python.org (phillip.eby) Date: Fri, 10 Feb 2006 02:26:23 +0100 (CET) Subject: [Python-checkins] r42300 - sandbox/trunk/setuptools/setuptools/command/easy_install.py Message-ID: <20060210012623.4469B1E4007@bag.python.org> Author: phillip.eby Date: Fri Feb 10 02:26:22 2006 New Revision: 42300 Modified: sandbox/trunk/setuptools/setuptools/command/easy_install.py Log: Oops, bad indentation. Modified: sandbox/trunk/setuptools/setuptools/command/easy_install.py ============================================================================== --- sandbox/trunk/setuptools/setuptools/command/easy_install.py (original) +++ sandbox/trunk/setuptools/setuptools/command/easy_install.py Fri Feb 10 02:26:22 2006 @@ -934,8 +934,8 @@ sys.version[:3], 'site-packages')) - site_lib = get_python_lib(prefix=prefix or None) - if site_lib not in sitedirs: sitedirs.append(site_lib) + site_lib = get_python_lib(prefix=prefix or None) + if site_lib not in sitedirs: sitedirs.append(site_lib) sitedirs = filter(os.path.isdir, sitedirs) sitedirs = map(normalize_path, sitedirs) From python-checkins at python.org Fri Feb 10 02:34:25 2006 From: python-checkins at python.org (phillip.eby) Date: Fri, 10 Feb 2006 02:34:25 +0100 (CET) Subject: [Python-checkins] r42301 - sandbox/trunk/setuptools/setuptools/command/easy_install.py Message-ID: <20060210013425.8356D1E400D@bag.python.org> Author: phillip.eby Date: Fri Feb 10 02:34:24 2006 New Revision: 42301 Modified: sandbox/trunk/setuptools/setuptools/command/easy_install.py Log: Ugh. Rereading the Fedora patch shows my previous hack won't actually accomplish anything useful. This one should, but it needs testing by someone who actually has a Fedora 64-bit x86 setup. 
Modified: sandbox/trunk/setuptools/setuptools/command/easy_install.py ============================================================================== --- sandbox/trunk/setuptools/setuptools/command/easy_install.py (original) +++ sandbox/trunk/setuptools/setuptools/command/easy_install.py Fri Feb 10 02:34:24 2006 @@ -933,8 +933,8 @@ 'Python', sys.version[:3], 'site-packages')) - - site_lib = get_python_lib(prefix=prefix or None) + for plat_specific in (0,1): + site_lib = get_python_lib(plat_specific) if site_lib not in sitedirs: sitedirs.append(site_lib) sitedirs = filter(os.path.isdir, sitedirs) From python-checkins at python.org Fri Feb 10 02:49:05 2006 From: python-checkins at python.org (phillip.eby) Date: Fri, 10 Feb 2006 02:49:05 +0100 (CET) Subject: [Python-checkins] r42302 - sandbox/trunk/setuptools/site.py Message-ID: <20060210014905.A6FBF1E4006@bag.python.org> Author: phillip.eby Date: Fri Feb 10 02:49:04 2006 New Revision: 42302 Modified: sandbox/trunk/setuptools/site.py Log: New version of site.py hack, for better compatibility with distros that patch the stdlib site.py. This version runs the stdlib site.py, then tries to hack sys.path back to something resembling what the old version did. Unfortunately, this is complex since site.py and .pth files can munge the path in rather arbitrary ways, and the initial setup of sys.path is dependent on the platform and Python version. This code has been tested on Linux, cygwin, and Windows Python, versions 2.2, 2.3, and 2.4 (although not all versions on all platforms), and appears to perform as intended. Modified: sandbox/trunk/setuptools/site.py ============================================================================== --- sandbox/trunk/setuptools/site.py (original) +++ sandbox/trunk/setuptools/site.py Fri Feb 10 02:49:04 2006 @@ -1,408 +1,82 @@ -"""Append module search paths for third-party packages to sys.path. - -**************************************************************** -* This module is automatically imported during initialization, * -* if you add the setuptools egg to PYTHONPATH (to support the * -* simple non-root installation mode) * -**************************************************************** - -In earlier versions of Python (up to 1.5a3), scripts or modules that -needed to use site-specific modules would place ``import site'' -somewhere near the top of their code. Because of the automatic -import, this is no longer necessary (but code that does it still -works). - -This will append site-specific paths to the module search path. On -Unix, it starts with sys.prefix and sys.exec_prefix (if different) and -appends lib/python/site-packages as well as lib/site-python. -On other platforms (mainly Mac and Windows), it uses just sys.prefix -(and sys.exec_prefix, if different, but this is unlikely). The -resulting directories, if they exist, are appended to sys.path, and -also inspected for path configuration files. - -A path configuration file is a file whose name has the form -.pth; its contents are additional directories (one per line) -to be added to sys.path. Non-existing directories (or -non-directories) are never added to sys.path; no directory is added to -sys.path more than once. Blank lines and lines beginning with -'#' are skipped. Lines starting with 'import' are executed. - -For example, suppose sys.prefix and sys.exec_prefix are set to -/usr/local and there is a directory /usr/local/lib/python1.5/site-packages -with three subdirectories, foo, bar and spam, and two path -configuration files, foo.pth and bar.pth. 
Assume foo.pth contains the -following: - - # foo package configuration - foo - bar - bletch - -and bar.pth contains: - - # bar package configuration - bar - -Then the following directories are added to sys.path, in this order: - - /usr/local/lib/python1.5/site-packages/bar - /usr/local/lib/python1.5/site-packages/foo - -Note that bletch is omitted because it doesn't exist; bar precedes foo -because bar.pth comes alphabetically before foo.pth; and spam is -omitted because it is not mentioned in either path configuration file. - -After these path manipulations, an attempt is made to import a module -named sitecustomize, which can perform arbitrary additional -site-specific customizations. If this import fails with an -ImportError exception, it is silently ignored. - -""" - -import sys -import os -import __builtin__ - - -def makepath(*paths): - dir = os.path.abspath(os.path.join(*paths)) - return dir, os.path.normcase(dir) - -def abs__file__(): - """Set all module' __file__ attribute to an absolute path""" - for m in sys.modules.values(): - try: - m.__file__ = os.path.abspath(m.__file__) - except AttributeError: - continue - -def removeduppaths(): - """ Remove duplicate entries from sys.path along with making them - absolute""" - # This ensures that the initial path provided by the interpreter contains - # only absolute pathnames, even if we're running from the build directory. - L = [] - known_paths = {} - for dir in sys.path: - # Filter out duplicate paths (on case-insensitive file systems also - # if they only differ in case); turn relative paths into absolute - # paths. - dir, dircase = makepath(dir) - if not dircase in known_paths: - L.append(dir) - known_paths[dircase] = 1 - sys.path[:] = L - return known_paths - -# XXX This should not be part of site.py, since it is needed even when -# using the -S option for Python. See http://www.python.org/sf/586680 -def addbuilddir(): - """Append ./build/lib. in case we're running in the build dir - (especially for Guido :-)""" - from distutils.util import get_platform - s = "build/lib.%s-%.3s" % (get_platform(), sys.version) - s = os.path.join(os.path.dirname(sys.path[-1]), s) - sys.path.append(s) - -def _init_pathinfo(): - """Return a set containing all existing directory entries from sys.path""" - d = {} - for dir in sys.path: - try: - if os.path.isdir(dir): - dir, dircase = makepath(dir) - d[dircase] = 1 - except TypeError: - continue - return d - -def addpackage(sitedir, name, known_paths): - """Add a new path to known_paths by combining sitedir and 'name' or execute - sitedir if it starts with 'import'""" - if known_paths is None: - known_paths = _init_pathinfo() - reset = 1 +def __boot(): + import sys, imp, os, os.path + PYTHONPATH = os.environ.get('PYTHONPATH') + if PYTHONPATH is None or (sys.platform=='win32' and not PYTHONPATH): + PYTHONPATH = [] else: - reset = 0 - fullname = os.path.join(sitedir, name) - try: - f = open(fullname, "rU") - except IOError: - return - try: - for line in f: - if line.startswith("#"): + PYTHONPATH = PYTHONPATH.split(os.pathsep) + + pic = getattr(sys,'path_importer_cache',{}) + stdpath = sys.path[len(PYTHONPATH):] + mydir = os.path.dirname(__file__) + #print "searching",stdpath,sys.path + + for item in stdpath: + if item==mydir or not item: + continue # skip if current dir. 
on Windows, or my own directory + importer = pic.get(item) + if importer is not None: + loader = importer.find_module('site') + if loader is not None: + # This should actually reload the current module + loader.load_module('site') + break + else: + try: + stream, path, descr = imp.find_module('site',[item]) + except ImportError: continue - if line.startswith("import"): - exec line + if stream is None: continue - line = line.rstrip() - dir, dircase = makepath(sitedir, line) - if not dircase in known_paths and os.path.exists(dir): - sys.path.append(dir) - known_paths[dircase] = 1 - finally: - f.close() - if reset: - known_paths = None - return known_paths - -def addsitedir(sitedir, known_paths=None): - """Add 'sitedir' argument to sys.path if missing and handle .pth files in - 'sitedir'""" - if known_paths is None: - known_paths = _init_pathinfo() - reset = 1 - else: - reset = 0 - sitedir, sitedircase = makepath(sitedir) - if not sitedircase in known_paths: - sys.path.append(sitedir) # Add path component - try: - names = os.listdir(sitedir) - except os.error: - return - names.sort() - for name in names: - if name.endswith(os.extsep + "pth"): - addpackage(sitedir, name, known_paths) - if reset: - known_paths = None - return known_paths - -def addsitepackages(known_paths): - """Add site-packages (and possibly site-python) to sys.path""" - prefixes = [sys.prefix] - if sys.exec_prefix != sys.prefix: - prefixes.append(sys.exec_prefix) - for prefix in prefixes: - if prefix: - if sys.platform in ('os2emx', 'riscos'): - sitedirs = [os.path.join(prefix, "Lib", "site-packages")] - elif os.sep == '/': - sitedirs = [os.path.join(prefix, - "lib", - "python" + sys.version[:3], - "site-packages"), - os.path.join(prefix, "lib", "site-python")] - else: - sitedirs = [prefix, os.path.join(prefix, "lib", "site-packages")] - if sys.platform == 'darwin': - # for framework builds *only* we add the standard Apple - # locations. Currently only per-user, but /Library and - # /Network/Library could be added too - if 'Python.framework' in prefix: - home = os.environ.get('HOME') - if home: - sitedirs.append( - os.path.join(home, - 'Library', - 'Python', - sys.version[:3], - 'site-packages')) - for sitedir in sys.path+sitedirs: - if sitedir and os.path.isdir(sitedir): - addsitedir(sitedir, known_paths) - return None - - -def setBEGINLIBPATH(): - """The OS/2 EMX port has optional extension modules that do double duty - as DLLs (and must use the .DLL file extension) for other extensions. - The library search path needs to be amended so these will be found - during module import. Use BEGINLIBPATH so that these are at the start - of the library search path. - - """ - dllpath = os.path.join(sys.prefix, "Lib", "lib-dynload") - libpath = os.environ['BEGINLIBPATH'].split(';') - if libpath[-1]: - libpath.append(dllpath) + try: + # This should actually reload the current module + imp.load_module('site',stream,path,descr) + finally: + stream.close() + break else: - libpath[-1] = dllpath - os.environ['BEGINLIBPATH'] = ';'.join(libpath) - + raise ImportError("Couldn't find the real 'site' module") -def setquit(): - """Define new built-ins 'quit' and 'exit'. - These are simply strings that display a hint on how to exit. - - """ - if os.sep == ':': - exit = 'Use Cmd-Q to quit.' - elif os.sep == '\\': - exit = 'Use Ctrl-Z plus Return to exit.' - else: - exit = 'Use Ctrl-D (i.e. EOF) to exit.' 
- __builtin__.quit = __builtin__.exit = exit + #print "loaded", __file__ + known_paths = dict([(makepath(item)[1],1) for item in sys.path]) # 2.2 comp -class _Printer(object): - """interactive prompt objects for printing the license text, a list of - contributors and the copyright notice.""" - - MAXLINES = 23 - - def __init__(self, name, data, files=(), dirs=()): - self.__name = name - self.__data = data - self.__files = files - self.__dirs = dirs - self.__lines = None - - def __setup(self): - if self.__lines: - return - data = None - for dir in self.__dirs: - for filename in self.__files: - filename = os.path.join(dir, filename) - try: - fp = file(filename, "rU") - data = fp.read() - fp.close() - break - except IOError: - pass - if data: - break - if not data: - data = self.__data - self.__lines = data.split('\n') - self.__linecnt = len(self.__lines) - - def __repr__(self): - self.__setup() - if len(self.__lines) <= self.MAXLINES: - return "\n".join(self.__lines) + for item in PYTHONPATH: + addsitedir(item) + + d,nd = makepath(stdpath[0]) + insert_at = None + skipped = [] + new_path = [] + + for item in sys.path: + p,np = makepath(item) + + if np==nd and insert_at is None: + # We've hit the first 'system' path entry, so added entries go here + new_path.extend(skipped) + insert_at = len(new_path) + skipped = [] + + if np in known_paths: + # Old path, just copy + new_path.append(item) + elif insert_at is None: + # New path before the insert point, buffer it + skipped.append(item) else: - return "Type %s() to see the full %s text" % ((self.__name,)*2) + # new path after the insert point, back-insert it + new_path.insert(insert_at, item) + insert_at += 1 + + new_path.extend(skipped) + sys.path[:] = new_path + +if __name__=='site': + __boot() + del __boot + + + - def __call__(self): - self.__setup() - prompt = 'Hit Return for more, or q (and Return) to quit: ' - lineno = 0 - while 1: - try: - for i in range(lineno, lineno + self.MAXLINES): - print self.__lines[i] - except IndexError: - break - else: - lineno += self.MAXLINES - key = None - while key is None: - key = raw_input(prompt) - if key not in ('', 'q'): - key = None - if key == 'q': - break - -def setcopyright(): - """Set 'copyright' and 'credits' in __builtin__""" - __builtin__.copyright = _Printer("copyright", sys.copyright) - if sys.platform[:4] == 'java': - __builtin__.credits = _Printer( - "credits", - "Jython is maintained by the Jython developers (www.jython.org).") - else: - __builtin__.credits = _Printer("credits", """\ - Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands - for supporting Python development. See www.python.org for more information.""") - here = os.path.dirname(os.__file__) - __builtin__.license = _Printer( - "license", "See http://www.python.org/%.3s/license.html" % sys.version, - ["LICENSE.txt", "LICENSE"], - [os.path.join(here, os.pardir), here, os.curdir]) - - -class _Helper(object): - """Define the built-in 'help'. - This is a wrapper around pydoc.help (with a twist). - - """ - - def __repr__(self): - return "Type help() for interactive help, " \ - "or help(object) for help about object." - def __call__(self, *args, **kwds): - import pydoc - return pydoc.help(*args, **kwds) - -def sethelper(): - __builtin__.help = _Helper() - -def aliasmbcs(): - """On Windows, some default encodings are not provided by Python, - while they are always available as "mbcs" in each locale. 
Make - them usable by aliasing to "mbcs" in such a case.""" - if sys.platform == 'win32': - import locale, codecs - enc = locale.getdefaultlocale()[1] - if enc.startswith('cp'): # "cp***" ? - try: - codecs.lookup(enc) - except LookupError: - import encodings - encodings._cache[enc] = encodings._unknown - encodings.aliases.aliases[enc] = 'mbcs' - -def setencoding(): - """Set the string encoding used by the Unicode implementation. The - default is 'ascii', but if you're willing to experiment, you can - change this.""" - encoding = "ascii" # Default value set by _PyUnicode_Init() - if 0: - # Enable to support locale aware default string encodings. - import locale - loc = locale.getdefaultlocale() - if loc[1]: - encoding = loc[1] - if 0: - # Enable to switch off string to Unicode coercion and implicit - # Unicode to string conversion. - encoding = "undefined" - if encoding != "ascii": - # On Non-Unicode builds this will raise an AttributeError... - sys.setdefaultencoding(encoding) # Needs Python Unicode build ! - - -def execsitecustomize(): - """Run custom site specific code, if available.""" - try: - import sitecustomize - except ImportError: - pass - - -def main(): - abs__file__() - paths_in_sys = removeduppaths() - if (os.name == "posix" and sys.path and - os.path.basename(sys.path[-1]) == "Modules"): - addbuilddir() - paths_in_sys = addsitepackages(paths_in_sys) - if sys.platform == 'os2emx': - setBEGINLIBPATH() - setquit() - setcopyright() - sethelper() - aliasmbcs() - setencoding() - execsitecustomize() - # Remove sys.setdefaultencoding() so that users cannot change the - # encoding after initialization. The test for presence is needed when - # this module is run as a script, because this code is executed twice. - if hasattr(sys, "setdefaultencoding"): - del sys.setdefaultencoding - -main() - -def _test(): - print "sys.path = [" - for dir in sys.path: - print " %r," % (dir,) - print "]" -if __name__ == '__main__': - _test() From python-checkins at python.org Fri Feb 10 14:19:53 2006 From: python-checkins at python.org (armin.rigo) Date: Fri, 10 Feb 2006 14:19:53 +0100 (CET) Subject: [Python-checkins] r42303 - python/trunk/Modules/_lsprof.c Message-ID: <20060210131953.E003C1E406B@bag.python.org> Author: armin.rigo Date: Fri Feb 10 14:19:53 2006 New Revision: 42303 Modified: python/trunk/Modules/_lsprof.c Log: The default timer unit was incorrectly measured in milliseconds instead of seconds, producing numbers 1000 times too large. It would be nice to write a test for this, but how... 
(thanks mwh) Modified: python/trunk/Modules/_lsprof.c ============================================================================== --- python/trunk/Modules/_lsprof.c (original) +++ python/trunk/Modules/_lsprof.c Fri Feb 10 14:19:53 2006 @@ -27,9 +27,9 @@ { LARGE_INTEGER li; if (QueryPerformanceFrequency(&li)) - return 1000.0 / li.QuadPart; + return 1.0 / li.QuadPart; else - return 0.001; /* unlikely */ + return 0.000001; /* unlikely */ } #else /* !MS_WINDOWS */ @@ -63,7 +63,7 @@ static double hpTimerUnit(void) { - return 0.001; + return 0.000001; } #endif /* MS_WINDOWS */ From python-checkins at python.org Fri Feb 10 17:17:25 2006 From: python-checkins at python.org (jack.jansen) Date: Fri, 10 Feb 2006 17:17:25 +0100 (CET) Subject: [Python-checkins] r42304 - python/trunk/Tools/bgen/bgen/bgenBuffer.py python/trunk/Tools/bgen/bgen/bgenHeapBuffer.py python/trunk/Tools/bgen/bgen/bgenType.py python/trunk/Tools/bgen/bgen/bgenVariable.py Message-ID: <20060210161725.8779F1E4007@bag.python.org> Author: jack.jansen Date: Fri Feb 10 17:17:24 2006 New Revision: 42304 Modified: python/trunk/Tools/bgen/bgen/bgenBuffer.py python/trunk/Tools/bgen/bgen/bgenHeapBuffer.py python/trunk/Tools/bgen/bgen/bgenType.py python/trunk/Tools/bgen/bgen/bgenVariable.py Log: For overriding C++ methods we also need to know whether a parameter is an output parameter or not. Added support for that. Modified: python/trunk/Tools/bgen/bgen/bgenBuffer.py ============================================================================== --- python/trunk/Tools/bgen/bgen/bgenBuffer.py (original) +++ python/trunk/Tools/bgen/bgen/bgenBuffer.py Fri Feb 10 17:17:24 2006 @@ -38,15 +38,15 @@ self.sizeformat = sizeformat or type2format[sizetype] self.label_needed = 0 - def getArgDeclarations(self, name, reference=False, constmode=False): + def getArgDeclarations(self, name, reference=False, constmode=False, outmode=False): if reference: raise RuntimeError, "Cannot pass buffer types by reference" return (self.getBufferDeclarations(name, constmode) + - self.getSizeDeclarations(name)) + self.getSizeDeclarations(name, outmode)) - def getBufferDeclarations(self, name, constmode=False): + def getBufferDeclarations(self, name, constmode=False, outmode=False): return self.getInputBufferDeclarations(name, constmode) + \ - self.getOutputBufferDeclarations(name, constmode) + self.getOutputBufferDeclarations(name, constmode, outmode) def getInputBufferDeclarations(self, name, constmode=False): if constmode: @@ -55,13 +55,21 @@ const = "" return ["%s%s *%s__in__" % (const, self.datatype, name)] - def getOutputBufferDeclarations(self, name, constmode=False): + def getOutputBufferDeclarations(self, name, constmode=False, outmode=False): if constmode: raise RuntimeError, "Cannot use const output buffer" - return ["%s %s__out__[%s]" % (self.datatype, name, self.size)] + if outmode: + out = "*" + else: + out = "" + return ["%s%s %s__out__[%s]" % (self.datatype, out, name, self.size)] - def getSizeDeclarations(self, name): - return ["%s %s__len__" %(self.sizetype, name)] + def getSizeDeclarations(self, name, outmode=False): + if outmode: + out = "*" + else: + out = "" + return ["%s%s %s__len__" %(self.sizetype, out, name)] def getAuxDeclarations(self, name): return ["int %s__in_len__" %(name)] @@ -112,7 +120,7 @@ class InputOnlyBufferMixIn(InputOnlyMixIn): - def getOutputBufferDeclarations(self, name, constmode=False): + def getOutputBufferDeclarations(self, name, constmode=False, outmode=False): return [] @@ -200,16 +208,20 @@ const = "" return ["%s%s 
*%s__in__" % (const, self.type, name)] - def getSizeDeclarations(self, name): + def getSizeDeclarations(self, name, outmode=False): return [] def getAuxDeclarations(self, name): return ["int %s__in_len__" % (name)] - def getOutputBufferDeclarations(self, name, constmode=False): + def getOutputBufferDeclarations(self, name, constmode=False, outmode=False): if constmode: raise RuntimeError, "Cannot use const output buffer" - return ["%s %s__out__" % (self.type, name)] + if outmode: + out = "*" + else: + out = "" + return ["%s%s %s__out__" % (self.type, out, name)] def getargsArgs(self, name): return "(char **)&%s__in__, &%s__in_len__" % (name, name) @@ -262,7 +274,7 @@ Instantiate with the struct type as parameter. """ - def getSizeDeclarations(self, name): + def getSizeDeclarations(self, name, outmode=False): return [] def getAuxDeclarations(self, name): @@ -279,7 +291,7 @@ Instantiate with the struct type as parameter. """ - def getSizeDeclarations(self, name): + def getSizeDeclarations(self, name, outmode=False): return [] def getAuxDeclarations(self, name): Modified: python/trunk/Tools/bgen/bgen/bgenHeapBuffer.py ============================================================================== --- python/trunk/Tools/bgen/bgen/bgenHeapBuffer.py (original) +++ python/trunk/Tools/bgen/bgen/bgenHeapBuffer.py Fri Feb 10 17:17:24 2006 @@ -16,10 +16,14 @@ def __init__(self, datatype = 'char', sizetype = 'int', sizeformat = None): FixedInputOutputBufferType.__init__(self, "0", datatype, sizetype, sizeformat) - def getOutputBufferDeclarations(self, name, constmode=False): + def getOutputBufferDeclarations(self, name, constmode=False, outmode=False): if constmode: raise RuntimeError, "Cannot use const output buffer" - return ["%s *%s__out__" % (self.datatype, name)] + if outmode: + out = "*" + else: + out = "" + return ["%s%s *%s__out__" % (self.datatype, out, name)] def getargsCheck(self, name): Output("if ((%s__out__ = malloc(%s__in_len__)) == NULL)", name, name) Modified: python/trunk/Tools/bgen/bgen/bgenType.py ============================================================================== --- python/trunk/Tools/bgen/bgen/bgenType.py (original) +++ python/trunk/Tools/bgen/bgen/bgenType.py Fri Feb 10 17:17:24 2006 @@ -29,7 +29,7 @@ for decl in self.getAuxDeclarations(name): Output("%s;", decl) - def getArgDeclarations(self, name, reference=False, constmode=False): + def getArgDeclarations(self, name, reference=False, constmode=False, outmode=False): """Return the main part of the declarations for this type: the items that will be passed as arguments in the C/C++ function call.""" if reference: @@ -40,7 +40,11 @@ const = "const " else: const = "" - return ["%s%s%s %s" % (const, self.typeName, ref, name)] + if outmode: + out = "*" + else: + out = "" + return ["%s%s%s%s %s" % (const, self.typeName, ref, out, name)] def getAuxDeclarations(self, name): """Return any auxiliary declarations needed for implementing this @@ -213,7 +217,7 @@ self.substitute = substitute self.typeName = None # Don't show this argument in __doc__ string - def getArgDeclarations(self, name, reference=False, constmode=False): + def getArgDeclarations(self, name, reference=False, constmode=False, outmode=False): return [] def getAuxDeclarations(self, name, reference=False): Modified: python/trunk/Tools/bgen/bgen/bgenVariable.py ============================================================================== --- python/trunk/Tools/bgen/bgen/bgenVariable.py (original) +++ python/trunk/Tools/bgen/bgen/bgenVariable.py Fri Feb 10 
17:17:24 2006 @@ -45,12 +45,15 @@ elif self.flags != SelfMode: self.type.declare(self.name) - def getArgDeclarations(self, constmode=False): + def getArgDeclarations(self, fullmodes=False): refmode = (self.flags & RefMode) - if constmode: + constmode = False + outmode = False + if fullmodes: constmode = (self.flags & ConstMode) + outmode = (self.flags & OutMode) return self.type.getArgDeclarations(self.name, - reference=refmode, constmode=constmode) + reference=refmode, constmode=constmode, outmode=outmode) def getAuxDeclarations(self): return self.type.getAuxDeclarations(self.name) From python-checkins at python.org Fri Feb 10 18:46:22 2006 From: python-checkins at python.org (guido.van.rossum) Date: Fri, 10 Feb 2006 18:46:22 +0100 (CET) Subject: [Python-checkins] r42305 - peps/trunk/pep-0357.txt Message-ID: <20060210174622.A57101E4005@bag.python.org> Author: guido.van.rossum Date: Fri Feb 10 18:46:20 2006 New Revision: 42305 Modified: peps/trunk/pep-0357.txt Log: New version from Travis. Modified: peps/trunk/pep-0357.txt ============================================================================== --- peps/trunk/pep-0357.txt (original) +++ peps/trunk/pep-0357.txt Fri Feb 10 18:46:20 2006 @@ -10,63 +10,84 @@ Abstract - This PEP proposes adding an nb_as_index slot in PyNumberMethods and - an __index__ special method so that arbitrary objects can be used - in slice syntax. + This PEP proposes adding an nb_index slot in PyNumberMethods and an + __index__ special method so that arbitrary objects can be used + whenever only integers are called for in Python, such as in slice + syntax (from which the slot gets its name). Rationale - Currently integers and long integers play a special role in slice - notation in that they are the only objects allowed in slice - syntax. In other words, if X is an object implementing the sequence - protocol, then X[obj1:obj2] is only valid if obj1 and obj2 are both - integers or long integers. There is no way for obj1 and obj2 to - tell Python that they could be reasonably used as indexes into a - sequence. This is an unnecessary limitation. - - In NumPy, for example, there are 8 different integer scalars - corresponding to unsigned and signed integers of 8, 16, 32, and 64 - bits. These type-objects could reasonably be used as indexes into - a sequence if there were some way for their typeobjects to tell - Python what integer value to use. + Currently integers and long integers play a special role in slice + notation in that they are the only objects allowed in slice + syntax. In other words, if X is an object implementing the sequence + protocol, then X[obj1:obj2] is only valid if obj1 and obj2 are both + integers or long integers. There is no way for obj1 and obj2 to + tell Python that they could be reasonably used as indexes into a + sequence. This is an unnecessary limitation. + + In NumPy, for example, there are 8 different integer scalars + corresponding to unsigned and signed integers of 8, 16, 32, and 64 + bits. These type-objects could reasonably be used as integers in + many places where Python expects true integers. There should be + some way to be able to tell Python that an object can behave like + an integer. + + It is not possible to use the nb_int (and __int__ special method) + for this purpose because that method is used to *coerce* objects to + integers. It would be inappropriate to allow every object that can + be coerced to an integer to be used as an integer everywhere Python + expects a true integer. 
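A rough illustration of the protocol being proposed (not part of the PEP
text or of this checkin): a type that opts in via the new __index__ method
could look like the sketch below, where Int8 is an invented stand-in for a
NumPy-style integer scalar.

    class Int8(object):
        """Toy stand-in for a NumPy-style integer scalar."""
        def __init__(self, value):
            self.value = value
        def __index__(self):
            # Hand back a true Python integer for use wherever Python
            # needs a real index.
            return self.value

    data = ['a', 'b', 'c', 'd', 'e']
    # With the nb_index slot and operator.index() in place, both of the
    # following would work without an explicit int() conversion:
    #     data[Int8(1):Int8(4)]    ->  ['b', 'c', 'd']
    #     operator.index(Int8(2))  ->  2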
Proposal - - Add a nb_index slot to PyNumberMethods, and a corresponding - __index__ special method. Objects could define a function to - place in the sq_index slot that returns an appropriate - C-integer for use as ilow or ihigh in PySequence_GetSlice, - PySequence_SetSlice, and PySequence_DelSlice. + + Add a nb_index slot to PyNumberMethods, and a corresponding + __index__ special method. Objects could define a function to place + in the nb_index slot that returns an appropriate C-integer for use + as ilow or ihigh in PySequence_GetSlice, PySequence_SetSlice, and + PySequence_DelSlice. Implementation Plan - 1) Add the slots + 1) Add the nb_index slot in object.h and modify typeobject.c to + create the __index__ method. + + 2) Change the ISINT macro in ceval.c to ISINDEX and alter it to + accomodate objects with the index slot defined. + + 3) Change the _PyEval_SliceIndex function to accomodate objects + with the index slot defined. - 2) Change the ISINT macro in ceval.c to ISINDEX and alter it to - accomodate objects with the index slot defined. + 4) Change all builtin objects that use the subscript form and + special-check for integers to check for the slot as well - 3) Change the _PyEval_SliceIndex function to accomodate objects - with the index slot defined. + 5) Add PyNumber_Index C-API to return an integer from any + Python Object that has the nb_index slot. + + 6) Add an operator.index(x) function that calls x.__index__() Possible Concerns - Speed: + Speed: - Implementation should not slow down Python because integers and long - integers used as indexes will complete in the same number of - instructions. The only change will be that what used to generate - an error will now be acceptable. - - Why not use nb_int which is already there? - - The nb_int, nb_oct, and nb_hex methods are used for coercion. - Floats have these methods defined and floats should not be used in - slice notation. + Implementation should not slow down Python because integers and long + integers used as indexes will complete in the same number of + instructions. The only change will be that what used to generate + an error will now be acceptable. + + Why not use nb_int which is already there?: + + The nb_int method is used for coercion and so means something + fundamentally different than what is requested here. This PEP + proposes a method for something that *can* already be thought of as + an integer communicate that information to Python when it needs an + integer. The biggest example of why using nb_int would be a bad + thing is that float objects already define the nb_int method, but + float objects *should not* be used as indexes in a sequence. Reference Implementation - - Available on PEP acceptance. + + Submitted as a patch to SourceForge. Copyright - This document is placed in the public domain + This document is placed in the public domain From python-checkins at python.org Fri Feb 10 20:48:39 2006 From: python-checkins at python.org (guido.van.rossum) Date: Fri, 10 Feb 2006 20:48:39 +0100 (CET) Subject: [Python-checkins] r42306 - peps/trunk/pep-0000.txt peps/trunk/pep-0352.txt Message-ID: <20060210194839.5AB5A1E4005@bag.python.org> Author: guido.van.rossum Date: Fri Feb 10 20:48:38 2006 New Revision: 42306 Modified: peps/trunk/pep-0000.txt peps/trunk/pep-0352.txt Log: Tweak and accept PEP 352 -- new exception hierarchy. 
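A short illustration of what the accepted hierarchy buys in practice,
assuming BaseException is introduced as the PEP describes (the handler
below is illustrative and not code from the patch):

    import sys

    def run(task):
        try:
            task()
        except Exception:
            # Once KeyboardInterrupt and SystemExit inherit from
            # BaseException rather than Exception, this clause no longer
            # swallows them: Ctrl-C and sys.exit() propagate and terminate
            # the interpreter as intended.
            sys.stderr.write("task failed, continuing\n")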
Modified: peps/trunk/pep-0000.txt ============================================================================== --- peps/trunk/pep-0000.txt (original) +++ peps/trunk/pep-0000.txt Fri Feb 10 20:48:38 2006 @@ -68,6 +68,7 @@ SA 308 Conditional Expressions GvR, Hettinger SA 328 Imports: Multi-Line and Absolute/Relative Aahz SA 343 The "with" Statement GvR, Coghlan + SA 352 Required Superclass for Exceptions GvR, Cannon Open PEPs (under consideration) @@ -105,7 +106,6 @@ S 345 Metadata for Python Software Packages 1.2 Jones I 350 Codetags Elliott S 351 The freeze protocol Warsaw - S 352 Required Superclass for Exceptions GvR, Cannon S 353 Using ssize_t as the index type von Loewis S 354 Enumerations in Python Finney S 355 Path - Object oriented filesystem paths Lindqvist @@ -404,7 +404,7 @@ SD 349 Allow str() to return unicode strings Schemenauer I 350 Codetags Elliott S 351 The freeze protocol Warsaw - S 352 Required Superclass for Exceptions GvR, Cannon + SA 352 Required Superclass for Exceptions GvR, Cannon S 353 Using ssize_t as the index type von Loewis S 354 Enumerations in Python Finney S 355 Path - Object oriented filesystem paths Lindqvist Modified: peps/trunk/pep-0352.txt ============================================================================== --- peps/trunk/pep-0352.txt (original) +++ peps/trunk/pep-0352.txt Fri Feb 10 20:48:38 2006 @@ -3,7 +3,7 @@ Version: $Revision$ Last-Modified: $Date$ Author: Brett Cannon , Guido van Rossum -Status: Draft +Status: Accepted Type: Standards Track Content-Type: text/x-rst Created: 27-Oct-2005 @@ -34,12 +34,15 @@ to rearrange the exception hierarchy slightly for the better. As it currently stands, all exceptions in the built-in namespace inherit from Exception. This is a problem since this includes two exceptions -(KeyboardInterrupt and SystemExit) that are usually meant to signal -that the interpreter should be shut down. Changing it so that these -two exceptions inherit from the common superclass instead of Exception -will make it easy for people to write ``except`` clauses that are not -overreaching and not catch exceptions that should propagate up and -terminate the interpreter. +(KeyboardInterrupt and SystemExit) that often need to be excepted from +the application's exception handling: the default behavior of shutting +the interpreter down with resp. without a traceback is usually more +desirable than whatever the application might do (with the possible +exception of applications that emulate Python's interactive command +loop with ``>>>`` prompt). Changing it so that these two exceptions +inherit from the common superclass instead of Exception will make it +easy for people to write ``except`` clauses that are not overreaching +and not catch exceptions that should propagate up. This PEP is based on previous work done for PEP 348 [#pep348]_. @@ -81,9 +84,13 @@ else self.args) def __repr__(self): - if (len(self.args) <= 1): - return "%s(%r)" % (self.__class__.__name__, self.message) - return "%s%r" % (self.__class__.__name__, self.args) + if not self.args: + argss = "()" + elif len(self.args) <= 1: + argss = "(%s)" % repr(self.message) + else: + argss = repr(self.args) + return self.__class__.__name__ + argss def __getitem__(self, index): """Index into arguments passed in during instantiation. 
@@ -201,11 +208,11 @@ * Python 2.5 - - introduce BaseException + - allow exceptions to be new-style classes - + allow exceptions to be new-style classes + - all standard exceptions become new-style classes - + all standard exceptions become new-style classes + - introduce BaseException - Exception, KeyboardInterrupt, and SystemExit inherit from BaseException @@ -229,7 +236,13 @@ * Python 3.0 - - drop ``args`` and ``__getitem__`` + - drop everything that was deprecated above: + + + drop string exceptions (could do this sooner?) + + + all exceptions must inherit from BaseException + + + drop ``args`` and ``__getitem__`` Implementation From python-checkins at python.org Fri Feb 10 21:49:31 2006 From: python-checkins at python.org (martin.v.loewis) Date: Fri, 10 Feb 2006 21:49:31 +0100 (CET) Subject: [Python-checkins] r42307 - python/trunk/configure python/trunk/configure.in Message-ID: <20060210204931.D14B81E4005@bag.python.org> Author: martin.v.loewis Date: Fri Feb 10 21:49:30 2006 New Revision: 42307 Modified: python/trunk/configure python/trunk/configure.in Log: Avoid linking python with readline. Modified: python/trunk/configure ============================================================================== --- python/trunk/configure (original) +++ python/trunk/configure Fri Feb 10 21:49:30 2006 @@ -1,5 +1,5 @@ #! /bin/sh -# From configure.in Revision: 42046 . +# From configure.in Revision: 42199 . # Guess values for system-dependent variables and create Makefiles. # Generated by GNU Autoconf 2.59 for python 2.5. # @@ -20016,6 +20016,8 @@ fi # check where readline lives +# save the value of LIBS so we don't actually link Python with readline +LIBS_no_readline=$LIBS echo "$as_me:$LINENO: checking for readline in -lreadline" >&5 echo $ECHO_N "checking for readline in -lreadline... $ECHO_C" >&6 @@ -20506,6 +20508,9 @@ fi +# End of readline checks: restore LIBS +LIBS=$LIBS_no_readline + echo "$as_me:$LINENO: checking for broken nice()" >&5 echo $ECHO_N "checking for broken nice()... $ECHO_C" >&6 if test "${ac_cv_broken_nice+set}" = set; then Modified: python/trunk/configure.in ============================================================================== --- python/trunk/configure.in (original) +++ python/trunk/configure.in Fri Feb 10 21:49:30 2006 @@ -2898,6 +2898,8 @@ fi # check where readline lives +# save the value of LIBS so we don't actually link Python with readline +LIBS_no_readline=$LIBS AC_CHECK_LIB(readline, readline) if test "$ac_cv_have_readline_readline" = no then @@ -2941,6 +2943,9 @@ [Define if you can turn off readline's signal handling.]), ) fi +# End of readline checks: restore LIBS +LIBS=$LIBS_no_readline + AC_MSG_CHECKING(for broken nice()) AC_CACHE_VAL(ac_cv_broken_nice, [ AC_TRY_RUN([ From python-checkins at python.org Fri Feb 10 22:09:12 2006 From: python-checkins at python.org (phillip.eby) Date: Fri, 10 Feb 2006 22:09:12 +0100 (CET) Subject: [Python-checkins] r42308 - in sandbox/trunk/setuptools: EasyInstall.txt setuptools/command/develop.py setuptools/command/easy_install.py Message-ID: <20060210210912.ABB7E1E4005@bag.python.org> Author: phillip.eby Date: Fri Feb 10 22:09:12 2006 New Revision: 42308 Modified: sandbox/trunk/setuptools/EasyInstall.txt sandbox/trunk/setuptools/setuptools/command/develop.py sandbox/trunk/setuptools/setuptools/command/easy_install.py Log: Implemented DWIM for PYTHONPATH. That is, ez_setup and easy_install should now "just work" if you're using a PYTHONPATH target, and if it can't "just work", you get helpful instructions and doc links. 
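Roughly, the new behaviour boils down to the check sketched below; the
helper name is invented for illustration and is not the actual
easy_install code (the real logic lives in finalize_options() and
get_site_dirs() in the diff that follows):

    import os

    def supports_pth_files(install_dir, site_dirs):
        # An easy-install.pth can only be used if the target directory is
        # a recognized "site" directory or already appears on PYTHONPATH.
        pythonpath = os.environ.get('PYTHONPATH', '').split(os.pathsep)
        paths = [os.path.normcase(os.path.realpath(p))
                 for p in list(site_dirs) + pythonpath if p]
        return os.path.normcase(os.path.realpath(install_dir)) in paths

    # When this check fails, easy_install now refuses a single-version
    # install and prints the "bad install directory or PYTHONPATH" help
    # text, instead of silently switching to --multi-version as earlier
    # versions did.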
Modified: sandbox/trunk/setuptools/EasyInstall.txt ============================================================================== --- sandbox/trunk/setuptools/EasyInstall.txt (original) +++ sandbox/trunk/setuptools/EasyInstall.txt Fri Feb 10 22:09:12 2006 @@ -34,7 +34,9 @@ run it; this will download and install the appropriate ``setuptools`` egg for your Python version. (You will need at least Python 2.3.5, or if you are on a 64-bit platform, Python 2.4.) An ``easy_install`` script will be installed in -the normal location for Python scripts on your platform. +the normal location for Python scripts on your platform. (Windows users, don't +put ``ez_setup.py`` inside your Python installation; please put it in some +other directory before running it.) You may receive a message telling you about an obsolete version of setuptools being present; if so, you must be sure to delete it entirely, along with the @@ -185,11 +187,6 @@ the package. If you would like to be able to select which version to use at runtime, you should use the ``-m`` or ``--multi-version`` option. -Note, however, that installing to a directory other than ``site-packages`` -already implies the ``-m`` option, so if you cannot install to -``site-packages``, please see the `Command-Line Options`_ section below (under -``--multi-version``) to find out how to select packages at runtime. - Upgrading a Package ------------------- @@ -215,17 +212,16 @@ easy_install my_downloads/ExamplePackage-2.0.tgz -If you're using ``-m`` or ``--multi`` (or installing outside of -``site-packages``), using the ``require()`` function at runtime automatically -selects the newest installed version of a package that meets your version -criteria. So, installing a newer version is the only step needed to upgrade -such packages. - -If you're installing to Python's ``site-packages`` directory (and not -using ``-m``), installing a package automatically replaces any previous version -in the ``easy-install.pth`` file, so that Python will import the most-recently -installed version by default. So, again, installing the newer version is the -only upgrade step needed. +If you're using ``-m`` or ``--multi-version`` , using the ``require()`` +function at runtime automatically selects the newest installed version of a +package that meets your version criteria. So, installing a newer version is +the only step needed to upgrade such packages. + +If you're installing to a directory on PYTHONPATH, or a configured "site" +directory (and not using ``-m``), installing a package automatically replaces +any previous version in the ``easy-install.pth`` file, so that Python will +import the most-recently installed version by default. So, again, installing +the newer version is the only upgrade step needed. If you haven't suppressed script installation (using ``--exclude-scripts`` or ``-x``), then the upgraded version's scripts will be installed, and they will @@ -412,17 +408,16 @@ ----------------------- EasyInstall tries to install packages in zipped form, if it can. Zipping -packages can significantly increase Python's overall import performance if -you're installing to``site-packages`` and not using the ``--multi`` option, -because Python processes zipfile entries on ``sys.path`` much faster than it -does directories. +packages can improve Python's overall import performance if you're not using +the ``--multi-version`` option, because Python processes zipfile entries on +``sys.path`` much faster than it does directories. 
As of version 0.5a9, EasyInstall analyzes packages to determine whether they can be safely installed as a zipfile, and then acts on its analysis. (Previous versions would not install a package as a zipfile unless you used the ``--zip-ok`` option.) -The current analysis approach is very conservative; it currenly looks for: +The current analysis approach is fairly conservative; it currenly looks for: * Any use of the ``__file__`` or ``__path__`` variables (which should be replaced with ``pkg_resources`` API calls) @@ -538,11 +533,9 @@ versions and enabling optional dependencies, see the ``pkg_resources`` API doc.) - Note that if you install to a directory other than ``site-packages``, - this option is automatically in effect, because ``.pth`` files can only be - used in ``site-packages`` (at least in Python 2.3 and 2.4). So, if you use - the ``--install-dir`` or ``-d`` option (or they are set via configuration - file(s)) you must also use ``require()`` to enable packages at runtime. + Changed in 0.6a10: this option is no longer silently enabled when + installing to a non-PYTHONPATH, non-"site" directory. You must always + explicitly use this option if you want it to be active. ``--upgrade, -U`` (New in 0.5a4) By default, EasyInstall only searches online if a project/version @@ -958,21 +951,16 @@ install_lib = ~/py-lib install_scripts = ~/bin - [easy_install] - site_dirs = ~/py_lib - Be sure to do this *before* you try to run the ``ez_setup.py`` installation -script. Then, follow the standard `installation instructions`_, but take -careful note of the full pathname of the ``.egg`` file that gets installed, so -that you can add it to your ``PYTHONPATH``, along with ``~/py_lib``. - -You *must* add the setuptools egg file *and* ``~/py_lib`` to your -``PYTHONPATH`` environment variable manually, or it will not work, and neither -will any other packages you install with EasyInstall. You will not, however, -have to manually add any other packages to the ``PYTHONPATH``; EasyInstall will -take care of them for you by automatically editing -``~/py-lib/easy-install.pth``, as long as the setuptools egg is explicitly -listed in ``PYTHONPATH``. +script. Then, follow the standard `installation instructions`_, but make +sure that ``~/py-lib`` is listed in your ``PYTHONPATH`` environment variable. + +Your library installation directory *must* be in listed in ``PYTHONPATH``, +not only when you install packages with EasyInstall, but also when you use +any packages that are installed using EasyInstall. You will probably want to +edit your ``~/.profile`` or other configuration file(s) to ensure that it is +set, if you haven't already got this set up on your machine. + Release Notes/Change History @@ -983,6 +971,11 @@ time out or be missing a file. 0.6a10 + * Enhanced ``PYTHONPATH`` support so that you don't have to put any eggs on it + to make it work. ``--multi-version`` is no longer a silent default; you + must explicitly use it if installing to a non-PYTHONPATH, non-"site" + directory. + * Expand ``$variables`` used in the ``--site-dirs``, ``--build-directory``, ``--install-dir``, and ``--script-dir`` options, whether on the command line or in configuration files. 
Modified: sandbox/trunk/setuptools/setuptools/command/develop.py ============================================================================== --- sandbox/trunk/setuptools/setuptools/command/develop.py (original) +++ sandbox/trunk/setuptools/setuptools/command/develop.py Fri Feb 10 22:09:12 2006 @@ -67,6 +67,8 @@ self.reinitialize_command('build_ext', inplace=1) self.run_command('build_ext') + self.install_site_py() # ensure that target dir is site-safe + # create an .egg-link in the installation dir, pointing to our egg log.info("Creating %s (link to %s)", self.egg_link, self.egg_base) if not self.dry_run: @@ -78,8 +80,6 @@ # and handling requirements self.process_distribution(None, self.dist) - - def uninstall_link(self): if os.path.exists(self.egg_link): log.info("Removing %s (link to %s)", self.egg_link, self.egg_base) Modified: sandbox/trunk/setuptools/setuptools/command/easy_install.py ============================================================================== --- sandbox/trunk/setuptools/setuptools/command/easy_install.py (original) +++ sandbox/trunk/setuptools/setuptools/command/easy_install.py Fri Feb 10 22:09:12 2006 @@ -99,7 +99,7 @@ self.ignore_conflicts_at_my_risk = None self.site_dirs = None self.installed_projects = {} - + self.sitepy_installed = False # Always read easy_install options, even if we are subclassed, or have # an independent instance created. This ensures that defaults will # always come from the standard configuration file(s)' "easy_install" @@ -155,21 +155,20 @@ ) else: self.all_site_dirs.append(normalize_path(d)) - instdir = normalize_path(self.install_dir or self.all_site_dirs[-1]) + instdir = normalize_path(self.install_dir) if instdir in self.all_site_dirs: if self.pth_file is None: self.pth_file = PthDistributions( os.path.join(instdir,'easy-install.pth') ) - elif self.multi_version is None: - self.multi_version = True - elif not self.multi_version: - # explicit false set from Python code; raise an error - raise DistutilsArgError( - "Can't do single-version installs outside 'site-package' dirs" - ) + # Can't install non-multi to non-site dir + raise DistutilsError(self.no_default_version_msg()) + + if instdir in map(normalize_path, self.site_dirs or []): + # don't install site.py if install target is already a site dir + self.sitepy_installed = True self.install_dir = instdir self.index_url = self.index_url or "http://www.python.org/pypi" @@ -194,6 +193,7 @@ self.find_links = self.find_links.split() else: self.find_links = [] + self.package_index.add_find_links(self.find_links) self.set_undefined_options('install_lib', ('optimize','optimize')) if not isinstance(self.optimize,int): @@ -288,6 +288,7 @@ def easy_install(self, spec, deps=False): tmpdir = tempfile.mkdtemp(prefix="easy_install-") download = None + self.install_site_py() try: if not isinstance(spec,Requirement): @@ -325,7 +326,6 @@ if os.path.exists(tmpdir): rmtree(tmpdir) - def install_item(self, spec, download, tmpdir, deps, install_needed=False): # Installation is also needed if file in tmpdir or is not an egg @@ -687,7 +687,7 @@ if f: f.close() if filename not in blockers: blockers.append(filename) - elif ext in exts: + elif ext in exts and base!='site': # XXX ugh blockers.append(os.path.join(path,filename)) if blockers: @@ -900,9 +900,92 @@ + def no_default_version_msg(self): + return """ +----------------------------------------------------------------------- +CONFIGURATION PROBLEM: + +You are attempting to install a package to a directory that is not +on PYTHONPATH and is not 
registered as supporting Python ".pth" files +by default. Here are some of your options for correcting this: + +* You can choose a different installation directory, i.e., one that is + on PYTHONPATH or supports .pth files + +* You can add the installation directory to the PYTHONPATH environment + variable. (It must then also be on PYTHONPATH whenever you run + Python and want to use the package(s) you are installing.) + +* You can set up the installation directory to support ".pth" files, + and configure EasyInstall to recognize this, by using one of the + approaches described here: + + http://peak.telecommunity.com/EasyInstall.html#custom-installation-locations + +Please make the appropriate changes for your system and try again. +Thank you for your patience. +----------------------------------------------------------------------- +""" + + + + + + + + + + + + + + + + def install_site_py(self): + """Make sure there's a site.py in the target dir, if needed""" + + if self.sitepy_installed: + return # already did it, or don't need to + + sitepy = os.path.join(self.install_dir, "site.py") + source = resource_string(Requirement.parse("setuptools"), "site.py") + + if os.path.exists(sitepy): + log.debug("Checking existing site.py in %s", self.install_dir) + current = open(sitepy,'rb').read() + if current != source: + raise DistutilsError( + "%s is not a setuptools-generated site.py; please" + " remove it." % sitepy + ) + else: + log.info("Creating %s", sitepy) + if not self.dry_run: + f = open(sitepy,'wb') + f.write(source) + f.close() + self.byte_compile([sitepy]) + + self.sitepy_installed = True + + + + + + + + + + + + + + + + def get_site_dirs(): - # return a list of 'site' dirs, based on 'site' module's code to do this - sitedirs = [] + # return a list of 'site' dirs + sitedirs = filter(None,os.environ.get('PYTHONPATH','').split(os.pathsep)) prefixes = [sys.prefix] if sys.exec_prefix != sys.prefix: prefixes.append(sys.exec_prefix) @@ -939,7 +1022,7 @@ sitedirs = filter(os.path.isdir, sitedirs) sitedirs = map(normalize_path, sitedirs) - return sitedirs # ensure at least one + return sitedirs def expand_paths(inputs): """Yield sys.path directories that might contain "old-style" packages""" From python-checkins at python.org Fri Feb 10 22:39:55 2006 From: python-checkins at python.org (guido.van.rossum) Date: Fri, 10 Feb 2006 22:39:55 +0100 (CET) Subject: [Python-checkins] r42309 - peps/trunk/pep-0356.txt Message-ID: <20060210213955.79A1A1E4005@bag.python.org> Author: guido.van.rossum Date: Fri Feb 10 22:39:54 2006 New Revision: 42309 Modified: peps/trunk/pep-0356.txt Log: Sprinkle with question marks. Modified: peps/trunk/pep-0356.txt ============================================================================== --- peps/trunk/pep-0356.txt (original) +++ peps/trunk/pep-0356.txt Fri Feb 10 22:39:54 2006 @@ -1,7 +1,7 @@ PEP: 356 Title: Python 2.5 Release Schedule Version: $Revision$ -Author: Neal Norwitz +Author: Neal Norwitz, GvR Status: Draft Type: Informational Created: 07-Feb-2006 @@ -10,6 +10,9 @@ Abstract + (GvR: I'm sprinkling questions like this throughout this document. + I'll remove them again once the questions are answered.) + This document describes the development and release schedule for Python 2.5. The schedule primarily concerns itself with PEP-sized items. Small features may be added up to and including the first @@ -17,6 +20,7 @@ There will be at least two alpha releases, two beta releases, and one release candidate. The release date is planned 31 October 2006. 
+ (GvR: perhaps one or two months earlier?) Release Manager @@ -30,6 +34,8 @@ Release Schedule + (GvR: perhaps one or even two months earlier? Perhaps three alphas?) + alpha 1: June 2006 [planned] alpha 2: July 2006 [planned] beta 1: August 2006 [planned] @@ -61,12 +67,22 @@ Planned features for 2.5 - PEP 308: Conditional Expressions + PEP 308: Conditional Expressions. + (GvR: who is volunteering?) + PEP 328: Absolute/Relative Imports + (GvR: who is volunteering?) + PEP 343: The "with" Statement + (GvR: who is volunteering? Is MWH's hack/patch available?) + PEP 352: Required Superclass for Exceptions + (GvR: who is volunteering? Maybe Brett?) + PEP 353: Using ssize_t as the index type + MvL expects this to be complete in March. + (GvR: I have a bunch more that could/would/should be added.) Deferred until 2.6: @@ -75,6 +91,8 @@ Ongoing tasks + (GvR: do we need all these items? This seems to be just filler.) + The following are ongoing TO-DO items which we should attempt to work on without hoping for completion by any particular date. @@ -110,6 +128,8 @@ Carryover features from Python 2.4 + (GvR: should we just drop this section and reject the PEPs/patches?) + Are any of these done or planned for 2.5? - Deprecate and/or remove the modules listed in PEP 4 (posixfile, @@ -131,6 +151,8 @@ Carryover features from Python 2.3 + (GvR: should we just drop this section and reject the PEPs/patches?) + - The import lock could use some redesign. (SF 683658.) - A nicer API to open text files, replacing the ugly (in some From barry at python.org Fri Feb 10 22:58:15 2006 From: barry at python.org (Barry Warsaw) Date: Fri, 10 Feb 2006 16:58:15 -0500 Subject: [Python-checkins] r42307 - python/trunk/configure python/trunk/configure.in In-Reply-To: <20060210204931.D14B81E4005@bag.python.org> References: <20060210204931.D14B81E4005@bag.python.org> Message-ID: <7C1AE0D9-DDF5-499B-B2E7-3B5BDF560595@python.org> On Feb 10, 2006, at 3:49 PM, martin.v.loewis wrote: > Author: martin.v.loewis > Date: Fri Feb 10 21:49:30 2006 > New Revision: 42307 > > Modified: > python/trunk/configure > python/trunk/configure.in > Log: > Avoid linking python with readline. Really? Dang, that's going to suck. -Barry From python-checkins at python.org Fri Feb 10 23:04:28 2006 From: python-checkins at python.org (phillip.eby) Date: Fri, 10 Feb 2006 23:04:28 +0100 (CET) Subject: [Python-checkins] r42310 - in sandbox/trunk/setuptools: EasyInstall.txt setuptools/command/easy_install.py Message-ID: <20060210220428.73A451E4005@bag.python.org> Author: phillip.eby Date: Fri Feb 10 23:04:27 2006 New Revision: 42310 Modified: sandbox/trunk/setuptools/EasyInstall.txt sandbox/trunk/setuptools/setuptools/command/easy_install.py Log: --prefix support for even more do-what-I-meanishness. :) Modified: sandbox/trunk/setuptools/EasyInstall.txt ============================================================================== --- sandbox/trunk/setuptools/EasyInstall.txt (original) +++ sandbox/trunk/setuptools/EasyInstall.txt Fri Feb 10 23:04:27 2006 @@ -971,10 +971,13 @@ time out or be missing a file. 0.6a10 + * Added ``--prefix`` option for more do-what-I-mean-ishness in the absence of + RTFM-ing. :) + * Enhanced ``PYTHONPATH`` support so that you don't have to put any eggs on it - to make it work. ``--multi-version`` is no longer a silent default; you - must explicitly use it if installing to a non-PYTHONPATH, non-"site" - directory. + manually to make it work. 
``--multi-version`` is no longer a silent + default; you must explicitly use it if installing to a non-PYTHONPATH, + non-"site" directory. * Expand ``$variables`` used in the ``--site-dirs``, ``--build-directory``, ``--install-dir``, and ``--script-dir`` options, whether on the command line Modified: sandbox/trunk/setuptools/setuptools/command/easy_install.py ============================================================================== --- sandbox/trunk/setuptools/setuptools/command/easy_install.py (original) +++ sandbox/trunk/setuptools/setuptools/command/easy_install.py Fri Feb 10 23:04:27 2006 @@ -41,11 +41,11 @@ class easy_install(Command): """Manage a download/build/install process""" - description = "Find/get/install Python packages" command_consumes_arguments = True user_options = [ + ('prefix=', None, "installation prefix"), ("zip-ok", "z", "install package as a zipfile"), ("multi-version", "m", "make apps have to require() a version"), ("upgrade", "U", "force upgrade (searches PyPI for latest versions)"), @@ -90,7 +90,7 @@ self.optimize = self.record = None self.upgrade = self.always_copy = self.multi_version = None self.editable = self.no_deps = self.allow_hosts = None - self.root = None + self.root = self.prefix = None # Options not specifiable via command line self.package_index = None @@ -146,7 +146,7 @@ site_dirs = [ os.path.expanduser(s.strip()) for s in self.site_dirs.split(',') ] - for d in site_dirs: + for d in site_dirs: if not os.path.isdir(d): log.warn("%s (in --site-dirs) does not exist", d) elif normalize_path(d) not in normpath: @@ -317,7 +317,7 @@ raise DistutilsError(msg) elif dist.precedence==DEVELOP_DIST: # .egg-info dists don't need installing, just process deps - self.process_distribution(spec, dist, deps, "Using") + self.process_distribution(spec, dist, deps, "Using") return dist else: return self.install_item(spec, dist.location, tmpdir, deps) @@ -588,7 +588,7 @@ # Convert the .exe to an unpacked egg egg_path = dist.location = os.path.join(tmpdir, dist.egg_name()+'.egg') egg_tmp = egg_path+'.tmp' - egg_info = os.path.join(egg_tmp, 'EGG-INFO') + egg_info = os.path.join(egg_tmp, 'EGG-INFO') pkg_inf = os.path.join(egg_info, 'PKG-INFO') ensure_directory(pkg_inf) # make sure EGG-INFO dir exists dist._provider = PathMetadata(egg_tmp, egg_info) # XXX @@ -605,7 +605,7 @@ script_dir = os.path.join(egg_info,'scripts') self.delete_blockers( # delete entry-point scripts to avoid duping [os.path.join(script_dir,args[0]) for args in get_script_args(dist)] - ) + ) # Build .egg file from tmpdir bdist_egg.make_zipfile( egg_path, egg_tmp, verbose=self.verbose, dry_run=self.dry_run @@ -887,27 +887,34 @@ finally: log.set_verbosity(self.verbose) # restore original verbosity - def _expand(self, *attrs): - config_vars = self.get_finalized_command('install').config_vars - from distutils.util import subst_vars - for attr in attrs: - val = getattr(self, attr) - if val is not None: - if os.name == 'posix': - val = os.path.expanduser(val) - val = subst_vars(val, config_vars) - setattr(self, attr, val) + + + + + + + + + + def no_default_version_msg(self): - return """ ------------------------------------------------------------------------ -CONFIGURATION PROBLEM: + return """bad install directory or PYTHONPATH You are attempting to install a package to a directory that is not on PYTHONPATH and is not registered as supporting Python ".pth" files -by default. Here are some of your options for correcting this: +by default. 
The installation directory you specified (via --install-dir, +--prefix, or the distutils default setting) was: + + %s + +and your PYTHONPATH environment variable currently contains: + + %r + +Here are some of your options for correcting the problem: * You can choose a different installation directory, i.e., one that is on PYTHONPATH or supports .pth files @@ -922,16 +929,9 @@ http://peak.telecommunity.com/EasyInstall.html#custom-installation-locations -Please make the appropriate changes for your system and try again. -Thank you for your patience. ------------------------------------------------------------------------ -""" - - - - - - +Please make the appropriate changes for your system and try again.""" % ( + self.install_dir, os.environ.get('PYTHONPATH','') + ) @@ -961,6 +961,7 @@ else: log.info("Creating %s", sitepy) if not self.dry_run: + ensure_directory(sitepy) f = open(sitepy,'wb') f.write(source) f.close() @@ -981,6 +982,45 @@ + INSTALL_SCHEMES = dict( + posix = dict( + install_dir = '$base/lib/python$py_version_short/site-packages', + script_dir = '$base/bin', + ), + ) + + DEFAULT_SCHEME = dict( + install_dir = '$base/Lib/site-packages', + script_dir = '$base/Scripts', + ) + + def _expand(self, *attrs): + config_vars = self.get_finalized_command('install').config_vars + + if self.prefix: + # Set default install_dir/scripts from --prefix + config_vars = config_vars.copy() + config_vars['base'] = self.prefix + scheme = self.INSTALL_SCHEMES.get(os.name,self.DEFAULT_SCHEME) + for attr,val in scheme.items(): + if getattr(self,attr,None) is None: + setattr(self,attr,val) + + from distutils.util import subst_vars + for attr in attrs: + val = getattr(self, attr) + if val is not None: + val = subst_vars(val, config_vars) + if os.name == 'posix': + val = os.path.expanduser(val) + setattr(self, attr, val) + + + + + + + def get_site_dirs(): @@ -1018,12 +1058,12 @@ 'site-packages')) for plat_specific in (0,1): site_lib = get_python_lib(plat_specific) - if site_lib not in sitedirs: sitedirs.append(site_lib) + if site_lib not in sitedirs: sitedirs.append(site_lib) - sitedirs = filter(os.path.isdir, sitedirs) sitedirs = map(normalize_path, sitedirs) return sitedirs + def expand_paths(inputs): """Yield sys.path directories that might contain "old-style" packages""" From python-checkins at python.org Fri Feb 10 23:15:09 2006 From: python-checkins at python.org (jack.jansen) Date: Fri, 10 Feb 2006 23:15:09 +0100 (CET) Subject: [Python-checkins] r42311 - python/trunk/Tools/bgen/bgen/bgenBuffer.py Message-ID: <20060210221509.CD1901E4005@bag.python.org> Author: jack.jansen Date: Fri Feb 10 23:15:09 2006 New Revision: 42311 Modified: python/trunk/Tools/bgen/bgen/bgenBuffer.py Log: One more mod for support of C++ classes. 
Modified: python/trunk/Tools/bgen/bgen/bgenBuffer.py ============================================================================== --- python/trunk/Tools/bgen/bgen/bgenBuffer.py (original) +++ python/trunk/Tools/bgen/bgen/bgenBuffer.py Fri Feb 10 23:15:09 2006 @@ -41,7 +41,7 @@ def getArgDeclarations(self, name, reference=False, constmode=False, outmode=False): if reference: raise RuntimeError, "Cannot pass buffer types by reference" - return (self.getBufferDeclarations(name, constmode) + + return (self.getBufferDeclarations(name, constmode, outmode) + self.getSizeDeclarations(name, outmode)) def getBufferDeclarations(self, name, constmode=False, outmode=False): From python-checkins at python.org Fri Feb 10 23:23:48 2006 From: python-checkins at python.org (phillip.eby) Date: Fri, 10 Feb 2006 23:23:48 +0100 (CET) Subject: [Python-checkins] r42312 - in sandbox/trunk/setuptools: EasyInstall.txt setuptools/command/easy_install.py Message-ID: <20060210222348.B48C71E4005@bag.python.org> Author: phillip.eby Date: Fri Feb 10 23:23:48 2006 New Revision: 42312 Modified: sandbox/trunk/setuptools/EasyInstall.txt sandbox/trunk/setuptools/setuptools/command/easy_install.py Log: Fixed the annoying ``--help-commands`` wart, albeit in a most unfortunately kludgy fashion. Modified: sandbox/trunk/setuptools/EasyInstall.txt ============================================================================== --- sandbox/trunk/setuptools/EasyInstall.txt (original) +++ sandbox/trunk/setuptools/EasyInstall.txt Fri Feb 10 23:23:48 2006 @@ -1009,6 +1009,7 @@ package search was already going to go online due to a package not being available locally, or due to the use of the ``--update`` or ``-U`` option. + * Fixed the annoying ``--help-commands`` wart. 0.6a9 * Fixed ``.pth`` file processing picking up nested eggs (i.e. ones inside Modified: sandbox/trunk/setuptools/setuptools/command/easy_install.py ============================================================================== --- sandbox/trunk/setuptools/setuptools/command/easy_install.py (original) +++ sandbox/trunk/setuptools/setuptools/command/easy_install.py Fri Feb 10 23:23:48 2006 @@ -1254,12 +1254,6 @@ options = ' '+options return "#!%(executable)s%(options)s\n" % locals() -def main(argv=None, **kw): - from setuptools import setup - if argv is None: - argv = sys.argv[1:] - setup(script_args = ['-q','easy_install', '-v']+argv, **kw) - def auto_chmod(func, arg, exc): if func is os.remove and os.name=='nt': @@ -1269,6 +1263,12 @@ raise exc[0], (exc[1][0], exc[1][1] + (" %s %s" % (func,arg))) + + + + + + def get_script_args(dist, executable=sys_executable): """Yield write_script() argument tuples for a distribution's entrypoints""" spec = str(dist.as_requirement()) @@ -1351,3 +1351,44 @@ +def main(argv=None, **kw): + from setuptools import setup + from setuptools.dist import Distribution + import distutils.core + + USAGE = """\ +usage: %(script)s [options] requirement_or_url ... 
+ or: %(script)s --help +""" + + def gen_usage (script_name): + script = os.path.basename(script_name) + return USAGE % vars() + + def with_ei_usage(f): + old_gen_usage = distutils.core.gen_usage + try: + distutils.core.gen_usage = gen_usage + return f() + finally: + distutils.core.gen_usage = old_gen_usage + + class DistributionWithoutHelpCommands(Distribution): + def _show_help(self,*args,**kw): + with_ei_usage(lambda: Distribution._show_help(self,*args,**kw)) + + if argv is None: + argv = sys.argv[1:] + + with_ei_usage(lambda: + setup( + script_args = ['-q','easy_install', '-v']+argv, + distclass=DistributionWithoutHelpCommands, **kw + ) + ) + + + + + + From thomas at xs4all.net Fri Feb 10 23:34:40 2006 From: thomas at xs4all.net (Thomas Wouters) Date: Fri, 10 Feb 2006 23:34:40 +0100 Subject: [Python-checkins] r42307 - python/trunk/configure python/trunk/configure.in In-Reply-To: <7C1AE0D9-DDF5-499B-B2E7-3B5BDF560595@python.org> References: <20060210204931.D14B81E4005@bag.python.org> <7C1AE0D9-DDF5-499B-B2E7-3B5BDF560595@python.org> Message-ID: <20060210223440.GM10226@xs4all.nl> On Fri, Feb 10, 2006 at 04:58:15PM -0500, Barry Warsaw wrote: > > On Feb 10, 2006, at 3:49 PM, martin.v.loewis wrote: > > > Author: martin.v.loewis > > Date: Fri Feb 10 21:49:30 2006 > > New Revision: 42307 > > > > Modified: > > python/trunk/configure > > python/trunk/configure.in > > Log: > > Avoid linking python with readline. > > Really? Dang, that's going to suck. Naw. The 'python' binary isn't linked to readline anymore, but Modules/main.c does import readline, so the interactive interpreter sees no difference. It just prevents linking against libreadline when it isn't necessary. -- Thomas Wouters Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From python-checkins at python.org Fri Feb 10 23:51:46 2006 From: python-checkins at python.org (thomas.wouters) Date: Fri, 10 Feb 2006 23:51:46 +0100 (CET) Subject: [Python-checkins] r42313 - python/trunk/Python/ceval.c Message-ID: <20060210225146.1BE891E4005@bag.python.org> Author: thomas.wouters Date: Fri Feb 10 23:51:45 2006 New Revision: 42313 Modified: python/trunk/Python/ceval.c Log: Explain the clearing of the stack in a comment in Python/ceval.c's call_function(), rather than commenting on the lack of an explanation in a comment. Modified: python/trunk/Python/ceval.c ============================================================================== --- python/trunk/Python/ceval.c (original) +++ python/trunk/Python/ceval.c Fri Feb 10 23:51:45 2006 @@ -3590,7 +3590,8 @@ Py_DECREF(func); } - /* What does this do? */ + /* Clear the stack of the function object and the arguments, + in case they weren't consumed already */ while ((*pp_stack) > pfunc) { w = EXT_POP(*pp_stack); Py_DECREF(w); From martin at v.loewis.de Fri Feb 10 23:52:24 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 10 Feb 2006 23:52:24 +0100 Subject: [Python-checkins] r42307 - python/trunk/configure python/trunk/configure.in In-Reply-To: <7C1AE0D9-DDF5-499B-B2E7-3B5BDF560595@python.org> References: <20060210204931.D14B81E4005@bag.python.org> <7C1AE0D9-DDF5-499B-B2E7-3B5BDF560595@python.org> Message-ID: <43ED1928.8030104@v.loewis.de> Barry Warsaw wrote: >>Avoid linking python with readline. > > > Really? Dang, that's going to suck. What do you mean? That python was linked to readline, or that it now no longer is? I'm not talking about the readline module - I'm talking about the interpreter binary proper. 
It linked to readline for no reason at all, but only because AC_CHECK_LIB not only checks, but also adds to LIBS. Regards, Martin From barry at python.org Sat Feb 11 00:24:50 2006 From: barry at python.org (Barry Warsaw) Date: Fri, 10 Feb 2006 18:24:50 -0500 Subject: [Python-checkins] r42307 - python/trunk/configure python/trunk/configure.in In-Reply-To: <20060210223440.GM10226@xs4all.nl> References: <20060210204931.D14B81E4005@bag.python.org> <7C1AE0D9-DDF5-499B-B2E7-3B5BDF560595@python.org> <20060210223440.GM10226@xs4all.nl> Message-ID: > > Naw. The 'python' binary isn't linked to readline anymore, but > Modules/main.c does import readline, so the interactive interpreter > sees no > difference. It just prevents linking against libreadline when it isn't > necessary. > nevermind! From python-checkins at python.org Sat Feb 11 10:27:05 2006 From: python-checkins at python.org (martin.v.loewis) Date: Sat, 11 Feb 2006 10:27:05 +0100 (CET) Subject: [Python-checkins] r42314 - in python/trunk: Modules/_ssl.c Modules/socketmodule.c PC/pyconfig.h Message-ID: <20060211092705.C0D251E4005@bag.python.org> Author: martin.v.loewis Date: Sat Feb 11 10:27:05 2006 New Revision: 42314 Modified: python/trunk/Modules/_ssl.c python/trunk/Modules/socketmodule.c python/trunk/PC/pyconfig.h Log: Introduce Py_SOCKET_FD_CAN_BE_GE_FD_SETSIZE. Proposed by Tim Peters. Modified: python/trunk/Modules/_ssl.c ============================================================================== --- python/trunk/Modules/_ssl.c (original) +++ python/trunk/Modules/_ssl.c Sat Feb 11 10:27:05 2006 @@ -377,8 +377,10 @@ return SOCKET_HAS_BEEN_CLOSED; /* Guard against socket too large for select*/ +#ifndef Py_SOCKET_FD_CAN_BE_GE_FD_SETSIZE if (s->sock_fd >= FD_SETSIZE) return SOCKET_INVALID; +#endif /* Construct the arguments to select */ tv.tv_sec = (int)s->sock_timeout; Modified: python/trunk/Modules/socketmodule.c ============================================================================== --- python/trunk/Modules/socketmodule.c (original) +++ python/trunk/Modules/socketmodule.c Sat Feb 11 10:27:05 2006 @@ -396,7 +396,14 @@ static PyTypeObject sock_type; /* Can we call select() with this socket without a buffer overrun? */ +#ifdef Py_SOCKET_FD_CAN_BE_GE_FD_SETSIZE +/* Platform can select file descriptors beyond FD_SETSIZE */ +#define IS_SELECTABLE(s) 1 +#else +/* POSIX says selecting file descriptors beyond FD_SETSIZE + has undefined behaviour. */ #define IS_SELECTABLE(s) ((s)->sock_fd < FD_SETSIZE) +#endif static PyObject* select_error(void) Modified: python/trunk/PC/pyconfig.h ============================================================================== --- python/trunk/PC/pyconfig.h (original) +++ python/trunk/PC/pyconfig.h Sat Feb 11 10:27:05 2006 @@ -572,4 +572,9 @@ /* Define if you have the thread library (-lthread). 
*/ /* #undef HAVE_LIBTHREAD */ + +/* WinSock does not use a bitmask in select, and uses + socket handles greater than FD_SETSIZE */ +#define Py_SOCKET_FD_CAN_BE_GE_FD_SETSIZE + #endif /* !Py_CONFIG_H */ From python-checkins at python.org Sat Feb 11 11:00:01 2006 From: python-checkins at python.org (nick.coghlan) Date: Sat, 11 Feb 2006 11:00:01 +0100 (CET) Subject: [Python-checkins] r42315 - peps/trunk/pep-0338.txt Message-ID: <20060211100001.E94B51E4005@bag.python.org> Author: nick.coghlan Date: Sat Feb 11 11:00:00 2006 New Revision: 42315 Modified: peps/trunk/pep-0338.txt Log: Update PEP to fully support PEP 302 import semantics Modified: peps/trunk/pep-0338.txt ============================================================================== --- peps/trunk/pep-0338.txt (original) +++ peps/trunk/pep-0338.txt Sat Feb 11 11:00:00 2006 @@ -1,24 +1,30 @@ PEP: 338 -Title: Executing modules inside packages with '-m' +Title: Executing modules as scripts Version: $Revision$ Last-Modified: $Date$ -Author: Nick Coghlan +Author: Nick Coghlan Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 16-Oct-2004 Python-Version: 2.5 -Post-History: 8-Nov-2004 +Post-History: 8-Nov-2004, 11-Feb-2006 Abstract ======== -This PEP defines semantics for executing modules inside packages as -scripts with the ``-m`` command line switch. - -The proposed semantics are that the containing package be imported -prior to execution of the script. +This PEP defines semantics for executing any Python module as a +scripts, either with the ``-m`` command line switch, or by invoking +it via ``runpy.run_module(modulename)``. + +The ``-m`` switch implemented in Python 2.4 is quite limited. This +PEP proposes making use of the PEP 302 [4]_ import hooks to allow any +module which provides access to its code object to be executed. + +Additional functions are proposed to make the same convenience available +for other references to executable Python code (strings, code objects, +Python source files, Python compiled files). Rationale @@ -27,18 +33,34 @@ Python 2.4 adds the command line switch ``-m`` to allow modules to be located using the Python module namespace for execution as scripts. The motivating examples were standard library modules such as ``pdb`` -and ``profile``. +and ``profile``, and the Python 2.4 implementation is fine for this +limited purpose. A number of users and developers have requested extension of the feature to also support running modules located inside packages. One example provided is pychecker's ``pychecker.checker`` module. This capability was left out of the Python 2.4 implementation because the -appropriate semantics were not entirely clear. +implementation of this was significantly more complicated, and the most +appropriate strategy was not at all clear. The opinion on python-dev was that it was better to postpone the extension to Python 2.5, and go through the PEP process to help make sure we got it right. +Since that time, it has also been pointed out that the current version +of ``-m`` does not support ``zipimport`` or any other kind of +alternative import behaviour (such as frozen modules). + +Providing this functionality as a Python module is significantly easier +than writing it in C, and makes the functionality readily available to +all Python programs, rather than being specific to the CPython +interpreter. CPython's command line switch can then be rewritten to +make use of the new module. + +Scripts which execute other scripts (e.g. 
``profile``, ``pdb``) also +have the option to use the new module to provide ``-m`` style support +for identifying the script to be executed. + Scope of this proposal ========================== @@ -46,30 +68,20 @@ In Python 2.4, a module located using ``-m`` is executed just as if its filename had been provided on the command line. The goal of this PEP is to get as close as possible to making that statement also hold -true for modules inside packages. +true for modules inside packages, or accessed via alternative import +mechanisms (such as ``zipimport``). Prior discussions suggest it should be noted that this PEP is **not** -about any of the following: - -- changing the idiom for making Python modules also useful as scripts - (see PEP 299 [1]_). - -- lifting the restriction of ``-m`` to modules of type PY_SOURCE or - PY_COMPILED (i.e. ``.py``, ``.pyc``, ``.pyo``, ``.pyw``). - -- addressing the problem of ``-m`` not understanding zip imports or - Python's sys.metapath. - -The issues listed above are considered orthogonal to the specific -feature addressed by this PEP. - +about changing the idiom for making Python modules also useful as +scripts (see PEP 299 [1]_). That issue is considered orthogonal to the +specific feature addressed by this PEP. Current Behaviour ================= Before describing the new semantics, it's worth covering the existing semantics for Python 2.4 (as they are currently defined only by the -source code). +source code and the command line help). When ``-m`` is used on the command line, it immediately terminates the option list (like ``-c``). The argument is interpreted as the name of @@ -91,20 +103,22 @@ ================== The semantics proposed are fairly simple: if ``-m`` is used to execute -a module inside a package as a script, then the containing package is -imported before executing the module in accordance with the semantics -for a top-level module. +a module the PEP 302 import mechanisms are used to locate the module and +retrieve its compiled code, before executing the module in accordance +with the semantics for a top-level module. The interpreter does this by +invoking a new standard library function ``runpy.run_module``. This is necessary due to the way Python's import machinery locates modules inside packages. A package may modify its own __path__ -variable during initialisation. In addition, paths may affected by -``*.pth`` files. Accordingly, the only way for Python to reliably +variable during initialisation. In addition, paths may be affected by +``*.pth`` files, and some packages will install custom loaders on +``sys.metapath``. Accordingly, the only way for Python to reliably locate the module is by importing the containing package and -inspecting its __path__ variable. +using the PEP 302 import hooks to gain access to the Python code. -Note that the package is *not* imported into the ``__main__`` module's -namespace. The effects of these semantics that will be visible to the -executed module are: +Note that the process of locating the module to be executed may require +importing the containing package. The effects of such a package import +that will be visible to the executed module are: - the containing package will be in sys.modules @@ -115,57 +129,164 @@ Reference Implementation ======================== -A reference implementation is available on SourceForge [2]_. In this -implementation, if the ``-m`` switch fails to locate the requested -module at the top level, it effectively reinterprets the command from -``python -m